A Precision Task Pipeline Built for Scale

When you effortlessly scroll through tens of thousands of photos in Synology Photos or share project images with team members, the experience feels seamless. But have you ever wondered how the backend system works to deliver such smooth performance?

Think of Synology Photos’ backend as a sophisticated factory, with its core being an efficient pipeline called the “Task Center.” Every second, thousands of tasks flow through this pipeline: phone backups, photo similarity calculations, file information processing, thumbnail generation, face and object recognition, geographic location updates, and more.

In such a busy and complex system, how does Synology Photos ensure precise division of labor and smooth operation, preventing simple actions from paralyzing the entire system? The answer lies in our carefully designed engineering practices.

Practice One: Asynchronous Processing and Priority Scheduling—Putting Critical Tasks First

An efficient factory can’t stop because of one time-consuming component. Similarly, your interface—the “main lane”—should never be blocked by any backend task.

In software terms, each click sends a request to the server through what engineers call an API (Application Programming Interface). We design all potentially time-consuming operations to run asynchronously: your action immediately enters our pipeline queue, the API instantly returns a “task accepted” response, and your interface remains responsive.
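
A minimal sketch of this pattern (illustrative Python, not Synology’s actual code) shows a hypothetical handle_upload handler that enqueues the heavy work and answers immediately, while a background worker drains the queue at its own pace:

```python
import queue
import threading
import time

# Hypothetical in-process stand-in for the Task Center's queue.
task_queue = queue.Queue()

def handle_upload(photo_path: str) -> dict:
    """API handler: enqueue the time-consuming work, respond right away."""
    task_queue.put({"type": "index_photo", "path": photo_path})
    return {"status": "task accepted", "path": photo_path}  # UI stays responsive

def worker() -> None:
    """Background worker that processes queued tasks at its own pace."""
    while True:
        task = task_queue.get()
        time.sleep(0.1)  # placeholder for indexing, thumbnailing, etc.
        print(f"done: {task['type']} for {task['path']}")
        task_queue.task_done()

threading.Thread(target=worker, daemon=True).start()
print(handle_upload("/photos/IMG_0001.jpg"))  # returns instantly
task_queue.join()  # wait for the demo task to finish before exiting
```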

But asynchrony is just the foundation. The true brilliance of this pipeline lies in its priority scheduling. Processing tasks vary in importance and urgency, and what you care about most is usually seeing your photos as quickly as possible. The pipeline is therefore designed so that once the most fundamental steps, file indexing and thumbnail generation, are completed, the photo immediately appears in your Synology Photos interface. The server actively pushes this update to your browser, so no manual refresh is required.

More intensive tasks like facial recognition and location processing receive lower priority and run when system resources are available. This design ensures you get the main experience (viewing photos) immediately while still delivering rich features (searching by map or people) in the background—creating the perfect balance between responsiveness and functionality.
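
The scheduling idea can be sketched with a standard priority queue; the task names and priority values below are illustrative assumptions, not Synology Photos’ actual configuration:

```python
import queue

# Smaller number = more urgent. User-visible work outranks background analysis.
PRIORITY = {
    "index_file": 0,           # must finish before the photo can appear at all
    "generate_thumbnail": 1,   # needed to show the photo in the grid
    "face_recognition": 5,     # enriches search; can wait for idle resources
    "geo_lookup": 5,
}

tasks = queue.PriorityQueue()

def submit(task_type: str, photo_id: int) -> None:
    tasks.put((PRIORITY[task_type], task_type, photo_id))

# Tasks are submitted in arbitrary order...
submit("face_recognition", 42)
submit("geo_lookup", 42)
submit("generate_thumbnail", 42)
submit("index_file", 42)

# ...but are dequeued most-urgent first, so the photo shows up as soon as possible.
while not tasks.empty():
    prio, task_type, photo_id = tasks.get()
    print(f"running {task_type} (priority {prio}) for photo {photo_id}")
```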

Practice Two: Granular Task Decomposition—Ensuring Reliability for Extreme Operations

What constitutes an extreme operation? A prime example is deleting a folder containing 7 million photos.

Such massive operations, if handled poorly, can exhaust system memory and become impossible to complete. To ensure reliability, we employ a fundamental database concept—the Transaction.

A Transaction is a set of operations that “either all succeed or all fail.” It acts like a safety mechanism, ensuring data integrity during modifications. If we designed “deleting 7 million photos” as a single Transaction, we’d face serious problems:

  1. Resource depletion: The system would need to load all 7 million deletion records into memory at once, likely causing memory exhaustion and immediate task failure.

  2. Long-term locking: During the hours this massive Transaction would run, relevant database sections would remain locked, preventing any access to this data.

  3. High failure risk: If anything unexpected happens while processing (like a NAS restart), hours of work would be lost, with everything reverting to the starting point.
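
The third point follows directly from the all-or-nothing guarantee: if a single Transaction fails partway through, everything it has done is undone. A minimal sketch (using SQLite purely for illustration, not Synology Photos’ actual storage layer) demonstrates this rollback behavior:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos (id INTEGER PRIMARY KEY, path TEXT)")
conn.executemany("INSERT INTO photos (path) VALUES (?)",
                 [(f"/photos/{i}.jpg",) for i in range(10)])
conn.commit()

try:
    with conn:  # one Transaction: commits on success, rolls back on any error
        conn.execute("DELETE FROM photos WHERE id <= 5")
        raise RuntimeError("simulated crash before the Transaction finished")
except RuntimeError:
    pass

# The rollback undid the partial delete: all 10 photos are still there.
print(conn.execute("SELECT COUNT(*) FROM photos").fetchone()[0])  # -> 10
```

Scaled up to 7 million rows and hours of work, that same rollback means starting over from zero.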

Our solution breaks this enormous task into “micro Transactions.” The challenge is finding the right granularity for these divisions:

  • Too Large (e.g., 100,000 photos per Transaction): Though this reduces database commits, each execution still takes considerable time. Failures still cause substantial work loss, and extended database locking affects other operations.

  • Too Fine (e.g., 1 photo per Transaction): While individual failures cause minimal loss, the constant stream of database commits creates excessive disk activity, potentially slowing overall performance as the system handles these small requests.

Our Solution: Experience and Flexibility

Finding the optimal balance requires extensive testing, experience, and task-specific adjustments: deletion tasks need a different Transaction granularity than metadata updates. Through careful testing, we have determined suitable batch sizes for various operations. For example, during a package update, every 100 updates to existing data form an independent Transaction.
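
As an illustration of the batching approach (again a sketch with SQLite and invented table names, not the production implementation), each batch below is committed as its own micro Transaction, so at most one batch of work is ever at risk:

```python
import sqlite3

BATCH_SIZE = 100  # granularity tuned per task type, as described above

def delete_folder_in_batches(conn: sqlite3.Connection, folder_id: int) -> int:
    """Delete a folder's photos as a series of small, independent Transactions."""
    deleted = 0
    while True:
        with conn:  # one micro Transaction per batch
            cur = conn.execute(
                "DELETE FROM photos WHERE id IN "
                "(SELECT id FROM photos WHERE folder_id = ? LIMIT ?)",
                (folder_id, BATCH_SIZE),
            )
        if cur.rowcount == 0:      # nothing left in this folder: done
            return deleted
        deleted += cur.rowcount    # a progress update could be emitted here

# Tiny demo with a throwaway in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE photos (id INTEGER PRIMARY KEY, folder_id INTEGER)")
conn.executemany("INSERT INTO photos (folder_id) VALUES (?)", [(1,)] * 1050)
conn.commit()
print(delete_folder_in_batches(conn, 1))  # -> 1050, committed 100 rows at a time
```

Because every batch commits quickly, database locks are also released quickly, so other operations on the same tables are only briefly delayed.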

This approach delivers two key benefits:

  1. Reliability: Each “micro Transaction” operates independently. If an issue occurs, it only affects the current small batch. Processing can resume from the interruption point once the problem is resolved.

  2. Real-time progress feedback: With photo deletion, as data is removed in batches, you see this progress reflected immediately in your interface. Instead of waiting long periods for photos to suddenly disappear, you watch them smoothly vanish in groups. This provides clear visual feedback and confirms the system is working effectively.
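
Both benefits hinge on persisting a checkpoint together with each micro Transaction. The sketch below assumes a hypothetical tasks table that records progress; a real system would keep it in an on-disk database so the checkpoint survives a restart:

```python
import sqlite3

# ":memory:" for the demo; an on-disk file would let the checkpoint survive a restart.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tasks (id INTEGER PRIMARY KEY, total INTEGER, processed INTEGER)")
conn.execute("INSERT INTO tasks VALUES (1, 700, 0)")
conn.commit()

def process_one_batch(task_id: int, batch_size: int = 100) -> tuple:
    """Run one batch of work and record the checkpoint in the same micro Transaction."""
    with conn:  # the work and the checkpoint succeed or fail together
        # ... the actual deletion/update work for this batch would run here ...
        conn.execute("UPDATE tasks SET processed = processed + ? WHERE id = ?",
                     (batch_size, task_id))
    return conn.execute("SELECT processed, total FROM tasks WHERE id = ?",
                        (task_id,)).fetchone()

# The worker loops over batches; the UI can poll the same row for live progress,
# and after an interruption the stored checkpoint says exactly where to resume.
while True:
    processed, total = process_one_batch(1)
    print(f"progress: {processed}/{total}")
    if processed >= total:
        break
```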

Conclusion: An Unblocked Precision Pipeline for Your Data

Synology Photos’ stability and efficiency come from our systematic approach to task processing. Through asynchronous design, we eliminate waiting; through micro Transactions, we ensure reliability and responsiveness even for the most demanding operations.

We believe the best user experience is one where you never notice the complex mechanisms working behind the scenes. Our team continuously refines this architecture to maintain a reliable, efficient pipeline that handles your growing photo collection with consistent performance and stability.


