When you search for something and the results appear almost instantly, you might find yourself wondering, “How did they do that?” Achieving low-latency response times across an ever-growing sea of files and folders is no simple task, but it’s something our software engineers strive to accomplish every day.
Let’s take a closer look at what makes Synology Drive function with such efficiency and how it provides you with lightning-fast results.
Building a Foundation for Scale and Reliability
Before an application can be fast, it must be built on a solid foundation. In the software world, this means creating an architectural blueprint—such as the database schema and data integrity mechanisms—that can scale to handle massive amounts of data without collapsing under its own weight, while ensuring the data itself is always protected.
Moving Folders with a Large Number of Files
A common pain point for users is moving a massive project folder containing hundreds of thousands of files. You drag and drop it, and then… you wait. The system becomes sluggish as it struggles to process such a huge structural change.
The root cause of this slowdown was the previous database design, which included what our engineers called a “fat table.” This table mapped the relationship between every parent folder and every child item and stored the full text path for each one. Because every item needed a row for each of its ancestors, the table grew far faster than the data itself, and any structural change meant rewriting an enormous number of rows.
To solve this, we performed major surgery on Drive’s database, moving to a lean, id-based tree structure and eliminating the fat table entirely. Now, each file and folder only knows its own name and the ID of its direct parent. This fundamental redesign transformed what was a minutes-long, resource-intensive operation into a single, near-instantaneous one, allowing Drive to handle enormous datasets without buckling.
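As a rough illustration only (Drive’s actual schema is internal, so the table and column names below are hypothetical), the shift looks something like this in SQLite terms:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Old approach (hypothetical): a "fat" table storing one row per
# ancestor/descendant pair plus the full text path. Deep, file-heavy
# hierarchies multiply the row count, and moving a folder means
# rewriting the path and ancestry rows of every descendant.
conn.execute("""
    CREATE TABLE fat_hierarchy (
        ancestor_id   INTEGER,
        descendant_id INTEGER,
        full_path     TEXT
    )
""")

# New approach (hypothetical): a lean, id-based tree where each item
# stores only its own name and the id of its direct parent.
conn.execute("""
    CREATE TABLE items (
        id        INTEGER PRIMARY KEY,
        parent_id INTEGER REFERENCES items(id),
        name      TEXT NOT NULL
    )
""")

# Moving a folder, no matter how many files it contains, becomes a
# single-row update: only the folder's own parent_id changes.
def move_item(item_id: int, new_parent_id: int) -> None:
    conn.execute(
        "UPDATE items SET parent_id = ? WHERE id = ?",
        (new_parent_id, item_id),
    )
    conn.commit()
```

Resolving a full path then becomes a short walk up the parent chain, which stays cheap even for very deep trees.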
Balancing Data Safety with Performance
To ensure your data is safe at all times, even during power failures, every single file change or update must be recorded. The engineering challenge is that this constant recording, if not managed carefully, can cause significant performance “hiccups,” especially when many files are being updated at once. Our previous approach relied on the database engine’s default, automatic settings for its Write-Ahead Log (WAL)—a safety feature that records changes before they are made permanent. The problem with this automatic process is that during intense activity, it can trigger too frequently, causing performance stalls.
To overcome this, we implemented a custom checkpointing strategy. By taking manual control of this process, we can group thousands of small, individual writes into efficient batches. This allows us to take full advantage of the WAL’s dual benefits—its rock-solid data durability and its ability to handle many concurrent users—without being penalized by the performance overhead of the default settings. This single change was a major contributor to a 20x or greater improvement in indexing speed in our internal tests for Drive.
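We won’t detail Drive’s exact engine settings or thresholds here, but because the Write-Ahead Log is a standard SQLite feature, the general pattern is easy to sketch in SQLite terms (the table and file names below are made up):

```python
import sqlite3

conn = sqlite3.connect("drive_index.db")

# Use the Write-Ahead Log for durability and concurrency, but turn off
# the engine's automatic checkpoints so they can't fire mid-burst.
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA wal_autocheckpoint=0")

conn.execute(
    "CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT)"
)

def index_batch(rows):
    # Group thousands of small, individual writes into one transaction...
    with conn:
        conn.executemany(
            "INSERT INTO items (parent_id, name) VALUES (?, ?)", rows
        )
    # ...then fold the WAL back into the main database at a moment we
    # choose, instead of whenever the default page threshold is reached.
    conn.execute("PRAGMA wal_checkpoint(TRUNCATE)")
```

The durability guarantees stay exactly the same; only the timing of the housekeeping changes.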
Accelerating Core Application Logic
With a solid foundation in place, the next step is to optimize the application’s internal logic. This involves a continuous process of profiling our own code to identify and eliminate bottlenecks that can affect the user experience.
Ensuring a Snappy, Responsive User Interface
When a user performs an action in the Drive web portal—like selecting a group of files to delete or share—how do we ensure the interface responds instantly, without any frustrating lag?
When we analyzed these common interactions, we found that the bottleneck wasn’t always the database itself, but how the application handled its own internal data for the background tasks each action kicks off. Every time you perform an action, Drive needs to “package up” information about that task for internal processes to handle. Our original method for this “packaging” (a process called serialization) created significant overhead, which could make the UI feel sluggish.
As part of our continuous improvement cycle, we re-architected how this core data is handled, optimizing the process from the ground up. The result was a significant improvement in UI responsiveness. In one test, the performance of a common action—selecting and deleting 100 files from the web portal—improved by more than 2.2 times. This isn’t just about deleting files faster; it’s a testament to how optimizing these fundamental, internal processes leads to a user experience that feels consistently faster and more fluid across the board.
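Drive’s internal task format isn’t something we expose, so the snippet below is only a hypothetical illustration of the general principle: the less data you serialize per action, and the fewer times you serialize it, the less work stands between a click and the response.

```python
import json

# Hypothetical metadata for a selection of 100 files in the web portal.
files = [
    {"id": i, "name": f"file_{i}.txt", "path": f"/team/project/file_{i}.txt", "size": 1024}
    for i in range(100)
]

# Heavier pattern: one background task per file, each serializing a full
# metadata snapshot that the worker has to parse all over again.
per_file_tasks = [json.dumps({"action": "delete", "file": f}) for f in files]

# Leaner pattern: one task for the whole selection, carrying only the
# identifiers the worker actually needs to do its job.
batch_task = json.dumps({"action": "delete", "file_ids": [f["id"] for f in files]})
```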
Platform Integration: A Performance Multiplier
A well-built application is powerful, but its true potential is unlocked when it’s integrated perfectly with the platform it runs on. This is where Drive leverages the unique capabilities of DiskStation Manager (DSM) and the Btrfs filesystem.
Eliminating Delays After a Restart
In the past, when restarting the Drive package, the server had to perform a full “rescan” of every folder to see what might have changed while it was offline—a process that could take tens of minutes for large deployments.
Instead of making Drive do this detective work, we connected it to a dedicated change-tracking service within the DSM operating system. The key is that this service runs continuously and maintains a persistent log of all file changes, even when the Drive package itself is stopped. Now, when Drive starts, it doesn’t need to hunt for changes; it simply asks the service for a neat list of everything it missed. This deep OS integration transforms a resource-intensive marathon into a brief, efficient check-in.
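The service’s real name and interface are part of DSM’s internals, so the sketch below uses hypothetical placeholders; what matters is the shape of the interaction: remember a cursor, ask for everything recorded after it, and replay only that.

```python
from pathlib import Path
from typing import Callable, Iterable

CURSOR_FILE = Path("drive_change_cursor.txt")  # hypothetical location

def query_changes_since(cursor: int) -> Iterable[dict]:
    """Placeholder for the OS-level change-tracking service, which keeps a
    persistent, ordered log of file events even while Drive is stopped."""
    raise NotImplementedError("stands in for the real DSM service")

def catch_up_after_restart(apply_change: Callable[[dict], None]) -> None:
    # Where did we leave off before the package was stopped?
    cursor = int(CURSOR_FILE.read_text()) if CURSOR_FILE.exists() else 0

    # Replay only what happened after that point; no full rescan needed.
    for event in query_changes_since(cursor):
        apply_change(event)          # update Drive's own index
        cursor = event["seq"]        # advance to the last applied event

    CURSOR_FILE.write_text(str(cursor))
```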
Making File Versioning Efficient
Creative and technical teams often generate multiple versions of the same file as they work. How can a system provide robust versioning without consuming a crippling amount of storage space?
This was the perfect opportunity to leverage the Btrfs file system. Btrfs allows for something called reflink—the ability to create a “clone” of a file that points to the same data blocks without taking up new space. We re-architected our versioning logic to be Btrfs-aware. Now, the initial creation of a new version is near-instantaneous and consumes almost no additional storage. While file types that are completely rewritten on every save (like some encrypted or video formats) will naturally consume more space as they change, this approach provides massive storage and speed benefits for a huge range of common file types.
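Under the hood, a reflink clone is a single request to the filesystem. Here is a minimal sketch of issuing one from Python on Linux (Drive’s internal code path is its own; this only shows the underlying mechanism), with the equivalent shell command noted for comparison:

```python
import fcntl

# FICLONE ioctl (Linux): ask the filesystem to make dst share src's data
# blocks instead of copying them. Only copy-on-write filesystems such as
# Btrfs support this; the clone is effectively instant and consumes no
# extra space until one of the files is modified.
FICLONE = 0x40049409  # _IOW(0x94, 9, int) on Linux

def clone_version(src_path: str, dst_path: str) -> None:
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

# Equivalent from the shell: cp --reflink=always current.docx v42.docx
```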
A Culture of Reliability and Performance
Our core belief is that a trustworthy product must be, above all else, reliable. The journey from a bloated database to a responsive interface, from a slow startup to space-efficient versions, is a testament to this commitment. By re-architecting our database for scale, achieving 20x or greater indexing speedups in our internal tests, streamlining the core logic behind the user interface, and integrating deeply with DSM and Btrfs for faster startups and space-efficient versioning, we have engineered Synology Drive for the demands of the real world. It’s a culture that demands we not only build a solid foundation but also continuously tune our application and master its interaction with the platform it lives on. We handle the complexity so you can enjoy a solution that is fast, efficient, and reliable.