3 Data Center Trends You Need to Know for the Zettabyte Age

Data continues to grow at an incredible rate toward zettabyte scale, split between the core, the edge, and endpoints. In this Zettabyte Age, data centers are the backbone of many of the applications we use daily. Every time a person uploads a photo or video to social media, makes an online purchase, streams a TV show, or queries a search engine, data centers support those actions and transactions. At the same time, enterprise workloads such as cloud-based computing, ERP, data lakes, data warehouses, and analytics also rely on data centers to execute these business applications.

To keep up with these demands, data centers must commit to being open at all levels to deliver the desired user experience. In this blog post, we explain why open composability, open standards, and open-source are the data center trends critical to unlocking the value of data at scale.

1. Open Standards Enable Competitive TCO at Zettabyte Scale

Building upon open industry standards is the only feasible way to achieve the scale needed for the Zettabyte Age. An open architecture drives total cost of ownership (TCO) savings with purpose-built storage solutions. We talk a lot about dollars-per-terabyte ($/TB) as a key metric for the operating cost of a modern cloud or hyperscale data center, but the actual TCO of running a data center goes much deeper. Beyond equipment acquisition costs, there are additional expenses for installation, monitoring, and maintenance – not to mention power and cooling.

Choosing the solution with the lowest TCO places tremendous importance on data center server and storage density. Denser storage built from high-capacity enterprise HDDs can hold more data, and hence generate more revenue, within a constrained data center footprint. And while one of the most pivotal data center trends is that SSD capacity is growing at a faster rate than HDD capacity, over half of the exabytes shipped in the coming years are expected to be on enterprise hard drives [1]. Simply put, no other media can match HDDs in scaling while maintaining competitive TCO at zettabyte scale.
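To make the $/TB framing concrete, here is a minimal sketch of a lifetime TCO-per-terabyte comparison. Every number in it (drive costs, wattage, electricity rate, slot cost) is a hypothetical placeholder rather than real pricing; the point is only how density amortizes the fixed costs of a footprint-constrained data center.

```python
# Toy TCO-per-TB model comparing two hypothetical drive configurations.
# All prices, wattages, and rates below are illustrative placeholders.

def tco_per_tb(capacity_tb, drive_cost, watts, years=5,
               kwh_rate=0.10, cooling_overhead=0.5, slot_cost=100.0):
    """$/TB over the service life: acquisition + power + cooling + slot."""
    hours = years * 365 * 24
    power_cost = watts / 1000 * hours * kwh_rate
    cooling_cost = power_cost * cooling_overhead  # cooling as a fraction of power
    total = drive_cost + power_cost + cooling_cost + slot_cost
    return total / capacity_tb

# Denser drives amortize the fixed slot and energy costs over more terabytes.
print(f"14TB HDD: ${tco_per_tb(14, drive_cost=400, watts=7):.2f}/TB")
print(f" 8TB HDD: ${tco_per_tb(8,  drive_cost=250, watts=7):.2f}/TB")
```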


When it comes to scaling HDDs, shingled magnetic recording (SMR) is a technology that enables higher capacities by overlapping adjacent tracks like roof shingles to achieve greater areal density. SMR adds enormous value by increasing track density (tracks per inch, or TPI): narrowing the space between tracks means significantly more capacity in the same physical footprint. For context, 14TB drives are projected to be the capacity point with the highest volume shipped to cloud data centers through 2020.

Most of the data volume processed by data centers today consists of multimedia files (photos and videos) and sensor data. Because these types of data are generated sequentially, they benefit especially from being stored on an SMR drive. Similar to SMR for HDDs, Zoned Namespaces (ZNS) is a pending open technology standard for SSDs that enables system-level intelligence about data placement. ZNS offloads complex firmware tasks to host software, which leads to leaner drives with significant reductions of up to 8x in DRAM and up to 10x in overprovisioning. The host can manage write amplification and better utilize the endurance cycles of the NAND media.
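Both SMR and ZNS expose the same basic contract to the host: capacity is divided into zones that must be written sequentially and reclaimed as a whole. Below is a minimal toy model of that contract in Python; the Zone class and its numbers are illustrative, not a real kernel or drive API.

```python
# Toy model of a zoned storage region (an SMR HDD zone or a ZNS SSD zone).
# Hypothetical sketch: real zones are managed by the kernel and device, but
# the core constraint is the same -- writes must land at the write pointer.

class Zone:
    def __init__(self, zone_id, capacity_blocks):
        self.zone_id = zone_id
        self.capacity = capacity_blocks
        self.write_pointer = 0          # next writable block offset

    def write(self, num_blocks):
        """Sequential-only writes: append at the write pointer or fail."""
        if self.write_pointer + num_blocks > self.capacity:
            raise IOError(f"zone {self.zone_id}: not enough space")
        start = self.write_pointer
        self.write_pointer += num_blocks
        return start                    # offset where the data landed

    def reset(self):
        """Zones are rewound as a whole, never overwritten block-by-block."""
        self.write_pointer = 0

zone = Zone(zone_id=0, capacity_blocks=65536)
zone.write(128)   # ok: sequential append at offset 0
zone.write(128)   # ok: appended at offset 128
# To reclaim space the host resets the whole zone. Moving this bookkeeping
# to the host is what lets ZNS drives shed DRAM and overprovisioning: the
# device no longer has to hide garbage collection behind its firmware.
zone.reset()
```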

2. Open Composability for Future Data Infrastructure

Data center architectures are increasingly moving from proprietary to composable and disaggregated models. There are three main reasons for this shift away from traditional data infrastructure. First, standard server configurations can no longer keep up with changing storage requirements and applications. Second, the number of SKUs (server configurations) has risen dramatically as more hardware accelerators are used to keep up with computing demands. Third, data-intensive AI, ML, and HPC applications require moving enormous datasets quickly; in these cases, it becomes more efficient to move compute (CPUs, GPUs) to the data rather than vice versa.

How are companies planning to implement open composability? The answer is NVMe™ over Fabrics, also known as NVMe-oF. Standardized in 2016, NVMe-oF is an open standard that enables remote access to, and sharing of, NVMe devices across various network fabrics. Ethernet is the generally accepted fabric of choice. NVMe-oF is growing in adoption, particularly in solid-state array shipments deployed to support primary storage workloads.

In current data infrastructures, the underlying barrier to scaling is the way compute and storage resources are attached to the network. Within a traditional server, components communicate over PCIe or a similar interface: processors, hardware accelerators, and memory are all direct-attached. In these systems, hardware and software are typically preset for specific applications and workloads. Such configurations are usually simpler to procure, deploy, and operate in a data center.

But there's a tradeoff. With only a few fixed system configurations, resources go underutilized whenever a workload doesn't match the preset mix, which creates inefficiencies. Expanding the list of configurations could increase resource utilization, but it would also make workloads less predictable and more challenging to manage.

What would happen if you replaced the PCIe interface with high-performance Ethernet? By using NVMe-oF, data center architectures become more responsive to changing data environments. Pools of low-latency NVMe devices are shared over Ethernet among multiple hosts. Applications can compose exactly the compute and storage resources they need in real time. More impressively, this sharing applies to all environments – virtual, containers, bare metal – and applications. Composable systems offer higher resource utilization and efficiency, as well as reduced TCO, in both CAPEX and OPEX.
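As a concrete illustration, on Linux a remote NVMe-oF namespace can be attached with the standard nvme-cli tool, after which it appears as a local block device. The sketch below drives nvme-cli from Python; the target address, port, and subsystem NQN are hypothetical placeholders for whatever your fabric actually exports.

```python
# Hedged sketch: attaching a remote NVMe namespace with nvme-cli from Python.
# Assumes Linux with the nvme-tcp module and the nvme-cli package installed.
import subprocess

TRADDR = "192.0.2.10"                    # example target IP (hypothetical)
TRSVCID = "4420"                         # conventional NVMe/TCP port
NQN = "nqn.2016-06.io.example:pool0"     # hypothetical subsystem NQN

# Discover the subsystems exported by the target.
subprocess.run(
    ["nvme", "discover", "-t", "tcp", "-a", TRADDR, "-s", TRSVCID],
    check=True,
)

# Connect: the remote namespace then shows up locally (e.g. /dev/nvme1n1),
# indistinguishable to applications from a direct-attached PCIe drive.
subprocess.run(
    ["nvme", "connect", "-t", "tcp", "-a", TRADDR, "-s", TRSVCID, "-n", NQN],
    check=True,
)
```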

3. Enterprises Commit to Open Source Innovation

The Zettabyte Age brings with it new challenges that can only be addressed through innovation. But, innovation at this pace and scale in an expanding always-connected world doesn’t happen in a silo. That’s why more companies are committed to developing open-source ecosystems and partnerships to drive industry-wide innovation.

Western Digital is no exception. We're committed to the open source ecosystem and have been actively making significant contributions to both open-source software and hardware. We support industry groups such as the RISC-V Foundation and Chips Alliance, as well as the Linux® community, in their efforts on open ISAs, processors, fabrics, kernel support, and many other important applications.

We are also proud to be a founding partner of OpenTitan – the first open-source silicon project building a transparent, high-quality reference design for silicon root of trust (RoT) chips.

A Root of Trust (RoT) is a set of functions in a computing module that the computer's operating system (OS) always trusts. The RoT serves as a separate compute engine that controls the cryptographic processor of the trusted computing platform in which it is embedded.

One of our main goals for this group is to enable open, inspectable, and secure data infrastructures. We also want to show that an open data infrastructure is one of the best and most efficient ways to build security.

In recent years, the open-source ecosystem has continued to see incredible growth. The RISC-V Foundation has well over 150 members from academia and industry. At its annual summit, two new RISC-V-based SweRV Cores were announced, along with the first hardware reference design for OmniXtend, a protocol for cache-coherent memory over Ethernet. Chips Alliance is handling the architecture's future development, management, and support. The Linux kernel supports zoned block devices – in particular, SMR drives. Momentum is strong, but it must be sustained to handle data at zettabyte scale!
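That kernel support is easy to verify in practice: the zoned model of any block device is exposed through sysfs. A minimal sketch, assuming a Linux system; the device name "sda" is just an example.

```python
# Check the Linux kernel's zoned model for a block device via sysfs.
from pathlib import Path

def zoned_model(device: str) -> str:
    """Return the kernel's zoned model for a block device:
    'none', 'host-aware', or 'host-managed' (host-managed SMR / ZNS)."""
    path = Path(f"/sys/block/{device}/queue/zoned")
    return path.read_text().strip() if path.exists() else "unknown"

print(zoned_model("sda"))   # e.g. 'host-managed' for a host-managed SMR HDD
```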

Recap of Top 3 Data Center Trends

In summary, adopting an open framework architecture makes sense for cloud and hyperscale data centers. It's a pragmatic move that will drive down TCO and increase output. Open composability through NVMe-oF enables composable data centers, where pools of compute and storage are shared to maximize resource utilization and efficiency. It's also a profitable move: open standards such as SMR and ZNS enable denser storage that preserves TCO across enterprise HDDs and flash. As the Zettabyte Age approaches, enterprises should keep these data center trends in mind and design around open composability, ecosystems, and standards.


Sources:

[1] HDD Remains Dominant Storage Technology. https://www.horizontechnology.com/news/hdd-remains-dominant-storage-technology-1219/


Forward-Looking Statements

Certain blog and other posts on this website may contain forward-looking statements, including statements relating to expectations for our product portfolio, the market for our products, product development efforts, and the capacities, capabilities and applications of our products. These forward-looking statements are subject to risks and uncertainties that could cause actual results to differ materially from those expressed in the forward-looking statements, including development challenges or delays, supply chain and logistics issues, changes in markets, demand, global economic conditions and other risks and uncertainties listed in Western Digital Corporation’s most recent quarterly and annual reports filed with the Securities and Exchange Commission, to which your attention is directed. Readers are cautioned not to place undue reliance on these forward-looking statements and we undertake no obligation to update these forward-looking statements to reflect subsequent events or circumstances.

