The story of enterprise storage infrastructure of late has been one of buyer concentration. Big cloud companies are driving new economies of scale by managing thousands of servers per administrator (versus the traditional tens) and launching thousands of containerized workloads per second (versus the traditional tens). These companies are operating infrastructure at unprecedentedly low Power Usage Effectiveness (PUE) by innovating on everything from hot and cold aisles to ambient-air cooling and low-socket-count servers for scale-out workloads.
Why should we celebrate the cloud? Think about it. These new efficiencies have enabled new consumer-friendly business models at Amazon, Apple, Box, Dropbox, Facebook, Google, Microsoft, and elsewhere. There was a collective sigh of relief when our enterprise moved to cloud-based email services and our inbox quotas increased tenfold (yes, 10x!) in one day. As a consumer, I find that this “never say no” culture of cloud services lets me focus on work and fun rather than the drudgery of manually managing what goes where. We feel empowered to take HD burst shots and video on our mobiles and share them at whim with family and friends around the world. It has driven mobile phone flash memory capacities to 256GB and beyond. In fact, I am typing this on my 1TB Surface Pro 4!
Everyone Wants the Cloud
But back to the cloud: new server, storage, and networking technologies that deliver greater performance, scale, or efficiency, and especially those that reduce the cost of acquisition or operation, are being gobbled up by the large and rapidly growing major cloud players in the United States and China.
Simultaneously, enterprises new and old are grappling with a digitally transformed economy in which cloud-service-wrapped assets are rapidly overtaking bare physical assets, as evidenced by the rise of companies like Uber and Airbnb. Data is being generated, gathered, and analyzed by and for these services at incredible rates. We see Web 2.0 customers measuring their stashes in exabytes, and mobile industry customers collectively shipping tens of exabytes per year in smartphones and 2-in-1 laptops. With the Internet of Things no longer just talk, such companies are starting to plan for a future data trickle (deluge?) of a petabyte a day. A company like Facebook already generates 4 new petabytes of data every day.
To say that the servers in internet data centers are busy would be a gross understatement. It was said that in the age of client-server computing, every 1,000 clients needed one server. In the age of smartphones, the number of subscribers is approaching three billion, each actively using multiple apps, creating a need for one server for every 400 installed apps. That server count approaches millions at the largest cloud providers. These mobile users are now uploading hundreds of hours of video to YouTube every minute, exceeding one petabyte every day, and they are downloading and streaming video a hundred times faster: many tens of thousands of hours per minute. For these companies, the efficiency of storage is critical to managing data at such scale.
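Those upload figures hold up to a quick back-of-envelope check. The sketch below assumes roughly 400 hours of video uploaded per minute and about 2 GB per hour of HD video; both inputs are illustrative assumptions, not figures reported here.

```python
# Back-of-envelope estimate of daily video upload volume.
# Both inputs are illustrative assumptions, not figures from this article.
HOURS_UPLOADED_PER_MINUTE = 400   # "hundreds of hours ... every minute"
GB_PER_HOUR_OF_VIDEO = 2          # rough size of an hour of HD video
MINUTES_PER_DAY = 24 * 60

daily_gb = HOURS_UPLOADED_PER_MINUTE * GB_PER_HOUR_OF_VIDEO * MINUTES_PER_DAY
daily_pb = daily_gb / 1_000_000   # GB -> PB (decimal units)
print(f"~{daily_pb:.2f} PB uploaded per day")
```

Even with these conservative inputs, the total lands just above a petabyte a day, consistent with the claim above.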
YouTube is not alone. One of our customers, a broadcasting corporation, found that choosing the right technology to store their digital video, in their case InfiniFlash™ IF100 from SanDisk®, yielded a 24x savings in data center space, in addition to delivering electrical savings, better virtual machine performance, and improved overall performance. At a recent Gartner Data Center Conference, they also reported seeing an estimated 20x fewer catastrophic failures with flash.
So what does this mean for organizations? Such efficiencies are getting harder and harder to achieve on premises, and external clouds and services are becoming more and more compelling. GE dropped a bombshell by folding 9,000 of its applications from on-premises infrastructure into the cloud. And Spotify embraced Google’s cloud platform.
The Cloud is Big (Getting Bigger), and Different
The cloud is accounting for more and more of server growth, reaching 50% by 2017, with the largest cloud players estimated to have more than a million servers. As such, I see the market dividing into three segments. The first is a small percentage comprised of hyperscale companies running very focused applications on tens of thousands of servers at just a few sites; this segment will see the biggest growth and reap the highest efficiencies. The second is the enterprise market, where companies will still build their own private clouds and on-premises deployments, though these will make the most sense for organizations running thousands of servers. The third is small sites, the largest segment of the overall market, running just tens of servers at single locations. I see many in this segment moving off-premises and contributing to the cloud’s growth.
What’s in our zettabytes of data is no less important to what’s shaping our enterprise storage outlook and the future of the data center. If we look at a ‘typical’ breakdown of a petabyte of data, the biggest growth in user data (40+%!) is in the video and photo tier, with machine learning and analytics on historical data not far behind. Eighty percent of data is currently “dark,” and a massive opportunity will arise when that data becomes available to computing and analytics. That opportunity is just around the corner.
Enterprise Storage Outlook: A Storage Hierarchy Ahead
The shape of the memory and storage hierarchy in the data center is far from settled. The industry is anticipating the arrival of storage-class memory in the memory hierarchy, and the implications are prodigious: a new tier will enter the landscape, offering a cheap and plentiful alternative to DRAM and fast media for persistent storage.
But with that will also come challenges. Bandwidth density will require fresh ideas and new thinking at the system level. The challenge also falls on our counterparts building CPUs and DRAM, as bandwidth limitations might bring our data path to a halt by 2020. The current commodity, or x86, culture will continue to reign for a long time yet, as will resource disaggregation. But as applications are rearchitected to take advantage of more cores, require far more memory, and impose new requirements on data access, we will also see completely new data-centric architectures emerge, with pervasive data accessed by multiple applications. The focus will move from building hardware architecture and software code to the data and information itself, at unprecedented scale, delivering insight like never before.