When any new technology appears, it can take a while for practices to catch up. For instance, videoconferencing was certainly mature and widely available prior to 2020, but it took a pandemic to truly accelerate its adoption. Today most organizations have finally found a permanent place for it in their everyday operations.
In the data center industry, groundbreaking applications like artificial intelligence (AI) and machine learning (ML) are radically transforming the landscape. Technology has evolved rapidly to meet their rigorous demands.
However, much of that technology has been developed and kept exclusively for a handful of global cloud service providers: the “hyperscale” players. Legacy OEM models have prevented that hyperscale technology from proliferating to the broader market. As a result, increasing data center capacity and deploying high-performance rack-scale solutions in an efficient, scalable, sustainable way can seem impossible.
It’s clear that these legacy approaches to IT infrastructure are preventing many operators from getting the most out of their data center investment. So how can they be remedied or replaced? Understanding the shortcomings of these legacy approaches is key to overcoming them.
Shortcoming #1: Time to value takes forever
Technology is changing with lightning speed, but the same can’t always be said for data center facilities. When operators decide to buy server hardware for an all-new deployment or an upgrade to existing infrastructure, they have to plan months in advance. They then buy server hardware in the moment and wait to see whether their predictions about how customer and operational needs would evolve come true.
And even if they buy the very best server hardware available at the time of purchase, the actual deployment can be subject to common supply chain issues and delays. Furthermore, OEM-sourced data center racks don’t plan and install themselves. For most rack-level solutions, the facility has to set aside extra time and money to spin up an army of IT professionals to guide integration and installation.
Shortcoming #2: Overbuilt OEM systems with vendor lock-in
Original equipment manufacturers, or OEMs, are in no hurry to upset the legacy approach. This is because, when there are delays and scarcity, equipment purchasers often spend more on top-of-the-line systems in order to future-proof their data center investment.
Unfortunately, these costly OEM systems aren’t necessarily designed for their needs. Instead of optimized solutions that use technology purpose-built for and tested under the most demanding workloads, data center facilities end up with proprietary, overbuilt hardware.
Worse still, this hardware often comes with strings attached, like expensive, long-term OEM maintenance and support agreements. Operators find themselves locked into a cycle of generic solutions, and paying for that dubious privilege.
Shortcoming #3: Deep, enduring inequalities
Moving to open-architecture models that are more flexible, responsive and unhampered by vendor lock-in seems like the obvious path forward. But there are still deep inequities to overcome—including accessibility and pricing barriers. It will take some time to make the shift to models that are designed to bring the highest-performance, data-center-optimized technology to non-hyperscale operators in a cost-effective way.
The financial argument in favor of sustainability, a foundational element of these new models, is a key consideration for those very reasons. Rack-scale solutions with high energy efficiency and compute density are good news for the bottom line, creating extra spending power that can be channeled right back into even more powerful and efficient rack-scale solutions.
That said, although there’s a strong desire across the industry to move toward sustainable data centers, legacy approaches haven’t helped convert that into action. If anything, they’ve simply reinforced the idea that only the largest, best-resourced players have the luxury of prioritizing business economics and the environment at the same time. This has served to perpetuate the myth that sustainability has to take a backseat in favor of securing new server hardware and scaling data center capacity to meet skyrocketing demand.
Time for a change: New data center models
Legacy mindsets have proven hard to shake, yet the need and the rationale for new data center models are more compelling than ever. We’ve focused here on the shortcomings of the status quo; the next blog post will look at the corresponding strengths of the new data center model and why it’s more beneficial—and far easier—to integrate and migrate than you might think.
In the meantime, check out our white paper, “3 Reasons to Adopt New Data Center Models,” where we explore some of these same important ideas in more detail.