Beyond virtualization: improving data center energy efficiency through adept management
Ordinary virtualization is no longer enough
When it comes to improving data center energy efficiency, ordinary virtualization has become table stakes. It’s time to reach higher.
If you perform a Google search for “energy efficiency,” the results page notes that the query returned approximately 514,000,000 hits in 0.45 seconds (your mileage may vary slightly). And while a search engine’s ability to assemble a list of over half a billion links in under half a second may seem like a small miracle, it’s a process enabled by the same laws of physics that govern nearly every aspect of our daily lives.
Consequently, as is the case with any other physical task, working with data—storing it, moving it, processing it, analyzing it—inevitably involves the consumption of energy. In fact, the energy required to execute a single Google search is roughly equivalent to the energy required to illuminate a 60-watt light bulb for 17 seconds. Considering Google alone processes around 3.5 billion searches per day, it should come as no surprise that data centers currently consume between two and three percent of the world’s electricity.
Needless to say, maintaining this level of energy usage isn’t cheap, and it’s not unusual for a moderately sized data center’s monthly electric bill to exceed $1 million. With Cisco forecasting that the volume of global data center traffic will more than triple between 2016 and 2021, these bills are all but certain to creep even higher in the years to come.
As such, to keep their operational costs under control, it’s imperative for data center administrators to place energy efficiency at the top of their agendas. In many circumstances, this will require supplementing tried-and-true energy-saving measures like virtualization with newer, more granular techniques.
Virtualization remains the foundation of data center energy efficiency
To be clear, server virtualization remains the gold standard of data center energy efficiency. Prior to the mainstreaming of virtualization, companies typically ran one operating system instance and one application per server, wasting a tremendous amount of capacity in the process. According to a study published in the Journal of Power Technologies in 2014, at the time, nearly a third of servers in the average non-virtualized data center were “zombies,” meaning they were technically active but were utilizing less than 10 percent of their capacity.
Fortunately, in the last half-decade, the IT community has substantially mitigated this underutilization by leveraging hypervisors to orchestrate the creation and side-by-side operation of virtual machines (VMs). Since multiple VMs can draw on a physical server’s underlying resources (CPU, memory, and storage) simultaneously, an effectively executed VM strategy dramatically improves a data center’s overall utilization rate.
When utilization rates go up, companies are able to run the same number of workloads on a smaller number of servers, generating savings through reductions to both overall energy consumption and data center square footage requirements. These potential savings are a large part of why enterprise VM saturation is speeding toward the 90 percent threshold and the average number of virtual workloads per physical server is set to jump from 8.8 in 2016 to 13.2 in 2021.
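For a rough sense of what that jump in workload density means in electricity terms, consider the back-of-the-envelope Python sketch below. The workload count, per-server power draw, and electricity price are purely illustrative assumptions; only the 8.8 and 13.2 VMs-per-server figures come from the forecast above.

```python
# Back-of-the-envelope estimate of consolidation savings.
# The workload count, per-server power draw, and electricity rate are
# illustrative assumptions; the 8.8 and 13.2 VM-per-server densities
# come from the forecast cited in the article.
import math

WORKLOADS = 1000            # total virtual workloads to host (assumed)
WATTS_PER_SERVER = 400      # average draw per physical server (assumed)
PRICE_PER_KWH = 0.10        # electricity price in USD (assumed)
HOURS_PER_YEAR = 24 * 365

def annual_energy_cost(vms_per_server: float) -> float:
    """Estimate the yearly electricity cost of hosting WORKLOADS at a given density."""
    servers = math.ceil(WORKLOADS / vms_per_server)
    kwh = servers * WATTS_PER_SERVER / 1000 * HOURS_PER_YEAR
    return kwh * PRICE_PER_KWH

cost_2016 = annual_energy_cost(8.8)    # ~114 servers needed
cost_2021 = annual_energy_cost(13.2)   # ~76 servers needed
print(f"At 8.8 VMs/server:  ${cost_2016:,.0f}/year")
print(f"At 13.2 VMs/server: ${cost_2021:,.0f}/year")
print(f"Savings: {1 - cost_2021 / cost_2016:.0%}")
```

Under these assumed numbers, the density increase alone trims the server count by roughly a third, and the energy bill falls with it.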
The value of attentive data management
To push their facilities’ energy efficiency to the next level, data center administrators must make a concerted effort not only to streamline their server portfolios, but to ensure the workloads they run on their servers are both organized and deployed as efficiently as possible.
For instance, real-time compression tools are becoming increasingly popular as a way to optimize a company’s storage operations at every juncture of the data lifecycle. These tools sit in front of a company’s network-attached storage devices and can reduce the volume of data that’s written to disk by up to a factor of ten—all while still providing random (or “direct”) read-write access to the compressed files. And while real-time compression isn’t intended to be used for comprehensive backups, it’s one of the few technologies that is fully complementary to another key tool in the data management toolbox: deduplication.
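To make the idea of random read access to compressed data a little more concrete, the minimal Python sketch below compresses fixed-size blocks independently and keeps them indexed, so any block can be read back without decompressing the rest. It illustrates the principle only; commercial real-time compression appliances are far more sophisticated, and the block size and class names here are invented for the example.

```python
# Minimal sketch of block-level compression with random read access,
# illustrating the idea behind real-time compression appliances.
# Uses only the standard library; all names and sizes are illustrative.
import zlib

BLOCK_SIZE = 64 * 1024  # 64 KiB logical blocks (illustrative choice)

class CompressedStore:
    def __init__(self):
        self._blocks = []  # each entry is an independently compressed block

    def write(self, data: bytes) -> None:
        """Split the data into fixed-size blocks and compress each one separately."""
        for i in range(0, len(data), BLOCK_SIZE):
            self._blocks.append(zlib.compress(data[i:i + BLOCK_SIZE]))

    def read_block(self, index: int) -> bytes:
        """Random access: decompress only the requested block."""
        return zlib.decompress(self._blocks[index])

    def compressed_size(self) -> int:
        return sum(len(b) for b in self._blocks)

# Example: highly repetitive data shrinks dramatically yet stays randomly readable.
store = CompressedStore()
store.write(b"energy efficiency " * 200_000)  # ~3.6 MB of raw data
print(store.compressed_size())                # far smaller once compressed
print(store.read_block(5)[:18])               # bytes from the middle of the file,
                                              # read without touching other blocks
```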
Deduplication tools isolate identical byte patterns and replace redundant strings of data with pointers that lead to a single copy of the information. In a corporate world defined by digital communication and collaboration, the value of such an anti-redundancy measure cannot be overstated.
Say, for example, a company’s leadership team sends a graphics-laden 1 MB PowerPoint presentation to its 100-person sales staff in advance of a conference. If each salesperson routinely backs up their email, the company’s email server will end up storing 100 discrete instances of the file. After performing data deduplication, however, the company will only need to store a single instance of the file on its server—plus 99 lightweight pointers—reducing its storage requirements for that file by roughly a factor of 100.
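The PowerPoint scenario maps neatly onto a content-addressed store, and the short Python sketch below illustrates the principle: identical payloads hash to the same digest, so only one physical copy is kept while every backup path becomes a lightweight pointer. The class and naming are invented for illustration, and production deduplication systems typically operate on sub-file chunks rather than whole files.

```python
# Minimal content-addressed deduplication sketch mirroring the example above:
# identical files hash to the same digest, so the store keeps one copy and
# 99 lightweight pointers. Names and structure are illustrative only.
import hashlib

class DedupStore:
    def __init__(self):
        self._chunks = {}    # digest -> single stored copy of the bytes
        self._pointers = {}  # logical name -> digest (the "pointer")

    def put(self, name: str, data: bytes) -> None:
        digest = hashlib.sha256(data).hexdigest()
        self._chunks.setdefault(digest, data)  # store the bytes only once
        self._pointers[name] = digest

    def get(self, name: str) -> bytes:
        return self._chunks[self._pointers[name]]

    def physical_bytes(self) -> int:
        return sum(len(d) for d in self._chunks.values())

    def logical_count(self) -> int:
        return len(self._pointers)

# 100 salespeople back up the same 1 MB presentation.
deck = bytes(1_000_000)  # stand-in for the 1 MB PowerPoint file
store = DedupStore()
for i in range(100):
    store.put(f"backup/sales-{i:03d}/conference.pptx", deck)

print(store.physical_bytes())  # 1,000,000 bytes stored, not 100,000,000
print(store.logical_count())   # 100 logical entries still resolvable
```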
These reductions add up remarkably quickly, as it’s not unheard of for duplicates to comprise as much as 80 percent of a company’s data. Indeed, because data siloing remains such an intractable problem in the corporate world, many companies maintain a separate full copy of their master files for each of their organizational teams—one for development, one for recovery, one for staging, one for training, etc.
Optimizing the virtualization process
Of course, techniques like real-time data compression and data deduplication are not replacements for virtualization, but mechanisms meant to help data center administrators minimize the number of VMs they create. That said, the practice of virtualization itself is relatively young and by no means set in stone, and a growing cohort of researchers has started experimenting with a variety of cutting-edge strategies for more efficient VM placement.
As part of a study published in Multiagent and Grid Systems, researchers deployed a hierarchical cluster-based modified firefly algorithm to orchestrate VM placement. By using this algorithm in lieu of a traditional honey bee algorithm, the researchers improved VM placement efficiency to such an extent that their server environment’s energy consumption fell 12 percent. Similarly, in a separate study, researchers combined elements of an ant colony system algorithm and order exchange and migration local search techniques to optimize VM placement and achieve a six percent reduction in energy consumption.
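Those metaheuristics are too involved to reproduce here, but the underlying problem they attack is essentially bin packing: squeeze the VMs onto as few hosts as possible so the remaining hosts can be powered down. The Python sketch below uses a plain first-fit-decreasing heuristic (not the firefly or ant colony approaches from the studies) with invented host capacities, purely to illustrate the placement problem.

```python
# Not the firefly or ant colony algorithms from the studies above -- just a
# simple first-fit-decreasing heuristic illustrating the placement problem:
# pack VMs onto as few hosts as possible so idle hosts can be powered down.
from typing import List, Tuple

HOST_CPU, HOST_MEM = 32, 128  # capacity per host in vCPUs / GiB (assumed)

def place_vms(vms: List[Tuple[int, int]]) -> List[List[Tuple[int, int]]]:
    """vms: list of (cpu, mem) demands. Returns the VMs assigned to each host used."""
    hosts = []  # each entry is the list of VMs placed on that host
    free = []   # remaining (cpu, mem) capacity per host
    # Placing the largest VMs first tends to waste less capacity.
    for cpu, mem in sorted(vms, key=lambda v: (v[0], v[1]), reverse=True):
        for i, (fc, fm) in enumerate(free):
            if cpu <= fc and mem <= fm:          # first host with room wins
                hosts[i].append((cpu, mem))
                free[i] = (fc - cpu, fm - mem)
                break
        else:                                    # no existing host fits
            hosts.append([(cpu, mem)])           # power on a new one
            free.append((HOST_CPU - cpu, HOST_MEM - mem))
    return hosts

demands = [(8, 32), (4, 16), (16, 64), (2, 8), (8, 16), (4, 32), (2, 4)]
placement = place_vms(demands)
print(f"{len(demands)} VMs placed on {len(placement)} hosts")
```

The research algorithms cited above go well beyond this kind of greedy baseline, which is precisely where their additional single-digit and double-digit energy savings come from.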
Ultimately, companies looking to trim the fat from their electric bills need to explore all of these options—advanced algorithmic VM placement, data deduplication, and real-time data compression—as they pursue the ideal virtualization strategy for their unique needs. Striking the right strategic balance between performance and energy efficiency represents a considerable challenge for many data center administrators, but doing so is an absolutely essential component of modern business.
Fortunately, anytime these administrators need assistance illuminating an opaque technical question, they can always just Google it.
Albert A. Ahdoot, Director of Business Development at Colocation America
Albert A. Ahdoot is the Director of Business Development at Colocation America. He leads the company’s sales efforts by gathering intelligence, crafting sales policies, and implementing new business strategies. As an IT entrepreneur, he knows that the IT foundation of every business must be secure, scalable, and cost-effective. Looking back, he wishes he’d had someone to walk him through the pitfalls of each contract, explain the exact amount of power and space needed, and, heck, understand that clients grow and change, and so do their data center space and connectivity needs.