‘Future servers could have a shared DPU’: Could the next decade see a rise in socket heterogeneity?
Dual-socket servers may no longer be the 'be all and end all', with single-socket servers more viable than we thought
Most servers in today's enterprise computing landscape adopt a dual-socket configuration rather than a single-socket one – meaning they house two processors that work in tandem, rather than just one. While it might sound foolproof in theory – doubling performance without altering the server footprint – today's cutting-edge technology means it may not be necessary, and it could actually be much cheaper and more efficient to switch to a single-socket server.
This is the subject of a blog post written by Robert Hormuth, AMD's corporate vice president of architecture and strategy in the data center solutions group. In it, Hormuth argues that single-socket servers are sometimes a better choice than a dual-socket machine. Why? Put simply, the best processors today are more than capable of running highly intensive workloads – and, depending on your particular workloads, you get an efficiency boost by not having to power twice as many processors, without compromising on performance.
Rather than replacing dual-socket servers altogether, however, Hormuth envisages an IT landscape in which infrastructure diversifies to incorporate single-socket servers for particular use cases, alongside the dual-socket norm that exists today. The latter remains the right choice for enterprises that need maximum performance and are willing to pay the costs of accommodating that level of sheer power. Single-socket servers, however, may gradually find a home in organizations that need them for standard business applications, including networking and security, database workloads, and other back-office functions.
Speaking to TechRadar Pro, Hormuth argues that single-socket servers may not have been appealing in the past, but that modern-day technology has made them a genuine contender. He also spells out the role data processing units (DPUs) can play in the future of IT infrastructure, especially when it comes to how hyperscalers will evolve the very nature of the servers they use, and how those servers are configured to work with one another.
You were compelled to write a blog on "Myths & Urban Legends About Dual-Socket Servers". What was the rationale behind writing it in the first place?
Two rationales inspired this piece. Firstly, people still believe redundancy is a reason to use a dual-socket solution, because that has been the norm for many years. However, you could end up increasing your overall total cost of ownership (TCO) when that redundancy isn't needed. The second reason is efficiency. Historically, the market was steered toward dual-socket solutions due to a lack of compelling single-socket offerings. Today, however, with the AMD-based solutions available, you can achieve the performance needed with a one-socket solution whilst improving overall efficiency and acquisition cost compared to larger, dual-processor systems.
In your blog, you mentioned that dual-socket systems will always exist, even if their role is diminished. But whatever happened to quad-socket systems?
When you turn back the clock on servers, you started with large-scale mainframes, minicomputers, and so on. Over the industry's evolution, we've moved from eight sockets to four, to two, and now to single-socket solutions, enabled by high-density core counts, shrinking process nodes, and next-gen memory and I/O technologies. In time, I think you'll see that evolution continue as single-socket servers become the preferred customer solution for a majority of the market. The competition remains focused on dual-socket solutions, whilst we are delivering highly performant, efficient, right-sized solutions today.
There are only so many cores you can stuff into one socket. Instead of having multiple sockets, are we going to see bigger sockets à la Cerebras to cater for thousands of cores?
Our approach to socket size is a customer value-driven decision. AMD is a pioneer in chiplet technology and we are fully committed to this engineering approach to increasing core density, energy efficiency, and, of course, value for our customers. We’ll continue to execute this strategy as long as it is economically feasible and physically possible. Will we eventually have bigger sockets? Potentially, however, it needs to make economic sense for our customers and provide them value.
Building on that idea, is there an argument to be made about the relevance of using socket count as a primary determinant of performance within the data center?
As I mentioned before, not all sockets are created equal, and not every single-socket solution is truly competitive or compelling in today's market. AMD's chiplet architecture and packaging technologies have enabled us to deliver the density needed to achieve impressive performance, efficiency, and workload consolidation, no matter which solution our customers pick. Performance is highly dependent on what CPU goes into the socket, and what workload is intended for that solution.
Beyond this, and with the emergence of new technology (e.g. CXL), how do you (rather than AMD) see the physical embodiment of compute evolving in the realm of hyperscalers (e.g. no more sockets? true 3D-layered CPUs? socket towers?)
The socket count progression will continue to evolve. I think there is a world where all hyperscalers will leverage servers with DPUs for operator-tenant separation. That will embrace multi-host DPUs, such that the future server may look like a dual- or single-socket server with one shared DPU. Overall, I think AMD is ideally positioned to be the one-stop shop for compute, no matter what the future of the socket becomes.
Keumars Afifi-Sabet, Channel Editor (Technology), Live Science