Remember when a terabyte of data sounded like something from science fiction? It was only a few years ago, yet now you can buy a terabyte disk in any high street computer store.
Well, last year global annual data center traffic hit 2.6 Zettabytes (ZB) – that's 2.6 million million Gigabytes, or 2.6x10^15 Megabytes, of data transmitted. But there's more: data center traffic is forecast to hit 4.1ZB next year, and by 2016 it will have risen to 6.6ZB – roughly two and a half times today's figure.
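Because these prefixes are so easily mixed up, here's a quick back-of-the-envelope check of that conversion, assuming decimal (SI) units throughout:

```python
# Back-of-the-envelope conversion of the traffic figure above,
# assuming decimal (SI) units: 1 ZB = 10**21 bytes.
ZB, GB, MB = 10**21, 10**9, 10**6

traffic_zb = 2.6
traffic_bytes = traffic_zb * ZB

print(f"{traffic_zb} ZB = {traffic_bytes / GB:.1e} GB")  # ~2.6e+12 GB
print(f"{traffic_zb} ZB = {traffic_bytes / MB:.1e} MB")  # ~2.6e+15 MB
```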
Such massive rises in traffic cannot occur without the data center and networking infrastructure industries rising to the challenge. And that means not just more data centers, but also changes to the interconnecting infrastructure – fibre and copper – deployed within the growing global population of data centers.
So who is leading, demand or technology?
Before we get into the cabling detail, let's look at what's driving this forecast growth in data – and remember, we are talking about the quantity of data being transmitted, not the quantity stored.
Two things are driving traffic from the user sphere (both consumer and business). The first is the growth of internet-enabled mobile devices such as smartphones and tablets.
The total number of mobile internet devices sold passed the number of PCs sold right back in 2011 - and by 2016, there will be twice as many mobile devices as PCs and laptops.
This growth in devices is then compounded by the second driver – the growth in user video.
For example, one data center CIO speaking at a DCD Intelligence conference said he had calculated that, during a pop concert he attended, people sharing photos to Facebook and MySpace, streaming live video to friends and uploading to YouTube created 5TB of data – all of which had to be transmitted to and through data centers. And that was a single 90-minute event!
In the enterprise space the architecture is changing. There is a significant move away from "fat" clients (where the applications run on the user's PC) combined with local servers, towards thin clients for users with the servers banished to massive data centers, where power costs can be halved, resilience built in and uptime guaranteed by highly systemized operational controls and processes.
Whether this is public cloud, private cloud or remote servers in co-location data centers, the effect is similar: there is still a lot of client-to-server traffic, though this now travels via the WAN, plus a massive increase in server-to-server traffic within the data center.
In fact, according to Cisco, by 2011 17 percent of traffic was from data center to user, seven percent from data center to data center, and a massive 76 percent stayed within the data center.
Not only is there a massive increase in traffic within the data center, but the growing use of virtualization means that each physical server network interface now has to carry the traffic of eight, ten, maybe twelve virtual servers.
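To see why that squeezes a single interface, here is a rough sketch; the per-VM peak figure and the usable-utilization ceiling are illustrative assumptions, not measured values:

```python
# Rough headroom check for a virtualised server's 10GbE interface.
# The per-VM peak figure and the 70% usable-utilisation ceiling are
# illustrative assumptions, not measured values.
NIC_SPEED_GBPS = 10
USABLE_FRACTION = 0.7        # assumed practical ceiling before congestion
PER_VM_PEAK_GBPS = 1.0       # assumed peak demand of one virtual server

for vm_count in (8, 10, 12):
    demand = vm_count * PER_VM_PEAK_GBPS
    headroom = NIC_SPEED_GBPS * USABLE_FRACTION - demand
    status = "OK" if headroom >= 0 else "saturated"
    print(f"{vm_count} VMs: {demand:.0f} Gbps demand, "
          f"{headroom:+.0f} Gbps headroom ({status})")
```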
This indicates that 10Gbps Ethernet (10GbE) connectivity, which is only now being rolled out in anger, will not have adequate capacity for more than the next few years.
A recent forecast by Cisco indicates that in 2013 the majority of the world's servers are interconnected by 10GbE, with GbE almost totally phased out and a small percentage using 40GbE. By 2017, not only will there be a lot more servers, but two-thirds of them will ship with 40GbE connectivity, according to Intel and Broadcom.
By 2020 the number of servers will have almost doubled from today's figure – all of them connected at either 40GbE or 100GbE. All of which means it's a real race for infrastructure technology to keep up with the growing demand for interconnect data rates.
A pattern
Interestingly, we've now been through enough iterations of Ethernet speed updates to see the pattern that is likely to apply in the next update – from 10Gbps to 40Gbps – and it's this:
Early adopters with high performance computing requirements will need 40 & 100Gbps first, and they will use fibre and short-reach coax solutions in advance of a BASE-T cabling solution being available.
The 40 & 100Gbps solutions for high-density connectivity will use multi-lane coax, but only to the top of the rack (ToR) as the maximum reach is only 7m. The 40Gbps implementation is effectively 4 x 10Gbps 'lanes' side-by-side, so its time-to-market was fast and it's available now.
The electronics to multiplex such high data rates are complex, and the cables are bulky and very expensive - so it is very much an 'early adopter' solution only.
Particularly so if you take into account that the length constraint means every rack must have a ToR switch with high-speed fibre connections to the higher-level switches.
Having to install a switch-per-rack is obviously expensive in terms of capital costs, maintenance contracts and power consumption. It is also difficult to manage.
The attraction of higher utilization of the ports in centralized switches using a fibre implementation is obvious. Deploying ToR switches also wastes 2U or 3U in every rack that could otherwise be used for production servers – potentially a big opportunity cost.
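To put a rough shape on that opportunity cost, here's an illustrative sketch; every figure in it (rack count, switch price, power draw, tariff, lost rack units) is a placeholder assumption rather than vendor data:

```python
# Illustrative cost of a switch-per-rack (ToR) design across a hall.
# Every figure here is a placeholder assumption, not vendor pricing.
RACKS = 100
TOR_SWITCH_CAPEX = 15_000     # assumed cost of one ToR switch ($)
TOR_SWITCH_POWER_KW = 0.3     # assumed draw per switch (kW)
POWER_COST_PER_KWH = 0.12     # assumed tariff ($/kWh)
LOST_RACK_UNITS = 2           # U taken up by the ToR switch in each rack

capex = RACKS * TOR_SWITCH_CAPEX
annual_power = RACKS * TOR_SWITCH_POWER_KW * 24 * 365 * POWER_COST_PER_KWH
lost_u = RACKS * LOST_RACK_UNITS

print(f"ToR capex across {RACKS} racks: ${capex:,}")
print(f"Annual switch power cost:       ${annual_power:,.0f}")
print(f"Rack space lost to switches:    {lost_u}U "
      f"(~{lost_u // 42} racks' worth of server space)")
```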
A number of 40Gbps fibre solutions are already available for longer links, but the cost benefits of a BASE-T copper solution (when it becomes available) are still likely to make it the solution of choice over fibre connections for server-to-switch links.
Meanwhile, as these early-adopter installations are being rolled out, the Ethernet community is preparing to start work on a 40Gbps over "category" cabling standard – although at the time of writing, IEEE has not yet formed a working group to develop such a standard.
If we assume that the IEEE group is formed, we could speculate that we may see proprietary pre-standard 40Gbps category solutions in late 2014, followed by a standardized 40GBASE-T in 2015.
Is 40Gbps Ethernet worth it?
You may wonder whether it is worth developing a 40Gbps BASE-T solution at all if it will not be available until two years after the fibre alternatives. The answer is one of simple economics.
Looking back at 10Gbps, a fibre server-to-server link within the data center, including network interfaces, patchcords and so on, originally cost around $1,000. An equivalent short-reach coax solution was $700 (but on top of that came the cost of all the extra switches needed). However, when 10GBASE-T came on stream, the cost plummeted to $400.
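As a rough per-link comparison using those ballpark figures (the per-rack switch overhead attributed to the coax option is an illustrative assumption):

```python
# Per-server-link cost comparison for 10G, using the ballpark figures
# quoted above; the ToR switch overhead shared across the coax links is
# an illustrative assumption.
SERVERS_PER_RACK = 30
TOR_SWITCH_COST = 15_000      # assumed extra ToR switch required by coax

options = {
    "10G fibre link":                   1000,
    "10G short-reach coax + ToR share": 700 + TOR_SWITCH_COST / SERVERS_PER_RACK,
    "10GBASE-T copper link":            400,
}

for name, cost in options.items():
    print(f"{name:34s} ~${cost:,.0f}")
```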
For 40Gbps, although the costs will all be higher, we can expect a similar pattern to unfold. Given that the majority of "production" data centers will only need 40Gbps server connections in large volumes from 2016 to 2022, 40GBASE-T is likely to be the solution that makes most economic sense.
Currently, data center standards such as EN 50173-5 and ISO/IEC 24764 call for each "zone" in the data center to be equipped with a "zone distributor" (ZDA). This can be a top-of-rack (ToR) switch or patch panel, or a middle-of-row (MoR) or end-of-row (EoR) switch or patch frame.
For the current build-out of 10GBASE-T we mostly recommend MoR or EoR edge switches with fibre trunks back to the main distribution frame, although some cases demand different approaches to suit the business requirement.
Do we need to change the topology for 40GBASE-T? The likely answer is yes. A bit.
It currently looks unlikely that 40GBASE-T will be capable of 100m links, unlike all previous BASE-T standards; current talk is of 30 metres. But research we conducted for our 10Gbps zone cable indicated that 50m covers approximately 90 percent of all data center link lengths, so it is not difficult to plan links to be well under this figure.
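In practice that planning exercise is just a filter over your expected link lengths; here's a minimal sketch, with an illustrative set of lengths and the mooted 30m reach:

```python
# Checking planned server-to-switch runs against an assumed 30m reach
# for 40GBASE-T; the list of link lengths is purely illustrative.
REACH_LIMIT_M = 30
planned_links_m = [5, 8, 12, 18, 22, 27, 31, 36, 44, 55]

within = [l for l in planned_links_m if l <= REACH_LIMIT_M]
print(f"{len(within)} of {len(planned_links_m)} links fit within "
      f"{REACH_LIMIT_M}m; the rest need fibre or a closer (EoR/MoR) switch.")
```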
One possible change is that centralized switching will not be as popular as it was (unless single-mode fibre is used), and the deployment of EoR & MoR switches will become the preferred architecture.
Trunk cabling from EoR & MoR switches to the central MDF and core switches could be 40/100 Gbps OM4 multimode fibre – though this requires eight fibres per link for 40GbE and twenty fibres for each 100GbE link.
This may well be the preferred choice for co-location data centers keen to contain CapEx, since the multimode optical interfaces are significantly cheaper than single-mode.
The downside, of course, is the need to run in a lot more fibre to move from 10G to 40G and then to 100G – with ensuing disruption and danger of service interruption.
Looked at from a flexibility and minimized-disruption perspective, the preference may be to go straight to single-mode solutions based on duplex connectivity, with high-density panels providing cross-connect to server and switch connections.
Single-mode requires only two fibres per link whether it is running at 1, 10, 40 or 100Gbps. This means that the 10 to 40 to 100Gbps switch interface upgrades can be made with no change needed to the data center backbone.
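The fibre-count arithmetic behind that choice is straightforward; here's a small sketch using the per-link counts described above and an assumed 24-link trunk:

```python
# Fibre count per backbone trunk: parallel multimode versus duplex
# single-mode, using the per-link counts described above. The 24-link
# trunk size is an illustrative assumption.
LINKS_PER_TRUNK = 24

fibres_per_link = {
    "40GbE over OM4 (4 x 10G lanes)":    8,
    "100GbE over OM4 (10 x 10G lanes)": 20,
    "40/100GbE over single-mode duplex": 2,
}

for name, fibres in fibres_per_link.items():
    total = fibres * LINKS_PER_TRUNK
    print(f"{name:36s} {fibres:2d} fibres/link -> {total:3d} per trunk")
```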
Links from the zone-located switches to the higher-level switches and servers will be a user choice. 40GbE over fibre is already available, and whilst 40GBASE-T is likely to turn out less expensive, there is still a question of when – and indeed if – it will be commercially available.
So our advice here is to use fibre if your requirement has to be implemented before the availability of 40G copper becomes clear.
All in all, the growth in internet and data center traffic over the coming years means that we are going to be living in very exciting times with demand potentially overtaking technology and significant pressure on cabling infrastructure manufacturers and integrators to keep ahead in the race.
For the poor data center managers, whose future-proofing and forecasting tasks have never been easy, I'm sorry to say the job has just got harder.
I strongly advise data center managers to discuss their plans with the top-tier cabling manufacturers at a very early stage to de-risk their projects and investments as much as possible.
- Ken Hodge is the CTO of Brand-Rex, where he is involved in design, NPD and new technology, promoting the company and its products, and representing its interests in the international standardization arena. He has worked in the cabling industry since 1982, researching, designing and developing optical fibre and high-frequency cabling for LAN and telecom networking, and is actively involved in BSI, IEC and CENELEC standardization activities in the cabling sector.