Adding more and faster general-purpose processors to routers, switches and other networking equipment can improve performance but adds to system costs and power demands while doing little to address latency, a major cause of performance problems in networks.
By contrast, smart silicon minimizes or eliminates performance choke points by reducing latency for specific processing tasks.
In 2013 and beyond, design engineers will increasingly deploy smart silicon to achieve its order-of-magnitude performance gains and its greater cost and power efficiency.
Enterprise Networks
In the past, Moore's Law was sufficient to keep pace with increasing computing and networking workloads. Hardware and software largely advanced in lockstep: as processor performance increased, more sophisticated features could be added in software.
These parallel improvements made it possible to create more abstracted software, enabling much higher functionality to be built more quickly and with less programming effort.
Today, however, these layers of abstraction are making it difficult to perform more complex tasks with adequate performance.
General-purpose processors, regardless of their core count and clock rate, are too slow for functions such as classification, cryptographic security and traffic management that must operate deep within every packet. What's more, these specialized functions must often be performed sequentially, limiting the opportunity to process them in parallel across multiple cores.
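To make that dependency concrete, consider a minimal Python sketch. The stage functions here are hypothetical stand-ins (real packet pipelines run in hardware or tightly optimized native code), but the data dependency is the real constraint: each stage consumes the previous stage's output, so a single packet cannot be fanned out across cores.

    # Illustrative only: hypothetical stages showing the sequential
    # dependency chain inside per-packet processing.
    def classify(packet: bytes) -> str:
        # Deep packet inspection picks a flow class (toy rule here).
        return "video" if packet.startswith(b"\x47") else "default"

    def decrypt(packet: bytes, flow_class: str) -> bytes:
        # Stand-in for real cipher work; which security policy applies
        # depends on the classification verdict from stage 1.
        return bytes(b ^ 0x5A for b in packet)

    def shape(packet: bytes, flow_class: str) -> bytes:
        # Traffic management acts on the classified, decrypted packet.
        return packet

    def process(packet: bytes) -> bytes:
        flow_class = classify(packet)    # stage 1
        clear = decrypt(packet, flow_class)  # stage 2: needs stage 1
        return shape(clear, flow_class)  # stage 3: needs stages 1 and 2

A dedicated acceleration engine attacks exactly this kind of chain: each stage runs in purpose-built logic at line rate instead of competing for general-purpose cores.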
By contrast, these and other specialized types of processing are ideal applications for smart silicon, and it is increasingly common to have multiple intelligent acceleration engines integrated with multiple cores in specialized System-on-Chip (SoC) communications processors.
The number of function-specific acceleration engines available continues to grow, and shrinking geometries now make it possible to integrate more engines onto a single SoC.
It is even possible to integrate a system vendor's unique intellectual property as a custom acceleration engine within an SoC. Taken together, these advances make it possible to replace multiple SoCs with a single SoC to enable faster, smaller, more power-efficient networking architectures.
Storage Networks
The biggest bottleneck in datacenters today is caused by the five orders of magnitude difference in I/O latency between main memory in servers (100 nanoseconds) and traditional hard disk drives (10 milliseconds).
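The arithmetic behind that figure is worth making explicit; a few lines of Python, using the round numbers cited above, confirm the gap:

    import math

    dram_latency_s = 100e-9  # ~100 nanoseconds for server main memory
    hdd_latency_s = 10e-3    # ~10 milliseconds for a hard disk access

    ratio = hdd_latency_s / dram_latency_s
    print(ratio)              # 100000.0
    print(math.log10(ratio))  # 5.0 -- five orders of magnitude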
Latency to external storage area networks (SANs) and network-attached storage (NAS) is even higher because of the intervening network, and because a single resource must service many simultaneous requests sequentially through deep queues.
Caching content in server memory, or in a dynamic RAM (DRAM) cache appliance on the SAN, is a proven technique for reducing latency and thereby improving application-level performance.
But today, because the amount of memory a server or cache appliance can hold (measured in gigabytes) is only a small fraction of the capacity of even a single disk drive (measured in terabytes), the performance gains achievable from traditional caching are insufficient to deal with the data deluge.
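A quick worked example shows how small that fraction is. The sizes below are illustrative assumptions, not vendor figures:

    cache_gb = 256   # a large DRAM cache, in gigabytes (assumed)
    drive_tb = 4     # a single commodity hard drive, in terabytes (assumed)

    fraction = cache_gb / (drive_tb * 1024)
    print(f"{fraction:.2%}")  # 6.25% -- of just one drive, before the
                              # rest of the storage pool is counted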
Advances in NAND flash memory and flash storage processors, combined with more intelligent caching algorithms, break through the traditional caching scalability barrier to make caching an effective, powerful and cost-efficient way to accelerate application performance going forward.
Solid state storage is ideal for caching as it offers far lower latency than hard disk drives with comparable capacity. Besides delivering higher application performance, caching enables virtualized servers to perform more work, cost-effectively, with the same number of software licenses.
Solid state storage typically produces the highest performance gains when the flash cache is placed directly in the server on the PCIe® bus. Intelligent caching software is used to place hot, or most frequently accessed, data in low-latency flash storage.
The hot data is accessible quickly and deterministically under any workload since there is no external connection, no intervening network to a SAN or NAS and no possibility of associated traffic congestion and delay.
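The hot-data placement such software performs can be sketched with a simple least-recently-used (LRU) policy. The class below is a toy illustration, not LSI's actual caching algorithm; production software tracks data heat far more intelligently, but the division of labor is the same: hits are served from flash, misses pay the disk's latency.

    # Toy LRU flash cache: hot blocks live in the fast tier, cold
    # blocks are evicted and must be re-read from slower disk storage.
    from collections import OrderedDict

    class FlashCache:
        def __init__(self, capacity_blocks, read_from_disk):
            self.capacity = capacity_blocks
            self.read_from_disk = read_from_disk  # slow-path fetch
            self.blocks = OrderedDict()           # block_id -> data

        def read(self, block_id):
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)  # hit: mark as hot
                return self.blocks[block_id]
            data = self.read_from_disk(block_id)   # miss: disk latency
            self.blocks[block_id] = data
            if len(self.blocks) > self.capacity:
                self.blocks.popitem(last=False)    # evict coldest block
            return data

The larger the flash tier, the more of the working set stays on the fast path, which is why terabyte-scale flash caching changes the economics of this technique.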
For those charged with managing or analyzing massive data inflows, there is a further attraction: some flash cache acceleration cards now support multiple terabytes of solid state storage, enough to hold entire databases or other datasets as hot data.
Mobile Networks
Traffic volume in mobile networks is doubling every year, driven mostly by the explosion of video applications. Per-user access bandwidth is also increasing by an order of magnitude from around 100 Mb/s in 3G networks to 1 Gb/s in 4G Long Term Evolution (LTE) Advanced networks, which will in turn lead to the advent of even more graphics-intensive, bandwidth-hungry applications.
Base stations must rapidly evolve to manage rising network loads. In the infrastructure, multiple radios are now being deployed in cloud-like distributed antenna systems, and network topologies are flattening.
Operators are planning to deliver advanced quality of service with location-based services and application-aware billing. As in the enterprise, handling these complex, real-time tasks is increasingly feasible only with acceleration engines built into smart silicon.
To deliver higher 4G data speeds reliably to a growing number of mobile devices, access networks need more, and smaller, cells, which in turn drives the deployment of SoCs in base stations.
Reducing component count with SoCs has another important advantage: lower power consumption. From the edge to the core, power consumption is now a critical factor in all network infrastructures.
Enterprise networks, datacenter storage architectures and mobile network infrastructures are in the midst of rapid, complex change. The best and possibly only way to efficiently and cost-effectively address these changes and harness the opportunities of the data deluge is by adopting smart silicon solutions that are emerging in many forms to meet the challenges of next-generation networks.
- Greg Huff is Chief Technology Officer at LSI. In this capacity, he is responsible for shaping the future growth strategy of LSI products within the storage and networking markets.