“That's the experience we have built up over 40 years, to then bring into the next 40 years” - Cisco’s Chintan Patel on realizing the potential of AI and the lessons to learn from the internet boom


AI development is moving at an unprecedented pace compared to technologies of a similar magnitude.

Depending on who you ask, the internet took somewhere between two and five decades to mature into a commercial product. Some will argue the building blocks of AI have been around just as long, tracing the earliest ‘large language models’ back to Joseph Weizenbaum’s ELIZA program in 1966 - but our definitions of an LLM have changed, and again, it really does depend who you ask.

The key building blocks of both technologies do, however, have one thing in common: networking. To find out more, I spoke to Chintan Patel, Cisco’s CTO for the UK & Ireland, who began his career installing some of the UK’s first internet infrastructure and now finds himself, once again, at the forefront of a technological revolution.

Building the future

“I was right at the heart of the early days of building out the internet infrastructure, and firsthand I saw what connectivity actually allowed people to do. You think back now, that initial moment of connectivity, what it spawned for society in terms of the digital era that we’re now living in,” Patel says, pointing to the success that many companies around the world have built from the internet, including Cisco.

The same cycle appears to be happening again, with numerous companies adapting their business strategy to capitalize on the emergence of AI, and new companies emerging as leaders in AI technology, such as OpenAI. It’s happening so fast, in fact, that the number of AI tools available to use is almost countless, and consumers are spoilt for choice.

“Any new technology is compressed in time in terms of how it reaches people. To get on to the internet took a while for people because you needed to have the connectivity, you needed to have the access, you needed to have the technology, and you had to afford it in the early days - all those rules are broken now,” Patel notes.

“You can simply go on to a web browser or go somewhere with free internet access and access one of these AI tools - it's completely changed from that dimension.”

What is important to recognize is that the internet went through a hype cycle much like AI’s. There were plenty of skeptics invoking Metcalfe's law, which holds that a network is only as valuable as the number of people connected to it, to argue that the early internet, with few users and little network to support them, had next to no value.
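Metcalfe's observation can be made concrete with a quick sketch (the helper function below is illustrative, not anything from Cisco or the article): the number of possible user-to-user links in a network of n users is n(n-1)/2, so the value the skeptics were discounting grows roughly with the square of the user count.

```python
def potential_links(n: int) -> int:
    """Number of distinct user-to-user connections possible among n users."""
    return n * (n - 1) // 2

# With a handful of users the network is nearly worthless; the link count
# (a rough proxy for Metcalfe-style value) compounds quadratically as it grows.
for users in (2, 10, 1000):
    print(users, potential_links(users))
```

This is why early-internet pessimism looked reasonable at small scale and wrong at large scale: doubling the user base roughly quadruples the number of possible connections.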

“If we think of it from an AI perspective, we’re at that similar moment again where you’ve got this technology which has come along, it’s had its ‘wow’ moment, but we’re yet to feel the true impact of it. I think we see pockets of it in different places - everyone’s experimenting. People can see that this will have some tremendous impact for humanity and I think the lessons from what we learned around building the internet places us in a better place now to think about what we need to do,” Patel counters.


The key questions we are asking about AI and its risks have already been answered as the internet evolved, he points out - from building and securing large-scale systems, to making them accessible to all while managing their energy consumption. “All of those things that we’ve learned about building large scale infrastructure systems - this is the 10x of that. You can take the principles and bring them to this new AI era and I think that's the experience we have built up over 40 years, to then bring into the next 40 years.”

There are challenges the lessons of the internet are less helpful in answering - for example, whether every company will immediately see the benefits of AI. Many businesses are adopting AI technologies amid the hype, but are not seeing any marked improvement in productivity or efficiency in return. To this end, Patel offers some advice.

“You can kind of think about this as two races - the race that the big hyperscalers and the model providers are in, and then there is everyone else, and actually you need to focus on your race, on your organization, on your use case,” which is especially important when it comes to regulating AI, Patel notes.

“So that's where organizations are having to think about what their data posture is like, what are their security protocols, what are their AI principles and guardrails and how are they going to responsibly use AI within their environment for their customers and for their employees?”

In Europe, regulations such as the AI Act are providing a baseline for the responsible use of these technologies. However, some AI enterprises and enthusiasts have argued against binding multinational regulation, claiming it puts companies at a competitive disadvantage and stifles innovation.

“In this instance regulation is good because it actually helps us provide some guardrails because we think we know what these models can do, but you get to a stage where these models and these environments are self-learning - that’s a non-deterministic environment where you don’t really know the outcome. So actually having some of those things built in, in highly regulated industries especially where you’re dealing with life and death, and the safety and security of human life - no question - you’ve got to make sure those controls are in place to protect society.”

As for the problems AI could be used to solve, Patel is highly optimistic about its potential to take mundane and dangerous tasks off human hands.

“We’ve already got a skills shortage in terms of talent within our industry and organizations can’t hire fast enough to keep pace with some of the challenges. We’re also dealing with a world that is operating at machine scale, and as humans we can’t manage that," he notes.

“Humanoid workers in hazardous environments for example, where we don’t need to send humans, really makes sense. Digital agents that are able to take admin tasks off us, get them done autonomously, and come back to us - that’s a good thing, so if we can actually give everybody a superpower and a sidekick that can get things done, then that’s going to be a great thing.”


Benedict Collins
Staff Writer (Security)

Benedict has been writing about security issues for over 7 years, first focusing on geopolitics and international relations while at the University of Buckingham. During this time he studied BA Politics with Journalism, for which he received a second-class honours (upper division), before continuing his studies at postgraduate level and achieving a distinction in MA Security, Intelligence and Diplomacy. Upon joining TechRadar Pro as a Staff Writer, Benedict shifted his focus towards cybersecurity, exploring state-sponsored threat actors, malware, social engineering, and national security. Benedict is also an expert on B2B security products, including firewalls, antivirus, endpoint security, and password management.
