Alphabet CEO Sundar Pichai calls for AI regulation
AI is “too important” not to be regulated
Alphabet CEO Sundar Pichai has called for AI regulation to help govern how the emerging technology is used, in his first big public move since being appointed head of Google's parent company last month.
Pichai shared his concerns regarding how and why AI should be regulated in a recent editorial in the Financial Times in which he wrote:
“Now there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to. The only question is how to approach it.”
Pichai recommends a sensible approach that balances “potential harms, especially in high-risk areas, with social opportunities”, and in his editorial he also pointed to Europe's GDPR as a “strong foundation” for AI regulation.
The case for regulation
While Pichai calls for new regulation, he makes the case that in certain areas there are already guidelines in place. For instance, existing medical frameworks could serve as “good starting points” for devices such as AI-assisted heart monitors. Self-driving cars, on the other hand, will require governments around the world to “establish appropriate new rules that consider all relevant costs and benefits”.
Leveraging AI for the greater good is a priority for Alphabet's CEO, and he believes that letting the market decide how the technology will be used simply isn't good enough, writing:
“Companies such as ours cannot just build promising new technology and let market forces decide how it will be used. It is equally incumbent on us to make sure that technology is harnessed for good and available to everyone.”
In his editorial, Pichai also referenced Google's AI Principles, which were introduced in 2018 following internal criticism over Google Cloud's work with the US military. The principles have been applied across Google, and they specify areas where the company “will not design or deploy” its technologies.
If used incorrectly, AI could have devastating effects on humanity, which is why regulation will likely come sooner rather than later.
Via 9to5Google