Regulating AI without stifling innovation


Most people are probably unaware of the everyday, simple use cases of Artificial Intelligence (AI) in action. Smart home speakers, the Siri and Cortana phone assistants, personalized social media feeds and website adverts are all driven by AI. But at what point does a seamless customer experience stop being enough, and privacy intrusions or "creepy" AI-driven ads start to put people off?

It's not just consumers and end-users that AI will benefit. Many see AI as a way for businesses to foster innovation, and want regulation kept to a minimum so as not to disrupt the technology's growth. Others, however, have concerns about privacy, discriminatory AI algorithms, unchecked growth and potentially unwarranted use. As AI adoption increases and the technology becomes more ubiquitous in both our personal and professional lives, it will come under increasing scrutiny to ensure it is used as a force for good.

With global AI investment expected to reach $232 billion by 2025, up from nearly $12.5 billion today, the AI market is set to grow rapidly over the next few years. As it does, can the technology continue to grow unchecked, or does it warrant regulation to ensure it is used for good?

Hot button privacy issues

Privacy has always been a hot button issue, and that's not going to change anytime soon. We're seeing discussions around AI and data privacy on a global scale: the European Commission is considering a five-year ban on facial recognition technology because of its potential "big brother" implications. Last year, IBM declared it would stop offering facial recognition software and argued that AI systems used by law enforcement needed to be tested for bias. Amazon and Microsoft soon followed suit.

With AI-generated deepfakes spreading across social media platforms, and no shortage of stories highlighting racial or gender bias in various AI systems, the bias problems AI creates are certainly prominent in the media. This is a major obstacle to public acceptance of AI, and one that needs to be fixed sooner rather than later.
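Testing an AI system for bias, as IBM urged, can start with something as simple as comparing how often a model makes positive decisions for different groups. The sketch below is a minimal, hypothetical illustration of one common fairness check (the demographic parity gap); the function names and data are illustrative, not any particular vendor's API.

```python
# Minimal sketch of a demographic-parity check: compare a model's
# positive-decision rate across groups defined by a protected attribute.
# All names and data here are illustrative assumptions.

def selection_rates(decisions, groups):
    """Return the fraction of positive (truthy) decisions per group."""
    totals, positives = {}, {}
    for decision, group in zip(decisions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + (1 if decision else 0)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions, groups):
    """Largest difference in selection rate between any two groups.

    A gap near 0 means groups are treated similarly by this metric;
    a large gap is a signal to investigate the model further.
    """
    rates = selection_rates(decisions, groups)
    return max(rates.values()) - min(rates.values())

# Toy example: group "a" is approved 3 times out of 4, group "b" once.
decisions = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap = demographic_parity_gap(decisions, groups)  # 0.75 - 0.25 = 0.5
```

A check like this is only a first pass: a zero gap does not prove a system is fair, but a large gap is a concrete, measurable warning sign of the kind regulators and auditors can act on.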

It's important to remember that AI holds tremendous promise, impacting everything from helping cities plan transit routes during peak times to chatbots improving customer satisfaction, so there need to be clearly defined privacy boundaries. Requiring users to opt in to sharing their data for analysis and processing by AI, much as GDPR does, defines that boundary clearly.
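A GDPR-style opt-in boundary can be enforced mechanically: AI processing simply does not run for users who have not consented. The sketch below shows one minimal way to gate a pipeline on recorded consent; the registry, function names and placeholder `analyze` step are hypothetical assumptions for illustration, not a description of any real product.

```python
# Hypothetical sketch of an opt-in gate: an AI pipeline runs only for
# users who have explicitly consented. All names are illustrative.

consent_registry = set()  # user IDs that have opted in to AI processing

def record_opt_in(user_id):
    """Record that a user has explicitly consented."""
    consent_registry.add(user_id)

def analyze(data):
    """Placeholder for the actual AI analysis step."""
    return {"summary": f"{len(data)} records analyzed"}

def process_with_ai(user_id, data):
    """Run AI analysis only if the user has opted in; otherwise skip."""
    if user_id not in consent_registry:
        return None  # no consent on record: do not process at all
    return analyze(data)

record_opt_in("user-42")
process_with_ai("user-42", [1, 2, 3])  # consent on record: runs
process_with_ai("user-99", [1, 2, 3])  # no consent: returns None
```

The design choice that matters is the default: absent an explicit opt-in, the data never reaches the AI step, which is the "clearly defined boundary" the article calls for.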

Striking the balance

We have only scratched the surface of AI's possibilities, which span everything from the biggest questions we face today to mundane everyday life hacks. As AI gathers more data, at the personal level but also at the local, national and global level, balanced regulation that protects privacy while giving industry the opportunity to innovate is key.

Regulation forces technologists to think about the long-term side effects of AI and to consider the problems that could arise in a year, a decade or a century. With AI still in its early days, there will be many more proposed plans, initiatives and regulations before we get it right. In theory, the "Global Partnership on AI" is a good idea, because global coalitions can work: just look at the Paris Climate Agreement. But it's important to strike a balance, and governments will need to remain mindful of regulatory overreach. A global coalition enables a long-term conversation as the technology develops; it's not one and done.

For AI to continue delivering innovative services to end-users, businesses need to work within a framework of regulation without having their hands tied. They must remain cognizant of how a particular AI application is being developed and ensure it does not raise societal concerns such as gender and racial bias, security compromises, or mass surveillance that increases inequality. A delicate path forward calls for nuanced legislation that serves the greater good without hamstringing innovation, and companies need to take a holistic approach to privacy concerns rather than a piecemeal one.

Regulate to innovate

In the absence of any regulations or standards, many tech businesses have sought to create their own code of ethics, or regulations, that guide their development of AI. In 2018, Google published its own AI principles to help guide the ethical development and use of the technology. But without a wider regulatory framework, businesses are free to choose how they develop their AI systems.

Regulation delivers a broad framework to work within, without overstepping its boundaries and becoming restrictive. If we work together to deliver a framework that works for everybody, develops AI responsibly and leaves nobody behind, then AI has the power to truly transform our lives for the better.

Prasad Ramakrishnan, CIO, Freshworks. 
