The rise of AI is good news for ethical data advocates

It’s now clear that 2023 was the year of artificial intelligence. Since ChatGPT burst into the mainstream, we’ve seen AI spread into virtually every industry. From large language models powering search engines, to GenAI imaging baked into Adobe Photoshop, to chatbots and ad algorithms reshaping consumer experiences, AI is all around us — not just part of the zeitgeist, but increasingly woven into the way we live and work.

From an ethical-data standpoint, the mainstreaming of AI brings obvious challenges. We all know AI is powered by data — but the sheer ubiquity of AI means that in future, virtually all data will touch AI algorithms at some point in its lifecycle. Who decides which data is fed into AI tools, and for what purpose? Once data has been used to train a model, how long should it persist inside the algorithm, and what does that mean for consent and privacy? What kinds of guardrails are needed to keep us safe in a world of omnipresent AI?

These are important questions. Like any powerful new technology, AI can seem scary, and there’s scope for unintended consequences; some tech bigwigs are even calling for us to halt AI innovation altogether. Instead of treating AI like a fire that needs to be doused, though, we need to recognize that the rise of AI is pouring fuel on a fire that’s already blazing. The biases, data-privacy concerns, and other ethical headaches associated with AI weren’t created by algorithms — they were revealed by them, with AI shining a 1000-watt spotlight on pre-existing shortcomings in organizations’ data practices.

That’s a key distinction, because it means that while the AI revolution brings challenges, it also presents an enormous opportunity for ethical data advocates. Driven by regulatory mandates, consumer demand, and economic necessity, businesses are now being forced to reevaluate their data practices and data infrastructure — not just for their AI tools, but across all their operations. As such, the arrival of AI has the potential to accelerate the adoption of responsible practices across the entire data economy.

More flexible solutions

Importantly, the mainstreaming of AI will spur both regulators and businesses to find more dynamic and flexible ways of achieving their goals. The traditional back-and-forth between industry and regulatory bodies, with innovators moving the ball forward and regulators patching rulebooks in response, is fine when new developments come slowly. But it’s completely unsuited to a world of breakneck AI innovation, with new applications being developed on a near-daily basis.

As a result, regulators are learning to regulate purposes and outcomes instead of specific technologies. No rulebook can keep pace with every individual algorithm, so regulators are working to define how different categories of data and types of functionality should be handled. Underpinning such efforts is a broader shift toward flexible enforcement that frames responsible data handling in terms of core values such as fairness and transparency, giving regulators a clearer yardstick by which to assess novel AI technologies.

The push to create more flexible rulebooks mirrors the way businesses are evolving toward more adaptable data solutions. Rather than building brittle privacy systems that deliver rigid compliance with specific laws or regulations, businesses are crying out for programmatic tools that automatically map regulatory intent onto the purposes for which data is collected and used. By going beyond legalistic compliance, such tools can support AI innovation while ensuring consumers’ data dignity across the full data lifecycle.
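To make this concrete, here is a minimal Python sketch of what purpose-based governance can look like: policy rules keyed to data categories and purposes rather than to specific technologies, so a new tool is assessed against intent rather than a per-technology rulebook. Every name and rule below is hypothetical, for illustration only, and far simpler than any production privacy platform.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataUseRequest:
    data_category: str   # e.g. "biometrics", "purchase_history"
    purpose: str         # e.g. "model_training", "ad_targeting"
    has_consent: bool    # did the data subject consent to this purpose?

# Policy expressed in terms of purposes, not individual technologies.
# These rules are invented for illustration.
POLICY = {
    "biometrics":       {"model_training": "consent_required"},
    "purchase_history": {"fraud_detection": "allowed",
                         "ad_targeting": "consent_required"},
}

def is_permitted(req: DataUseRequest) -> bool:
    """Check a proposed data use against the purpose-based policy."""
    rule = POLICY.get(req.data_category, {}).get(req.purpose)
    if rule == "allowed":
        return True
    if rule == "consent_required":
        return req.has_consent
    return False  # default-deny: uses the policy doesn't name are blocked

# Training on biometrics is permitted only with explicit consent.
print(is_permitted(DataUseRequest("biometrics", "model_training", True)))   # True
print(is_permitted(DataUseRequest("biometrics", "model_training", False)))  # False
```

The design choice worth noting is the default-deny fallback: when a new AI application arrives that the policy has never seen, the safe answer is "no" until its purpose has been classified.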

The role of privacy leaders

Managing the challenges posed by AI also requires a reevaluation of the role of privacy practitioners, who will need to accept that AI and ethical data use have landed on their desks and require more collaboration across the organization. It’s clear, for instance, that algorithmic bias is real. But eliminating bias, and ensuring that data utilization and data dignity go hand in hand in the AI era, is only possible if we bring privacy leaders into the conversation with data and tech leaders when designing and developing new AI technologies.

At the simplest level, addressing bias requires mindfulness about the data used to train AI tools, with effective monitoring of the datasets involved to ensure that only properly permissioned data flows into algorithms. That cuts to the core of responsible data handling: many organizations don’t know what data they hold, where it is, or how it can (or can’t) be used. Building out organization-wide data mapping and rigorous consent management is thus a vital piece of the ethical AI puzzle.
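As a simple illustration, gating training data on consent can start with a filter like the Python sketch below. The record format here is hypothetical; real consent-management systems track far richer metadata and full audit trails.

```python
# Hypothetical records pairing features with the purposes each
# data subject has actually consented to.
records = [
    {"user_id": 1, "features": [0.2, 0.7], "consented": {"model_training"}},
    {"user_id": 2, "features": [0.9, 0.1], "consented": {"analytics"}},
    {"user_id": 3, "features": [0.4, 0.4], "consented": {"model_training", "analytics"}},
]

def permissioned(records, purpose):
    """Return only records whose subjects consented to this purpose."""
    return [r for r in records if purpose in r["consented"]]

training_set = permissioned(records, "model_training")
print([r["user_id"] for r in training_set])  # [1, 3]: user 2 never consented
```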

Going a step further, solving for bias requires advanced data-privacy capabilities. If you’re building a face-recognition tool, for instance, you need to be able to seamlessly retrain your algorithm when a subject revokes consent — but you also need to account for the impact that any deletions have on your algorithm’s output. Eliminate a rare datapoint, such as the biometrics of a member of a minority group, and you risk skewing your algorithm in damaging ways.

These are solvable problems: differential privacy, for instance, can be used to ensure algorithms remain representative even when rare data points are removed. But putting such methods into action is only possible if privacy practitioners step up. In the AI era, responsible data usage is a team sport: we need to break down the silos between tech and privacy teams, and start building the advanced privacy capabilities needed to overcome the complex challenges we now face.
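For a flavor of how differential privacy caps any one record’s influence, consider the textbook Laplace mechanism applied to a simple count query, sketched in Python below. The dataset is invented for illustration, and a real deployment would use an audited DP library rather than hand-rolled noise.

```python
import numpy as np

def dp_count(values, predicate, epsilon=1.0):
    """Differentially private count via the Laplace mechanism.

    A count query has sensitivity 1 (adding or removing one record
    changes the true count by at most 1), so Laplace noise with
    scale 1/epsilon yields epsilon-differential privacy.
    """
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

# Hypothetical ages, including one rare record (the 97-year-old).
ages = [34, 29, 41, 38, 97, 33]
ages_after_revocation = [34, 29, 41, 38, 33]  # the rare subject withdrew consent

# The noisy answers before and after deletion are statistically close:
# no single individual can dominate, or be exposed by, the released result.
print(dp_count(ages, lambda a: a > 30))
print(dp_count(ages_after_revocation, lambda a: a > 30))
```

The same bounded-influence property is what makes differentially private models robust to individual deletions: if no single record could move the output much in the first place, removing one can’t skew the result much either.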

A new opportunity

For privacy professionals and leaders at all levels, it’s important to recognize that ethical AI is only possible if it’s built on a foundation of sound data practices. Neither regulators nor consumers want to stifle AI innovation, but they will increasingly punish companies that fail to show real commitment to fair, transparent, and ethical data practices.

The trouble is, that commitment can’t stop at AI projects alone. When it comes to data, AI will rapidly come to touch everything we do — so only by taking a holistic approach, and ensuring responsible data practices from cradle to grave and across all aspects of our organizations, can we build truly ethical, trustworthy, and sustainable AI technologies.

The stakes are high, and the opportunity is real. Companies that fail to embrace ethical data practices will lose consumer trust, draw the wrath of regulators, and wind up falling by the wayside. Businesses that lean into this challenge, on the other hand, have a chance to position themselves as true ethical data champions — and thereby pave the way for enduring success in today’s AI-powered world.

Jonathan Joseph is Head of Solutions at Ketch, a platform for programmatic privacy, governance, and security.