What does AI mean for data privacy?
Are privacy fears about AI justified?
AI can sometimes feel like the tech industry’s secret sauce, underpinning many of the most pervasive tech-driven changes to have impacted society in the past decade. From the smart assistants that have redefined customer service, to the tools that detect and prevent payment fraud, to new ways of predicting and tracking the spread of infectious diseases, AI already touches most of our lives, whether we choose to engage with debates over its use or not. In fact, many of us may not even be aware we are using AI at all.
For business and tech leaders, AI has meant the need to stay on top of changing and hotly debated data privacy laws. From forthcoming changes to the use of cookies to the UK Government’s recent vow to replace GDPR data privacy regulations, there is plenty to grapple with. Crucially, though, the use of AI does not just mean a shift in mindset for the tech industry. Consumers, too, must get used to the way their data is managed as the privacy implications of AI become clearer. But how exactly do consumers do this, and in what ways can businesses help them? As AI models continue to develop, what legislation needs to be in place to ensure that data privacy is protected as far as possible, and what can we expect in the near future from both a tech and a legal perspective?
Mark Mamone is CTO at digital identity specialist, GBG.
Issues around data
When it comes to data privacy, ensuring consent and the legitimate use of data is essential. If data is the new oil in our digital age, then AI is the way we transform that data into something useful and, indeed, valuable.
As AI becomes more pervasive, transparency and explainability are critical in providing reassurance that decisions reached through the use of AI are sound and free from bias. Decision-making is being overhauled by an explosion in the volume of available data and the growing power of machine learning algorithms.
While it is true that data can enable us to see where bias is happening and to measure whether our efforts to combat it are effective, it could also, depending on how it is deployed, make problems worse. There are, sadly, multiple examples of algorithms amplifying existing biases. We now know that algorithmic discrimination can arise when a system is designed, or trained on data, in a way that leads it to treat a certain demographic unfairly. It is vital that the algorithms themselves, as well as the data on which they depend, are carefully designed, developed, and managed to avoid unwanted and negative consequences. Ensuring that algorithms are free from bias and that results are suitably validated is crucial.
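To make that validation concrete, here is a minimal sketch of one common audit: comparing the rate at which a model approves people from different demographic groups. The decision records, group labels, and the credit-scoring scenario are all hypothetical, invented purely for illustration; a real audit would run on production logs and a far richer set of checks.

```python
# A minimal sketch of auditing model decisions for group bias.
# All records below are hypothetical, invented for illustration.

from collections import defaultdict

def selection_rates(decisions):
    """Compute the approval rate per demographic group.

    decisions: iterable of (group, approved) pairs.
    """
    totals = defaultdict(int)
    approvals = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            approvals[group] += 1
    return {g: approvals[g] / totals[g] for g in totals}

# Hypothetical outputs from a credit-scoring model.
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

rates = selection_rates(decisions)
print(rates)  # {'group_a': 0.75, 'group_b': 0.25}

# Disparate impact ratio: lowest selection rate divided by the highest.
# A common rule of thumb (the "four-fifths rule") flags ratios below
# 0.8 for further investigation.
ratio = min(rates.values()) / max(rates.values())
print(f"disparate impact ratio: {ratio:.2f}")  # 0.33 -> worth investigating
```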
The buck stops with all of us
Fairness is, of course, highly subjective. Even before the advent of AI there were many different interpretations and definitions of exactly what we mean when we talk about “fairness”. Now that complex algorithms are being applied to decision-making systems, it comes as no surprise to learn that these definitions have multiplied. We need technical expertise to help us understand and work within the available definitions and choices, but fundamentally the decision about how we ensure that AI operates fairly is one for society as a whole to navigate – it is not a question for us to pose to data scientists and then forget about. Decisions around the use of AI only gain legitimacy if they are accepted by society as a whole.
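A small example shows why these definitions cannot simply be delegated. The toy records below are hypothetical, but they illustrate how one set of predictions can satisfy one popular definition of fairness (demographic parity) while failing another (equal opportunity) – choosing between them is a value judgment, not a purely technical one.

```python
# A minimal sketch showing two fairness definitions disagreeing on
# the same predictions. All records are hypothetical.

def rate(values):
    """Fraction of truthy values in a list."""
    return sum(values) / len(values) if values else 0.0

def positive_rate(records, group):
    """How often the model predicts positive for this group."""
    return rate([pred for g, _, pred in records if g == group])

def true_positive_rate(records, group):
    """Among people who truly qualify, how often the model says yes."""
    return rate([pred for g, actual, pred in records if g == group and actual])

records = [
    # (group, actual_outcome, model_prediction) -- invented values
    ("a", 1, 1), ("a", 1, 1), ("a", 0, 0), ("a", 0, 0),
    ("b", 1, 1), ("b", 1, 0), ("b", 1, 0), ("b", 0, 1),
]

# Demographic parity: both groups receive positive predictions at the
# same rate, so the model looks "fair" by this definition.
print(positive_rate(records, "a"), positive_rate(records, "b"))  # 0.5 0.5

# Equal opportunity: among people who truly qualify, group "b" is far
# less likely to get a positive prediction -- "unfair" by this one.
print(true_positive_rate(records, "a"))  # 1.0
print(true_positive_rate(records, "b"))  # ~0.33
```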
The role of the tech industry
The tech industry's understanding of the implications of AI is rapidly maturing, as are the relevant regulations and policies. We already have robust regulations in place (e.g., GDPR) which govern data, but moving forwards we will see regulations governing the AI models themselves and the algorithms behind them. There are also technological advancements being made that will make AI technology less inherently problematic, including Privacy Enhancing Technologies (PETs). PETs ensure that encrypted data can be used without losing its value or, crucially, needing to be decrypted. This lack of decryption is key, as the privacy and integrity of the data remain intact. From a privacy perspective it is particularly exciting to look forward to the way technology will remove opportunities for human error, ensuring the AI technology of the future is compliant by design.
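One building block behind many PETs is homomorphic encryption, which allows computation directly on ciphertext. The sketch below assumes the open-source python-paillier (`phe`) library is installed, and the salary figures are invented for illustration: a third party computes an aggregate over encrypted values it can never read.

```python
# A minimal sketch of additively homomorphic encryption, one PET
# building block: computing on data without decrypting it.
# Assumes `pip install phe`; the salary figures are invented.

from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# A data holder encrypts sensitive values before sharing them.
salaries = [42_000, 55_000, 61_000]
encrypted = [public_key.encrypt(s) for s in salaries]

# A third party can sum the ciphertexts -- and so compute an average --
# without ever seeing the underlying numbers or holding the private key.
encrypted_total = sum(encrypted[1:], encrypted[0])

# Only the key holder can decrypt the aggregate result.
total = private_key.decrypt(encrypted_total)
print(total / len(salaries))  # 52666.67
```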
Mark Mamone is CTO at digital identity specialist, GBG. An internationally published author, he is a recognized expert in a number of technical domains, including Enterprise Architecture.