AI governance is now an even bigger problem, as most governance tools are “limping along”


A report has uncovered that many AI governance tools are ineffective in measuring the fairness and explainability of AI systems due to “faulty fixes”.

Many of these tools, developed by companies such as Microsoft, Google and IBM, are used by governments to determine the fairness and accountability of AI systems.

But a report released by the World Privacy Forum claims many of these tools are often used improperly, owing to a lack of specific instructions on how they should be used and an absence of guidelines and requirements for quality assurance.

Faulty tools and a rickety framework

The report reviewed 18 governance tools designed to reduce the risks of AI bias and inaccuracy, and found that a lack of regulation, frameworks, and baseline requirements means many of the tools are being used incorrectly.

“Most of the AI governance tools that are in use today are kind of limping along,” noted Pam Dixon, founder and executive director of the World Privacy Forum. “One big problem is that there’s no established requirements for quality assurance or assessment. There are no instructions as to the context that it is supposed to be used for, or even a conflict of interest notice.”

A number of scholars interviewed as part of the report criticized AI governance tools that “mention, recommend, or incorporate off-label uses of potentially faulty or ill-suited tools,” which could compromise AI fairness and explainability. The report also noted that a number of governance tools feature disparate impact benchmarks that only apply in specific contexts.

One such example is the Four-Fifths rule, which is widely recognized in the US employment field as a measure of the fairness of recruitment selection processes: the selection rate for any protected group should be at least four-fifths (80%) of the rate for the most-selected group. However, a 2019 study found that the rule had been coded into a number of tools used to measure AI fairness in contexts with no relation to employment, without regard for the potential impact on the systems being assessed.
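For readers unfamiliar with the rule, the calculation itself is simple. Below is a minimal Python sketch; the function name and hiring figures are hypothetical illustrations, not taken from the report or the 2019 study:

```python
# Illustrative sketch of the Four-Fifths (80%) rule as applied in US
# employment selection analysis: each group's selection rate should be
# at least four-fifths of the highest group's rate.

def four_fifths_check(selected: dict[str, int], applicants: dict[str, int]) -> dict[str, bool]:
    """Return, per group, whether its selection rate clears the 80% threshold."""
    rates = {group: selected[group] / applicants[group] for group in applicants}
    highest = max(rates.values())
    return {group: rate >= 0.8 * highest for group, rate in rates.items()}

# Hypothetical hiring data: group B's rate (25%) is half of group A's (50%),
# so B falls below the four-fifths threshold, flagging potential disparate impact.
print(four_fifths_check(
    selected={"A": 50, "B": 25},
    applicants={"A": 100, "B": 100},
))  # {'A': True, 'B': False}
```

The 80% threshold is a convention from US employment law rather than a general fairness criterion, which is why hard-coding it into tools used in unrelated contexts can produce misleading results.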

The report found that “standards and guidance for quality assessment and assurance of AI governance tools do not appear to be consistent across the AI ecosystem.” The lack of universal quality assurance means there are significant disparities in how AI governance tools are regulated, and the report stated that more needs to be done “to build an evaluative AI governance tools environment that facilitates validation, transparency, and other measurements.”

The report concluded: “Incomplete or ineffective AI governance tools can create a false sense of confidence, cause unintended problems, and generally undermine the promise of AI systems.”

Via VentureBeat


Benedict Collins
Staff Writer (Security)

Benedict has been writing about security issues for over seven years, first focusing on geopolitics and international relations while at the University of Buckingham. During this time he studied for a BA in Politics with Journalism, for which he received second-class honours (upper division), before continuing his studies at postgraduate level and achieving a distinction in MA Security, Intelligence and Diplomacy. Upon joining TechRadar Pro as a Staff Writer, Benedict shifted his focus towards cybersecurity, exploring state-sponsored threat actors, malware, social engineering, and national security. Benedict is also an expert in B2B security products, including firewalls, antivirus, endpoint security, and password management.

Read more
Innovation in AI is in danger of outpacing governance
Enterprises aren’t aligning AI governance and AI security. That’s a real problem
How to beat ‘shadow AI’ across your organization
What is AI bias? Almost everything you should know about bias in AI results
Shadow AI: the hidden risk of operational chaos
Balancing innovation and security in an era of intensifying global competition