AI governance is now an even bigger problem, as most governance tools are “limping along”

(Image: The lady of justice comes for AI. Credit: Future / James Cutler)

A report has uncovered that many AI governance tools are ineffective at measuring the fairness and explainability of AI systems because they rely on “faulty fixes”.

Many of these tools, developed by companies such as Microsoft, Google and IBM, are used by governments to determine the fairness and accountability of AI systems.

But a report released by the World Privacy Forum claims that many of these tools are often used improperly, owing to a lack of specific instructions on their use and an absence of guidelines and requirements for quality assurance.

Faulty tools and a rickety framework

The report reviewed 18 governance tools designed to reduce the risk of AI bias and inaccuracy, and found that a lack of regulation, frameworks, and baseline requirements meant many of the tools are being used incorrectly.

“Most of the AI governance tools that are in use today are kind of limping along,” noted Pam Dixon, founder and executive director of the World Privacy Forum. “One big problem is that there’s no established requirements for quality assurance or assessment. There are no instructions as to the context that it is supposed to be used for, or even a conflict of interest notice.”

A number of scholars interviewed as part of the report criticized AI governance tools that “mention, recommend, or incorporate off-label uses of potentially faulty or ill-suited tools,” which could compromise AI fairness and explainability. The report also noted that a number of governance tools feature disparate impact benchmarks that only apply in specific contexts.

One such example is the Four-Fifths rule, widely recognized in the US employment field as a measure of the fairness of recruitment selection processes: a process is flagged for potential disparate impact if any protected group’s selection rate falls below four-fifths (80%) of the rate for the most-selected group. However, a 2019 study found that the rule had been coded into a number of tools used to measure AI fairness in contexts with no relation to employment, without regard for how the threshold behaves outside its original setting.
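For context, here is a minimal Python sketch of how such a check is typically implemented in fairness toolkits; the function names and data layout are illustrative assumptions, not taken from any of the reviewed tools.

```python
# Illustrative sketch of the Four-Fifths (80%) rule as commonly coded
# into fairness toolkits; names here are hypothetical, not from any
# specific vendor's tool.

def selection_rates(outcomes):
    """Selection rate (share of positive decisions) per group.

    outcomes: dict mapping group name -> list of 0/1 selection decisions.
    """
    return {
        group: sum(decisions) / len(decisions)
        for group, decisions in outcomes.items()
    }

def four_fifths_rule(outcomes, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` (4/5)
    of the most-selected group's rate -- the US employment guideline's
    test for evidence of disparate impact.

    Assumes at least one group has a nonzero selection rate.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {group: rate / best >= threshold for group, rate in rates.items()}

# Example: 60% vs. 30% selection rates -> ratio 0.5, below the 0.8 line.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 0, 1, 1, 0],  # 6/10 selected
    "group_b": [1, 0, 0, 1, 0, 0, 0, 1, 0, 0],  # 3/10 selected
}
print(four_fifths_rule(decisions))
# {'group_a': True, 'group_b': False} -- group_b fails the 80% test
```

The report’s criticism is precisely that the 0.8 threshold encodes a US employment-law convention; embedding it as a generic fairness test in unrelated domains silently imports that context.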

The report found that “standards and guidance for quality assessment and assurance of AI governance tools do not appear to be consistent across the AI ecosystem.” The lack of universal quality assurance means there are significant disparities in how AI governance tools are regulated, and the report stated that more needs to be done “to build an evaluative AI governance tools environment that facilitates validation, transparency, and other measurements.”

The report concluded that “incomplete or ineffective AI governance tools can create a false sense of confidence, cause unintended problems, and generally undermine the promise of AI systems.”

Via VentureBeat

Benedict Collins
Staff Writer (Security)

Benedict has been writing about security issues for over seven years, first focusing on geopolitics and international relations while at the University of Buckingham, where he studied BA Politics with Journalism and received second-class honours (upper division) before earning a distinction in MA Security, Intelligence and Diplomacy. Upon joining TechRadar Pro as a Staff Writer, Benedict shifted his focus to cybersecurity, covering state-sponsored threat actors, malware, social engineering, and national security. He is also an expert on B2B security products, including firewalls, antivirus, endpoint security, and password management.