Defining fairness: How IBM is tackling AI governance
Enterprises are hesitant to adopt AI, but IBM wants to help
Enterprises are hesitant to adopt AI solutions due to the difficulty of balancing the cost of governance against the risks posed by large language models (LLMs), such as hallucinations, data privacy violations, and the potential for models to output harmful content.
One of the most difficult challenges in adopting LLMs is specifying to the model what counts as a harmful answer, but IBM believes it can help firms everywhere improve the situation.
Speaking at an event in Zurich, Elizabeth Daly, Senior Technical Staff Member (STSM) and Research Manager of the Interactive AI Group at IBM Research Europe, highlighted that the company is looking to develop AI that developers can trust, noting, “It's easy to measure and quantify clicks, it's not so easy to measure and quantify what is harmful content.”
Detect, Control, Audit
Generic governance policies are not enough to control LLMs, so IBM is looking to develop LLMs that use the law, corporate standards, and the internal governance of each individual enterprise as a control mechanism, allowing governance to go beyond corporate standards and incorporate the ethics and social norms of the country, region, or industry in which the model is used.
These documents can provide context to an LLM, and can be used to ‘reward’ the model for remaining relevant to its current task. This allows an innovative level of fine-tuning when determining whether an AI is outputting harmful content that may violate the social norms of a region, and can even allow an AI to detect whether its own outputs could be identified as harmful.
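IBM has not published the mechanics of this reward signal, but the general idea can be illustrated with a toy example. The sketch below scores a candidate output by its bag-of-words similarity to a governance document and subtracts a penalty for terms on a hypothetical blocklist; the function names, the similarity heuristic, and the blocklist are all assumptions made for illustration, not IBM's implementation.

```python
# Illustrative sketch only: IBM has not disclosed its reward mechanism.
# A toy scorer that rewards relevance to a governance document and
# penalises terms on a hypothetical blocklist.
from collections import Counter
import math

def _bow(text: str) -> Counter:
    """Lower-cased bag-of-words representation of a text."""
    return Counter(text.lower().split())

def _cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def governance_reward(output: str, policy_doc: str, blocklist: set[str]) -> float:
    """Score a candidate LLM output: relevance to the policy context,
    minus one point for each blocklisted term it contains."""
    relevance = _cosine(_bow(output), _bow(policy_doc))
    penalty = sum(1.0 for term in blocklist if term in output.lower())
    return relevance - penalty

policy = "Customer data must remain confidential and be processed lawfully."
print(governance_reward("We process customer data lawfully.", policy, {"leak"}))
```

A real system would use a learned reward model rather than keyword overlap, but the shape is the same: the governance document supplies the context, and the score steers fine-tuning toward outputs that stay within it.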
Moreover, IBM has been meticulous in developing its LLMs on trustworthy data, implementing mechanisms that detect, control, and audit for potential biases at each stage of the pipeline. This is in stark contrast to off-the-shelf foundation models, which are typically trained on biased data; even if that data is later removed, the biases can resurface.
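Per-stage auditing of this kind can be sketched as a detector that runs at every step of the data pipeline, so a bias filtered out early cannot silently resurface later. The stage names and the toy detector below are hypothetical, not IBM's pipeline.

```python
# Hypothetical sketch of per-stage bias auditing; stage names and the
# detector heuristic are illustrative, not IBM's actual pipeline.
from typing import Callable

def flag_gendered_terms(record: str) -> bool:
    """Toy detector: flags records containing obviously gendered job titles."""
    return any(term in record.lower() for term in ("chairman", "salesman"))

def audit_stage(stage: str, records: list[str],
                detector: Callable[[str], bool]) -> list[str]:
    """Run a bias detector over one pipeline stage, report what it flags,
    and pass only clean records on to the next stage."""
    flagged = [r for r in records if detector(r)]
    print(f"[{stage}] {len(flagged)}/{len(records)} records flagged")
    return [r for r in records if not detector(r)]

# The same detector runs at every stage, so nothing removed at ingestion
# can reappear unnoticed in the training set.
raw = ["the chairman approved the budget", "the engineer shipped the fix"]
curated = audit_stage("ingestion", raw, flag_gendered_terms)
training = audit_stage("training-set", curated, flag_gendered_terms)
```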
The proposed EU AI Act will link the governance of AI to the intentions of its users, and IBM states that usage is a fundamental part of how it will govern its models, as some users may use its AI for summarization tasks while others use it for classification. Daly states that usage is therefore a “first class citizen” in IBM's model of governance.
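Treating usage as a first-class citizen could, in practice, mean keying governance checks to the declared use case rather than applying one blanket policy. The profile names and fields below are assumptions for illustration, not IBM's schema.

```python
# Illustrative only: governance checks keyed by declared usage, echoing
# the idea that intended use is a "first class citizen". Profile names
# and fields are assumptions, not IBM's schema.
GOVERNANCE_POLICY = {
    "summarization": {"max_output_tokens": 512, "require_source_grounding": True},
    "classification": {"allowed_labels_only": True, "log_decisions": True},
}

def checks_for(usage: str) -> dict:
    """Look up the governance profile for a declared usage; unknown
    usages are rejected rather than silently permitted."""
    if usage not in GOVERNANCE_POLICY:
        raise ValueError(f"usage '{usage}' has no approved governance profile")
    return GOVERNANCE_POLICY[usage]

print(checks_for("summarization"))
```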