Is an AI Bill of Rights enough?
Experts can't agree on exactly how, or how much, AI should be left to solve our problems
In October 2022, I covered the White House's publication of the blueprint for an Artificial Intelligence (AI) Bill of Rights in the US, developed by the Office of Science and Technology Policy (OSTP).
At the time, I raised concerns that these were just plans, and that it could prove very difficult to get any resultant bill through a divided legislature after 2022’s midterms. I felt I could say that without being Nostradamus.
I also noted that the blueprint’s five principles, “Safe and Effective Systems”, “Algorithmic Discrimination Protections”, “Data Privacy”, “Notice and Explanation”, and “Human Alternatives, Consideration, and Fallback”, remained just that: ideas, offered as guidelines for businesses.
Setting the record straight
It remains true that the blueprint is just a plan and, as it turns out, the words “Bill of Rights” in this context are largely bluster.
“Calling it a Bill of Rights gives the wrong impression because it is not equivalent in legal status to the constitutional US Bill of Rights. It is also not an equivalent to the EU’s proposed AI Act. As a result, there are limits to parallels that can be drawn,” said Tom Whittaker, Senior Associate in the Technology team at independent UK law firm Burges Salmon.
“Using the words of the document itself, the US Office of Science and Technology Policy’s AI Bill of Rights is a ‘framework [which] [...] is meant to assist governments and the private sector in moving principles into practice’.”
However, in researching and speaking to experts on this topic for a longer feature, I started by asking whether it was at all realistic to try to pass a bill, and quickly found that, while passage is treated as all but certain, whether the result will please anyone is another matter.
“No matter what happens with the midterms, Congress will come out overwhelmingly in support of the AI Bill of Rights,” said Prashant Natarajan, Vice President of Strategy and Products at “AI democratization” company H2O.ai, which he says consulted directly on the blueprint.
“There is already significant bipartisan interest from both sides, but also a recognition of the overwhelming need for such legislation.”
The “perfect” AI legislation
When asked about the blueprint’s shortcomings, Natarajan wouldn’t be drawn, saying he was overwhelmingly in favor of the blueprint entering law. But not everyone was so generous.
“The blueprint for an AI Bill of Rights follows what I call the law of AI as Risk,” said Orly Lobel, the Warren Distinguished Professor of Law and Director of the Center for Employment and Labor Policy at the University of San Diego, as well as the author of The Equality Machine.
“While the principles themselves (safety, privacy, anti-discrimination) are important, the AI Bill of Rights offers little direction on how to achieve these goals, nor does it recognize some of the internal tensions between principles.”
“For example, privacy can sometimes conflict with the goals of equality and inclusion: we need to collect full and representative data to ensure our algorithms are fair, consistent, and unbiased.”
“Because AI requires better, more representative data to perform accurately and fairly and detect inequities, too much privacy and data minimization can impede the very issues that technology sets out to overcome.”
Privacy, Lobel says, benefits the rich and powerful, and automated systems informed by a healthy data corpus can level the playing field.
“There are already examples of targeted efforts to increase diversity and spread opportunity using online automated ads,” she wrote.
“LinkedIn Recruiter, for one, allows companies to track applicants by gender, making it easier to ensure a balanced applicant pool. Another promising example that is already being integrated in corporate America is software that expands the diversity of applicants through the drafting of job ads.”
“Textio, for example, analyzes job descriptions in real time to help companies increase the percentage of women and minority recruits by avoiding phrasing and formatting that result in exclusions.”
“Its algorithm discovers certain phrases used in ads for job openings—such as sports terms, military jargon like ‘mission critical’ and ‘hero’, ‘aggressive’, ‘competitive’, and phrases like ‘coding ninja’—which result in fewer women and minority applicants. This is a promising use of AI to help diversify the pool of job applicants.”
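Textio’s actual models are proprietary, but the underlying idea Lobel describes can be sketched in a few lines of Python: scan a job ad for phrases known to correlate with narrower applicant pools, and flag them for rewriting. The phrase list and function below are illustrative assumptions, not Textio’s real implementation.

```python
import re

# Illustrative only: a tiny phrase flagger in the spirit of tools
# like Textio. The phrase list is a hypothetical sample; Textio's
# real models are proprietary and learned from outcome data at scale.
EXCLUSIONARY_PHRASES = [
    "mission critical", "hero", "aggressive",
    "competitive", "coding ninja",
]

def flag_exclusionary(ad_text: str) -> list[str]:
    """Return the listed phrases that appear in a job ad."""
    text = ad_text.lower()
    return [
        phrase for phrase in EXCLUSIONARY_PHRASES
        if re.search(r"\b" + re.escape(phrase) + r"\b", text)
    ]

ad = "We want an aggressive coding ninja for a mission critical team."
print(flag_exclusionary(ad))
# ['mission critical', 'aggressive', 'coding ninja']
```

A keyword list alone is obviously not sufficient; the point is that the intervention happens at drafting time, before a single applicant has been put off.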
Lobel raises one example of the importance of digital paper trails facilitating fairness in The Equality Machine: in 2020, after Harvard Business School researchers had created a number of fake profiles for both white and black Airbnb customers, and Twitter had picked up on the findings with the hashtag #AirbnbWhileBlack, the company changed how the app handles user data to make bookings fairer.
Now guests’ profile pictures are shown to Airbnb hosts only after a booking has been confirmed, and guests can opt for an “Instant Book” feature that confirms bookings automatically, so that the whole process is effectively blind.
Airbnb hosts must also now agree to an anti-discrimination policy, and, according to The Equality Machine, the company has removed over 1.5 million users for violating it.
As Lobel notes in the book, the discrimination was uncovered by humans, but set that alongside her point about privacy and the argument becomes clearer: data is required to “debias” automated systems, and privacy for privacy’s sake hurts everyone.
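To make the tension between privacy and fairness concrete, here is a minimal sketch, using assumed toy data, of the simplest possible bias audit: comparing approval rates between two groups. Even this crude check requires the protected attribute that strict data minimization would discard; all names and figures here are hypothetical.

```python
# Hypothetical loan decisions. "group" is the protected attribute
# that strict data minimization would tell us not to collect.
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

def approval_rate(records: list[dict], group: str) -> float:
    """Share of applicants in `group` whose applications were approved."""
    subset = [r for r in records if r["group"] == group]
    return sum(r["approved"] for r in subset) / len(subset)

# Demographic parity gap: the difference in approval rates.
gap = approval_rate(decisions, "A") - approval_rate(decisions, "B")
print(f"Approval rate gap (A minus B): {gap:.2f}")  # 0.33 on this toy data

# Strip out the "group" field and this audit becomes impossible:
# the system may still be biased, but no one can measure it.
```

In other words, detecting discrimination is itself a data-hungry exercise, which is exactly the internal tension Lobel says the blueprint fails to acknowledge.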
Other experts took issue with the blueprint amounting to a non-binding set of guidelines. Rather than seeing it as a means of overcoming biases, some saw it as leaving room for them.
“Frankly, the current draft of the AI Bill of Rights is not useful,” said Muddu Sudhakar, CEO of AI service platform Aisera. “There are no provisions in the AI Bill of Rights to hold companies accountable for violating its requirements. Without consequences, the Bill of Rights is meaningless.”
“The point of having fines/consequences for businesses that violate the AI Bill of Rights is NOT to stifle innovation. Instead it should be a catalyst for the safe and responsible continued development of AI.”
“The current draft of the AI Bill of Rights doesn’t cover ethics. This is a serious gap because if left unchecked, AI could develop (either purposefully or unintentionally) biases based on gender, race, spoken language [and] socioeconomic factors.”
“Similar to data privacy laws, there need to be provisions in place giving consumers visibility and control over their own personal data and how companies use it in AI solutions. Ultimately, AI is by the people, for the people, and the AI Bill of Rights must reflect that.”
AI governance internationally
Ian Liddicoat, Chief Technology Officer and Head of Data Science at mobile advertising platform Adludio, told me that the AI Bill of Rights shows the US playing catch-up on AI regulation with other governments and organizations internationally, and still potentially falling short.
“While the White House’s AI Bill of Rights is well intentioned, it noticeably lags behind similar efforts currently being made by the World Economic Forum, the EU and even UNESCO,” he said.
Echoing Sudhakar, he continued: “For instance, the five principles contained in the US legislation are top level and do not contain much, if any, detail on AI governance and how these policies will be enforced. Rather than dealing with the specifics of AI, it reads more like an extension to existing privacy legislation.”
“Nevertheless,” Liddicoat continued, “now that more than 60 countries, including China, have adopted or are in the process of adopting AI regulatory policies, this draft confirms that the US is late to the party and has ground to make up.”
Many of the experts I’ve been in contact with brought their expertise to bear on the European Union’s AI Act (“a matter of when, not if” it becomes law, according to Whittaker) and the UK’s National AI Strategy.
As the names may imply, the two are taking different approaches. The EU intends to create a universal set of binding regulations, outlawing practices such as predictive policing and “social scoring” systems like the one in place in China.
The UK, meanwhile, is opting for a set of guidelines to be developed and applied by separate regulators per sector, seemingly doing everything it can to avoid legislating in order to drum up business in the post-Brexit landscape.
“The UK is expected to publish a white paper in late 2022 with specific proposals on what, if any, AI-specific regulations will be considered,” said Whittaker.
“However, the UK’s July 2022 policy statement in advance of the white paper indicated that minimal new laws or regulations are expected, and any guidance published by central government will be on a non-statutory footing so that its impact can be assessed.”
I was, however, looking for an even more global picture. Whittaker did mention Canada’s Directive on Automated Decision-Making, which holds the government to account on the way it uses AI to provide services to external parties.
He also mentioned its proposed Artificial Intelligence and Data Act (AIDA), originally tabled in June 2022 and reportedly in an advanced stage, which will require organizations to “identify, assess and mitigate harms” surrounding AI as they affect ordinary Canadians.
However, Liddicoat was the only expert I spoke to who mentioned China's AI governance, and I wanted to hear more.
“China is moving quickly to provide clear guidance and legislation for the development and application of AI through its creation of several key regulatory bodies.”
“The first is the China Academy of Information and Communications Technology, which has developed a certification of trustworthiness in AI systems. The second is a review board within the Ministry of Science and Technology, which rules on the ethical issues that arise from the application of AI. Lastly, the Cyberspace Administration of China has been given a lot of power and budget to define public policies on the use of AI.”
“The very close degree of oversight the ruling Chinese Communist Party has over government ministries, academic institutions and technology companies means it can rapidly place authority in these organizations, something that is hard to replicate within the EU or in the US.”
It will almost certainly irritate Western democracies like the US that China can steam ahead by simply making progress happen, but they can rest easy knowing that their goals are aligned with that great leveler: the military-industrial complex.
Of the US’s AI Bill of Rights, he claimed that some of the vagueness contained in its principles could be “related to sensitivity around AI’s applications in advanced weapons systems”.
He then went on to say that “the Chinese Government’s interest in this area is partly because it is impossible to entirely separate the need for governance in AI from its applications in technologies that will ultimately be deployed by the military.”
America vs. the World
Democracy, as ever, is leading to debate, compromise, and discontent over the best way to govern artificial intelligence. Canada’s AIDA appears similar in scope to the EU’s AI Act in offering “a principles-based, as opposed to a rights-based framework”. But not every expert I spoke to believed this would be helpful.
“The EU act is a legitimate first attempt to regulate AI practices and in this sense should be welcomed,” said Sophie Stalla-Bourdillon, Senior Privacy Counsel and Legal Engineer at data security platform Immuta.
“However, the regulatory method chosen by the EC is problematic for at least two reasons. Firstly, it is not individual or even community rights centric. This is clearly a missed opportunity. The assumption was probably that the GDPR is complementary to the AI Act, but although the domains of these two regulations overlap, they are not identical. What is more, the GDPR is a rather weak instrument for protecting community rights.”
“Secondly, the implementation of the Act implies a delegation of regulatory power to standardization bodies, which are poorly equipped for addressing human-rights challenges.”
She claimed that the US AI Bill of Rights, being designed by and for one country rather than twenty-seven, is better placed to focus on the rights of individuals and communities, but she also criticized the blueprint’s complete lack of regulatory weight. Like Whittaker and Sudhakar, she noted that its primary purpose, much like that of the UK’s strategy, is to assist the American government and the country’s private sector in “moving principles into practice”, rather than holding them accountable.
If the experts are right, an AI Bill of Rights entering US law is all but guaranteed, though it’s unlikely to grow regulatory teeth before it gets there.
H2O.ai’s Prashant Natarajan seemed unconcerned by this, claiming that the bill would bring the minutiae of AI use into public conversation and work to foster the public’s trust in AI.
“[The AI Bill of Rights] is not only going to pass, but you will also see other extensions and protections coming into the national conversation around access, equity, and AI feedback loops—there is a lot in here, and it all matters.”
“Companies and governments are taking decisions using AI, and at scale, today. These can range from financial decisions to healthcare decisions—and are also decisions around voting and gerrymandering.”
“When AI is being adopted this extensively, it is absolutely essential for citizens whose data is being used to not only know that it’s being used, but to control how it’s being used, and to be able to understand and address both the concerns and the opportunities such usage brings.”
“It’s not a question of whether it goes far enough or not: we are happy to see the AI Bill of Rights recognise several areas that we and some of the industry have been focusing on, especially around how to make Responsible AI a reality.”
“The fear and confusion around AI has been so overwhelming that we need to bring to light more of the positive stories. The average American citizen today doesn't always understand what AI is, even though they are absolutely being impacted and influenced by AI on not just a daily basis, but one would argue on a minute-by-minute basis.”
“Every time we engage with media, whether it's television or social media, or we participate in a poll, or we look at the weather, or we apply for a credit card or a bank account or we take a decision on which Emergency Room we want to take our child to, machine learning is playing an important role.”
“So, the question is not what the average American citizen understands right now; that’s almost immaterial. The question is what it needs to be, given how pervasive AI already is today.”
“The answer, if you’re serious about access, equity, and fairness, is education: making the technology understandable to the average person. [...] That way we might escape a repeat of some of the horror stories that we have seen with foreign influence on elections, or with people’s data being under threat and being used by enemy players, and so on.”
While I’m keen to heed Natarajan’s and Lobel’s calls for more positive reporting on artificial intelligence, I don’t think the way to do that is to govern AI by simply trusting organizations to act morally. Nor do I find comfort in governments using concerns about individuals’ privacy and the discrimination they face to lay the groundwork for AI’s application in the military.
Lobel’s case for feeding automated systems data to resolve biases is a compelling one, but would I trust every corporation with my data unbidden? Definitely not.
On that basis, I’d say the AI Bill of Rights’ provision for notice about how data is being used is the most heartening thing about it. However, as Sudhakar notes, it’s all theater until genuinely damaging consequences are in place for the companies that misuse AI: the ones we’re right not to trust.