IBM abandons all facial recognition work over potential for misuse
Facial recognition could support mass surveillance and social scoring systems
In a letter delivered to the United States Congress, IBM CEO Arvind Krishna has declared the company will no longer provide any form of general-purpose facial recognition software.
IBM later confirmed it will also halt all research and development associated with the controversial technology, over concerns it could be misused.
The decision, according to Krishna’s letter, was motivated by the potential for facial recognition to facilitate mass surveillance, aggravate racial prejudices and result in miscarriages of justice, as well as by the worldwide protests following the death of George Floyd.
Facial recognition software
While facial recognition technology has evolved dramatically in recent years and has the potential to assist in legitimate police investigations, its application has always been contentious.
Concerns about the potential for mass surveillance and social scoring are compounded by the issue of AI bias, which could see individuals discriminated against based on their physical attributes, a problem that is particularly acute in the context of law enforcement.
Methods for auditing data sets that underpin AI models (including facial recognition software) for bias remain inconsistent and unregulated, increasing the possibility the technology could serve to further disadvantage minority demographics.
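To illustrate what even a rudimentary bias audit involves (the article does not describe IBM's own methodology), below is a minimal sketch in Python: it compares false-match rates across demographic groups in a hypothetical log of face-matching decisions and flags large disparities. The group labels, sample data and 2x disparity threshold are illustrative assumptions, not any vendor's actual process.

```python
# Minimal sketch of a per-group bias audit for a face-matching system.
# All data, group labels and thresholds are illustrative, not IBM's methodology.
from collections import defaultdict

def false_match_rates(records):
    """records: iterable of (group, predicted_match, actual_match) tuples.
    Returns the false-match rate per demographic group, i.e. how often
    non-matching face pairs were wrongly accepted as matches."""
    negatives = defaultdict(int)      # non-matching pairs seen per group
    false_matches = defaultdict(int)  # non-matching pairs wrongly accepted
    for group, predicted, actual in records:
        if not actual:
            negatives[group] += 1
            if predicted:
                false_matches[group] += 1
    return {g: false_matches[g] / n for g, n in negatives.items() if n}

if __name__ == "__main__":
    # Hypothetical audit log: flag any group whose error rate is more
    # than double that of the best-performing group.
    sample = [
        ("group_a", True, False), ("group_a", False, False),
        ("group_a", False, False), ("group_a", False, False),
        ("group_b", True, False), ("group_b", True, False),
        ("group_b", False, False),
    ]
    rates = false_match_rates(sample)
    best = min(rates.values())
    for group, rate in rates.items():
        flag = "DISPARITY" if rate > 2 * best else "ok"
        print(f"{group}: false-match rate {rate:.2f} [{flag}]")
```

Standardizing and regulating checks of this kind, and the reporting of their results, is precisely the gap critics point to.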
“IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values,” wrote Krishna.
“We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
In 2018, IBM published a diversity-optimized data set for public use, designed to minimize bias in facial recognition products. But its latest announcement suggests the firm has reevaluated the viability of bias-free facial recognition software.
Krishna is not proposing a blanket abandonment of AI, which he sees as pivotal to the future success of business; rather, he reiterated earlier calls for transparency and responsible use.
“Artificial intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported,” he said.
Via The Verge