Big Tech giants Microsoft, Amazon, and IBM have announced they will not sell facial recognition technology to police forces until the government rolls out strict regulations to govern its use.
The brutal killing of 46-year-old George Floyd by a policeman in Minneapolis, Minnesota, has reignited concerns around the use of facial recognition technology by law enforcement authorities. Some facial recognition technologies have already been found to exhibit racial bias when matching photos to images of real people.
Law enforcement authorities believe facial recognition is a novel way to combat crime quickly and effectively. However, with the Minnesota incident reinforcing the widely-held belief that African Americans bear the brunt of racial bias among police officers in the U.S., the use of facial recognition technology by police forces may not enjoy much public support in the coming days.
This week, Big Tech giants Microsoft, Amazon, and IBM have announced they are putting a hold on the sale of facial recognition technologies to police forces until the government puts in place strict laws to regulate the use of such technologies.
"We’re implementing a one-year moratorium on police use of Amazon’s facial recognition technology," said Amazon in a blog post while stressing that it will continue to allow organisations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.
"We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested," it added.
IBM CEO Arvind Krishna, who took over from Ginni Rometty earlier this year, also sent a letter to Congress, stating that "in the context of addressing responsible use of technology by law enforcement, IBM has sunset its general-purpose facial recognition and analysis software products."
"IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies," the company said.
"Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported. Finally, national policy also should encourage and advance uses of technology that bring greater transparency and accountability to policing, such as body cameras and modern data analytics techniques."
Software giant Microsoft also hopped on to the bandwagon, stating that it does not support the use of facial recognition technology that is not regulated by "a strong national law grounded in human rights". "We do not sell our facial recognition technology to U.S. police departments today, and until there is a strong national law grounded in human rights, we will not sell this technology to police," the company said.
Commenting on Big Tech giants joining forces to prevent police forces from carrying out unregulated or poorly-regulated use of facial recognition tech, Paul Bischoff, Privacy Advocate at Comparitech, said: "At this critical moment in our history, now is not the time to empower police with the ability to identify protesters or restrict freedoms of movement and assembly. We need more regulation that stipulates how, when, where, and in what context police are allowed to use face recognition, and with whom the police can share face recognition data."
"Allowing police to purchase face recognition services without oversight could have serious consequences, both predictable and unforeseen. As our study found, the technology has an unintentional but clear racial bias, and improper use can result in high rates of misidentification," he added.