IBM Backing Out of the Facial Recognition Business to Tackle Racial Injustice
On May 25th, 2020, Minneapolis police killed an African-American man named George Floyd. Within days, bystander video of this brutal killing in broad daylight had flooded every media platform and reached every part of the world, enraging everyone who saw it. It showed a law enforcement officer kneeling on the neck of this innocent man as he repeatedly cried that he couldn't breathe. People at the scene videotaped the entire incident, and social media did the rest. Soon millions took to the streets to protest against racism; riots broke out, and some protestors burned and wrecked everything in their path. The Black Lives Matter movement surged in response, protesting police brutality and all other forms of racism. Instead of supporting the individuals fighting for their rights, some law enforcement units identified and arrested them using the power of Artificial Intelligence and facial recognition.
IBM out of the Facial Recognition Business
In a letter to Congress, IBM's CEO, Arvind Krishna, urged lawmakers to play their part in abolishing racist practices and announced that IBM would take down its facial recognition services: the company would halt research on facial recognition technology, and the services already available would be shut down. The decision was taken amidst rising racial discrimination and the criticism the company had received over racial and gender bias in such systems.
Krishna said in his letter, “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.” Krishna further said, “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency”.
Apart from this, he also called for expanding training and education, especially P-TECH, a school program sponsored by the company that prepares students for high school, college, and working life. He added that over 150,000 students across 220 schools, many in educationally underserved areas, are currently enrolled. From Brooklyn to Chicago, Dallas to Baltimore, these schools are creating real opportunities and jobs for young people today. Krishna also advocated for the expansion of Pell Grants, the federal subsidies for students in financial need.
Amazon’s use of facial recognition
Last year, Amazon's shareholders voted on whether the company's facial recognition business should be curtailed; only 2.4% of shareholders voted in favor of restricting the sale of the technology to governments and other powerful institutions. A person close to the matter, however, said the facial recognition software had not generated much income for the company the previous year and that Amazon was considering shutting it down, though it had to weigh the disruption this would cause its customers, the major one being the government.
What is Facial Recognition Technology?
Facial recognition is a technology that extracts faces from images or video and matches them against pictures stored in a database in order to identify an individual. The technology first became widespread in cellphones, but with the field's rapid growth, companies have managed to build it into other high-tech devices. Facial recognition is now used not only for locating criminals through security systems but also in robots to support autonomous operation.
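As a rough sketch of the idea described above, a typical modern pipeline converts each face image into a numeric embedding vector and compares vectors by similarity. The function names, the 128-dimensional random "embeddings", and the 0.8 threshold below are illustrative assumptions for this article, not any vendor's actual implementation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, enrolled: np.ndarray, threshold: float = 0.8) -> bool:
    """'One-to-one' check: does the probe image match one enrolled identity?"""
    return cosine_similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold: float = 0.8):
    """'One-to-many' search: best-matching identity in the gallery, or None."""
    best_name, best_score = None, threshold
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score >= best_score:
            best_name, best_score = name, score
    return best_name

# Demo with synthetic 128-dimensional "embeddings" (a real system would
# obtain these from a trained face-recognition model, not random numbers).
rng = np.random.default_rng(0)
alice = rng.normal(size=128)
bob = rng.normal(size=128)
gallery = {"alice": alice, "bob": bob}
```

Security unlock on a phone is the one-to-one case (`verify`), while searching a suspect photo against a police database is the one-to-many case (`identify`); the bias concerns discussed in this article apply to both.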
Other biometrics, such as iris recognition, work on the same basic principle. "Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported," Krishna wrote. The CEO said he is now focused more on building the company's cloud services.
Authorities these days are also using tools such as Clearview AI for facial recognition. Clearview AI is an app with a database of over three billion images scraped from Facebook, Twitter, and even Venmo, storing people's personal information without their consent.
The app's users range from stalkers to agencies like the FBI. Earlier this year, The New York Times published an exposé revealing that the FBI and other law enforcement agencies had used the app as a surveillance tool, and the company faced tremendous backlash as a result.
Biased Facial Recognition by Clearview AI
Facial recognition software has also been found to produce false positives for Asians and African-Americans, according to NIST. According to a report by the National Institute of Standards and Technology published in 2019, "The majority of face recognition algorithms exhibit demographic differentials." In the study, NIST evaluated 189 software algorithms from 99 developers, covering both of the matching tasks most tech companies build on: 'one-to-one' verification (such as facial unlock on cellphones) and 'one-to-many' search (one face photo checked against a database of millions). "The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm," the authors noted. They also saw higher rates of false positives for African-American women, which is "particularly important because the consequences could include false accusations." In other words, a biased algorithm misidentifies faces at different rates depending on gender, race, and age.
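The "demographic differential" NIST describes is, at bottom, a comparison of error rates across groups. The toy computation below shows how a per-group false positive rate could be measured; the trial records are fabricated purely to illustrate the metric and do not reflect any real algorithm's results:

```python
from collections import defaultdict

# Illustrative non-match trials: each record compares photos of two
# *different* people; "false_match" marks whether the algorithm wrongly
# declared them the same person (a false positive). Data are made up.
trials = [
    {"group": "lighter-skinned", "false_match": False},
    {"group": "lighter-skinned", "false_match": False},
    {"group": "lighter-skinned", "false_match": False},
    {"group": "lighter-skinned", "false_match": True},
    {"group": "darker-skinned", "false_match": True},
    {"group": "darker-skinned", "false_match": True},
    {"group": "darker-skinned", "false_match": False},
    {"group": "darker-skinned", "false_match": True},
]

def false_positive_rates(trials):
    """Per-group false positive rate: wrong matches / total non-match trials."""
    counts = defaultdict(lambda: [0, 0])  # group -> [false positives, trials]
    for t in trials:
        counts[t["group"]][1] += 1
        counts[t["group"]][0] += t["false_match"]
    return {group: fp / n for group, (fp, n) in counts.items()}

rates = false_positive_rates(trials)
```

A "factor of 10" differential in NIST's phrasing would mean one group's rate is ten times another's; auditing a system amounts to running many such trials per group and comparing the resulting rates.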
In another study of facial recognition algorithms from companies such as IBM, Microsoft, and Face++, researchers found that lighter-skinned individuals were recognized with better precision, while darker-skinned individuals were misidentified far more often. According to the results of the study, IBM and Face++ achieved only about 65% accuracy on darker skin tones. IBM released a statement saying that it would improve its Watson Visual Recognition platform and provide better results in the future.
Artificial Intelligence and Justice
In 2016, ProPublica published an analysis showing that a widely used system for calculating criminal risk scores was so racially flawed that its scores were "written in a way that guarantees black defendants will be inaccurately identified as future criminals more often than their white counterparts." Krishna also addressed police injustice in his letter to Congress: "Congress should bring more police misconduct cases under federal court purview and make modifications to the qualified immunity doctrine that prevents individuals from seeking damages when police violate their constitutional rights."
Further, Jade Thirlwall, a member of the famous girl band Little Mix, spoke about her fight against racism and described how Microsoft's Artificial Intelligence news editor mistook her for her bandmate Leigh-Anne Pinnock.