IBM has halted all facial recognition development and disapproves of any technology that could enable racial profiling. In the words of CEO Arvind Krishna: “IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency.”
The Problem with Face Recognition
However, according to IBM, this technology may not yet be ready for use in law enforcement. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies,” says Krishna. “Artificial Intelligence is a powerful tool that can help law enforcement keep citizens safe. But vendors and users of AI systems have a shared responsibility to ensure that AI is tested for bias, particularly when used in law enforcement, and that such bias testing is audited and reported.”
Bias in Modern Face Recognition Algorithms
In December 2019, the National Institute of Standards and Technology (NIST) published a study that found large variations in accuracy across numerous contemporary face recognition algorithms.
A false positive occurs when the system reports that a face matches an entry in the database when the two images are in fact of different people. The most glaring issue was that false positive rates were much higher for people of African and Asian descent; conversely, they were lowest for Eastern Europeans. The study found that “This effect is generally large, with a factor of 100 more false positives between countries.”
Additionally, the study found that false positive rates were higher for the faces of women, the elderly, and young children.
A false negative occurs when the system fails to match a face to an entry that actually is in the database. This is especially dangerous at airports and border crossings: if the system checks individual faces against a criminal database, a false negative could allow a dangerous individual past a security check.
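The two error types above come from the same mechanism: the system compares a similarity score against a decision threshold. The sketch below illustrates this with a toy tally of the four possible outcomes; the scores, the 0.80 threshold, and the function name are hypothetical illustrations, not any vendor's actual pipeline.

```python
# Minimal sketch of how false positives and false negatives arise when a
# face-recognition system thresholds a similarity score. All scores and
# the 0.80 threshold are made-up illustrative values.

def classify_matches(pairs, threshold=0.80):
    """For each (similarity, same_person) pair, predict 'match' if the
    score clears the threshold, then tally the four possible outcomes."""
    counts = {"true_pos": 0, "false_pos": 0, "true_neg": 0, "false_neg": 0}
    for similarity, same_person in pairs:
        predicted_match = similarity >= threshold
        if predicted_match and same_person:
            counts["true_pos"] += 1
        elif predicted_match and not same_person:
            counts["false_pos"] += 1  # flags someone who is NOT in the database
        elif not predicted_match and same_person:
            counts["false_neg"] += 1  # misses someone who IS in the database
        else:
            counts["true_neg"] += 1
    return counts

# Hypothetical comparisons: (similarity score, ground truth: same person?)
pairs = [
    (0.95, True),   # correct match
    (0.85, False),  # false positive: an innocent person wrongly flagged
    (0.60, True),   # false negative: a real match missed
    (0.40, False),  # correct non-match
]

counts = classify_matches(pairs)
fpr = counts["false_pos"] / (counts["false_pos"] + counts["true_neg"])
fnr = counts["false_neg"] / (counts["false_neg"] + counts["true_pos"])
print(counts)
print(f"false positive rate={fpr:.2f}, false negative rate={fnr:.2f}")
```

Raising the threshold trades false positives for false negatives and vice versa, which is why the NIST finding matters: if error rates differ by demographic group, no single threshold can be fair to everyone.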
How Will IBM’s Boycott on Face Recognition Tech Affect the Industry?
IBM has been at the forefront of machine learning development for a long time. Their natural language processing system, IBM Watson, is regarded as one of the best question-and-answer systems in the world.
Positive Impact of IBM’s Stance on Facial Recognition
When such a large company takes a strong position on facial recognition, it is bound to make waves. In fact, IBM’s decision already seems to be causing a domino effect in the industry.
On the other hand, other companies may take advantage of IBM halting facial recognition development, since there is now more room in the market for competitors and startups to move in. The worst-case scenario is that nothing changes: companies continue developing face recognition algorithms to sell to law enforcement agencies without worrying about the consequences.
Ideally, other developers will start investing more time and money to ensure that their algorithms are free of bias. Hopefully, more companies will take a step back to study the effects of emerging technologies before releasing them.
IBM’s CEO believes that the biases present in modern facial recognition algorithms warrant a complete halt of their use. He is calling on governments and vendors to ensure that their algorithms are free of bias before they are deployed in vital areas of society, such as law enforcement.
Many AI technologies are emerging faster than governments are able to regulate. Hopefully, IBM’s stance paves the way for other large tech companies and startups to think about ethics before advancement.