Face recognition can be a valuable tool that makes our lives easier and safer, but it has to be done right.

Guest Commentary

October 8, 2018


Face recognition – a burgeoning technology that promises a plethora of consumer and business benefits and is projected by one analyst firm to be a $9.78 billion market by 2023 – appeared to take it on the chin in late July with a negative report by the American Civil Liberties Union that received wide media attention and dramatic headlines.

The ACLU said it used Amazon’s face recognition tool, Rekognition, to compare photos of congressional members against a database of publicly available mugshots. The software incorrectly identified 28 of them, including six members of the Congressional Black Caucus, as people who have been arrested for a crime.

“These results demonstrate why Congress should join the ACLU in calling for a moratorium on law enforcement use of face surveillance,” the organization said. Several lawmakers also raised questions about the technology in letters to Amazon CEO Jeff Bezos.

The ACLU’s findings are a good reminder that face recognition systems are not foolproof, and they highlight challenges that experts in the field must continue to confront. The report is also a textbook example of what can go wrong when an emerging technology like face recognition is improperly administered, as it threatens to unfairly cast a shadow over one of the most useful and exciting scientific advancements of recent years.

In the wake of the public discussion around the ACLU report, however, some unpacking needs to be done.

First, do all face recognition systems have an inherent bias against darker-skinned people? No, but some certainly do. That bias arises from the data used to train a given system, as well as the context in which it is used.

The same goes for other variables that can throw the system off, such as people not looking directly at the camera or wearing glasses or a hat. Ethnicity is just one source of “variability” that can degrade accuracy.

That leads to a second question: Were the results from the ACLU test reliable?

Without knowing more details about the makeup of the 25,000 arrest photos in the ACLU’s test or how the Rekognition model was trained, it’s hard to pinpoint exactly what went wrong. What is certain is that Rekognition is an all-purpose face recognition system – a McDonald’s of face recognition, if you will – rather than a finely tuned solution designed specifically for the rigors of mission-critical applications like law enforcement, banking, and other areas where accuracy, speed, security, and flexibility are essential.

Another crucial thing to understand about the ACLU test is that, under the covers, face recognition does not deal in absolutes, but in probabilities – i.e., there’s an X percent chance this image matches that one.

Indeed, an Amazon spokesperson said the ACLU’s tests were performed using Rekognition’s default confidence threshold of 80%, not the 95% or higher threshold that Amazon apparently recommends for law enforcement applications. “While 80% confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases,” the spokesperson said, “it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty.”
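To make the threshold mechanics concrete, here is a minimal sketch in Python using boto3, the AWS SDK, of what raising that confidence bar looks like in practice. The bucket and file names are hypothetical placeholders, not data from the ACLU test; the point is simply that SimilarityThreshold=95 discards the lower-confidence matches that the default of 80 would report.

```python
import boto3

# Hypothetical sketch: compare a probe photo against one candidate image.
# Bucket and file names are placeholders for illustration only.
client = boto3.client("rekognition", region_name="us-east-1")

response = client.compare_faces(
    SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
    TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "candidate.jpg"}},
    # The default threshold is 80; Amazon reportedly recommends 95 or
    # higher for law enforcement. Matches scoring below this are dropped.
    SimilarityThreshold=95,
)

for match in response["FaceMatches"]:
    print(f"Possible match, similarity {match['Similarity']:.1f}%")

if not response["FaceMatches"]:
    print("No match at or above the 95% threshold")
```

A face pair that scores, say, 85% similarity would be returned as a "match" under the default setting but filtered out entirely at 95% – which is why the choice of threshold, not just the underlying model, shapes how many false identifications a deployment produces.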

Regardless of the potential flaws in the ACLU’s report, as I said earlier, the broader point of the report is correct: Racial bias is a real concern in face recognition, and it needs to be addressed. So what’s the answer?

Microsoft recently became the first large tech company to call for regulation of face recognition technology. The right regulation could be appropriate, but regulation isn’t going to lead to better accuracy or eliminate racial biases in face recognition systems.

Fortunately, there are existing ways to test the accuracy of different systems. For example, the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) already runs the Face Recognition Vendor Test to gauge the accuracy of various face recognition systems in different scenarios. But because these tests are voluntary, not everyone uses them or is even aware of them. It would behoove all face recognition companies to start employing them. And there needs to be a constant back-and-forth conversation about where the systems are failing and how the face recognition models can be retrained for the specific application; one simple form of that measurement is sketched below.
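As an illustration of the kind of measurement such testing involves, here is a minimal, self-contained sketch that computes a false match rate per demographic group at different thresholds. The data is invented for illustration; real evaluations like NIST’s use millions of image pairs, but the principle is the same: impostor pairs (photos of two different people) should score below the threshold, and any that don’t are false matches.

```python
from collections import defaultdict

# Invented, illustrative data: impostor pairs (two different people),
# with the similarity score the matcher returned and a demographic
# label for the probe photo.
impostor_pairs = [
    {"similarity": 83.2, "group": "A"},
    {"similarity": 71.5, "group": "A"},
    {"similarity": 62.8, "group": "A"},
    {"similarity": 91.0, "group": "B"},
    {"similarity": 88.7, "group": "B"},
    {"similarity": 64.3, "group": "B"},
]

def false_match_rate(pairs, threshold):
    """Fraction of impostor pairs wrongly accepted at a threshold, per group."""
    totals = defaultdict(int)
    errors = defaultdict(int)
    for pair in pairs:
        totals[pair["group"]] += 1
        if pair["similarity"] >= threshold:  # accepted a non-match
            errors[pair["group"]] += 1
    return {group: errors[group] / totals[group] for group in sorted(totals)}

for threshold in (80, 95):
    print(f"threshold={threshold}:", false_match_rate(impostor_pairs, threshold))
```

If one group’s false match rate stays elevated at a threshold where another’s drops to zero, that gap is precisely the kind of failure the retraining conversation should target.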

Face recognition can be a valuable tool that makes our lives easier and safer, but it has to be done right. Focusing on accuracy is a great start.

Bhargav Avasarala is Machine Learning Lead at Ever AI, a face recognition platform.

