AI facial recognition algorithms are far from perfect. Here's what you should know if your enterprise is considering deploying this technology.

Jessica Davis, Senior Editor

February 11, 2019

5 Min Read

Shelf-mounted cameras paired with artificial intelligence facial recognition software that can estimate a person's age, gender, and ethnicity were among the emerging systems pitched to retail companies during this year's National Retail Federation Big Show in New York in January.

The idea was to give physical stores demographic information that could guide how they market to individual customers, potentially handing them a competitive edge against online retailers such as Amazon, which have been leveraging customer data all along.

But using cameras to capture photos of your customers in a way they may not even notice seems to cross the line between cool technology and creepy technology. Is it too invasive? Beyond privacy, there could be other problems. What if the software misidentifies a man as a woman and offers him a discount on feminine hygiene products? What are the consequences?

The consequences may not be hugely significant in the retail setting. A customer could get miffed, talk about it on social media, and not go back to that store for a while. But could the consequences be higher in other applications of machine vision and AI-driven facial recognition software?

It turns out there is a great deal of concern about AI facial recognition software, which is commercially available from a number of big vendors, including Microsoft, IBM, and Amazon.

Most recently, the focus has been on a study showing that some commercial algorithms are not as accurate at identifying darker-skinned people and women as they are at identifying lighter-skinned men. It's a topic that has been covered before. For instance, in July 2018 the American Civil Liberties Union (ACLU) applied an Amazon algorithm to photos of members of the US Congress, and the algorithm incorrectly matched 28 of them to mugshots of people who had been arrested for a crime.

More recently, a study from the MIT Media Lab asks whether public audits of these commercial algorithms prompt vendors to improve their accuracy.

The study was co-authored by MIT graduate student Joy Buolamwini, who is also the founder of The Algorithmic Justice League, an organization that describes itself as dedicated to fighting bias in algorithms. The study found that these algorithms are best at identifying lighter-skinned men and perform worse when identifying women or darker-skinned people. It also notes that some vendors improved their algorithms after these results were pointed out to them. Amazon's response to the study is captured in this New York Times article, and Buolamwini has posted her ongoing statements on the issue, and on the vendors' responses, to Medium.

In machine learning, results can be biased or inaccurate depending on the volume and type of training data used. For instance, Amazon used machine learning to screen the resumes of job applicants and ended up with a pool of mostly male candidates, probably because the historical pool of data used to train the algorithms included more men than women.

By adding more data or data sources to the pool used to train the algorithm, vendors may have improved the accuracy of their AI facial recognition systems.
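To see the mechanics of that failure mode, consider a toy sketch in Python using scikit-learn. It recreates the imbalance problem in miniature, not any vendor's actual system: a classifier is trained on data where one group outnumbers the other nine to one, so the learned decision boundary tracks the majority group.

```python
# Toy illustration of training-data imbalance -- not any vendor's system.
# One group dominates the training set, so the model fits that group's
# decision boundary and accuracy on the underrepresented group suffers.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, boundary):
    # One feature; the true class boundary differs between the two groups.
    x = rng.normal(size=(n, 1)) * 2
    y = (x[:, 0] > boundary).astype(int)
    return x, y

X_a, y_a = make_group(900, 1.0)   # majority group: 900 training examples
X_b, y_b = make_group(100, -1.0)  # minority group: only 100 examples

model = LogisticRegression().fit(np.vstack([X_a, X_b]),
                                 np.concatenate([y_a, y_b]))

# Evaluate on fresh samples drawn from each group: the majority group
# scores well while the underrepresented group lags noticeably.
print("majority-group accuracy:", model.score(*make_group(5000, 1.0)))
print("minority-group accuracy:", model.score(*make_group(5000, -1.0)))
```

Rebalancing the training pool by adding more minority-group examples narrows that gap, which is consistent with vendors improving accuracy by broadening their training data.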

Yet vendors also offer a kind of safety valve against the imperfection of these algorithms: the systems allow organizational customers to set a threshold, or confidence level, which can be tuned to the type of action the organization plans to take on the results.

The retail systems on display at NRF demonstrated how that works. On gender, for instance, a system may decide that someone is male, but it will also provide a confidence score, essentially saying it is 67% (or some other percentage) certain the person is male. The retailer has already set the confidence score it is willing to accept. So if someone is identified as male with a 67% confidence score and the retailer has set the threshold at 60%, the customer will see the offer customized for a man. If the retailer sets the threshold at 70%, the 67% score would fall short and the customer would see a generic offer that could be made to any customer, male or female.

If the stakes are high, for example in a law enforcement application where someone's life trajectory may change, the organization may set the threshold at 99%. If the stakes are not as high, it may set the threshold much lower.
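In code, that gate is just a comparison against the configured threshold. Here is a minimal Python sketch of the logic described above; the GenderEstimate type and choose_offer function are illustrative stand-ins, not any vendor's API:

```python
from dataclasses import dataclass

@dataclass
class GenderEstimate:
    label: str         # e.g. "male" or "female"
    confidence: float  # model's confidence score, 0.0 to 1.0

def choose_offer(estimate: GenderEstimate, threshold: float) -> str:
    """Act on the prediction only when its confidence clears the threshold."""
    if estimate.confidence >= threshold:
        return f"targeted offer for a {estimate.label} customer"
    return "generic offer for any customer"

# The article's example: the model is 67% certain the shopper is male.
estimate = GenderEstimate(label="male", confidence=0.67)

print(choose_offer(estimate, threshold=0.60))  # 67% clears 60%: targeted offer
print(choose_offer(estimate, threshold=0.70))  # 67% misses 70%: generic offer
# A high-stakes deployment might demand threshold=0.99 before acting at all.
```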

Heightened privacy awareness

Still, are there privacy issues with collecting customer images, particularly in the age of GDPR and other new data privacy laws? A booth representative at one of these demos at NRF told InformationWeek that the images of customers are not retained. However, aggregated data about the demographics of the customers who visit a particular display is retained and analyzed to help retailers gain insights into their customers.

Should enterprises be experimenting with AI facial recognition software? That probably depends on the application and the level of risk entailed. For physical store retailers looking to gain an edge against their digital competitors, these applications could open up a world of data and insights that have not been available before.

Other machine vision technology, which looks for matches between images of people and a database of known images, has been used to fight child trafficking, as in the case of Thorn. The AI surfaces likely matches, and a human in the loop makes the final determination of whether a match has been found. The benefit of using AI in this type of application, whether it is identifying missing children or spotting a suspected terrorist at a crowded sporting event in real time, is that the algorithm can analyze and make a match in seconds. But in these high-risk applications, having a human make the final call is probably an important safeguard against mistakes in this nascent technology.
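A rough sketch of that human-in-the-loop pattern might look like the following, with random vectors standing in for the output of a real face-embedding model and a purely illustrative review threshold:

```python
# Sketch of the human-in-the-loop pattern: the algorithm ranks likely matches
# in seconds, but only *queues* them -- a person makes the final determination.
# The 128-dim embeddings below are random stand-ins for a real face-embedding
# model's output, and the 0.85 threshold is illustrative, not a recommendation.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def surface_candidates(query, database, review_threshold=0.85):
    """Return likely matches above the threshold, best first."""
    scored = ((name, cosine_similarity(query, emb))
              for name, emb in database.items())
    candidates = [(n, s) for n, s in scored if s >= review_threshold]
    return sorted(candidates, key=lambda pair: pair[1], reverse=True)

rng = np.random.default_rng(7)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
query = database["person_42"] + rng.normal(scale=0.1, size=128)  # noisy probe

for name, score in surface_candidates(query, database):
    print(f"queue for human review: {name} (similarity {score:.2f})")
```

The design point is that the threshold governs only what reaches the review queue; no match is treated as final until a person confirms it.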

That's something enterprises should keep in mind. This technology is still new, it's not perfect, and it should be handled with care. And like many emerging technologies, it lacks regulations to govern its use, so far. Those regulations will likely come in the years ahead.

About the Author(s)

Jessica Davis

Senior Editor

Jessica Davis is a Senior Editor at InformationWeek. She covers enterprise IT leadership, careers, artificial intelligence, data and analytics, and enterprise software. She has spent a career covering the intersection of business and technology. Follow her on Twitter: @jessicadavis.
