The best way forward is to treat facial recognition data from the perspective of the rights of the people portrayed.

Guest Commentary

October 13, 2020


Many people trust facial recognition technology to unlock their mobile phones, but there is still resistance to deploying this type of technology in public spaces. In fact, some jurisdictions have put the use of facial recognition technology "on hold" because it poses particularly complex ethical dilemmas. Tech giants have stopped supplying facial recognition products to police departments, and major US cities have banned its use, further fueling these ethical debates.

Consider two use cases for facial recognition technology: identifying criminal suspects and finding missing people. Each immediately poses significant ethical questions: Would the same people who support the use of facial recognition to catch criminals also want it used to track down people with outstanding child support payments? When people go missing, their families and friends may suffer great distress, but does that suffering outweigh the individual's freedom not to be found?

There is no single responsible use of facial recognition that is applicable to all circumstances. Rather, this technology’s suitability depends on the prevailing culture, ethics, legislation and practices. As a result, there is no globally applicable set of right and wrong deployment contexts. IT leaders must instead ensure that they are adhering to digital ethics in order to use facial recognition technology responsibly. Here are four actions that they should take to do so:

1. Battle issues of bias and false positives

Training bias means that facial recognition technology isn’t always equally accurate for all types of faces. For instance, some algorithms may have trouble recognizing people of certain skin tones, while some may identify certain women as men and vice versa. This inaccuracy leads to some people being misidentified in “false positive” results.
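One practical way to surface this kind of bias is to measure error rates separately for each demographic group in an evaluation set before deployment. The short Python sketch below illustrates the idea; the record format and group labels are hypothetical and not tied to any particular product's API.

```python
from collections import defaultdict

def false_positive_rate_by_group(results):
    """Compute the false positive rate separately for each demographic group.

    Each record in `results` is a dict with three (hypothetical) fields:
      "group":   demographic label used for the audit
      "match":   True if the system declared a match
      "genuine": True if the pair really was the same person
    """
    counts = defaultdict(lambda: {"fp": 0, "impostors": 0})
    for r in results:
        if not r["genuine"]:  # only impostor pairs can produce false positives
            counts[r["group"]]["impostors"] += 1
            if r["match"]:
                counts[r["group"]]["fp"] += 1
    return {
        group: (c["fp"] / c["impostors"]) if c["impostors"] else 0.0
        for group, c in counts.items()
    }

# Example audit over a toy evaluation set:
results = [
    {"group": "A", "match": True,  "genuine": False},  # false positive
    {"group": "A", "match": False, "genuine": False},
    {"group": "B", "match": False, "genuine": False},
    {"group": "B", "match": False, "genuine": False},
]
print(false_positive_rate_by_group(results))  # {'A': 0.5, 'B': 0.0}
```

A wide gap in false positive rates between groups is a strong signal to rework the model or its training data before the system goes live.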

Another potential issue with this technology is that facial expressions are easily misinterpreted. For example, a muscle movement that conveys a polite greeting in one culture may indicate confirmation or agreement in another. Or, some people may naturally appear to frown; while they are in reality showing a neutral expression, facial recognition software might misinterpret them as sad, depressed or agitated. Other people make such slight facial expressions that software may misinterpret them as an absence of emotional response.

Before making facial recognition technology operational, it's important to take its measured reliability into account. For any application of this technology, IT leaders should develop sufficient countermeasures or verification procedures to battle these issues of bias and false positives.
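What might such a verification procedure look like? One common pattern is to treat the recognizer's confidence score as a hint rather than a decision: act automatically only on very confident matches and route borderline ones to a human reviewer. The sketch below is a minimal illustration; the threshold values are assumptions that would need to be calibrated against the system's measured reliability.

```python
# A minimal sketch of a verification countermeasure. Thresholds are
# illustrative assumptions, not recommended values.
AUTO_ACCEPT = 0.99   # act automatically only on very confident matches
REVIEW_FLOOR = 0.90  # below this, discard the match entirely

def disposition(match_confidence: float) -> str:
    """Map a recognizer confidence score to an operational decision."""
    if match_confidence >= AUTO_ACCEPT:
        return "accept"        # still logged and auditable
    if match_confidence >= REVIEW_FLOOR:
        return "human_review"  # a person confirms before any action is taken
    return "reject"            # treat as no match

print(disposition(0.995))  # accept
print(disposition(0.93))   # human_review
print(disposition(0.40))   # reject
```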

2. Establish proportional use of facial recognition

Proportionality is an important ethical concept. In a technological context, it means that an organization should use technology powerful enough to solve a particular problem, but not much more powerful. It’s important to understand why an activity is being undertaken and question the accompanying technological deployment and subsequent data creation and usage.

For example, security cameras with built-in digital facial recognition capabilities are relatively inexpensive and simple to use. But this technology easily overshoots the functional requirement of keeping an eye on a building's perimeter for security purposes. IT leaders should ask: "Can we achieve the same end by less invasive and more consensual means?" When a potential facial recognition use case comes to light, consider evaluating less invasive technologies -- for example, a standard video-recording security camera instead of one with facial recognition capabilities.

3. Explicitly determine purpose boundaries for collected data

Data should preferably be processed for specific, deliberate, predefined purposes. Ethical issues often arise when data use crosses the originally stated purpose boundaries -- also known as the “lineage of intent.”

For instance, facial recognition results used for emotion analysis to detect tension in a public place could theoretically also be processed by insurance companies to handle claims and price offers -- but shouldn't be. For any data collected via facial recognition technology, it's critical that IT leaders explicitly determine and document its lineage of intent and restrict its use to only that predefined purpose.
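One way to make a lineage of intent enforceable rather than merely documented is to attach the declared purposes to each record at collection time and check every subsequent access against them. The following sketch illustrates the pattern; the record fields and purpose names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FaceRecord:
    """A facial recognition result tagged with its lineage of intent."""
    subject_id: str
    allowed_purposes: frozenset  # declared and fixed at collection time

class PurposeViolation(Exception):
    """Raised when data is requested for a purpose outside its lineage of intent."""

def access(record: FaceRecord, requested_purpose: str) -> FaceRecord:
    """Release a record only for a purpose declared when it was collected."""
    if requested_purpose not in record.allowed_purposes:
        raise PurposeViolation(
            f"'{requested_purpose}' is outside this record's lineage of intent"
        )
    return record

record = FaceRecord("subj-001", frozenset({"public_space_tension_monitoring"}))
access(record, "public_space_tension_monitoring")   # permitted

try:
    access(record, "insurance_claims_pricing")      # outside the declared purpose
except PurposeViolation as err:
    print(err)
```

In practice, such a check would typically sit in the data-access layer and be backed by audit logging, so that purpose drift is caught at the point of use rather than in a later review.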

4. Expand the rights of people identified in images

Who owns the image of your face or expressions collected via facial recognition technology? Are the emotions your face conveys effectively in the “public domain,” and therefore usable by others for all kinds of purposes? Or do you own the rights to your own face and expressions, which means that associated data should be used and stored only with your informed consent?

On the one hand, one’s facial expressions that are made in a public place are potentially available for everyone present to see, so one cannot claim them to be completely private. But, on the other hand, facial expressions are often made subconsciously, and they are transient. They are simply not meant to be systematically captured, stored and analyzed.

IT leaders should work with their legal teams to understand the intellectual property rights relevant to facial recognition images and analysis. The best way forward, however, is to treat facial recognition data not from the perspective of the organization’s rights, but rather from the perspective of the rights of the people portrayed. Extend their rights as much as possible.

It will be hard, if not impossible, to stop the use of facial recognition technology entirely. Even if your organization doesn’t use it, it will be used within ecosystems in which your organization operates -- for example, on the social media channels it uses or on the mobile technologies that it produces apps for. Therefore, it’s important to consider how to use this technology responsibly before it is deployed at scale.


Frank Buytendijk is a Distinguished VP and Gartner Fellow in Gartner's Data and Analytics group, covering the topics of "the future," "digital ethics" and "digital society" and helping organizations to do the "right thing" with technology. Frank and other Gartner analysts will provide additional analysis on digital ethics and IT leadership at Gartner IT Symposium/Xpo 2020, taking place virtually October 19-22 in the Americas.

