AI has been abused as part of disinformation campaigns, accused of perpetuating biases, and criticized for overstepping privacy bounds. Let’s address this.

Guest Commentary

November 14, 2019


The ethical development and deployment of applications using artificial intelligence (AI) technologies is an issue rife with nuance and complexity. Because humans are diverse -- different genders, races, values and cultural norms -- AI algorithms and automated processes won’t work with equal acceptance or effectiveness for everyone worldwide. What most people agree upon is that these technologies should be used to improve the human condition.

There are many AI success stories with positive outcomes in fields from healthcare to education to transportation. But there have also been unexpected problems with several AI applications, including facial recognition, and unintended bias has surfaced in numerous others. AI software has been abused as part of disinformation campaigns, accused of perpetuating racial and socioeconomic biases, and criticized for overstepping privacy bounds. Ethics concerns and customer backlash have the potential to create an existential business crisis for the firms that develop or deploy AI applications.

AI ethics is concerned with making sure that the technology serves the interests of people and is not deployed to undermine them. Instead of focusing solely on whether something can be done, AI ethics also considers whether something should be done. The growing awareness and importance of AI ethics is leading to a rising sense of urgency to create ethical standards.

Many executives are beginning to appreciate the ethical dimension of AI. A 2018 Deloitte survey of U.S. executives knowledgeable about AI found that 32% ranked ethical issues among the top three risks of AI. Developing an AI code of ethics can help companies avoid a business or communications crisis and better serve their stakeholders.

There are many AI-related ethics topics, but this article focuses on the two that have received the most attention.

#1: Data privacy

AI thrives on data, particularly for deep learning applications where huge volumes of content are needed to train the algorithms. Any application of AI will only be as good as the volume and quality of data collected to serve it. Much of that data is collected directly from individuals in the course of ordinary activities such as e-commerce purchases or simply walking down the street, where permission to use that data may not have been explicitly granted. One of the primary ethics issues is the privacy of individual data.
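To make the privacy stakes concrete, here is a minimal sketch, in Python using only the standard library, of one common safeguard: pseudonymizing direct identifiers with a salted hash before records enter a training pipeline. The record fields and salt handling are illustrative assumptions rather than a prescribed standard, and pseudonymization is weaker than true anonymization, since hashed records can sometimes still be re-identified.

```python
import hashlib
import os

# Illustrative assumption: a record collected from ordinary e-commerce activity.
record = {
    "email": "shopper@example.com",
    "name": "Jane Doe",
    "purchase_category": "electronics",
    "purchase_amount": 149.99,
}

# Direct identifiers that should not reach the training pipeline as-is.
DIRECT_IDENTIFIERS = {"email", "name"}

# A secret salt prevents trivial reversal via lookup tables; in practice it
# would live in a secrets manager, not be regenerated on every run.
SALT = os.urandom(16)

def pseudonymize(rec: dict) -> dict:
    """Replace direct identifiers with salted hashes; keep other features."""
    out = {}
    for key, value in rec.items():
        if key in DIRECT_IDENTIFIERS:
            out[key] = hashlib.sha256(SALT + str(value).encode()).hexdigest()[:16]
        else:
            out[key] = value
    return out

print(pseudonymize(record))
```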

Following the European Union’s (EU) passage of the General Data Protection Regulation (GDPR), there has also been a push in the U.S. for a federal data privacy law. At this juncture, no legislation has been enacted at the national level. In the absence of national rules, several U.S. states have recently passed laws that restrict the use of personal information by commercial enterprises. One example is the California Consumer Privacy Act (CCPA), a data privacy regulation set to be enforced at the start of 2020. CCPA gives California residents the ability to see and control the personal data that companies hold, share, and sell. Businesses will be required to tell consumers what data is being collected and what the businesses intend to do with it. Several additional states, including Massachusetts, Hawaii, Maryland, Mississippi and New Mexico, have introduced consumer privacy laws since the start of 2019. It remains to be seen whether these laws will expand to the national level and ultimately influence the development and deployment of AI applications.

#2: Facial Recognition

One of the more contentious issues regarding AI has been the use of facial recognition technologies. These are being applied for a variety of purposes across multiple industries, including as a biometric to unlock smartphones. However, some applications have been met with consumer concerns. One consumer survey revealed that 71% of millennials were uncomfortable with the idea of facial recognition technology being used where they shop.
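For readers unfamiliar with the mechanics: facial recognition systems typically reduce a face image to a numeric embedding and declare a match when two embeddings are similar enough. The sketch below, plain Python with NumPy and entirely made-up embeddings and threshold, shows only that comparison step; choosing the threshold is exactly where false matches, false rejections, and the bias concerns discussed here enter.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embeddings."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Made-up placeholders: real embeddings come from a trained face-recognition
# model and typically have 128-512 dimensions.
rng = np.random.default_rng(0)
enrolled = rng.normal(size=128)                         # stored template
candidate = enrolled + rng.normal(scale=0.1, size=128)  # new camera capture

# Assumed threshold: tuning it trades false accepts against false rejects,
# and error rates can differ across demographic groups.
MATCH_THRESHOLD = 0.8

similarity = cosine_similarity(enrolled, candidate)
print(f"similarity={similarity:.3f}, match={similarity >= MATCH_THRESHOLD}")
```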

The most significant concerns are about facial recognition being used for mass surveillance. As noted in VentureBeat, the deeply personal and pervasive nature of facial recognition has already made it a major issue around the globe. Recently in Hong Kong, facial recognition towers were destroyed in protest of a surveillance state. These controversies have led to increasing calls for government regulation from individual citizens, various groups, and even companies.

Significantly, Microsoft has recognized the potential for facial recognition abuse and has called for government regulation. Momentum is growing toward government regulation of facial recognition at the national level, though it is still unclear what the legislation might look like. In the meantime, several U.S. cities have taken it upon themselves to ban facial recognition within their city limits, including San Francisco and Oakland in California. 

While facial recognition is the technology raising the most ethical concerns at present, it’s likely that other new AI-powered technologies will raise increasing ethical concerns as well. For instance, natural language generation (NLG) could be used to create fake news or abusive spam on social media. NLG is starting to stir debate, raising questions about a new world in which the line between what is real and what is not -- or, rather, between human-generated and computer-generated content -- becomes increasingly hard to draw.
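As a toy illustration of how easily software can produce passable text, the sketch below builds a tiny word-level Markov chain from one sample sentence and generates new text from it. Real NLG systems rely on large neural language models; this standard-library example is only meant to show the principle that statistical patterns in human writing can be replayed by a machine.

```python
import random
from collections import defaultdict

# A tiny training corpus; real systems learn from vast amounts of text.
sample = (
    "the line between what is real and what is generated "
    "becomes increasingly hard to see as the models improve"
)

# Build a first-order word-level Markov chain: map each word to the words
# observed to follow it.
words = sample.split()
chain = defaultdict(list)
for current, following in zip(words, words[1:]):
    chain[current].append(following)

# Generate text by repeatedly sampling a plausible next word.
rng = random.Random(42)
word = "the"
output = [word]
for _ in range(12):
    followers = chain.get(word)
    if not followers:
        break
    word = rng.choice(followers)
    output.append(word)

print(" ".join(output))
```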

These ethics issues add to a growing anxiety and fear of machines controlling humans, the potential for job losses and social dislocation, a deepening economic divide, threats to democracy, and general fear of human irrelevance.

Adherence to AI ethics breeds trust

According to Angel Gurria, Secretary-General of the Organization for Economic Co-Operation and Development (OECD): “To realize the full potential of [AI] technology, we need one critical ingredient. That critical ingredient is trust. And to build trust we need human-centered artificial intelligence that fosters sustainable development and inclusive human progress.” To achieve this, he adds that there must be an ethical dimension to AI use. This all underscores the urgency for companies to create and live by a responsible AI code of ethics to govern decisions about AI development and deployment.

The EU has developed principles for ethical AI, as have the IEEE, Google, Microsoft, Intel, Tencent, and other organizations, corporations, and countries. As these have appeared only in the last couple of years, AI ethics is very much an evolving field. There is an opportunity and a critical need for businesses to lead by creating their own set of principles, embodied in an AI code of ethics, to govern their AI research and development -- both to further the technology and to help create a better tomorrow.


Gary Grossman is Senior Vice President and Technology Practice Lead, Edelman AI Center of Expertise.

