If AI lets you implement discriminatory pricing or restrict access to a product based on a customer’s race, age or gender, should you?

Guest Commentary

November 4, 2019


You browse an e-commerce site on your mobile device, looking for a pair of shoes. Then, with every swipe on your phone, you see ads from other retailers offering you shoes, shoes and more shoes. Are you flattered that the retailer shared your session cookie with third parties? Or do you shake your head, annoyed that these ads are following you everywhere?

You visit an online retailer and can’t find what you’re looking for. Up pops a chat window. Beneath a small photo of a woman in her 20s is this cheery message: “Hi, I’m Brenda. How can I help you?” You tell Brenda what you want. She asks a few questions. You reply. You go back and forth at length until Brenda finally finds what you’re after. So helpful! You thank Brenda profusely. You discover later that Brenda is not a human, but a chatbot. Are you delighted at the store’s efficiency? Or do you feel duped?

Artificial intelligence (AI) is transforming the face of e-commerce. From chatbots to recommendation engines to image search to smart logistics, the pace of technological advances in e-commerce is outstripping the pace of regulatory and ethical frameworks. But are these advancements coming at a price?

According to a 2019 study by Capgemini, most executives (77%) are uncertain about the ethics and transparency of their AI systems. More tellingly, executives in nine out of 10 organizations believe the use of AI systems has created ethical issues during the last two to three years. Consumer sentiment echoes this trend: close to half of consumers say they have felt the impact of an ethical issue caused by AI.

AI and ethics defined

Artificial intelligence is a collective term for learning systems that provide capabilities perceived by humans as representing intelligence. These capabilities span a wide range of functionality: speech, image and video recognition; autonomous objects; natural language processing; conversational agents; prescriptive modeling; augmented creativity; smart automation; advanced simulation; and complex analytics and predictions.

Yet, all this impressive functionality raises many ethical questions over the design, development and deployment of AI-based technology -- and businesses and government agencies are scrambling to keep pace.

Notably, the European Commission last year launched a study into the ethics of AI, with the aim of providing an appropriate ethical and legal framework for the increasing public and private investments in AI. The resulting 41-page “Ethics Guidelines for Trustworthy AI” report outlines seven key requirements for ethical AI. The Capgemini report further consolidates those into four critical recommended elements:

  • Transparent: AI should be clear, consistent, and understandable in how it works.

  • Explainable and Interpretable: It should be easy to explain how the AI works in language people can understand, and people should be able to see how the AI outcomes can vary with changing inputs (see the sketch after this list).

  • Fair: The use of AI should eliminate or reduce the impact of bias against certain users.

  • Auditable: It should be possible for third parties to audit the technology, assess data inputs and provide assurance that the outputs can be trusted.
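
To make “explainable” concrete, here is a minimal sketch in Python of the kind of check the guidelines describe: train a model, then sweep a single input while holding the others fixed, so a reviewer can see how the outcome varies with changing inputs. The shoe-purchase model and its features are hypothetical illustrations, not anything drawn from the Capgemini or European Commission reports.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Hypothetical training data: [age, monthly_sessions] -> purchased (0/1)
    X = rng.normal(loc=[40.0, 10.0], scale=[12.0, 4.0], size=(500, 2))
    y = (0.03 * X[:, 0] + 0.2 * X[:, 1] + rng.normal(0.0, 1.0, 500) > 3.2).astype(int)

    model = LogisticRegression().fit(X, y)

    # Interpretability check: vary age, hold sessions fixed, and watch
    # how the predicted purchase probability moves.
    for age in range(20, 71, 10):
        prob = model.predict_proba([[float(age), 10.0]])[0, 1]
        print(f"age={age}: P(purchase) = {prob:.2f}")

A third party could run the same sweep against a production model, which is what makes this kind of check auditable as well as explainable.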

[Chart: Capgemini survey findings on AI ethics]

To be sure, these guidelines provide helpful context for decision-making when building AI-based applications. That said, the right decision is not always black or white, but rather varying shades of gray.

Ethical dilemma #1: Justification

Just because a technology is useful or practical, should you adopt it? For example, if AI lets you implement discriminatory pricing or restrict access to a product based on a customer’s race, age or gender, and if doing so gives you a competitive advantage, should you?

This question hits especially hard in any industry that relies on dynamic pricing, such as insurance. A 2018 investigation conducted jointly by ProPublica and NPR found that health insurers are collecting vast amounts of consumer data to drive their pricing algorithms. Those algorithms may conclude that low-income, minority individuals are more likely to live in dangerous neighborhoods and face greater health risks, and therefore warrant higher health insurance prices. Is this fair?
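
To make the mechanism concrete, here is a minimal sketch of how a proxy feature can smuggle a protected attribute into pricing. The data, the neighborhood risk score, and the pricing rule below are synthetic illustrations, not the algorithms from the ProPublica/NPR investigation; the point is that the “model” never sees group membership, yet prices diverge by group anyway.

    import numpy as np

    rng = np.random.default_rng(1)
    n = 10_000

    # Protected attribute -- never handed to the pricing rule directly.
    minority = rng.random(n) < 0.3

    # Proxy feature: a neighborhood "risk score" that, in this synthetic
    # data, correlates with the protected group.
    risk_score = rng.normal(loc=np.where(minority, 0.7, 0.4), scale=0.1)

    # A pricing rule that looks only at the proxy feature.
    base_premium = 200.0
    premium = base_premium * (1.0 + risk_score)

    print(f"avg premium, minority group:     ${premium[minority].mean():.2f}")
    print(f"avg premium, non-minority group: ${premium[~minority].mean():.2f}")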

Every wave of technology development raises its own set of ethical questions around unintended yet harmful consequences for users.

Ethical dilemma #2: Bias

AI uses algorithms and historical data to mimic human intelligence. But for AI to be effective, humans must write the rules for those algorithms, and the programmers writing them may not fully comprehend how consumers will be affected.

According to research by Evans Data, 73% of software developers worldwide are men, with a median age of 36. What happens when these developers must create AI-based applications geared to, say, older women, or to any demographic whose perspective differs from their own?

Such biases may be completely unintended, but when they go unrecognized they can creep into AI algorithms and raise serious ethical concerns.
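
One way teams catch unintended bias is a routine fairness audit. Below is a minimal sketch of a demographic-parity check; the predictions and group labels are stand-ins rather than any real system’s output, and the 0.8 threshold borrows the common “four-fifths rule” heuristic. It compares positive-outcome rates across groups and flags large gaps for human review.

    import numpy as np

    def demographic_parity_ratio(preds: np.ndarray, group: np.ndarray) -> float:
        """Ratio of positive-outcome rates between two groups (min over max)."""
        rate_a = preds[group == 0].mean()
        rate_b = preds[group == 1].mean()
        return min(rate_a, rate_b) / max(rate_a, rate_b)

    rng = np.random.default_rng(2)
    preds = (rng.random(1_000) < 0.5).astype(int)   # stand-in model decisions
    group = (rng.random(1_000) < 0.4).astype(int)   # stand-in demographic labels

    ratio = demographic_parity_ratio(preds, group)
    flag = "  <- flag for human review" if ratio < 0.8 else ""
    print(f"demographic parity ratio: {ratio:.2f}{flag}")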

Ethical dilemma #3: Culture

What’s comfortable for one person is not necessarily comfortable for another. Ethics and cultural norms differ by country, even by region. They also differ by generation. Behavior that teenagers find acceptable is not always shared by their parents or grandparents.

This creates the ultimate dilemma for commerce businesses, particularly businesses with a global footprint. How does a company define what is ethical? What is unethical? What is in-between, assuming such a place exists?

The good news

Getting ethics in AI right takes forethought and hard work, yet it’s essential to mitigating the risks described here. More importantly, AI interactions that consumers perceive as ethical build trust and satisfaction, while those perceived as unethical damage a brand’s reputation.

No wonder executives are realizing the importance of ethical AI and taking action when ethical issues arise. According to Capgemini, more than half (51%) of surveyed executives believe AI systems must be ethical and transparent, and an impressive 41% of senior executives report having abandoned an AI system altogether when ethics concerns were raised. However, 55% still implemented a “watered-down” version of the system.

AI offers significant benefits to businesses with the right vision, planning and approach to implementation. Organizations that adopt an “ethics-by-design” approach for AI are earning their consumers’ trust and loyalty -- and greater market share -- compared to their peers.


Mark Kirby is the North American Chief Technology and Innovation Officer and Perform AI Lead at Capgemini, a global leader in consulting, technology services and digital transformation. Kirby knows that AI and ML touch every part of business, which is why they’re becoming some of the most disruptive technology advances in the history of humankind. With this mindset, he helps companies unlock the potential of AI/ML through creative (yet practical) strategies, practices, and solutions needed to thrive in today’s digital world.

