There are legitimate questions about the ethics of employing AI in place of human workers. But what about when there’s a moral imperative to automate?

Guest Commentary

December 30, 2019

Image: A patient consults an AI medicine robot. (Credit: Miriam Dörr / Alamy Stock Photo)

It is by now well known that artificial intelligence will augment the human worker and, in some instances, outright take over jobs once handled by humans. A 2019 report indicated that 36 million U.S. workers have “high exposure” to impending automation. For businesses, the opportunities of AI mean they must scrutinize which tasks would be performed more efficiently and cost-effectively by machines than by human employees, as well as which ones should combine human and AI resources. Alongside these considerations are ethical ones: a heated public debate over the morality of job displacement can easily affect company reputations and profit margins, especially if those enterprises are seen to be behaving unethically.

But the debate over the ethics of automation misses a key question that both the public and companies need to consider: When is it unethical not to replace -- or augment -- humans with AI? In the cost/benefit analysis of automating jobs and tasks, business leaders should make it integral to identify the areas where AI ought to be deployed on ethical grounds. Based on my own experience as an AI strategist, I can identify at least three broad areas where employing AI is not only ethically sound but imperative:

1. Physically dangerous jobs

The U.S. Bureau of Labor Statistics reported that 5,147 civilian workers were killed on the job in 2017. Though a slight decrease from 2016, that figure represented a notable increase over 2015, and workplace fatalities had been trending upward since 2013.

Dangerous, life-threatening jobs are not a thing of the distant past. Logging, fishing, aviation, and roofing are still thriving professions, and each accounts for an outsized share of work-related deaths and injuries. AI technology can and should be deployed to ensure that human beings do not have to be placed in such risky situations. AI, which enables machines not only to perform repetitive tasks but also, increasingly, to sense changes in their surroundings and react accordingly, is an ideal tool for saving lives. It is unethical to continue sending humans into harm’s way once such technology is available.

Additionally, as natural disasters increase around the world, organizations that coordinate rescue and relief efforts should invest in AI technology. Instead of sending human aid workers into risky situations, they can deploy AI-powered robots or drones to perform rescue tasks in floods or fires.

2. Healthcare

As the population ages and treatments become more expensive, finding ways to reduce healthcare costs while still providing high-quality care is an ethical question, not just an economic one. Healthcare organizations should consider all the ways AI can address these problems, particularly when it comes to diagnosing disease. A recent scientific review concluded that AI can already match trained medical professionals at diagnosis, thanks to “deep learning” systems that emulate aspects of human judgment and evaluate patient data holistically.

If we can already say this in 2019, imagine what the future holds for medical diagnosis. If AI proves better than humans at finding dangerous illnesses in patients -- and at lower cost and with greater efficiency -- it is morally indefensible not to commit the full resources of the healthcare industry toward building and applying that technology to save lives.
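To ground the terminology: the “deep learning” systems such reviews evaluate are typically neural networks trained on labeled medical data, such as scans. The following Python sketch (using PyTorch) shows the basic shape of one -- the tiny model, the 64x64 “scans,” and the random training data are all illustrative assumptions, not any specific clinical system.

```python
# Minimal sketch of a deep-learning diagnostic classifier (hypothetical data).
import torch
import torch.nn as nn

class DiagnosticCNN(nn.Module):
    """Tiny convolutional network mapping a scan to a disease probability."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1),  # grayscale scan -> 16 feature maps
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(32 * 16 * 16, 1)     # assumes 64x64 input images

    def forward(self, x):
        x = self.features(x)
        return torch.sigmoid(self.classifier(x.flatten(1)))  # probability of disease

model = DiagnosticCNN()
scans = torch.randn(8, 1, 64, 64)             # batch of 8 hypothetical 64x64 scans
labels = torch.randint(0, 2, (8, 1)).float()  # hypothetical diagnoses (0 or 1)

loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

for _ in range(5):                            # a few illustrative training steps
    optimizer.zero_grad()
    loss = loss_fn(model(scans), labels)
    loss.backward()
    optimizer.step()
```

Real diagnostic systems differ mainly in scale -- far deeper networks, far more labeled cases, and rigorous clinical validation -- but the learning loop is the same.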

3. Data-driven decision-making

The rise of “Big Data” -- the readily available aggregate information of billions of people -- has changed the way businesses make decisions in functions ranging from marketing to finance and human resources, while generating important ethical debates. AI systems make decisions based on the data they are fed, so there are legitimate concerns about whether that data is flawed or biased. While these issues are key to future AI development, the ethical calculus should also factor in human imperfections, and ask seriously whether data-driven machines would do better or worse than the people they replace.

Humans can be motivated by bias -- including racism, sexism, and homophobia -- as well as by greed (AI may make mistakes, but it is unlikely to embezzle money from a company, for instance). Ongoing efforts to ensure the quality of data and purge it of bias are essential to using AI across business areas. But assuming that human beings will always have the empathy, or the freedom from self-interest, to do these jobs better is dubious. One common starting point for that data-quality work is sketched below.
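As a concrete illustration of where “purging data of bias” can begin, here is a minimal Python sketch of one common audit: comparing selection rates across groups in historical decision data. The column names and the four-fifths threshold are illustrative assumptions, not a universal standard.

```python
# Minimal sketch of a selection-rate audit on historical decision data.
# The columns ("group", "approved") and the 0.8 threshold (the common
# "four-fifths rule" heuristic) are illustrative assumptions.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "A"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

# Selection rate per group: the fraction of positive decisions.
rates = decisions.groupby("group")["approved"].mean()
print(rates)

# Disparate-impact ratio: lowest selection rate over highest.
ratio = rates.min() / rates.max()
if ratio < 0.8:
    print(f"Possible disparate impact (ratio={ratio:.2f}); "
          "audit this data before training a model on it.")
```

A check like this does not prove fairness, but it flags datasets that would teach a model to reproduce past discrimination.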

Flipping the ethical lens

None of this is to say that replacing humans with AI is an ethically or practically simple process. The broader issue of the future of work and automation remains important for policymakers, businesses, and the wider public. But if lives can be saved, and if businesses can reach outcomes less influenced by human faults, the unethical choice would be not to invest in and apply AI.


Bret Greenstein leads Cognizant’s Digital Business AI Practice, focusing on technology and business strategy, go-to-market, and innovation to help clients realize their potential through digital transformation. He also helps Cognizant grow its AI skills and shape its AI investments to bring the most value to clients. Previously, Greenstein led IBM Watson’s Internet of Things offerings, establishing new products and services for the industrial IoT.

