The Chatbot Will See You Now: 4 Ethical Concerns of AI in Health Care

Chatbots and other AI have the potential to reshape health care, but with the explosion of new tools come questions about ethical use and potential patient harm.

Carrie Pallardy, Contributing Reporter

September 28, 2023


At a Glance

  • Lack of regulations overseeing how AI is developed and used in the US health care system.
  • Differences between augmented intelligence and artificial intelligence.
  • Ethical pitfalls of AI: Bias, hallucinations and harmful info, privacy and consent, transparency.

Artificial intelligence has been quietly working in the background of health care for years. The recent explosion of AI tools has fueled mainstream conversations about their potential to reshape the way medicine is practiced and patient care is delivered. AI could make strides toward the long-vaunted goal of precision medicine (personalized care rather than the standardized, one-size-fits-all approach more commonly found in health care). It could reduce the administrative burden on clinicians, allowing them to step away from the screen and bring more human interaction back to the bedside. It could lead to more accurate diagnoses for more patients, faster than the human workforce alone could ever hope to deliver.

These possibilities are dizzying in their number and potential, but they are not without the shadow of harm. The kind of radical change AI promises to bring to health care is messy, and patient lives hang in the balance.

While a chatbot isn’t likely to be your new doctor anytime soon, doctors and health systems are increasingly finding ways to integrate AI into care delivery. In 2022, the American Medical Association (AMA), a professional association and lobbying group for physicians, conducted a survey on digital health care. Of the 1,300 physicians who responded to the survey, 18% reported using augmented intelligence (a distinction we’ll address below) for practice efficiencies and 16% reported using it for clinical applications. Within the following year, 39% planned to adopt AI for practice efficiencies and 36% planned to adopt it for clinical applications.

The adoption of AI in health care is happening now, while the technology is still nascent. Plenty of voices are calling for an implementation framework, and many health care organizations have published statements and guidelines. But there are not yet any cohesive principles or regulations overseeing how AI is developed and put into use in the US health care system.

Will ethics be left behind in the race to integrate AI tools into the health care industry?

Augmented Intelligence Versus Artificial Intelligence

When you see the term “AI,” you likely assume it stands for artificial intelligence. Many voices in the health care space argue that this technology’s applications in their field earn it the title of “augmented intelligence” instead.

The AMA opts for the term augmented intelligence, and so does the World Medical Association (WMA), an international physician association.

“We chose the term augmented intelligence because of our deep belief in the primacy of the patient-physician relationship and our conviction that artificial intelligence designs can only enhance human intelligence and not replace it,” Osahon Enabulele, MB, WMA president, tells InformationWeek via email.

Whether considered augmented or artificial, AI is already reshaping health care.

The Argument for AI in Health Care

Last year, Lori Bruce, associate director of the Interdisciplinary Center for Bioethics at Yale University and chair of the Community Bioethics Forum at Yale School of Medicine, had a carcinoma. She faced all the uncertainty that comes with a cancer diagnosis and the different treatment possibilities. AI could dramatically reduce the time it takes to make a treatment decision in that kind of case.

“AI isn’t a magic bullet, but for someone like me, I could someday ask it to read all the medical literature, then I could ask it questions to see where it might have erred -- then still make the decision myself. AI could someday give narrower ranges for good outcomes,” Bruce tells InformationWeek via email.

And that is just one of the potential ways AI could have the power to do good in health care. Big players in the space are working to find those promising applications. In August, Duke Health and Microsoft announced a five-year partnership that focuses on the different ways AI could reshape medicine.

The partnership between the health system and the technology company has three main components. The first is the creation of a Duke Health AI Innovation Lab and Center of Excellence, the second is the creation of a cloud-first workforce, and the third is exploring the promise of large language models (LLMs) and generative AI in health care.

The health system and technology company plan to take a stepwise approach to AI applications, according to Jeffrey Ferranti, MD, senior vice president and chief digital officer of Duke Health. First, they will explore ways that AI can help with administrative tasks. Next, they will examine its potential for content summarization and then content creation. The final level will be patient interaction. How could intelligent chat features engage patients?

This partnership emphasizes ethical and responsible use. The plan is to study what is working and what isn’t and publish the results.

“The technology is out there. It's not going away. [It] can’t be put back in the bottle, and so, all we can do is try to use it in a way that’s responsible and thoughtful, and that’s what we’re trying to do,” says Ferranti.

Administrative support and content summarization may seem like low-hanging fruit, but the potential rewards are worth reaping. Physician burnout reached alarming levels during the early years of the COVID-19 pandemic. In 2021, the AMA reported that 63% of physicians had symptoms of burnout. While burnout is a complex issue to tackle, administrative burden routinely emerges as a driving factor. What if AI could ease that burden?

If AI can do more of the administrative work, doctors can get back to being doctors. If AI can read all of the latest medical research and give doctors the highlights, they can more easily keep up with the developments in their fields. If AI can help doctors make faster, more accurate clinical decisions, patient care will benefit. Patient care could get even better if AI reaches the point where it can offer accurate diagnoses and treatment planning faster than humans can.

All of those potential “ifs” paint a bright future for medicine. But this ideal future cannot be reached without acknowledging and effectively addressing the ethical pitfalls of AI. Here are four to consider:

1. Bias

An AI system is only as good as the data it is fed, and biased data can lead to poor patient outcomes.

“We have seen some spectacular publicly demonstrated reported failures where algorithms have actually worsened care for patients, reintroduced bias, made things more difficult for patients of color, in particular,” says Jesse Ehrenfeld, MD, MPH, AMA president.

A study published in the journal Science in 2019 found racial bias in an algorithm used to identify patients with complex health needs. The algorithm used health costs to determine health needs, but that reasoning was flawed. “Less money is spent on Black patients who have the same level of need, and the algorithm thus falsely concludes that Black patients are healthier than equally sick White patients,” according to the study.

The study authors estimated that the racial bias in the algorithm cut the number of Black patients identified for additional care by more than half.
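
To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch in Python (it is not the commercial algorithm the study examined). It assumes two groups with identical health needs but roughly 30% lower historical spending for one of them, then flags the costliest patients for extra care, the way a cost-based risk score would.

```python
# Hypothetical illustration of proxy bias, not the algorithm from the 2019 study.
# Two groups have identical health needs, but one has historically received less
# spending for the same level of need. Ranking by cost then under-selects that
# group for extra care.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                      # 0 or 1, two patient groups
need = rng.gamma(shape=2.0, scale=1.0, size=n)     # true health need, same distribution for both

# Assumption: historical spending is ~30% lower for group 1 at the same level of need.
spending_factor = np.where(group == 1, 0.7, 1.0)
cost = need * spending_factor + rng.normal(0, 0.1, n)

# "Risk score" = cost (a stand-in for a model trained to predict cost).
threshold = np.quantile(cost, 0.97)                # top 3% flagged for extra care programs
flagged = cost >= threshold
sickest = need >= np.quantile(need, 0.97)          # equally sick patients in both groups

for g in (0, 1):
    rate = flagged[(group == g) & sickest].mean()
    print(f"group {g}: share of sickest patients flagged for extra care = {rate:.2f}")
```

The numbers here are invented, but the pattern matches the study’s finding: when cost stands in for need, patients whose care has historically been under-funded are systematically passed over.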

Scientific American reported that the study authors reached out to the company behind the algorithm and began working together to address the racial bias.

This kind of bias is not isolated, nor is it limited to race. It is a major ethical issue in the field of AI, one that could deepen racial, gender, and economic disparities in health care. If bias continues to go unchecked in systems like this, systems that shape treatment decisions for millions of people, how many patients will be overlooked for the care they need?

Ethical use starts with identifying bias in data before it powers a system impacting people’s health.

“There need to be standards for understanding and demonstrating that the data on which AI is trained is applicable to the intended intervention and that there should be … requirements for testing in controlled environments prior to approval for implementation in health care settings,” says Brendan Parent, JD, assistant professor in the department of surgery and the department of population health at NYU Langone Health, an academic medical center.

But it is important to remain vigilant as AI systems are put into practice. Parent also emphasizes the importance of continuous monitoring and testing. AI models will change as they ingest new data.
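
What that continuous monitoring can look like, at its simplest, is a recurring statistical check that the data reaching a deployed model still resembles the data it was trained on. The Python sketch below is a minimal, hypothetical example of such a drift check; the feature, threshold, and numbers are invented for illustration and are not tied to any particular health system's pipeline.

```python
# Toy drift check: flag when a live input feature no longer matches training data.
# Feature, threshold, and data are hypothetical.
import numpy as np
from scipy import stats

def drift_alert(train_values: np.ndarray, live_values: np.ndarray,
                p_threshold: float = 0.01) -> bool:
    """Return True when a two-sample KS test rejects 'same distribution'."""
    result = stats.ks_2samp(train_values, live_values)
    return result.pvalue < p_threshold

# Example: patient ages seen at training time vs. this month's intake.
rng = np.random.default_rng(1)
train_ages = rng.normal(55, 12, 5_000)
live_ages = rng.normal(62, 12, 1_000)   # the incoming population has shifted older
print("review/retrain needed:", drift_alert(train_ages, live_ages))
```

In a real deployment, a check like this would run on a schedule across many features, alongside monitoring of the model's outputs and patient outcomes.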

2. Hallucinations and Harmful Information

With apologies to science fiction author Philip K. Dick, it turns out that androids, or at least their early forerunners, might dream of electric sheep. AI can hallucinate, an eerily human term for instances in which an AI system simply makes up information. An AI system getting creative is an obvious problem in a health care context. Made-up or false information, particularly when used in clinical decision-making, can cause patient harm.

“We have to understand when and where and why these models are acting a certain way and do some prompt engineering so that the models err on the side of fact, [not] err on the side of creativity,” says Ferranti.
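
Ferranti does not describe a specific technique, but one common way to push a large language model toward fact over creativity is to ground it in supplied source material and turn the sampling temperature down. The sketch below illustrates that idea with the OpenAI Python client; the model name, prompts, and helper function are placeholders, not Duke Health's or Microsoft's implementation.

```python
# Illustrative guardrails only: ground the model in supplied text and lower
# the temperature so it favors determinism over "creativity."
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a clinical documentation assistant. Answer ONLY from the context "
    "provided. If the context does not contain the answer, reply "
    "'Not found in the supplied record' instead of guessing."
)

def grounded_answer(question: str, chart_excerpt: str) -> str:
    """Ask a question constrained to the supplied chart excerpt (hypothetical helper)."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model name
        temperature=0,         # reduce sampling randomness
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user",
             "content": f"Context:\n{chart_excerpt}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Prompt constraints and a low temperature reduce, but do not eliminate, hallucinations; human review of the output is still needed.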

Even if the information an AI system offers to its user isn’t the product of a hallucination, is it serving its intended purpose? We already have an early example of the potential harm a chatbot can cause.

In May, the nonprofit National Eating Disorders Association (NEDA) announced that it would replace the humans manning its helpline with a chatbot, Tessa. The transition to Tessa was announced shortly after helpline staff notified the nonprofit of plans to unionize, NPR reported. But Tessa didn’t stay online for long.

The chatbot gave out dieting advice to people seeking support for eating disorders, NPR reported in a follow-up piece. NEDA CEO Liz Thompson told NPR the organization was not aware Tessa would be able to create new responses beyond what was scripted. Michiel Rauws, founder and CEO of Cass, the company behind Tessa, told NPR that Tessa’s generative AI upgrade was part of NEDA’s contract.

Regardless of the finger-pointing, Tessa was taken down. NEDA shared a statement via email: “Eating disorders are complex mental health concerns, and our primary goal is to ensure people impacted are able to connect to accurate information, evidence-based treatment and timely care. There is still much to be learned about how AI will function in the area of health conditions. For this reason, we do not plan to reintroduce any AI-based programming until we have a better understanding of how utilizing AI technology would be beneficial (and not harmful) to those who are affected by eating disorders and other mental health concerns.”

3. Privacy and Consent

Protected health information (PHI) is safeguarded under the Health Insurance Portability and Accountability Act (HIPAA), yet AI introduces some thorny privacy challenges. AI, in health care and every other industry, requires big data. Where does the data come from? How is it shared? Do people even know if and how their information is being used? Once it is being used by an AI system, is the data safe from prying eyes? From threat actors who would exploit and sell it? Can users trust that the companies using AI tools will maintain their privacy?

Answering these questions as an individual is tricky, and ethical concerns come to the fore when considering the privacy track record of some companies participating in the health care space.

In March, the Federal Trade Commission (FTC) said that BetterHelp, a mental health platform that connects people with licensed and credentialed therapists, broke its privacy promises. The platform “repeatedly pushed people to take an Intake Questionnaire and hand over sensitive health information through unavoidable prompts,” according to the FTC. BetterHelp shared the private information of more than 7 million users with platforms including Criteo, Facebook, Pinterest, and Snapchat, according to the FTC statement. The mental health platform will have to pay $7.8 million, and the FTC has banned it from sharing consumer health data for advertising purposes.

BetterHelp isn’t an AI system (it does use AI to help match users with therapists, according to Behavioral Health Business), but its handling of privacy illuminates a troubling divide between the way people and companies are treated.

Maria Espinola, PsyD, a licensed clinical psychologist and CEO of the Institute for Health Equity and Innovation, says of BetterHelp’s run-in with the FTC: “If I had done that as a psychologist, my license would be taken away.” For either a therapist or a corporation to sell a patient’s personal information for profit after promising not to, Espinola says, is a violation of trust. BetterHelp is still in business.

As AI tools are increasingly integrated into health care, will the companies developing these tools honor patient privacy requirements? Will more enforcement action become necessary?

Patients have a right to keep their medical information private -- although that right is hardly guaranteed in the age of data breaches and questionable corporate practices -- but do they have a right to know if AI is being used in their treatment, or if their data is being used to train an AI system?

Koko, a nonprofit online mental health support platform, stirred up a consent controversy earlier this year. In January, the company’s co-founder Rob Morris, PhD, took to X (then Twitter) to share the results of an experiment. The company set out to see if GPT-3 could improve outcomes on its platform.

Part of the Koko platform allows users to “send short, anonymous messages of hope to one another,” says Morris. Users had the option to use Koko Bot (GPT-3) to respond to these short messages. They could send what GPT-3 wrote as is or edit it. Recipients received the messages with a note informing them if Koko Bot helped to craft those words. Koko used this approach for approximately 30,000 messages, according to the X thread.

The experiment led to widespread discussion of ethics and consumer consent. The people sending the messages had the choice to use Koko Bot to help them write, while the recipients did not have any opportunity to opt out, unless they simply did not read the message.

In his initial thread, Morris explains that the messages composed by AI were rated higher than those written solely by humans. But Koko pulled the AI-supported option from its platform.

“We strongly suspected that over the long term, this would be a detriment to our platform and that empathy is more than just the words we say. It’s more than just having a perfectly articulated, AI-generated response. It’s the time and effort you take to compose those words that’s as important,” Morris explains.

Morris believes much of the criticism the experiment sparked was due to misinterpretation. He emphasizes that users (both message senders and recipients) were aware that AI was involved.

“I think posting it on Twitter was not the best way to do this because people misinterpreted a host of things including how our platform works,” he says.

Koko is making some changes to the way it conducts research. It is now exploring opportunities to conduct research through institutional review boards (IRBs), which are independent boards that review studies to ensure they meet ethical standards. Koko is working with Compass Ethics “to help better inform and publicly communicate our principles and ethics guidelines,” according to Morris.

4. Transparency

The term “black box” comes up a lot in conversations about AI. Once an AI system is put into practice, how can users tell where its outputs come from? How did it reach a specific conclusion? The answer can be trapped in that black box; there is no way to retrace the system’s decision-making process. Often the developers behind these systems claim that the algorithms used to create them are proprietary: in other words, the box is going to stay closed.

“Until there is clarity, transparency around where the models were built, where they were developed, or that they had been validated, it’s really buyer beware,” says Ehrenfeld.

He stresses the importance of transparency and always keeping a human in the loop to supervise and correct any errors that arise from the use of an AI system. When humans are left out of that loop, the results can be disastrous.

Ehrenfeld shares an example from another industry: aviation. In 2019, The New York Times reported on flaws that resulted in the fatal crashes of two of Boeing’s 737 Max planes. The planes were outfitted with an automated system designed to gently push the plane’s nose down in rare conditions to improve handling. That system underwent an overhaul, going from relying on two types of sensors to just one. Two planes outfitted with it nosedived within minutes of takeoff.

A series of decisions made to rush the completion of the plane meant that Max pilots did not know about the system’s software until after the first crash, according to The New York Times.

“We cannot make that same mistake in health care. We cannot incorporate AI systems and not allow physicians, the end users, to know that those systems are operating in the background,” says Ehrenfeld. He stresses the ethical obligation to ensure there is “…the right level of transparency tied to the deployment of these technologies so that we can always ensure the highest quality, safest care for our patients.”

About the Author(s)

Carrie Pallardy

Contributing Reporter

Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
