How AI Ethics Are Being Shaped in Health Care Today

The multitude of use cases for artificial intelligence in health care comes with the clear potential for harm. How will ethical use be determined?

Carrie Pallardy, Contributing Reporter

October 3, 2023


At a Glance

  • Study: ChatGPT outperformed physicians answering patients’ questions.
  • Doctors and AI developers could find themselves on the hook when it comes to lawsuits.
  • Clinical experts need a voice in how the tools that will be used by their peers are made and vetted.

Health care is among the many industries being dazzled by the transformative promise of artificial intelligence. But the approach to implementation varies wildly. Multidisciplinary partnerships have been formed to study use cases. Small startups are racing to offer the latest tools to solve administrative and clinical challenges. Insurance companies are using AI systems to drive greater efficiency.

As the potential use cases for AI come into focus, so do several ethical issues. Health care providers, patients, AI developers and regulators all have a stake in the ethical use of AI, but how will those ethics be defined?

Artificial Empathy

An overarching theme emerging from AI in health care is that the technology is not meant to replace humans in medicine, nor would it be ethical for it to do so. But there are ways that AI chat assistants could stand in when a human isn’t readily available.

A study published in JAMA Internal Medicine this spring found that ChatGPT outperformed physicians in terms of quality and empathy when answering patient questions. The study noted: “Randomized trials could assess further if using AI assistants might improve responses, lower clinician burnout, and improve patient outcomes.”

Woebot Health, a mental health platform, aims to make mental health care more accessible. The company, founded by Alison Darcy, Ph.D., a clinical research psychologist, employs a chatbot powered by AI and natural language processing (NLP).

Woebot operates like a conversational tree, according to Joe Gallagher, Ph.D., the company’s chief product officer. Users can chat with Woebot about how they are feeling, and it responds in a way meant to emulate empathy, to make people feel heard. There is a back and forth, but the users always retain agency. “We conversationally design it such that the user has to actually say ‘No, Woebot, that’s not [right]’. Or ‘Yeah, you’re right, Woebot. That’s what I’m dealing with.’”

The company used natural free text from its users to help build its NLP model. But Woebot has no generative capabilities. “Woebot is entirely … using scripted messages that have been created by our conversational designers with the oversight of our clinician team,” Gallagher shares.
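To make the idea of a scripted conversational tree concrete, here is a minimal sketch of how such a rule-based design can work. The node names, intent labels, and responses below are invented for illustration; they are not Woebot’s actual design or code.

```python
# Hypothetical sketch of a scripted conversational tree, loosely inspired by
# the rule-based design described above. Node names, intent labels, and
# responses are invented for illustration; this is not Woebot's implementation.
from dataclasses import dataclass, field


@dataclass
class Node:
    prompt: str                                   # scripted message shown to the user
    options: dict = field(default_factory=dict)   # classified user reply -> next node id


TREE = {
    "check_in": Node(
        "How are you feeling today?",
        {"anxious": "anxious_ack", "fine": "wrap_up"},
    ),
    "anxious_ack": Node(
        "It sounds like anxiety is weighing on you. Is that right?",
        {"yes": "cbt_exercise", "no": "check_in"},  # the user can correct the bot
    ),
    "cbt_exercise": Node("Let's try noting one thought that's bothering you.", {}),
    "wrap_up": Node("Glad to hear it. I'm here if that changes.", {}),
}


def step(node_id: str, user_reply: str) -> str:
    """Return the next node id given the user's (already classified) reply."""
    node = TREE[node_id]
    return node.options.get(user_reply, node_id)  # unrecognized reply: re-ask


if __name__ == "__main__":
    current = "check_in"
    for reply in ["anxious", "yes"]:
        print(TREE[current].prompt)
        current = step(current, reply)
    print(TREE[current].prompt)
```

In a deployed system, the classification of the user’s free-text reply into one of the scripted options is where the NLP model would sit; the responses themselves remain fixed, human-written messages.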

When users interact with Woebot, it is clear they are not interacting with a person. “Woebot is upfront about it being a bot. It is not trying to masquerade as anything other than what it is,” says Gallagher.

Does this model work? Woebot is out to prove that it does. The company conducts research, some of it internal and some of it in partnership with clinical organizations. It states on its website that all clinical studies undergo IRB review.

The company published a study in 2021 that concluded that digital therapeutics showed promise in developing a bond with users.

“If you want to prove something you need evidence; and if you want to get evidence, the best way to get evidence is to use the scientific approach; and the best way to do that is to run studies and be upfront and publish your data,” says Gallagher.

AI and Health Insurance

AI has applications beyond the provider and patient care arenas. Its ability to ingest vast amounts of data makes it ripe for the insurance side of the industry. In fact, insurers are already using AI to comb through patient medical records and help make coverage determinations.

The algorithms used by health insurance companies are meant to improve efficiency.

“By automating tasks involved in prior authorization and referrals, insurers can improve efficiencies and productivity,” Lisa Martino, senior solutions engineer, health care at SS&C Blue Prism, an intelligent automation platform, tells InformationWeek via email. “For example, intelligent automation consumes handwritten prior authorization and referral forms to find missing data points and submit them for approval at a much greater speed and accuracy than a human. This results in cost savings, time savings, lower denials and increases patient satisfaction.”
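As a rough illustration of the “missing data points” step Martino describes, the sketch below checks an extracted prior-authorization record against a list of required fields. The field names and sample record are hypothetical; real payer forms, codes, and rules vary.

```python
# Hypothetical sketch of the "find missing data points" step in an automated
# prior-authorization intake pipeline. Field names and the sample record are
# invented for illustration; real payer forms and rules differ.
REQUIRED_FIELDS = [
    "patient_name",
    "member_id",
    "diagnosis_code",          # e.g., an ICD-10 code
    "procedure_code",          # e.g., a CPT code
    "ordering_provider_npi",
]


def find_missing_fields(record: dict) -> list[str]:
    """Return required fields that are absent or blank after OCR extraction."""
    return [f for f in REQUIRED_FIELDS if not str(record.get(f) or "").strip()]


if __name__ == "__main__":
    # Values as they might come out of an OCR pass over a handwritten form.
    extracted = {
        "patient_name": "Jane Doe",
        "member_id": "A123456789",
        "diagnosis_code": "",            # illegible on the scanned form
        "procedure_code": "72148",
        "ordering_provider_npi": None,
    }
    missing = find_missing_fields(extracted)
    if missing:
        print("Route back for completion, missing:", missing)
    else:
        print("Submit for approval")
```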

Legal challenges popping up over health insurers’ use of algorithms paint a different picture.

The American Civil Liberties Union (ACLU) represented Idaho adults with developmental disabilities in a class action lawsuit: K.W. v. Armstrong. The people represented in the lawsuit rely on Medicaid, and the State of Idaho dramatically cut their coverage.

“When the government claimed trade secrets prevented disclosure of the algorithm that reduced benefits by 30%, the plaintiffs sued to gain access and discovered errors in the assumptions made in the formula used to calculate benefits,” Jacqueline Schafer, founder and CEO of Clearbrief, an AI legal writing platform, and a former assistant attorney general in the Washington and Alaska Attorney Generals’ offices, tells InformationWeek via email.

This isn’t the only time that government benefits programs have come under scrutiny for their use of algorithms. An in-depth investigation by STAT, a health and medicine journalism site, explored how Medicare Advantage plans are using AI to drive claim denials.

Investigative journalism nonprofit ProPublica reported that Cigna, a private health insurance company, uses a system referred to as PxDx to deny claims without doctors even reading them. Cigna is facing a class action lawsuit that alleges: “Relying on the PxDx system, Cigna’s doctors instantly reject claims on medical grounds without ever opening patient files, leaving thousands of patients effectively without coverage and with unexpected bills.”

In an emailed response, Cigna directed InformationWeek to a page on its claims review process. PxDx is short for “procedure to diagnosis,” and “it matches up codes, and does not involve algorithms, artificial intelligence or machine learning,” according to the company’s page.

“This filing appears highly questionable and seems to be based entirely on a poorly reported article that skewed the facts. Based on our initial research, we cannot confirm that these individuals were impacted by PxDx at all,” according to the emailed statement. “To be clear, Cigna uses technology to verify that the codes on some of the most common, low-cost procedures are submitted correctly based on our publicly available coverage policies, and this is done to help expedite physician reimbursement. The review takes place after patients have received treatment, so it does not result in any denials of care.”

The outcome of this case has yet to be determined, but there is still reason to think about the implications of using algorithms and AI to make coverage determinations.

In an interview, Brendan Parent, JD, assistant professor in the department of surgery and the department of population health at NYU Langone Health, an academic medical center, points out that the algorithms used by health insurers “are subject to the same kind of limitations and biases and mistakes that any others are, and so, there needs to be review and oversight of those models to ensure that they’re actually fair.”

Jesse Ehrenfeld, MD, MPH, president of the American Medical Association (AMA), also voices concern about the way health insurers are using algorithms. “What we’re seeing is a concerning trend of those tools and technologies being used in ways that we don’t think are in our patients’ best interest all the time,” he shares.

He also shares that there is an industry popping up that provides tools for health care providers to combat the tools used by insurers. “Rather than having [an] AI claims adjustment arms race, we ought to just fix the underlying process, which is to remove unnecessary prior authorization from the beginning,” he says.

Do No Harm

“First, do no harm” is central to the Hippocratic Oath and an often-repeated mantra in health care. This technology has arguably already caused harm, and AI is going to continue putting this oath to the test.

The promise of AI in health care isn’t just about easing the burden on clinicians and making patient care better. It’s also about who can develop the tools to achieve those goals first. It is about making money. In the race for AI’s business case in health care, ethics might come second. Will patient harm be inevitable? “Yes, the question is how much,” says Parent.

The impact of patient harm is substantial, and it is not a rare occurrence. One study published in 2023 found that 23.6% of patients admitted to US hospitals experienced an adverse event. Medication errors, falls, infections, and death are just a few of the adverse events that can befall a hospital patient.

Patient harm damages the trust people have in the health care system. “Once a breach in trust happens, it’s rare that anyone from medicine has a chance to apologize, so the affected families and communities may carry that grief and frustration with them for some time,” says Lori Bruce, associate director of the Interdisciplinary Center for Bioethics at Yale University and chair of the Community Bioethics Forum at Yale School of Medicine, in an email interview.

Any patient harm caused by AI will not only directly affect the individuals hurt and their families; it also has the potential to slow the technology’s positive impact.

“We’re moving a little too fast and … something could blow up and lead to a harm, which could set the field back, and scare both the potential implementers and regulators into saying we need to stop development or stop implementation because a few overzealous pioneers made some mistakes,” Parent explains.

Preventing harm isn’t a matter of saying that AI has no business being in health care. You might even argue there is an ethical obligation to harness the power of AI for good. “There’s a cohort of people who think AI is going to save the world, and there’s a cohort of people who think AI is going to destroy the world; and this is a much more nuanced conversation,” says Jeffrey Ferranti, MD, senior vice president and chief digital officer of Duke Health, in an interview.

The answer lies in finding ethical and safe ways to use AI for the benefit of health care providers and patients, even if that means it takes more time to create a marketable product.

Ethical Guidelines and Regulation

The calls for AI regulation are loud and numerous. The European Union introduced the AI Act. The US has issued a Blueprint for an AI Bill of Rights, and in July, several big-name companies at the forefront of AI development agreed to meet safety standards during a meeting at the White House.

Plenty of health care organizations have chimed in with their takes and guidelines for ethical use of AI. The American Medical Association has plans to develop its recommendations for augmented intelligence. The World Medical Association (WMA), an international physician association, released a statement in 2019. Osahon Enabulele, MB, WMA president, tells InformationWeek via email that the organization is watching the developments in AI and considering reviewing its policy. The World Health Organization (WHO) released guidance: Ethics and Governance of Artificial Intelligence for Health. In April, the Coalition for Health AI (CHAI), a group of academic health systems and AI experts, released the Blueprint for Trustworthy AI Implementation Guidance and Assurance for Healthcare.

But these are all guidelines for ethical implementation; they are not enforceable. “The US is … behind when it comes to AI regulation. We have nothing like the European Commission’s regulatory framework for AI which promotes patient safety, rights and health,” says Bruce.

AI is a nascent technology, and regulators are being challenged to understand how it works, grasp its implications across every industry (not just health care) and develop a framework from a disparate crowd of voices that balances safety with innovation. No small task.

“The idea of software as a medical device is a relatively new concept, and there has been quite a bit of preparation and conversation around how the process of developing and implementing and monitoring AI should happen, but it hasn’t been codified enough,” says Parent.

What happens in the meantime? Development and deployment of AI tools in health care is not going to stop. The use cases are myriad, and for every use case, there will be companies looking for ways to offer up the next best AI solution.

“I think we’re going to find a lot of people in the next six to 12 months, small bespoke company -- hundreds of them, thousands of them -- that are all generating large language models,” says Ferranti. “I think we need to be really leery of those. I’m not sure that all of them are taking the thoughtful approach, and there could be unintended consequences.”

Although there is no comprehensive law that mandates how AI systems can be made and used in health care, there are regulatory pathways for some types of tools. For example, the FDA reviews and authorizes AI tools in health care; it has a list of AI and machine learning-enabled medical devices.

The EyeArt AI system autonomously detects diabetic retinopathy, a condition that can lead to blindness in diabetic people. It has earned multiple FDA clearances, its most recent in June. The EyeArt AI system is a software product that connects to a computer and retinal cameras. The system analyzes the photos these cameras take for the presence or absence of diabetic retinopathy.
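In general terms, an autonomous screening system of this kind turns per-image model scores into a refer or no-refer result. The sketch below shows that general pattern only; the model call, threshold, and image names are placeholders, not EyeArt’s actual algorithm or operating point.

```python
# Hypothetical sketch of how an autonomous screening system might turn
# per-image model scores into a refer / no-refer result. The scoring function,
# threshold, and image paths are placeholders; this is not EyeArt's algorithm.
from statistics import mean

REFERRAL_THRESHOLD = 0.5  # illustrative operating point, not a validated one


def fake_model_score(image_path: str) -> float:
    """Stand-in for a trained retinal-image classifier returning P(disease)."""
    return 0.62 if "left" in image_path else 0.18


def screen_patient(image_paths: list[str]) -> str:
    scores = [fake_model_score(p) for p in image_paths]
    # Refer if any image crosses the operating threshold; a real system would
    # also handle image-quality checks and per-eye logic.
    if max(scores) >= REFERRAL_THRESHOLD:
        return f"Refer to eye-care specialist (max score {max(scores):.2f})"
    return f"No referable retinopathy detected (mean score {mean(scores):.2f})"


if __name__ == "__main__":
    print(screen_patient(["left_eye.jpg", "right_eye.jpg"]))
```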

Eyenuk, the AI digital health company that developed the EyeArt AI system, was founded in 2010. A team of Ph.D.s with machine learning and clinical backgrounds in eye care developed its system. Eyenuk also collaborated with Doheny Eye Institute for clinical input, according to Kaushal Solanki, the company’s founder and CEO.

The EyeArt AI system earned its first regulatory clearance in the European Union in 2015. After that, the company began working with the FDA to design a clinical trial to test its technology.

Eyenuk has validated its technology in large clinical studies with thousands of patients. The technology was designed and tested to ensure it works for patients of different races and ethnicities and in different settings, rural and urban. Hundreds of providers are using the system today, shares Solanki.

Solanki addresses the black box issue. Eyenuk “developed proprietary technologies to open up that black box to see, to visualize what the AI is seeing, and then we have a closed loop with our clinical collaborators where they can verify that what AI is seeing is clinically relevant,” he says during an interview.
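One widely used, generic way to peer inside an image model’s black box is occlusion sensitivity: mask patches of the input and measure how much the model’s score drops, which highlights the regions the model relies on. The sketch below shows that general technique with a toy scoring function; it is not Eyenuk’s proprietary method.

```python
# Hypothetical sketch of occlusion sensitivity, one common black-box
# visualization technique. The toy scoring function stands in for a trained
# model; this is a generic illustration, not Eyenuk's proprietary approach.
import numpy as np


def occlusion_map(image: np.ndarray, score_fn, patch: int = 8) -> np.ndarray:
    """Return a heatmap where high values mark regions the model relies on."""
    base = score_fn(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0.0  # gray out one patch
            heat[i // patch, j // patch] = base - score_fn(occluded)
    return heat


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((32, 32))
    # Toy "model": score is the mean brightness of the image's upper-left corner.
    score = lambda x: float(x[:16, :16].mean())
    print(occlusion_map(img, score).round(2))
```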

Not every tool that is being used in a health care context is required to go through that kind of rigorous development and review process.

“There are a number of AI machine learning applications that are bypassing research regulatory structures to test them and make sure that they’re doing what they’re supposed to be doing without unintended consequences by calling them quality improvement measures,” says Parent. “If you call it quality improvement and then implement it in a care setting, you don’t have to go through the institutional review board to make sure that the people who are technically participants in a trial are being adequately protected.”

As AI regulations are still getting hammered out in the US and products continue to pour into the marketplace, what is and is not considered ethical and acceptable will likely be defined in part by lawsuits. One of the big questions: Who is responsible?

If an AI system does cause harm, is the provider who used the system responsible or is the AI developer?

“The person who is best suited to mitigate the risk ought to be responsible, and the liability for the performance of an algorithm ought to rest with the party who designed, developed, and deployed the tool, and that often is not going to be the end user,” says Ehrenfeld.

He anticipates that disproportionate risk of physician liability will slow clinical integration and innovation because providers will be unwilling to adopt tools.

But should providers be absolved of any liability? If they are knowingly opting to use AI tools, those tools become part of their patient treatment plan.

“I am of the opinion that we have professional responsibility as doctors to own what we put in a chart, to own the treatment decisions that we make, whether they’re informed by an AI or not,” says Ferranti. “Those risks exist today. A doctor could Google bad information and use it in my care of a patient today. Is the bad information responsible for that or is the doctor? I would argue the doctor is responsible.”

When it comes to lawsuits, it is likely that doctors and AI developers could both find themselves on the hook. Peter Kolbert, JD, senior vice president for claim and litigation services for Healthcare Risk Advisors, an insurance and risk management services company and a part of TDC Group, offers the potential parallel of a malfunctioning scope in an email interview.

“The physician may be responsible if the scope malfunctions during the procedure/surgery or the coding that facilitates the instrument malfunctions. Still, there is a strong likelihood that both the physician and the patient will sue the manufacturer,” he says. A similar scenario could play out if an AI tool results in harm.

If a doctor is sued because AI leads to a misdiagnosis, their medical professional liability insurance will provide coverage, according to Kolbert.

Schafer expects one of the core issues of lawsuits alleging AI harm in health care to be a question of judgment. “Did the doctor, nurse or insurance company claims worker use the AI as a total substitute for their judgment?”

The Path Forward

Ideally, the risk of harm should be minimized before a tool ever makes its way into the hands of a doctor or an insurance company. The question of ethical use goes back to developing effective, safe technology from the very beginning. That means involving the right stakeholders in the development and testing of these tools. Questions about bias, harmful information, privacy, and transparency need to be asked and answered upfront, not later.

Clinical experts, not just tech experts, need a voice in how the tools that will be used by their peers are made and vetted. Bruce argues that there should also be a seat at the table for bioethicists.

“Bioethicists can be incredibly creative -- we’re known for resolving complex moral dilemmas -- but we need to be included in order to be part of the team,” she says. “There is a fear we’ll be the ‘ethics police,’ but many of us are very centered on collaboration and mutuality.”

Once a tool is developed, it needs to be validated before it is released into the wild. While internal research has value, is it enough? Will it be objective? Maria Espinola, a licensed clinical psychologist and CEO of the Institute for Health Equity and Innovation, advocates for collaboration with academic institutions that can provide objectivity.

Finally, when an AI system makes its way into the field, Bruce hopes to see a healthy level of skepticism from providers.

“Medicine traditionally has used various tools and ‘calculators’ with a certain blind faith in them -- yet many of these tools are having a long-overdue reckoning,” she says. “Take for instance, pulse oximeters. They don’t work well in patients who aren’t white because they were not well tested on diverse populations.”

Blind faith in AI will not serve providers or patients. “Hopefully, the newer generation of medical students will increasingly have a healthy skepticism over medical tools -- and that wariness would be instrumental for using AI as a tool but not an ‘end-all’ solution within health care,” says Bruce.

At the rate that AI tools are being created and launched, health care is likely to witness bountiful examples of ethical and unethical use in the years to come. But as the ethics of AI take shape, the technology will be changing the way clinicians practice and the way patients receive care.

“I can’t even begin to predict where we’re going to be in five years. But I do think that every aspect of modern medicine will be fundamentally changed by this technology in five years,” says Ferranti.

About the Author

Carrie Pallardy

Contributing Reporter

Carrie Pallardy is a freelance writer and editor living in Chicago. She writes and edits in a variety of industries including cybersecurity, healthcare, and personal finance.
