DANGER: Why People Are Worried About the Use of AI in a Clinical Setting

Artificial Intelligence (AI) has played an increasingly important role in medicine and healthcare over the past few decades. Robots performing surgery may spring to mind, but this is just one of the ways AI and machine learning can be harnessed in a clinical setting.

Many people, understandably, have concerns about the use of AI and how it has crept into every aspect of daily life. From worries about machines pushing the limits of true ‘consciousness’ and autonomy, to issues of data protection, to the potential of a medical negligence claim if the machinery goes wrong, there’s lots to consider.

The main question is, can AI truly replicate the expertise of a living, breathing healthcare professional? What’s more, who is to blame if AI technology makes an error while carrying out its functions? In this post, we discuss the uses of AI in clinical settings and some of the ethical issues that come with it, so let’s take a look…

What is Artificial Intelligence (AI)?

Artificial intelligence, commonly referred to as AI, is technology built by humans to help machines learn how to make intelligent decisions and carry out specific tasks. AI technology tends to learn by analyzing huge volumes of data or information from its environment and recognizing patterns.
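To make ‘learning by recognizing patterns’ concrete, here is a minimal sketch in Python using scikit-learn. The features, numbers and labels are entirely made up for illustration; a real clinical model would be trained and validated on vastly more data.

```python
# A toy illustration of "learning from data": the model is shown labelled
# examples and infers a rule it can apply to new, unseen cases.
# All numbers and labels here are hypothetical, for illustration only.
from sklearn.linear_model import LogisticRegression

# Each example: [age, resting heart rate]; label: 1 = flag for review, 0 = no flag
X = [[34, 62], [51, 88], [47, 91], [29, 58], [66, 95], [40, 70]]
y = [0, 1, 1, 0, 1, 0]

model = LogisticRegression()
model.fit(X, y)                      # "learn" the pattern from the examples

print(model.predict([[55, 90]]))     # apply the learned rule to a new case
```

The key point is that the rule is never written by hand: it is inferred from the examples, which is why the quality and quantity of those examples matter so much.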

AI is becoming a huge part of our everyday lives, even though we may not realize it. AI and machine learning are behind much of our modern technology, including our phones, video games, social media and entertainment platforms such as Netflix.

AI is also becoming extremely significant in a healthcare setting, with countries across the globe starting to invest millions in AI to improve patient care, diagnosis, treatment and monitoring, and to improve medical research.

As technology evolves, the need for professionals skilled in both healthcare and technology grows. Advanced educational opportunities in healthcare technology provide critical knowledge and training for those looking to make a significant impact in medical diagnostics, patient care, and the development of AI applications in medicine. Such programs offer a blend of theoretical knowledge and practical experience, preparing graduates for the challenges of modern healthcare.

How AI is Used in the Medical World

In healthcare, there are two broad categories of AI. The first is Virtual AI, such as applications for organizing electronic healthcare records and virtual diagnostic and treatment software. The second is Physical AI, such as robots that assist with surgery, or intelligent prostheses for elderly people and people with disabilities.

AI is used in almost every aspect of healthcare and medicine, including:

  • Disease diagnostics
  • Digital consultations (virtual doctors)
  • Health monitoring
  • Surgical treatments
  • Managing healthcare and medical data
  • Drug development
  • Assessing and personalizing treatment
  • Analyzing health plans
  • Medical education

How is AI Used to Diagnose Patients in a Clinical Setting?

In clinical healthcare, AI operates across several dimensions, from providing basic administrative and organizational systems to taking an active role in consultation, diagnostics and outcome prediction.

The First Way AI Diagnoses Patients

There are two main ways computers can diagnose patients. The first is to work through a series of questions to assess a patient’s symptoms, combining the patient’s answers with a huge database of information to arrive at a probable diagnosis.

AI tools, often referred to as ‘virtual doctors’, can comprehend medical datasets, patient health records and clinician notes. This information is then used to assess patient symptoms, decide whether the patient needs to see a human healthcare professional, and make predictions about their future health.

In a primary care setting, this can be harnessed to help human clinicians work faster, see more patients and make better decisions about treatments based on the information at their disposal. However, the outcomes of this method can be limited, because the computer cannot pick up on individual patient cues or appreciate the nuances of a doctor-patient consultation.
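As a rough illustration of this questionnaire-style approach, the Python sketch below matches a patient’s reported symptoms against a deliberately tiny, hypothetical knowledge base and ranks probable conditions. Real triage systems combine far larger datasets with probabilistic reasoning; nothing here reflects any actual product.

```python
# A drastically simplified, hypothetical 'virtual doctor': patient answers
# are matched against a tiny knowledge base to rank probable conditions.
# Real systems combine far larger datasets with probabilistic models.
KNOWLEDGE_BASE = {
    "common cold":  {"cough", "sneezing", "sore throat"},
    "influenza":    {"cough", "fever", "muscle aches", "fatigue"},
    "strep throat": {"sore throat", "fever", "swollen glands"},
}

def rank_conditions(reported):
    """Score each condition by the fraction of its symptoms the patient reports."""
    scores = {
        condition: len(symptoms & reported) / len(symptoms)
        for condition, symptoms in KNOWLEDGE_BASE.items()
    }
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

answers = {"cough", "fever", "fatigue"}          # the patient's questionnaire answers
for condition, score in rank_conditions(answers):
    print(f"{condition}: {score:.0%} symptom match")
```

Even this toy version shows the method’s blind spot noted above: it only sees what the patient types in, not the cues a clinician would pick up in the room.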


The Second Way AI Diagnoses Patients

The second diagnostic method relies on deep learning, or pattern recognition: algorithms teach a computer to recognize images or certain combinations of symptoms. For example, this method has long been used in radiology and clinical imaging through tools such as Computer-Assisted Diagnosis (CAD), which is often used in screening mammography (breast imaging to detect cancer early).

CAD can convert mammograms into digital form, search for abnormalities and highlight areas for doctors to analyze further. Essentially, CAD is a second pair of eyes rather than a diagnostic tool in and of itself. Research shows that it can increase radiologists’ accuracy when screening for breast cancer.
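To illustrate the pattern-recognition idea behind CAD-style tools, here is a sketch in Python using PyTorch: a small convolutional network that maps an image to a single ‘flag for review’ score. The architecture and sizes are illustrative assumptions, not the design of any real CAD product.

```python
# Sketch of the pattern-recognition idea behind CAD-style tools: a small
# convolutional network mapping an image to a single "flag for review" score.
# Layer sizes are illustrative assumptions, not any real product's design.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),   # detect local image features
    nn.ReLU(),
    nn.MaxPool2d(2),                             # downsample
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # combine features into patterns
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),                     # summarize the whole image
    nn.Flatten(),
    nn.Linear(16, 1),                            # one "suspicion" logit
)

scan = torch.randn(1, 1, 128, 128)               # stand-in for a grayscale scan
suspicion = torch.sigmoid(model(scan))           # probability-like score in [0, 1]
print(f"flag for radiologist review: {suspicion.item():.2f}")
```

Note that the output is a score for a human to act on, not a diagnosis: exactly the ‘second pair of eyes’ role described above.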

AI can also fill gaps and strengthen diagnostic practices where human expertise is unavailable or in short supply. One example is the use of deep learning to classify pulmonary tuberculosis on chest radiographs.

TB is the world’s leading cause of death from a single infectious agent. However, there is a relative lack of radiological expertise in the areas worst affected by the disease, which can impair screening efficacy. AI can, therefore, step in to improve outcomes in these areas.

Why are People Concerned About AI in Clinical Settings?

AI is already prevalent throughout many areas of clinical practice and has multiple uses, from sorting patient records to performing surgery. However, there are bigger ethical questions behind its usage that make some people wary of AI in medicine and healthcare.


Does AI Undermine Medical Professionals?

We have already looked briefly at how AI can be harnessed to help doctors make more accurate decisions, faster and more efficiently. However, many feel that AI poses a risk to the integrity of the medical profession and could threaten job availability.

The vast majority of AI tools still require human input. However, studies now suggest that in certain areas, such as the assessment and classification of suspicious skin abnormalities, AI can outperform human doctors and even diagnose skin cancer more accurately.

This is mostly down to an AI system’s ability to learn from each successive case and expose itself to huge numbers of cases in quick succession. Ultimately, an AI system has the potential to take in and evaluate more information than any individual clinician could in a lifetime.

That being said, skeptics maintain that machines cannot adequately interpret human behavior or apply human levels of critical thinking, and this makes them unfit for wholly autonomous functions.

Returning to the tuberculosis example outlined above, AI can step in where needed, but like most tools it has its limits. Studies show that a ‘radiologist-augmented approach’ – where an AI system handles the bulk of TB cases and radiologists are responsible only for equivocal cases – is the most effective.

Skeptics also note that medicine is still fundamentally a human profession. Machines do not have the same interpersonal or communication skills as a human, nor do they provide an appropriate ‘bedside manner’. They, therefore, cannot fully replace human clinicians.

Overall, while AI is invaluable for improving clinical workflow and providing support in areas where expertise is lacking or scarce, a human component is still essential to analyze and monitor AI performance and to provide empathetic support. However, this leads on to another concern.


Could AI Become Autonomous?

Leading on from the above, is it possible for a machine to achieve a true human level of autonomy, where it could develop appropriate patient care skills and operate without any need for human input or oversight? Beyond the threat to the healthcare profession and broader concerns about ‘enslaving’ machines, this opens up further issues of legal liability and insurance.


Who is Responsible if a Machine Makes a Mistake?

If your human doctor misdiagnoses you, diagnoses you too late or makes some other kind of mistake, and you suffer an injury as a result, you will likely have a claim in clinical negligence or medical malpractice and may be entitled to compensation. However, if it is a machine that is responsible for your injuries, who should you sue? The machine itself? The healthcare provider? Whoever built the machine?

Generally, AI should be considered a product rather than a person in its own right, regardless of how smart it is. Therefore, a machine cannot be held liable for medical malpractice. Its developer could, however, be held responsible if the machine is faulty by design. This would be a claim in product liability rather than medical malpractice.

In most cases, it will be the healthcare provider utilizing the AI that is responsible for patient care. One can argue that an AI medical solution is never intended to be an autonomous being but a tool for medical professionals. Therefore, if a provider makes a mistake in your care, the fact that they were relying on the decision-making skills of a machine should not automatically absolve them of liability. The healthcare provider, in turn, could claim against the AI developer if the product was faulty.


How Can Healthcare Providers Safeguard Data Privacy?

Healthcare providers, insurance agencies and other interested parties hold a huge volume of information about their patients, from basic demographics to data on symptoms and treatments. Many AI tools require significant amounts of data to diagnose patients as accurately as possible, making healthcare data extremely valuable to private tech companies.

However, there are major concerns about what safeguards are in place to regulate the use of healthcare data and prevent its misuse. One study from the University of California, Berkeley even suggests that HIPAA (the Health Insurance Portability and Accountability Act) needs to be completely rewritten to catch up with AI developments.

The American Medical Association (AMA) has published its ‘policy recommendations on augmented intelligence’, which healthcare providers should use to take advantage of the benefits of AI while preserving patient security and privacy. When adopting AI systems, healthcare organizations need to ask themselves questions such as:

  • Who can access this data?
  • Who has permission to manage this data?
  • Who has control over the algorithms this machine is using to learn?

Due diligence when exploring AI options is absolutely key to ensuring that patient information is respected and used ethically and within the parameters of the law.

How Do You Prevent Biases Being Built into Machines?

While AI systems have a proven track record of making fast and accurate decisions, the possibility that a machine may make flawed decisions based on human biases is a real problem. Bias occurs when preconceived ideas about characteristics such as race or gender (whether conscious or unconscious) skew healthcare data towards certain segments of the population. For example, one US study revealed that black patients were 40 percent less likely than white patients to receive pain medication in the ER, due to clinicians’ unconscious biases.

AI relies on enormous volumes of data and learns how to make decisions by analyzing it and recognizing patterns. If the data used to teach the machine is biased, the machine’s decisions will inherit that bias.

For example, if the machine is only fed data from a military health source, it may be less likely to make accurate decisions about female patients. Therefore, integrity of data is everything. It is essential to ensure that the data used is representative of patient realities, with particular attention given to ensuring that historically marginalized groups are represented, which can be checked in practice as sketched below.
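One concrete due-diligence step is to audit the training data’s demographics before any model learns from it. The Python sketch below, with hypothetical column names and reference shares, flags groups that are underrepresented relative to the patient population.

```python
# One practical due-diligence step: audit whether the training data reflects
# the patient population before a model learns from it. Column names and
# reference shares below are hypothetical.
import pandas as pd

records = pd.DataFrame({
    "sex":   ["M", "M", "M", "M", "F", "M", "F", "M"],
    "label": [0, 1, 0, 0, 1, 0, 1, 1],
})

population_share = {"M": 0.49, "F": 0.51}        # assumed real-world mix
dataset_share = records["sex"].value_counts(normalize=True)

for group, expected in population_share.items():
    actual = dataset_share.get(group, 0.0)
    if abs(actual - expected) > 0.10:            # crude imbalance threshold
        print(f"warning: {group} is {actual:.0%} of the data, expected ~{expected:.0%}")
```

A simple audit like this will not catch every form of bias, but it makes the military-data scenario above visible before the model is ever trained.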


Should You be Worried About AI in Healthcare?

AI is transforming the face of healthcare, both in terms of clinicians’ day-to-day practice and the patient experience. As we’ve seen, AI can have some incredible uses, but it also presents serious challenges around data privacy, data integrity and legal responsibility when the machine gets things wrong.

That being said, the use of AI in clinical settings is still in its early days, and we have already seen innovative advances in the way AI is monitored and utilized. Furthermore, autonomous AI is still a long way from becoming widespread (if it ever does). So, for the foreseeable future, there will be a human who is ultimately responsible for delivering your care and ensuring your safety.

Overall, being concerned about the ethical issues associated with AI in a clinical healthcare setting is essential to ensure that the industry moves in a healthy direction. Therefore, the goal should always be to maintain an effective balance between the analytical strengths of AI and the reasoning, empathy and communicative skills of a human clinician.