“First do no harm.” Those are perhaps the most memorable words associated with the Hippocratic Oath, the vow of ethics most closely tied to healthcare professionals. Even without hearing them utter the words, we intrinsically trust clinicians to abide by this code when treating us. But do we extend this trust to artificial intelligence? 

Trusting AI 

Research by the UK’s National Health Service (NHS) found that only 41% of people trust AI to support doctors in decision-making and diagnoses. Survey respondents noted that they would feel more comfortable trusting decisions primarily made by human healthcare professionals, with AI only used as support. 

When these decisions are centred around a subject as delicate and vital as health, it is understandable to want to wholeheartedly trust the source of the decision. The question is, how do we increase trust in AI? 

Some of the hesitancy lies in doubts about the accuracy of AI diagnoses. Yet AI has matched or outperformed doctors in a number of studies. One, published in April 2023 in JAMA Internal Medicine, found that chatbots outperformed doctors when answering patient questions: participants rated AI-generated responses as higher quality and more empathetic than those given by human doctors. 

Similar results have been published in Nature Reviews Clinical Oncology concerning AI use in mammographic screening, and in Nature Communications in relation to general medical diagnosis by AI. While more research needs to be conducted, wider knowledge of these results may help dispel patients’ mistrust in the accuracy of AI diagnoses. 

The laws of robotics 

The mere fact that a decision is accurate, however, is only part of building trust. The other part comes down largely to the source. A layperson may give us legal advice that is completely sound, but we are far more likely to trust the same information from a qualified lawyer. To build this kind of trust in healthcare, we must go back to Hippocrates, and to his spiritual successor, Isaac Asimov. 

“First do no harm,” taught Hippocrates in ancient Greece. More than two millennia later, science fiction writer Isaac Asimov introduced his three laws of robotics, mirroring the words of the Greek physician: 

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 

As the Hippocratic Oath does for healthcare providers, these three laws underpin the actions and outputs of an automaton with the intention of benevolence. Applying them is easier said than done: a 2023 study in Volume 10 of the Journal of Law and the Biosciences found that competing human rights can complicate deciding the right course of action. Even so, attempts to program AI with benevolence, and to make patients aware of this, may be a step forward in building trust. It could even make AI more trustworthy than humans: humans retain free will, with or without taking the Hippocratic Oath, whereas an ‘Asimov AI’ would be strictly bound to do no harm to patients. 

The application of AI in healthcare is still in its infancy in relation to its potential. Placing trust as a priority may allow the healthcare system to benefit from the wealth of knowledge AI has to offer.