editorial

Oman Medical Journal [2023], Vol. 38, No. 5: e542 

Artificial Intelligence in Medicine: A Double-edged Sword or a Pandora’s Box?

Masoud Kashoub1,2, Mariya Al Abdali2, Emaad Al Shibli2, Hajer Al Hamrashdi2,
Salim Al Busaidi1,2, Mohamed Al Rawahi1,2, Sara Al Rasbi1 and Abdullah Al Alawi1,2*

1Department of Medicine, Sultan Qaboos University Hospital, Muscat, Oman

2Internal Medicine Residency Training Program, Oman Medical Specialty Board, Muscat, Oman

Artificial intelligence (AI), defined as the simulation of human intelligence processes by computer systems, has invaded many industries, including medicine. AI holds significant promise in healthcare, with the potential to enhance diagnostics, treatment, and patient care. However, like any groundbreaking technology, it also presents substantial drawbacks and challenges. AI-powered medical technologies are rapidly advancing and becoming integral to clinical practice. The development of AI in medicine aims to assist physicians in tasks involving knowledge manipulation and data analysis to support therapeutic decisions, diagnosis formulation, and outcome prediction.1 AI’s integration into medicine is poised to have a profound impact on every aspect of primary care.

Radiology stands as one of the pioneering fields in AI adoption. The use of AI in radiology has shown great promise in classifying and detecting abnormalities in plain radiographs, magnetic resonance imaging scans, and computed tomography scans, resulting in more accurate diagnoses and improved treatment decisions.2,3 It can help identify anomalies that may elude human observation, such as early-stage cancers. AI models are also being used to forecast patient outcomes and to predict disease outbreaks, hospital readmissions, and individual health risks. Over the last decade, numerous AI-based algorithms have gained approval from the Food and Drug Administration (FDA) and are poised for implementation.2 One example is an early-warning algorithm for septic shock that uses high-resolution time-series data to predict onset in the intensive care unit 4 to 12 hours in advance.4 AI technologies have also found utility across medical subspecialties, aiding accurate disease diagnosis in gastroenterology, ophthalmology, dermatology, oncology, and other fields.5 AI has been trained to detect conditions ranging from skin cancer to diabetic retinopathy by analyzing medical images, pathology slides, and electrocardiograms. For example, gastroenterologists have made use of convolutional neural networks and other deep learning technologies to analyze ultrasound and endoscopy images for the detection of abnormal structures such as colonic polyps.2

The development of pharmaceutical agents against specific diseases through clinical trials is time-consuming and costly.6 Advanced AI models have the potential to change the traditional ways of designing drugs. AI technologies are increasingly employed to speed up drug discovery cycles by studying different chemical structures and their interactions with the human body. A recent example involves the use of AI to screen existing medications for potential efficacy against emerging threats such as the Ebola virus: through AI-driven virtual searches, Atomwise identified two drugs predicted to significantly reduce Ebola infectivity, showcasing the potential of AI in drug discovery.6 AI-powered computer applications can also assist clinicians in tailoring treatment plans to individual patients, identifying those in need of extra attention, and offering personalized protocols based on each patient’s medical history, unique genetic makeup, and lifestyle factors.6

Several wearable and portable electrocardiogram monitors, intelligent seizure detection devices, and continuous glucose monitoring devices have received FDA approval. A notable example is AliveCor, whose mobile application Kardia received FDA approval in 2014, enabling smartphone-based electrocardiogram monitoring and atrial fibrillation detection. Recent studies, such as REHEARSE-AF, have demonstrated its superior ability to identify atrial fibrillation compared to routine care.2 AI also plays a pivotal role in telemedicine, providing remote monitoring and diagnostics, especially in areas with limited access to healthcare. Since 2018, Boston Children’s Hospital and Buoy Health have collaboratively developed a web-based AI system that advises parents of sick children, answers medication questions, and helps determine whether a doctor’s visit is needed.6 Augmented medicine incorporates not only AI-based tools but also various other digital technologies, such as surgical navigation systems for computer-assisted surgery and virtual reality technologies for surgical procedures, pain management, and the treatment of psychiatric disorders.2

Despite the numerous benefits of AI in medicine, significant drawbacks demand attention. One primary concern is the lack of transparency and interpretability of AI algorithms. Complex deep learning models often operate as black boxes, making it difficult to understand their decision-making processes.7 This opacity poses accountability issues, as physicians may hesitate to trust AI-driven diagnoses or treatment recommendations that lack a clear rationale. AI systems rely on vast amounts of data to learn and make predictions. However, if training data is biased or unrepresentative, AI algorithms may yield inaccurate or discriminatory results.8,9 Such bias can have grave consequences, especially in disease diagnosis and treatment recommendations, disproportionately affecting vulnerable populations. Hence, ensuring the diversity and representativeness of training data is crucial to prevent potentially harmful errors.

The integration of AI in medicine raises profound legal and ethical considerations. Issues such as patient privacy, data security, and liability become more complex when sensitive medical data is involved.10 Determining responsibility in cases of AI-related errors or adverse outcomes poses a challenge, and ensuring patient consent and data protection is equally essential. This may drive a growing trend of institutions being required to obtain explicit patient consent before using AI in care. Human-to-human interaction plays an indispensable role in patient care, encompassing not only the technical aspects of treatment but also the emotional and psychological support patients require. Healthcare professionals’ ability to understand and address patients’ concerns, fears, and unique circumstances leads to more personalized and patient-centered care.7 However, while AI can enhance efficiency and accuracy, excessive reliance on technology may lead to a loss of the human touch in medicine. Patients value the empathy and personal connection they have with healthcare providers, and replacing human interaction with AI systems could result in a dehumanized healthcare experience. Striking the right balance between AI-driven automation and human involvement is pivotal for maintaining patient-centered care and remains a matter of ongoing debate.

The widespread adoption of AI in medicine has raised concerns about the future of healthcare professionals. Some fear that AI may replace certain roles, potentially leading to job displacement. While AI can automate routine tasks, it cannot fully replace the expertise, judgment, and empathy of healthcare providers.11 Nevertheless, the implementation of AI in healthcare presents several challenges for healthcare professionals. They must undergo comprehensive training to understand the capabilities and limitations of AI systems, interpret AI-generated outputs, and integrate them into clinical decision-making. Balancing AI integration while preserving the human touch and the patient-provider relationship poses another challenge. Healthcare professionals must also contend with the rapid pace of technological advancement, which demands continuous learning, adaptation, and expertise development to keep up with the evolving AI landscape in healthcare.

In summary, while AI integration in medicine offers tremendous potential for improving healthcare outcomes, it is crucial to address the associated challenges to maximize its benefits. Overcoming these challenges necessitates a comprehensive approach that combines ongoing education, clear communication, ethical guidelines, and collaborative efforts between healthcare professionals and AI technology developers. In the ongoing debate, AI can be seen as a Pandora’s Box at the micro-technical level, but it undeniably remains a double-edged sword with the potential for both positive and negative impacts.

references

  1. Ramesh AN, Kambhampati C, Monson JR, Drew PJ. Artificial intelligence in medicine. Ann R Coll Surg Engl 2004 Sep;86(5):334-338.
  2. Briganti G, Le Moine O. Artificial intelligence in medicine: today and tomorrow. Front Med (Lausanne) 2020 Feb;7:27.
  3. Koponen M, Anwaar W, Habib-ur-Rahman QS, Sadiq F. Use of artificial intelligence in coronary artery calcium scoring. Oman Med J 2023;38(5):e543.
  4. Misra D, Avula V, Wolk DM, Farag HA, Li J, Mehta YB, et al. Early detection of septic shock onset using interpretable machine learners. J Clin Med 2021 Jan;10(2):301.
  5. Ahmad Z, Rahim S, Zubair M, Abdul-Ghafar J. Artificial intelligence (AI) in medicine, current applications and future role with special emphasis on its potential and promise in pathology: present and future impact, obstacles including costs and acceptance among pathologists, practical and philosophical considerations. A comprehensive review. Diagn Pathol 2021 Mar;16(1):24.
  6. Amisha MP, Malik P, Pathania M, Rathaur VK. Overview of artificial intelligence in medicine. J Family Med Prim Care 2019 Jul;8(7):2328-2331.
  7. Topol EJ. High-performance medicine: the convergence of human and artificial intelligence. Nat Med 2019 Jan;25(1):44-56.
  8. Rasheed K, Qayyum A, Ghaly M, Al-Fuqaha A, Razi A, Qadir J. Explainable, trustworthy, and ethical machine learning for healthcare: a survey. Comput Biol Med 2022 Oct;149:106043.
  9. Obermeyer Z, Powers B, Vogeli C, Mullainathan S. Dissecting racial bias in an algorithm used to manage the health of populations. Science 2019 Oct;366(6464):447-453.
  10. Challen R, Denny J, Pitt M, Gompels L, Edwards T, Tsaneva-Atanasova K. Artificial intelligence, bias and clinical safety. BMJ Qual Saf 2019 Mar;28(3):231-237.
  11. Emanuel EJ, Wachter RM. Artificial intelligence in health care: will the value match the hype? JAMA 2019 Jun;321(23):2281-2282.