The implementation of AI in medicine: ethical and legal considerations regarding the black box dilemma


Published: Mar 30, 2026
Markus Meserth
https://orcid.org/0009-0004-9291-1051
Abstract

Artificial Intelligence (AI) is becoming increasingly prevalent in many areas of our lives and promises to transform the way we live in the long term. Potential applications of AI systems are also being explored in the healthcare sector. While this new technology has the potential to improve medical care, it is important not to become overly enthusiastic and focus solely on the opportunities it offers: using such technology is not without risks. This article provides a comprehensive overview of the technological fundamentals and considers the ethical and legal implications of using artificial intelligence in healthcare. It also addresses the question of who can be held liable for damage caused by automated systems in the sensitive field of medicine, and how the resulting diffusion of responsibility can be effectively managed. Particular attention is paid to the central issue of explainability in relation to AI and so-called "black box" systems. Attributing liability unilaterally to the physician as the final decision-maker may be insufficient, as this could undermine the trust of patients and medical professionals and hinder the adoption of AI systems. Instead, a collaborative approach should be pursued that integrates everyone involved in the decision-making process, from system developers to patients, into an ongoing transformation process.

Article Details
  • Section
  • Original Articles