AI in Medicine: Rushing Forward Without a Moral Compass

There was a time when technological advances were met with cautious optimism, a balance between curiosity and critical reflection. Today, we find ourselves moving forward at a relentless pace, integrating artificial intelligence into medicine with minimal deliberation, as if ethics were a secondary concern rather than a foundational one.

AI-powered medical devices, chatbots offering ethical guidance, and algorithms influencing patient care: these developments hold promise. They offer efficiency, precision, and scalability. But efficiency alone does not equate to wisdom, and precision without ethical clarity risks reducing complex human dilemmas to algorithmic outputs.

Consider the growing interest in using large language models in medical ethics education. Some suggest that AI can aid in “cultivating virtue” among medical students by structuring ethical dilemmas and reinforcing principles. An essential distinction remains, however: AI can simulate ethical reasoning, but it does not engage in moral judgment. It can process patterns, but it lacks the lived experience, contextual understanding, and intrinsic motivation that underpin ethical decision-making in human beings.

Similarly, Software as a Medical Device (SaMD) is advancing rapidly, presenting new regulatory challenges. AI-driven diagnostic tools and clinical decision support systems are shaping healthcare practices, often outpacing the frameworks designed to govern them. When AI systems err, the question of responsibility remains unresolved: should liability fall on the developers, the institutions deploying the technology, or the AI itself? These are not hypothetical concerns; they demand structured ethical and legal consideration before widespread adoption.

Discussions about AI in medicine often assume that regulation will adapt in due time, that ethical oversight will evolve to meet new challenges. However, relying on reactive measures places both medical professionals and patients in uncertain territory. The key question is not whether AI should be integrated into healthcare, but how it should be done responsibly.

This is not a rejection of AI’s role in medicine—I do not subscribe to reactionary scepticism. AI presents valuable opportunities, and its integration can lead to meaningful advancements in healthcare. But its deployment should be guided by alignment between incentives, regulatory frameworks, and ethical considerations. Regulation should not function merely as damage control, but as a structured mechanism for guiding AI development in ways that prioritize patient welfare, transparency, and accountability.

Additionally, ethical discussions on AI must contend with a more profound issue: How do we expect AI to operate within a singular ethical framework when no universally accepted ethical code exists for human decision-making? The absence of a unified ethical structure does not mean ethics is relative; rather, it underscores the necessity of deliberate and interdisciplinary engagement in shaping the moral frameworks we expect AI to follow.

The challenge is not just about preventing AI from making errors, but about ensuring that its integration aligns with clearly defined ethical objectives, rather than allowing technological possibility to dictate its own course. This requires continuous dialogue between ethicists, policymakers, AI developers, and medical professionals, not just retrospective adjustments after problems arise.

AI in medicine is neither an inherent risk nor an unquestionable breakthrough—it is a tool whose ethical and practical implications must be carefully defined. The responsibility lies in ensuring that its trajectory is shaped by thoughtful, anticipatory governance rather than hasty adaptation to its rapid development.

References:

Okamoto, S., Kataoka, M., Itano, M., & Sawai, T. (2025). AI-based medical ethics education: Examining the potential of large language models as a tool for virtue cultivation. BMC Medical Education, 25, 185. https://doi.org/10.1186/s12909-025-06801-y

Vu, T., & Throne, R. (2025). Current Trends in AI Ethics for Software as a Medical Device (SaMD). In IRB, Human Research Protections, and Data Ethics for Researchers. IGI Global. https://doi.org/10.4018/979-8-3693-3848-3.ch006

Mori, T., Watanabe, T., & Kosugi, S. (2025). Exploring ethical considerations in medical research: Harnessing generative pre-trained transformers for AI-powered ethics discussions. PLOS ONE. https://doi.org/10.1371/journal.pone.0311148
