Distributed Virtue and the Phantom Agent: Rethinking Moral Responsibility in Human–AI Systems


Published: Dec 16, 2025
Keywords: Ethics of AI; Distributed Virtue; Phantom Agent; Moral Responsibility; Ethics of Human–AI Systems
Serap Keles
Abstract

Advances in artificial intelligence (AI) challenge traditional notions of moral agency and responsibility. This paper introduces the concept of the phantom agent as an ontological and ethical category for AI systems that are neither mere tools nor full moral agents, yet decisively shape moral outcomes in human–AI collaborations. Drawing on classical moral philosophy and engaging contemporary philosophy of technology, the analysis reframes responsibility in socio-technical systems. It argues for distributed virtue, an extension of virtue ethics to human–AI collectives, and examines epistemic asymmetry, the uneven distribution of knowledge and transparency between humans and AI systems, as a central moral challenge. The paper defends an original account in which moral responsibility is reconceived as an emergent, shared property: human–AI systems exhibit traits of character and accountability distributed across their components. By moving the debate beyond existing paradigms, this approach aims to integrate the ethical influence of AI systems (the phantom agents) into a coherent model of responsibility and virtue.

Article Details
Section: Research Articles