Distributed Virtue and the Phantom Agent: Rethinking Moral Responsibility in Human–AI Systems
Abstract
Advances in artificial intelligence (AI) challenge traditional notions of moral agency and responsibility. This paper introduces the concept of the phantom agent as an ontological and ethical category for AI systems that are neither mere tools nor full moral agents yet decisively shape moral outcomes in human–AI collaborations. Drawing on classical moral philosophy and engaging contemporary philosophy of technology, the analysis reframes responsibility in socio-technical systems. It argues for distributed virtue, an extension of virtue ethics to human–AI collectives, and examines epistemic asymmetry, the uneven distribution of knowledge and transparency between human and AI, as a central moral challenge. The paper defends an original account in which moral responsibility is reconceived as an emergent and shared property of human–AI systems, which exhibit traits of character and accountability distributed across their components. By moving the debate beyond existing paradigms, this approach aims to integrate the ethical influence of AI systems (the phantom agents) into a coherent model of responsibility and virtue.
Article Details
- Section: Research Articles

This work is licensed under a Creative Commons Attribution 4.0 International License.
Authors retain copyright and grant the journal the right of first publication, with the work simultaneously licensed under a Creative Commons Attribution 4.0 International (CC BY 4.0) license, which allows others to share the work with an acknowledgement of the work's authorship and initial publication in this journal.