Writing the Algorithm of Good: Artificial Intelligence as a Machine for Administering Justice


Published: Dec 10, 2024
Updated: 2024-12-10
Keywords:
Artificial Intelligence; Ethics; Justice; Algorithms; Alignment Problem; Moral Agency; Legal Decision-making; Machine Ethics
Alkis Gounaris
George Kosteletos
Abstract

This article explores the ethical and philosophical challenges of using Artificial Intelligence (AI) systems for regulatory purposes and the administration of justice. The authors investigate whether AI can function as an "objective" decision-maker in moral and legal contexts, potentially mitigating human biases. Central to the discussion is the "Alignment Problem"—the difficulty of ensuring AI systems act in accordance with complex human values and legal principles. The paper contrasts "Top-Down" ethical programming with "Bottom-Up" machine learning approaches, questioning whether a machine can ever truly possess the "moral agency" or "practical wisdom" required for justice. The study concludes that while AI can assist in the legal process, the transparency of algorithms and the preservation of human responsibility remain paramount in the quest for a "just" machine.

References
Anderson, Michael, and Susan L. Anderson. “A Prima Facie Duty Approach to Machine Ethics: Machine Learning of Features of Ethical Dilemmas, Prima Facie Duties, and Decision Principles through a Dialogue with Ethicists.” In Machine Ethics, edited by Michael Anderson and Susan L. Anderson. Cambridge University Press, 2011.
Anderson, Michael, and Susan L. Anderson. “Machine Ethics: Creating an Ethical Intelligent Agent.” AI Magazine 28, no. 4 (2007): 15-26. https://doi.org/10.1609/aimag.v28i4.2065.
Anderson, Michael, Susan L. Anderson, Alkis Gounaris, and George Kosteletos. “Towards Moral Machines: A Discussion with Michael Anderson and Susan Leigh Anderson.” Conatus - Journal of Philosophy 6, no. 1 (2021): 177-202. doi: https://doi.org/10.12681/cjp.26832.
Aristotle. Nicomachean Ethics [Ηθικά Νικομάχεια].
Arkin, Ronald C. “The Case of Ethical Autonomy in Unmanned Systems.” Journal of Military Ethics 9, no. 4 (2010): 332-341.
Ashrafian, Hutan. “Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights.” Science and Engineering Ethics 21, no. 2 (2015): 317-326.
Avramides, Anita. “Other Minds.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. Winter 2020 Edition. https://plato.stanford.edu/archives/win2020/entries/other-minds/.
Awad, Edmond, Sohan Dsouza, Richard Kim, et al. “The Moral Machine experiment.” Nature 563 (2018): 59–64. https://doi.org/10.1038/s41586-018-0637-6.
Bentham, Jeremy. An Introduction to the Principles of Morals and Legislation, edited by J. Burns & H. Hart. Clarendon Press, 1789.
Bentham, Jeremy. An Introduction to the Principles of Morals and Legislation. 2017. https://www.earlymoderntexts.com/assets/pdfs/bentham1780.pdf.
Bostrom, Nick. “Existential Risks.” Journal of Evolution and Technology 9 (2002).
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2014.
Bostrom, Nick. “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents.” Minds and Machines 22, no. 2 (2012): 71-85.
Bratman, Michael. Structures of Agency: Essays. Oxford University Press, 2007. https://doi.org/10.1093/acprof:oso/9780195187717.001.0001.
Bryson, Joanna J., Michalis E. Diamantis, and Thomas D. Grant. “Of, for, and by the people: the legal lacuna of synthetic persons.” Artificial Intelligence and Law 25, no. 3 (2017): 273-291. https://doi.org/10.1007/s10506-017-9214-9.
Bryson, Joanna J. “Robots should be slaves.” Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues 8 (2010): 63-74.
Chalmers, David J. “The Singularity: A Philosophical Analysis.” Journal of Consciousness Studies 17, no. 9-10 (2010): 7-65. https://consc.net/papers/singularity.pdf.
Christian, Brian. The Alignment Problem: Machine Learning and Human Values. W. W. Norton & Company, 2020.
Christiano, Paul. “Ambitious vs. narrow value learning.” AI Alignment (blog), 2015. https://ai-alignment.com/ambitious-vs-narrow-value-learning-99bd0c59847e.
Coeckelbergh, Mark. “Artificial Intelligence, Responsibility Attribution, and a Relational Justification of Explainability.” Science and Engineering Ethics (2019). https://coeckelbergh.net/wp-content/uploads/2019/10/2019_10_28-ai-responsibility-relational-explainability-coeckelbergh.pdf.
Collingridge, David. The social control of technology. St. Martin, 1980.
Council of Europe. “AD HOC Committee on Artificial Intelligence (CAHAI).” Feasibility Study, 2020. https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da.
Davis, Martin. Computability and Unsolvability. McGraw-Hill, 1958.
Delvaux, Mady. “DRAFT REPORT with recommendations to the Commission on Civil Law Rules on Robotics.” Committee on Legal Affairs, European Parliament, 2016.
Dennett, Daniel C. “Are we explaining consciousness yet?” Cognition 79, no. 1 (2001): 221-37.
Dreyfus, Hubert. What Computers Still Can't Do: A Critique of Artificial Reason. MIT Press, 1992.
Elder, Alexis. “Robot friends for autistic children: Monopoly money or counterfeit currency?.” In Robot ethics 2.0: From autonomous cars to artificial intelligence, edited by Patrick Lin, Ryan Jenkins, and Keith Abney. Oxford University Press, 2017.
European Commission. Ethics guidelines for trustworthy AI. High-Level Expert Group on Artificial Intelligence (HLEG), 2019.
Evas, Tatjana. “European framework on ethical aspects of artificial intelligence, robotics and related technologies.” European Parliamentary Research Service, 2020. https://www.europarl.europa.eu/RegData/etudes/STUD/2020/654179/EPRS_STU(2020)654179_EN.pdf.
Feenberg, Andrew. “Subversive Rationalization: Technology, Power, and Democracy.” In Technology and the Politics of Knowledge, edited by Andrew Feenberg and Alastair Hannay. Indiana University Press, 1995.
Frankfurt, Harry. The Importance of What We Care About. Cambridge University Press, 1988. https://doi.org/10.1017/CBO9780511818172.
Future of Life Institute. “Superintelligence Survey.” https://futureoflife.org/ai/superintelligence-survey.
Gibson, James J. The ecological approach to visual perception. Houghton Mifflin, 1979.
Goertzel, Ben. “Intelligence, Mind and Self-Modification: Defining the Core Concepts of AI.” 2002. https://www.goertzel.org/papers/IntelligenceAndSelfModification.htm.
Gounaris, Alkis, and George Kosteletos. “Artificial Intelligence Weapons: Problems of Assigning Moral Status to Autonomous Machines” [Όπλα Τεχνητής Νοημοσύνης: Προβλήματα Απόδοσης Ηθικού Καθεστώτος στις Αυτόνομες Μηχανές]. In Aspects of Applied Science and Technology – Exploring the Value Landscape of Technoscience [Όψεις της Εφαρμοσμένης Επιστήμης και Τεχνολογίας – Διερευνώντας το αξιακό τοπίο της Τεχνοεπιστήμης], edited by Kostas Theologou and Eugenia Tzannini. Hellinoekdotiki, 2022.
Gounaris, Alkis, and George Kosteletos. “Licensed to Kill: Autonomous Weapons as Persons and Moral Agents.” In Personhood, edited by Dragan Prole and Goran Rujiević. Hellenic-Serbian Philosophical Dialogue Series, Vol. 2. Novi Sad: The NKUA Applied Philosophy Research Lab Press, 2020. https://doi.org/10.12681/aprlp.82.
Gounaris, Alkis. “Can we literally talk about artificial moral agents?.” Presentation for the 6th Panhellenic Conference in Philosophy of Science. Department of History and Philosophy of Science – NKUA, Athens, Greece, 2020. DOI: 10.13140/RG.2.2.13671.47520. Retrieved [25/12/2020] from https://alkisgounaris.gr/en/archives.
Gounaris, Alkis. “Why do we need a Unified Theory of Embodied Cognition?.” Presentation for the 94th Joint Session of the Mind Association and the Aristotelian Society, University of Kent, Online Open Session, 2020. DOI: 10.13140/RG.2.2.11933.74729.
Grace, Katja. “Superintelligence 20: The value-loading problem. Nick Bostrom.” LessWrong, 2015. https://www.lesswrong.com/posts/FP8T6rdZ3ohXxJRto/superintelligence-20-the-value-loading-problem.
Graham, Jesse, Brian A. Nosek, Jonathan Haidt, Ravi Iyer, Spassena Koleva, and Peter H. Ditto. “Mapping the moral domain.” Journal of Personality and Social Psychology 101, no.2 (2011): 366-385. https://doi.org/10.1037/a0021847.
Gunkel, David J. “The other question: can and should robots have rights?.” Ethics and Information Technology 20, no. 2 (2018): 87-99.
Güzeldere, Güven. “The many faces of consciousness: A field guide.” In The Nature of Consciousness: Philosophical Debates, edited by Ned Block, Owen Flanagan, and Güven Güzeldere. MIT Press, 1997.
Haidt, Jonathan, and Craig Joseph. “Intuitive ethics: How innately prepared intuitions generate culturally variable virtues.” Daedalus 133, no.4 (2004): 55-66. https://www.jstor.org/stable/20027945.
Hao, Karen. “Should a self-driving car kill the baby or the grandma? Depends on where you’re from.” MIT Technology Review, October 24, 2018. https://www.technologyreview.com/2018/10/24/139313/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/.
Hauser, Mark D., Liane Young, and Fiery Cushman. “Reviving Rawls’s Linguistic Analogy: Operative Principles and the Causal Structure of Moral Actions.” In Moral Psychology, Vol. 2, The Cognitive Science of Morality: Intuition and Diversity, edited by Walter Sinnott-Armstrong. MIT Press, 2008.
Heidegger, Martin. The Basic Problems of Phenomenology [Τα Βασικά Προβλήματα της Φαινομενολογίας]. 1999.
Horn, Robert E. “The Turing Test: Mapping and Navigating the Debate.” In Parsing the Turing Test, edited by Robert Epstein, Gary Roberts, and Grace Beber. Springer Science, 2009. https://doi.org/10.1007/978-1-4020-6710-5.
Kant, Immanuel. Groundwork of the Metaphysics of Morals [Τα θεμέλια της Μεταφυσικής των Ηθών]. Dodoni, 1984.
Kosteletos, George. “The Music Turing Test” [Η Μουσική Δοκιμασία Turing]. Musicology [Μουσικολογία] 15 (2015): 290-300.
Königs, Peter. “What is techno-optimism?.” Philosophy & Technology 35, no.3 (2022): 63-68. https://doi.org/10.1007/s13347-022-00555-x.
Kurzweil, Ray. The Singularity Is Near: When Humans Transcend Biology. Viking, 2005.
Levy, David. “The ethical treatment of artificially conscious robots.” International Journal of Social Robotics 1, no. 3 (2009): 209-216. https://doi.org/10.1007/s12369-009-0022-6.
Lillemäe, Eleri, Kairi Talves, and Wolfgang W. Wagner. “Public perception of military AI in the context of techno-optimistic society.” AI & Society (2023): 1-15. https://doi.org/10.1007/s00146-023-01785-z.
Marchant, Gary E., Braden R. Allenby, and Joseph R. Herkert, eds. The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight: The Pacing Problem. Springer, 2011. https://doi.org/10.1007/978-94-007-1356-7.
McCarthy, John, and Patrick J. Hayes. “Some Philosophical Problems from the Standpoint of Artificial Intelligence.” In Machine Intelligence 4, edited by Bernard Meltzer and Donald M. Michie. Edinburgh University, 1969. https://www-formal.stanford.edu/jmc/mcchay69.pdf.
Merleau-Ponty, Maurice. Phenomenology of Perception. Routledge, 1962.
Microsoft Research. “Counterfactual Fairness.” Video. https://www.microsoft.com/en-us/research/video/counterfactual-fairness/.
Mill, John Stuart. Utilitarianism [Ωφελιμισμός]. Translated by Philemon Peonidis. Polis, 2013.
Minsky, Marvin. Computation: Finite and Infinite Machines. Prentice-Hall, 1967.
Moore, George E. Principia Ethica. Cambridge University Press, 1903.
Moore, Gordon E. “Cramming more components onto integrated circuits.” Electronics 38, no. 8 (1965). https://www.intel.com/content/www/us/en/newsroom/resources/moores-law.html.
Pelegrinis, Theodosios. Dictionary of Philosophy [Λεξικό Φιλοσοφίας]. Ellinika Grammata, 2004.
Pelegrinis, Theodosios. Ethical Philosophy [Ηθική Φιλοσοφία]. Ellinika Grammata, 1997.
Piaget, Jean. “The theory of stages in cognitive development.” In Measurement and Piaget, edited by D. R. Green, M. P. Ford, and G. B. Flamer. McGraw-Hill, 1971.
Pinto-Bustamante, Boris J., Julián C. Riaño-Moreno, Hernando A. Clavijo-Montoya, Maria A. Cárdenas-Galindo, and Wilson D. Campos-Figueredo. “Bioethics and artificial intelligence: between deliberation on values and rational choice theory.” Frontiers in Robotics and AI 10 (2023): 1140901. https://doi.org/10.3389/frobt.2023.1140901.
Ross, William David. The right and the good. Oxford University Press, 1930.
Schwitzgebel, Eric, and Mara Garza. “A Defense of the Rights of Artificial Intelligences.” 2015. https://doi.org/10.1111/misp.12032.
Searle, John. “Minds, Brains and Programs.” Behavioral and Brain Sciences 3, no. 3 (1980): 417-424.
Shapiro, Lawrence. Embodied Cognition. Routledge, 2019.
Sinnott-Armstrong, Walter. “Framing Moral Intuitions.” In Moral Psychology, Vol. 2, The Cognitive Science of Morality: Intuition and Diversity, edited by Walter Sinnott-Armstrong. MIT Press, 2008. https://doi.org/10.7551/mitpress/7573.001.0001.
Skelton, Anthony. “William David Ross.” In The Stanford Encyclopedia of Philosophy, edited by Edward N. Zalta. Spring 2022 Edition. https://plato.stanford.edu/archives/spr2022/entries/william-david-ross/.
Sparrow, Robert. “Killer Robots.” Journal of Applied Philosophy 24, no.1 (2007): 62-77. https://doi.org/10.1111/j.1468-5930.2007.00346.x.
Steunebrink, Bas R., Kristinn R. Thorisson, and Jurgen Schmidhuber. “Growing Recursive Self-Improvers.” 2016. https://people.idsia.ch/~steunebrink/Publications/AGI16_growing_recursive_self-improvers.pdf.
Russell, Stuart, and Peter Norvig. Artificial Intelligence: A Modern Approach [Τεχνητή Νοημοσύνη: Μια σύγχρονη προσέγγιση]. Kleidarithmos, 2005.
Sudmann, Andreas, ed. The Democratization of Artificial Intelligence: Net Politics in the Era of Learning Algorithms. Transcript Verlag, 2019. https://doi.org/10.14361/9783839447192.
Taylor, Jessica, Eliezer Yudkowsky, et al. “Alignment for Advanced Machine Learning Systems.” 2016. https://intelligence.org/files/AlignmentMachineLearning.pdf.
Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence [Life 3.0: Τι θα σημαίνει να είσαι άνθρωπος στην εποχή της τεχνητής νοημοσύνης;]. Translated. Travlos, 2018 (Original work published by Knopf, 2017).
The World Commission on the Ethics of Scientific Knowledge and Technology. “The Precautionary Principle.” UNESCO, 2005. https://unesdoc.unesco.org/ark:/48223/pf0000139578.
Vinge, Vernor. The Technological Singularity. 1993. https://cmm.cenart.gob.mx/delanda/textos/tech_sing.pdf.
Warren, Mary Anne. “On the Moral and Legal Status of Abortion.” In Contemporary Moral Problems, edited by J. White. Wadsworth/Thompson Learning, 2003. https://spot.colorado.edu/~norcross/Ab3.pdf.
Wiener, Norbert. “Some Moral and Technical Consequences of Automation.” Science 131, no. 3410 (1960). https://www.science.org/doi/10.1126/science.131.3410.1355.
Williams, Bernard. Ethics and the Limits of Philosophy [Η Ηθική και τα όρια της Φιλοσοφίας]. Translated by Chrysoula Grammenou. Arsenidis, 2006.
Wittgenstein, Ludwig. Tractatus Logico-Philosophicus. 1922.
Yampolskiy, Roman V. “Artificial Intelligence Safety Engineering: Why Machine Ethics is a Wrong Approach.” In Philosophy and Theory of Artificial Intelligence, edited by Vincent C. Müller. Springer, 2012.
Yudkowsky, Eliezer. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković. Oxford University Press, 2008.
Yudkowsky, Eliezer. “The Value Loading Problem.” EDGE, July 12, 2021. https://www.edge.org/response-detail/26198.
Ziouvelou, Xenia, Vangelis Karkaletsis, George Giannakopoulos, et al. “Democratising AI: A National Strategy for Greece.” Institute of Informatics and Telecommunications, NCSR Demokritos, 2020. http://democratisingai.gr/index.html.