Towards Moral Machines: A Discussion with Michael Anderson and Susan Leigh Anderson


Published: Sep 19, 2021
Keywords:
Machine Ethics, AI Ethics, Philosophy of Artificial Intelligence, Artificial Moral Agents, Ethical Machines, Moral Status of Robots, Computation of Bio-Medical Ethics
Michael Anderson
https://orcid.org/0000-0001-7699-6156
Susan Leigh Anderson
Alkis Gounaris
https://orcid.org/0000-0002-0494-6413
George Kosteletos
https://orcid.org/0000-0001-6797-8415
Abstract
At the turn of the 21st century, Susan Leigh Anderson and Michael Anderson conceived and introduced the Machine Ethics research program, which aimed to identify the requirements under which autonomous artificial intelligence (AI) systems could demonstrate ethical behavior guided by moral values, and at the same time to show that these values, and ethics in general, can be represented and computed. Today, interaction between humans and AI entities is already part of our everyday lives; in the near future it is expected to play a key role in scientific research, medical practice, public administration, education, and other fields of civic life. In view of this, the debate over the ethical behavior of machines is more crucial than ever, and the search for answers, directions, and regulations is imperative at the academic and institutional as well as the technical level. Our discussion with the two originators of Machine Ethics highlights the epistemological, metaphysical, and ethical questions arising from this project, as well as the realistic and pragmatic demands that dominate artificial intelligence and robotics research programs. Most of all, it sheds light on the Andersons’ contribution in setting and pursuing a central objective: the creation of ethical autonomous agents that are not based on the “imperfect” patterns of human behavior, or on preloaded hierarchical laws and human-centric values.
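To make the abstract’s claim that moral values can be represented and computed more concrete, the following minimal sketch (in Python) illustrates one way an agent might compare candidate actions by how strongly each satisfies or violates a set of prima facie duties, in the general spirit of the Andersons’ case-supported, principle-based work cited below. The duty names, numeric levels, weights, and actions are hypothetical placeholders; a real system would learn its decision principle from cases judged by ethicists rather than rely on fixed weights.

```python
# Illustrative sketch only: comparing two candidate actions by weighted prima
# facie duty satisfaction/violation levels. All names and numbers below are
# hypothetical, not the Andersons' actual system.

from dataclasses import dataclass

DUTIES = ("non_maleficence", "beneficence", "respect_for_autonomy")

@dataclass
class Action:
    name: str
    # Satisfaction/violation level per duty, e.g. -2 (strong violation) .. +2 (strong satisfaction)
    levels: dict

def preferred(a: Action, b: Action, weights: dict) -> Action:
    """Return the action whose weighted duty profile scores higher.

    A fielded system would derive such a decision principle from ethicists'
    judgments on cases; the fixed weights here stand in for that principle.
    """
    score_a = sum(weights[d] * a.levels[d] for d in DUTIES)
    score_b = sum(weights[d] * b.levels[d] for d in DUTIES)
    return a if score_a >= score_b else b

if __name__ == "__main__":
    remind_again = Action("remind the patient again",
                          {"non_maleficence": 1, "beneficence": 1, "respect_for_autonomy": -1})
    notify_overseer = Action("notify the overseer",
                             {"non_maleficence": 2, "beneficence": 1, "respect_for_autonomy": -2})
    weights = {"non_maleficence": 2.0, "beneficence": 1.0, "respect_for_autonomy": 1.0}
    print(preferred(remind_again, notify_overseer, weights).name)
```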
Article Details
Section: Discussion
References
Aldinhas Ferreira, Maria, João Silva Sequeira, Gurvinder Singh Virk, Mohammad Osman Tokhi, and Endre E. Kadar, eds. Robotics and Well-Being. Cham: Springer, 2019.
Allen, Colin, Gary Varner, and Jason Zinser. “Prolegomena to Any Future Artificial Moral Agent.” Journal of Experimental and Theoretical Artificial Intelligence 12 (2000): 251-261.
Anderson, Michael, and Susan Leigh Anderson, eds. Machine Ethics. New York and Cambridge: Cambridge University Press, 2011.
Anderson, Michael, and Susan Leigh Anderson. “A Prima Facie Duty Approach to Machine Ethics: Machine Learning of Features of Ethical Dilemmas, Prima Facie Duties, and Decision Principles through a Dialogue with Ethicists.” In Machine Ethics, edited by Michael Anderson, and Susan Leigh Anderson, 476-492. New York and Cambridge: Cambridge University Press, 2011.
Anderson, Michael, and Susan Leigh Anderson. “ETHEL: Toward a Principled Ethical Eldercare System.” Proceedings of the AAAI Fall Symposium: New Solutions to Old Problems. Technical Report FS-08-02. Arlington, VA, 2008.
Anderson, Michael, and Susan Leigh Anderson. “Guest Editors’ Introduction: Machine Ethics.” IEEE Intelligent Systems 21, no. 4 (2006): 10-11.
Anderson, Michael, and Susan Leigh Anderson. “Machine Ethics: Creating an Ethical Intelligent Agent.” AI Magazine 28, no. 4 (2007): 15-26.
Anderson, Michael, and Susan Leigh Anderson. “MedEthEx: A Prototype Medical Ethics Advisor.” Proceedings of the Twenty-First National Conference on Artificial Intelligence and the Eighteenth Innovative Applications of Artificial Intelligence Conference, 1759-1765. Boston, MA: AAAI Press, 2006.
Anderson, Michael, and Susan Leigh Anderson. “Robot Be Good.” Scientific American 303, no. 4 (2010): 72-77.
Anderson, Michael, and Susan Leigh Anderson. “The Status of Machine Ethics: A Report from the AAAI Symposium.” Minds and Machines 17 (2007): 1-10.
Anderson, Michael, and Susan Leigh Anderson. “Toward Ensuring Ethical Behavior from Autonomous Systems: A Case-supported Principle-based Paradigm.” Industrial Robot 42, no. 4 (2015): 324-331.
Anderson, Michael, Susan Leigh Anderson, and Chris Armen, eds. Machine Ethics: Papers from the AAAI Fall Symposium, 2005. Technical Report FS-05-06. Menlo Park, CA: Association for the Advancement of Artificial Intelligence, 2005. https://www.aaai.org/Library/Symposia/Fall/fs05-06.php.
Anderson, Michael, Susan Leigh Anderson, and Chris Armen. “An Approach to Computing Ethics.” IEEE Intelligent Systems 21, no. 4 (2006): 56-63.
Anderson, Michael, Susan Leigh Anderson, and Chris Armen. “Toward Machine Ethics: Implementing Two Action-Based Ethical Theories.” In Machine Ethics: Papers from the AAAI Fall Symposium, 2005, edited by Michael Anderson, Susan Leigh Anderson, and Chris Armen, Technical Report FS-05-06. Menlo Park, CA: Association for the Advancement of Artificial Intelligence, 2005.
Anderson, Michael, Susan Leigh Anderson, and Chris Armen. “Towards Machine Ethics.” In Proceedings of the AAAI-04 Workshop on Agent Organizations: Theory and Practice, 53-59. San Jose, CA, 2004.
Anderson, Susan Leigh, and Michael Anderson. “Towards a Principle-Based Healthcare Agent.” In Machine Medical Ethics, edited by S. van Rysewyk, and M. Pontier, 67-77. Cham: Springer, 2015.
Anderson, Susan Leigh. “Asimov’s ‘Three Laws of Robotics’ and Machine Metaethics.” AI and Society 22 (2007): 477-493.
Anderson, Susan Leigh. “Machine Metaethics.” In Machine Ethics, edited by Michael Anderson, and Susan Leigh Anderson, 21-27. New York and Cambridge: Cambridge University Press, 2011.
Anderson, Susan Leigh. “Philosophical Concerns with Machine Ethics.” In Machine Ethics, edited by Michael Anderson, and Susan Leigh Anderson, 162-167. New York and Cambridge: Cambridge University Press, 2011.
Anderson, Susan Leigh. “The Unacceptability of Asimov’s Three Laws of Robotics as a Basis for Machine Ethics.” In Machine Ethics, edited by Michael Anderson, and Susan Leigh Anderson, 285-296. New York and Cambridge: Cambridge University Press, 2011.
Asimov, Isaac. “The Bicentennial Man.” In Philosophy and Science Fiction, edited by Michael Phillips, 183-216. Buffalo, NY: Prometheus Books, 1984.
Awad, Edmond, Michael Anderson, Susan Leigh Anderson, and Beishui Liao. “An Approach for Combining Ethical Principles with Public Opinion to Guide Public Policy.” Artificial Intelligence 287 (2020): article 103349.
Awad, Edmond, Sohan Dsouza, Azim Shariff, Iyad Rahwan, and Jean-François Bonnefon. “Universals and Variations in Moral Decisions Made in 42 Countries by 70,000 Participants.” Proceedings of the National Academy of Sciences 117, no. 5 (2020): 2332-2337.
Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan. “The Moral Machine Experiment.” Nature 563, no. 7729 (2018): 59-64.
Beauchamp, Tom Lamar, and James Franklin Childress. Principles of Biomedical Ethics. Oxford, UK: Oxford University Press, 1979.
Bentham, Jeremy. An Introduction to the Principles of Morals and Legislation. Edited by J. Burns, and H. Hart. Oxford: Clarendon Press, 1789.
Bonnefon, Jean-François, Azim Shariff, and Iyad Rahwan. “The Social Dilemma of Autonomous Vehicles.” Science 352, no. 6293 (2016): 1573-1576.
Bostrom, Nick. “Existential Risk Prevention as Global Priority.” Global Policy 4, no. 1 (2013): 15-31.
Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press, 2014.
Clarke, Roger. “Asimov’s Laws of Robotics: Implications for Information Technology. Part I.” Computer 26, no. 12 (1993): 53-61.
Clarke, Roger. “Asimov’s Laws of Robotics: Implications for Information Technology. Part II.” Computer 27, no. 1 (1994): 57-66.
Clifford, Catherine. “Elon Musk: ‘Mark my Words – A.I. is far more Dangerous than Nukes.’” CNBC, March 13, 2018. https://www.cnbc.com/2018/03/13/elon-musk-at-sxsw-a-i-is-more-dangerous-than-nuclear-weapons.html.
Davis, Martin. Computability and Unsolvability. New York: McGraw-Hill, 1958.
Dennett, Daniel. “Cognitive Wheels: The Frame Problem of AI.” In Minds, Machines and Evolution: Philosophical Studies, edited by C. Hookway, 129-152. Cambridge: Cambridge University Press, 1984.
Dennett, Daniel. “When Hal Kills, Who’s to Blame? Computer Ethics.” In Hal’s Legacy: 2001’s Computer as Dream and Reality, edited by David G. Stork, 351-365. Cambridge, MA: MIT Press, 1997.
Dennett, Daniel. Brainstorms: Philosophical Essays on Mind and Psychology. Cambridge, MA: MIT Press, 1978.
Dreyfus, Hubert Lederer. What Computers Still Can’t Do: A Critique of Artificial Reason. Cambridge, MA: MIT Press, 1992.
Feenberg, Andrew. “Subversive Rationalization: Technology, Power, and Democracy.” In Technology and the Politics of Knowledge, edited by Andrew Feenberg, and Alastair Hannay, 3-11. Bloomington and Indianapolis: Indiana University Press, 1995.
Feenberg, Andrew. Questioning Technology. London, New York: Routledge, 1999.
Floridi, Luciano, and J. W. Sanders. “On the Morality of Artificial Agents.” Minds and Machines 14 (2004): 349-379.
Fodor, Jerry Alan. The Modularity of Mind. Cambridge, MA: MIT Press, 1983.
Gounaris, Alkis, and George Kosteletos. “Licensed to Kill: Autonomous Weapons as Persons and Moral Agents.” In Personhood, edited by Dragan Prole, and Goran Rujiević, 137-189. Hellenic-Serbian Philosophical Dialogue Series, vol. 2. Novi Sad: The NKUA Applied Philosophy Research Lab Press, 2020.
Gunkel, David. The Machine Question: Critical Perspectives on AI, Robots and Ethics. Cambridge, MA: MIT Press, 2012.
Hao, Karen. “Should a Self-driving Car Kill the Baby or the Grandma? Depends on where You’re from.” MIT Technology Review, October 24, 2018, https://www.technologyreview.com/2018/10/24/139313/a-global-ethics-study-aims-to-help-ai-solve-the-self-driving-trolley-problem/.
Harari, Yuval Noah. 21 Lessons for the 21st Century. New York: Spiegel & Grau, 2018.
Harris, Ricki. “Elon Musk: Humanity Is a Kind of ‘Biological Boot Loader’ for AI.” Wired, January 9, 2019, https://www.wired.com/story/elon-musk-humanity-biological-boot-loader-ai/.
Hobbes, Thomas. Leviathan, or The Matter, Forme and Power of a Commonwealth Ecclesiastical and Civil. Edited by A. R. Waller. Cambridge: Cambridge University Press, 1904.
Hoffmann, Christian Hugo, and Benjamin Hahn. “Decentered Ethics in the Machine Era and Guidance for AI Regulation.” AI & Society 35, no. 3 (2020): 635-644.
Kant, Immanuel. Critique of Practical Reason. Translated by Mary Gregor. Cambridge: Cambridge University Press, 2015.
Kant, Immanuel. Lectures on Ethics. Translated by L. Infield. New York: Harper & Row, 1963.
Kant, Immanuel. The Groundwork for the Metaphysics of Morals. Translated by Allen W. Wood. New Haven and London: Yale University Press, 2002.
Kleene, Stephen Cole. Introduction to Metamathematics. Amsterdam: North-Holland, 1952.
Leibniz, Gottfried Wilhelm. “Principles of Nature and Grace, Based on Reason.” In Gottfried Wilhelm Leibniz, Philosophical Papers and Letters, edited by Leroy E. Loemker. Dordrecht: Springer, 1989.
Leibniz, Gottfried Wilhelm. Dissertatio de arte combinatoria. Paris: Hachette Livre-BNF, 2018.
Levy, David. “The Ethical Treatment of Artificially Conscious Robots.” International Journal of Social Robotics 1, no. 3 (2009): 209-216.
Malle, Bertram F., Stuti Thapa Magar, and Matthias Scheutz. “AI in the Sky: How People Morally Evaluate Human and Machine Decisions in a Lethal Strike Dilemma.” In Robotics and Well-Being, edited by Maria Aldinhas Ferreira, João Silva Sequeira, Gurvinder Singh Virk, Mohammad Osman Tokhi, and Endre E. Kadar, 111-133. Cham: Springer, 2019.
McCarthy, John, and Patrick J. Hayes. “Some Philosophical Problems from the Standpoint of Artificial Intelligence.” In Machine Intelligence, vol. 4, edited by Bernard Meltzer, and Donald M. Michie, 463-502. Edinburgh: Edinburgh University Press, 1969.
McCulloch, Warren S., and Walter H. Pitts. “A Logical Calculus of the Ideas Immanent in Nervous Activity.” Bulletin of Mathematical Biophysics 5 (1943): 115-133.
Minsky, Marvin. Computation: Finite and Infinite Machines. New Jersey: Prentice-Hall, 1967.
Mitcham, Carl. Thinking through Technology: The Path between Engineering and Philosophy. Chicago: The University of Chicago Press, 1994.
Moor, James H. “The Nature, Importance, and Difficulty of Machine Ethics.” IEEE Intelligent Systems 21, no. 4 (2006): 18-21.
Newell, Allen, and Herbert Alexander Simon. “Computer Science as Empirical Inquiry: Symbols and Search.” Communications of the Association for Computing Machinery 19 (1976): 113-126.
Newell, Allen, and Herbert Alexander Simon. “GPS, a Program that Simulates Human Thought.” In Lernende Automaten, edited by Heinz Billing, 109-124. Munich: Oldenbourg, 1961.
Newell, Allen, and Herbert Alexander Simon. “The Logic Theory Machine: A Complex Information-Processing System.” IRE Transactions on Information Theory 2, no. 3 (1956): 61-79.
Newell, Allen, and Herbert Alexander Simon. Current Developments in Complex Information Processing: Technical Report P-850. Santa Monica, CA: Rand Corporation, 1956.
Newell, Allen, and John Clifford Shaw. “Programming the Logic Theory Machine.” In IRE-AIEE-ACM ‘57 (Western): Papers Presented at the February 26-28, 1957, Western Joint Computer Conference: Techniques for Reliability, 230-240. New York: Association for Computing Machinery, 1957.
Newell, Allen, John Clifford Shaw, and Herbert Alexander Simon. “Elements of a Theory of Human Problem Solving.” Psychological Review 65 (1958): 151-166.
Newell, Allen. “Physical Symbol Systems.” Cognitive Science 4 (1980): 135-183.
Owen, Jonathan, and Richard Osley. “Bill of Rights for Abused Robots: Experts Draw up an Ethical Charter to Prevent Humans Exploiting Machines.” The Independent, September 17, 2011, https://www.independent.co.uk/news/science/bill-of-rights-for-abused-robots-5332596.html.
Pylyshyn, Zenon W., ed. The Robot’s Dilemma: The Frame Problem in Artificial Intelligence. Norwood, NJ: Ablex, 1987.
Rawls, John. “Justice as Fairness: Political not Metaphysical.” Philosophy and Public Affairs 14 (1985): 223-251.
Rawls, John. A Theory of Justice, 2nd edition. Cambridge, MA: The Belknap Press of Harvard University Press, 1999.
Rawls, John. A Theory of Justice. Cambridge, MA: The Belknap Press of Harvard University Press, 1971.
Rawls, John. Justice as Fairness: A Restatement. Cambridge, MA: Harvard University Press, 2001.
Ross, William David. The Right and the Good. Oxford: Clarendon Press, 1930.
Russell, Stuart, and Max Tegmark. “Autonomous Weapons: An Open Letter from AI & Robotics Researchers.” Future of Life Institute. https://futureoflife.org/open-letter-autonomous-weapons/.
Singer, Peter. “All Animals Are Equal.” In Animal Ethics: Past and Present Perspectives, edited by Evangelos D. Protopapadakis, 163-178. Berlin: Logos Verlag, 2012.
Singer, Peter. Animal Liberation: A New Ethics for our Treatment of Animals. New York: New York Review of Books, 1975.
Singer, Peter. Practical Ethics, 2nd edition. Cambridge: Cambridge University Press, 1993.
Soares, Nate. “The Value Learning Problem.” In Artificial Intelligence, Safety and Security, edited by Roman V. Yampolskiy, 89-97. Boca Raton, FL: CRC Press, 2019.
Sparrow, Robert. “Killer Robots.” Journal of Applied Philosophy 24, no. 1 (2007): 62-77.
Sparrow, Robert. “The Turing Triage Test.” Ethics and Information Technology 6 (2004): 201-213.
Taylor, Steve, Brian Pickering, Michael Boniface, Michael Anderson, David Danks, Asbjørn Følstad, Matthias Leese, Vincent Müller, Tom Sorell, Alan Winfield, and Fiona Woollard. “Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation.” Zenodo, July 2, 2018.
Tegmark, Max. “Benefits and Risks of Artificial Intelligence.” Future of Life Institute. https://futureoflife.org/background/benefits-risks-of-artificial-intelligence/.
Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. New York: Knopf, 2017.
Thompson, Nicholas. “Will Artificial Intelligence Enhance or Hack Humanity?” Wired, April 20, 2019, https://www.wired.com/story/will-artificial-intelligence-enhance-hack-humanity/.
Tooley, Michael. “In Defense of Abortion and Infanticide.” In The Abortion Controversy: A Reader, edited by Luis P. Pojman, and Francis J. Beckwith, 186-213. Boston, MA: Jones & Bartlett, 1994.
Turing, Alan Mathison. “Computing Machinery and Intelligence.” Mind 59 (1950): 433-460.
Turing, Alan Mathison. “Intelligent Machinery.” In Machine Intelligence 5, edited by B. Meltzer, and D. M. Michie, 3-23. Edinburgh: Edinburgh University Press, 1969.
Turing, Alan Mathison. “On Computable Numbers, with an Application to the Entscheidungsproblem.” In The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life, edited by Jack B. Copeland, 58-90. Oxford: Oxford University Press, 2004.
Turing, Alan Mathison. “On Computable Numbers, with an Application to the Entscheidungsproblem. A Correction.” Proceedings of the London Mathematical Society 43 (1938): 544-546.
Wallace, Gregory. “Elon Musk Warns against Unleashing Artificial Intelligence ‘Demon.’” CNN Business, October 26, 2014, https://money.cnn.com/2014/10/26/technology/elon-musk-artificial-intelligence-demon/.
Warren, Mary Anne. “On the Moral and Legal Status of Abortion.” In Contemporary Moral Problems, edited by J. White, 144-155. Belmont, CA: Wadsworth/Thompson Learning, 2003.
Wheeler, Michael. “Cognition in Context: Phenomenology, Situated Robotics, and the Frame Problem.” International Journal of Philosophical Studies 16, no. 3 (2008): 323-349.
Wheeler, Michael. Reconstructing the Cognitive World: The Next Step. Cambridge, MA: MIT Press, 2005.
Winner, Langdon. “Citizen Virtues in a Technological Order.” Inquiry 35, nos. 3-4 (1992): 341-361.
Winner, Langdon. “Technè and Politeia: The Technical Constitution of Society.” In Philosophy and Technology, edited by Paul T. Durbin, and Friedrich Rapp, 97-111. Dordrecht, Boston, Lancaster: D. Reidel, 1983.
Yampolskiy, Roman. “Artificial Intelligence Safety Engineering: Why Machine Ethics is a Wrong Approach.” In Philosophy and Theory of Artificial Intelligence. Studies in Applied Philosophy, Epistemology and Rational Ethics, edited by Vincent Müller, 389-396. Berlin, Heidelberg: Springer, 2013.
Yudkowsky, Eliezer. “Complex Value Systems in Friendly AI.” In Artificial General Intelligence, edited by Jürgen Schmidhuber, Kristinn R. Thórisson, and Moshe Looks, 388-393. Berlin, Heidelberg: Springer, 2011.
Yudkowsky, Eliezer. “The Value Loading Problem.” EDGE, July 12, 2021, https://www.edge.org/response-detail/26198.