In this interview, we invited Prof. Rafael Capurro to share his insights on past, current, and future trends in robot ethics.
Interviewee: Prof. Rafael Capurro, Founder, International Center for Information Ethics (ICIE) and Member of IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
Interviewer: Dr. Yueh-Hsuan Weng, Co-founder of ROBOLAW.ASIA and Assistant Professor at Frontier Research Institute for Interdisciplinary Studies, Tohoku University (from March 2017)
As a pioneer of Information Ethics, Prof. Capurro created “The Quest for Roboethics: A Survey” and has continued since 2010 to update it with relevant information about ethical issues in robotics. He has participated in key European projects, such as ETHICBOTS (2005-2008), ETICA (2009-2012), and CA-RoboCom, which was a candidate project of the EU FET Flagship initiatives. He has also collaborated with Asian researchers from Renmin University of China and the University of Tsukuba, which gives him a unique point of view on robot ethics as seen from Eastern traditions.
WENG: Maybe we can start with your academic background. Could you please give a brief introduction of yourself and the International Center for Information Ethics (ICIE) to our readers?
CAPURRO: I have a Licentiate (University of El Salvador, Argentina, 1971) and a PhD (Düsseldorf University, Germany, 1978) in Philosophy, as well as a post-doctoral degree in Ethics (Stuttgart University, Germany, 1989). I have been working in the field of Documentation/Information Science since the early seventies in Germany. I was a Professor of Information Management and Information Ethics at Stuttgart Media University from 1986 until 2009, when I retired, and a Lecturer in Philosophy/Ethics at Stuttgart University from 1989 until 2004.
In 1999 I founded the International Center for Information Ethics (ICIE). ICIE soon became a community of some 300 colleagues from different countries and fields interested in the ethical issues associated with new technologies. Jared Bielby (Canada) is ICIE Co-Chair. The ICIE (co-)organizes international meetings and has, since 2004, published an open access online journal, the International Review of Information Ethics (IRIE), of which I am the Editor-in-Chief. There is also an ICIE book series published by Fink (Munich), which has published five volumes to date (including contributions in German and English). ICIE membership is free of charge. Contact with members is maintained through a mailing list, through which we inform them about meetings and publications that are added to the website.
WENG: In 2010 you created “The Quest for Roboethics: A Survey” and you have kept updating it with relevant information regarding robot ethics. As a pioneer of Information Ethics, what made you focus on this emerging subject in techno-ethics?
CAPURRO: This survey was originally a contribution to a workshop held on September 30, 2009 and organized by Cybernics, University of Tsukuba (Japan). It was published in Cybernics Technical Reports: Special Issue on Roboethics (University of Tsukuba 2011, pp. 39-59). At that time, issues dealing with ethics and robotics were relatively new, but the field expanded quickly, so much so that I decided to update this survey regularly, albeit without intending it to be an exhaustive overview of the field. It is just a means, for myself and maybe for others, to help keep pace with a range of ongoing robot ethics discussions.
My interest in robotics and AI goes back to the mid-80s, when I participated in a group in charge of developing a Code of Ethics for the German Society for Informatics. I read Joseph Weizenbaum’s Computer Power and Human Reason (1977), Pamela McCorduck’s Machines Who Think (1979), Deborah G. Johnson and John W. Snapper (eds.), Ethical Issues in the Use of Computers (1985), Deborah Johnson’s Computer Ethics (1985), Terry Winograd and Fernando Flores’s Understanding Computers and Cognition (1986), and Hans Moravec’s Mind Children (1988), to mention just a few. My first book publication dealing with these issues was a collection of papers with the title Living in the Information Age (1995).
WENG: In your previous presentation at the University of Tsukuba you mentioned that an Eastern, Buddhist perspective on robot ethics sees “robots as one more partner in the global interaction of things”. Could you please explain this?
CAPURRO: I visited Japan for the first time while on sabbatical in 1998 as a guest of the University of Library and Information Science (ULIS), which was integrated later on into the University of Tsukuba. I had conversations on ethical issues associated with the information society with ULIS staff and students as well as on philosophy with Prof. Riuji Endo. I also met Prof. Koichiro Matsuno (Nagaoka University of Technology) with whom I exchanged views on the concept of information. Since then, I have been invited to Japan several times by Prof. Makoto Nakada (Univ. of Tsukuba) with whom I have had the honor, and the pleasure, to learn about robot ethics as they are viewed from within Eastern traditions.
Prof. Nakada and I discussed issues of privacy from an intercultural perspective. The difference in how robots are understood in the “Far East” and the “Far West,” to use the terms coined by French sinologist François Jullien, can be traced back to the Cartesian split which conceives the human psyche as separated from the so-called “outside world.” Buddhism and Taoism open different paths of thought, particularly with regard to the Japanese concept of Ba (situation). This dialogue on intercultural robot ethics between Prof. Nakada and me can be found here.
I would also like to mention two other events that dealt with these issues. The first was the international symposium organized by The Uehiro Foundation on Ethics and Education and The Carnegie Council for Ethics in International Affairs, held in December 2010 in Oxford. More details on this can be found in my paper “Beyond Humanisms”. The other event was the International Conference on China’s Information Ethics, organized by the School of Philosophy at Renmin University of China (Prof. LI Maosen) and the International Center for Information Ethics (ICIE), and held at Renmin University, Beijing, P.R. China, October 28-29, 2010. I presented a paper at the conference entitled “The Dao of the Information Society in China and the Task of Intercultural Information Ethics”. The paper was translated into Chinese by Junlan Liang (Chinese Academy of Social Sciences) and published in the journal Social Sciences Abroad (2011, Vol. 5, pp. 83-88).
In December 2009, I was invited to participate in the Global Forum on Civilization and Peace organized by The Academy of Korean Studies in Seoul, where I was asked to speak about “Digital Ethics.” It was the first time I used this terminology rather than the usual one, namely Information Ethics. The proceedings were published in The Academy of Korean Studies (ed.): 2009 Civilization and Peace, Korea: The Academy of Korean Studies 2010.
WENG: You and Prof. Nagenborg have pointed out that “Robots are and will remain in the foreseeable future dependent on human ethical scrutiny as well as on the moral and legal responsibility of humans”. In this case, how can we regulate Robo-morality?
CAPURRO: This statement is based on our research within two European projects, ETHICBOTS (2005-2008) and ETICA (2009-2012), as well as on the book Ethics and Robotics (Heidelberg, 2009), which Michael Nagenborg and I edited.
To “regulate Robo-morality” means to develop different kinds of standards for robot behavior in different contexts and cultures, which is not the same as sustained ethical scrutiny of the development and use of robots in society, a larger research issue. The idea that robots would someday become philosophers or ethicists, able to critically reflect on who they are, belongs to the field of science fiction.
WENG: What is your comment on “Ethical Robots”? Do you believe that Asimov’s Three Laws of Robotics are feasible principles to ensure the safety of physical human-robot interactions?
CAPURRO: As you know, Isaac Asimov (1920-1992) was a science-fiction author. The “laws” first appeared in his 1942 short story “Runaround” and were collected in the volume I, Robot (1950). If we look for a legal framework to ensure the safety of humans in human-robot interactions, we should pay attention to the fact that humans, as well as other living beings, deserve legal protection in order to ensure their safety. It makes no sense to legally protect robots beyond seeing them as just another device that is the property of some person or corporation. It likewise makes no sense to speak about “the safety of physical human-washing machine interactions” as if we were aiming to ensure the safety of washing machines, as if they were a kind of person who deserves protection. This, however, is in line with what Asimov imagined in the title of his book I, Robot. Essentially, we need a broad legal and ethical discussion on the uses and misuses of robots in society.
WENG: Could you please tell us more about the state of the art in Machine Ethics? Has there been any progress in programming machines to act with human beings in an ethical way?
CAPURRO: “Machine Ethics” means the ethical reflection on machines done by humans, not machines engaging in such reflection on themselves (and on us). Ethics is done by members of human society reflecting critically on the customs (Greek: ethos, Latin: mores) underlying their being-together. “Machine Ethics,” understood as a critical reflection done by machines themselves, is from this perspective an oxymoron. If we want machines to act “in an ethical way” then we have to provide them with some kind of moral (and legal) rules of behavior. We are responsible for their formulation, fixation, and interpretation. It makes no sense to make machines “morally responsible” for their actions.
WENG: “BS 8611: Guide to the ethical design and application of robots and robotic systems” is a standard highlighting the ethical hazards of robots. Do you think that Ethics by Design might be a crucial factor in future safety standards for service robots?
CAPURRO: Once again, when we talk about the “ethical hazards of robots” the meaning of the “of” is of the kind already explained, i.e., it is a genitivus obiectivus: ethics about robots, not ethics done by robots. “Ethics by Design” concerns moral (and legal) rules, not the ethical reflection on such rules nor their interpretation and application. Who is to blame should an accident occur? Machines are per se a-moral, which does not mean that they are neutral. Nothing we invent is neutral, in the sense that every invention is already part of human life and changes, for better or worse, our lives. The idea that machines are neutral because we can use them for different purposes disregards this basic issue. Machines do not behave at all in the way humans deal with each other (with possibilities of respect or disrespect concerning who we are).

It is up to us to reflect on the risks we want to take for ourselves and for others when we program a machine to behave in this or that way in a given situation. As it is impossible to foresee all possible situations, every fixation of rules means making a selection of situations. Dilemmas arise when fixed rules do not match unforeseeable situations. Humans can be creative in such cases, as they are not bound to fixed rules. This does not mean, of course, that a human decision will always end less tragically than a “decision” made by a machine. In sum, we should let robots act in situations in which we can exclude the unforeseeable as far as possible. Beyond that, we should take responsibility for the solution of any dilemma the machine is facing; it is our dilemma from the very moment we fix its rules of behavior.
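Prof. Capurro’s point about rule fixation can be made concrete. The sketch below is purely illustrative and not drawn from the interview; all names in it are hypothetical. It shows a controller whose behavior is a fixed table of hand-written rules, a deliberate selection of situations, so any situation outside that selection has to fall back to a human decision, making explicit where responsibility remains.

```python
# Illustrative sketch only: a rule table is necessarily a *selection* of
# situations, so unforeseen cases must be escalated to a human.
# All names here are hypothetical, not from any real robot framework.

RULES = {
    # situation           -> action fixed in advance by human designers
    "obstacle_ahead":       "stop",
    "human_in_workspace":   "slow_down",
    "battery_low":          "return_to_dock",
}

def decide(situation: str) -> str:
    """Return the pre-fixed action for a known situation,
    or defer to a human when the rules made no selection."""
    try:
        return RULES[situation]
    except KeyError:
        # The machine has no creative way out of a case its designers
        # did not foresee; responsibility stays with the humans.
        return escalate_to_human(situation)

def escalate_to_human(situation: str) -> str:
    # Placeholder: a real system might page an operator here.
    print(f"Unforeseen situation '{situation}': human decision required.")
    return "safe_halt"

if __name__ == "__main__":
    print(decide("obstacle_ahead"))      # rule fixed in advance -> "stop"
    print(decide("child_chasing_ball"))  # no rule selected -> "safe_halt"
```

The fallback branch is the whole point of the sketch: however large the table grows, the dilemma posed by an unlisted situation was created, and must be answered, by the people who wrote the rules.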
WENG: In your opinion, what is the major challenge in developing robot ethics over the coming decade?
CAPURRO: This is an international and intercultural challenge, since it deals with a critical reflection on societal customs, or mores, and the different kinds of moral and legal rules that will be developed for robots. There might be some global standards for such rules, but if robots are used in specific situations then social expectations might differ, and a translation process becomes necessary. This can be a very practical process, but if we want to question why people expect this or that, in this or that context in different societies, then we have to go deep into underlying moral and ethical traditions that were built over centuries. Moral and legal rules are a kind of “symbolic immune system” for societies, according to the German philosopher Peter Sloterdijk. Symbolic immune systems, no less than biological ones, can themselves become a danger if they are not flexible enough to change according to new situations. Robot ethics is similar to immunology in the sense that it deals with monitoring the efficiency and effectiveness of symbolic immune systems when we inoculate moral (and legal) rules into robots.