Robot ethics


Robot ethics, sometimes known by the short expression "roboethics", concerns ethical problems that occur with robots, such as whether robots pose a threat to humans in the long or short run, whether some uses of robots are problematic, and how robots should be designed such that they act 'ethically'. Alternatively, roboethics refers specifically to the ethics of human behavior towards robots, as robots become increasingly advanced. Robot ethics is a sub-field of ethics of technology, specifically information technology, and it has close links to legal as well as socio-economic concerns. Researchers from diverse areas are beginning to tackle ethical questions about creating robotic technology and implementing it in societies, in a way that will still ensure the safety of the human race.
While the issues are as old as the word robot, serious academic discussion started around the year 2000. Robot ethics requires the combined commitment of experts from several disciplines, who have to adjust laws and regulations to the problems resulting from scientific and technological achievements in robotics and AI. The main fields involved in robot ethics are: robotics, computer science, artificial intelligence, philosophy, ethics, theology, biology, physiology, cognitive science, neuroscience, law, sociology, psychology, and industrial design.

History and events

Some of the central discussions of ethics concern the treatment of non-human or non-biological things and their potential "spirituality"; as machinery and eventually robots developed, this line of philosophy was applied to robotics as well. One of the first publications directly addressing and setting the foundation for robot ethics was "Runaround", a science fiction short story written by Isaac Asimov in 1942, which featured his well-known Three Laws of Robotics. Asimov continually altered these three laws within his science fiction works, eventually adding a fourth, or zeroth, law to precede the first three. The short term "roboethics" was most likely coined by Gianmarco Veruggio.
An important event that propelled the concern of roboethics was the First International Symposium on Roboethics in 2004, a collaborative effort of Scuola di Robotica, the Arts Lab of Scuola Superiore Sant'Anna, Pisa, and the Theological Institute of Pontificia Accademia della Santa Croce, Rome. "After two days of intense debate, anthropologist Daniela Cerqui identified three main ethical positions:
  1. Those who are not interested in ethics. They consider that their actions are strictly technical, and do not think they have a social or a moral responsibility in their work.
  2. Those who are interested in short-term ethical questions. According to this profile, questions are expressed in terms of "good" or "bad," and refer to some cultural values. For instance, they feel that robots have to adhere to social conventions. This will include "respecting" and helping humans in diverse areas such as implementing laws or helping elderly people.
  3. Those who think in terms of long-term ethical questions, about, for example, the "digital divide" between South and North, or young and elderly. They are aware of the gap between industrialized and poor countries, and wonder whether the former should not change their way of developing robotics in order to be more useful to the South. They do not formulate the question 'what for' explicitly, but we can consider that it is implicit".
Computer scientist Virginia Dignum noted in a March 2018 issue of Ethics and Information Technology that the general societal attitude toward artificial intelligence has, in the modern era, shifted away from viewing AI as a tool and toward viewing it as an intelligent "team-mate". In the same article, she assessed that, with respect to AI, ethical thinkers have three goals, each of which she argues can be achieved in the modern era with careful thought and implementation.
Roboethics as a scientific or philosophical topic has become a common theme in science fiction literature and film. One film that could be argued to be ingrained in pop culture, and that depicts a dystopian future use of robotic AI, is The Matrix, which portrays a future where humans and conscious, sentient AI struggle for control of planet Earth, resulting in the destruction of most of the human race. An animated film based on The Matrix, The Animatrix, focused heavily on the potential ethical issues and insecurities between humans and robots. The film is broken into short stories, and The Animatrix's animated shorts are also named after Isaac Asimov's fictional stories.
Another facet of roboethics is specifically concerned with the treatment of robots by humans, and has been explored in numerous films and television shows. One such example is Star Trek: The Next Generation, which has a humanoid android, named Data, as one of its main characters. For the most part, he is trusted with mission-critical work, but his ability to fit in with the other living beings is often in question. More recently, the movie Ex Machina and the TV show Westworld have taken on these ethical questions quite directly by depicting hyper-realistic robots that humans treat as inconsequential commodities. The questions surrounding the treatment of engineered beings have also been a key component of Blade Runner, whose source novel dates to 1968, for over 50 years. Films like Her have distilled the human relationship with robots even further by removing the physical aspect and focusing on emotions.
Although not a part of roboethics per se, the ethical behavior of robots themselves has also been a recurring issue in roboethics in popular culture. The Terminator series focuses on robots run by a conscious AI program with no restraint on the termination of its enemies. This series, too, follows the same archetype as The Matrix series, where robots have taken control. Another famous pop culture case of robots or AI without programmed ethics or morals is HAL 9000 in the Space Odyssey series, in which HAL kills the humans on board to ensure the success of the assigned mission after his own existence is threatened.

Robot Ethics and Law

With contemporary technological issues emerging as society advances, one topic that requires thorough thought is how robot ethics relates to the law. Academics have been debating how a government could go about creating legislation that accounts for robot ethics and law. Two scholars who have been asking these questions are Neil M. Richards, Professor of Law at Washington University in St. Louis, and William D. Smart, Associate Professor of Computer Science at Washington University in St. Louis. In their paper "How Should the Law Think About Robots?" they make four main claims concerning robot ethics and law. First, the groundwork of their argument rests on a definition of robots as "non-biological autonomous agents that we think captures the essence of the regulatory and technological challenges that robots present, and which could usefully be the basis of regulation." Second, the pair examines the advanced capacities robots could attain within roughly a decade. Their third claim draws a relation between the legal issues robot ethics faces and the legal experience of cyber-law, meaning that robot ethics legislation can look to cyber-law for guidance. The "lesson" learned from cyber-law is the importance of the metaphors through which we understand emerging issues in technology: if we get the metaphor wrong, the legislation surrounding the emerging technological issue is most likely wrong as well. Their fourth claim argues against a metaphor the pair calls "the Android Fallacy": the mistaken assumption that robots, as non-biological entities, are "just like people".