This article was submitted to Robot Learning and Evolution, a section of the journal Frontiers in Robotics and AI
This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
Rapid developments in evolutionary computation, robotics, 3D-printing, and material science are enabling advanced systems of robots that can autonomously reproduce and evolve. The emerging technology of robot evolution challenges existing AI ethics because the inherent adaptivity, stochasticity, and complexity of evolutionary systems severely weaken human control and induce new types of hazards. In this paper we address the question how robot evolution can be responsibly controlled to avoid safety risks. We discuss risks related to robot multiplication, maladaptation, and domination and suggest solutions for meaningful human control. Such concerns may seem far-fetched now, however, we posit that awareness must be created before the technology becomes mature.
Surprisingly, the idea of robot evolution is one hundred years old. The famous play by Karel Čapek that coined the word “robot” was published in 1920 (
Towards the end of the twentieth century the principles of biological evolution were transported to the realm of technology and implemented in computer simulations. This gave rise to the field of Evolutionary Computing, and evolutionary algorithms proved capable of delivering high-quality solutions to hard problems in a variety of scientific and technical domains, offering several advantages over traditional optimization and design methods (
Until now, work on evolutionary robotics has mostly been performed in computer simulations, safely confined to a virtual world inside a computer [e.g., (
Some of the landmarks of the history of robot evolution. We show examples of systems that demonstrated robot reproduction or evolution incarnated in the real world.
However, this situation is changing rapidly, and after the first major transition from “wetware” to software in the 20th century, evolution is on the verge of a second one, this time from software to hardware (
To make robots evolvable, selection and reproduction need to be implemented. Selection of “robot parents” can be done by evaluating a robot’s behavior and allocating higher reproduction probabilities to robots that perform well. For reproduction, two facets of a robot should be distinguished: the genotype, the code that specifies the robot, and the phenotype, the physical robot itself.
A robotic genotype obtained by mutating the genotype of one robot or recombining the genotypes of two parent robots encodes a new robot, the offspring. This offspring could be constructed by feeding the genotype to a 3D printer that makes a robot as specified by this genotype. However, currently there are no 3D printers that can produce a fully functional robot including a CPU, battery, sensors, and actuators. Arguably, this problem is temporary, and rapid prototyping of such components will be possible in the (near) future. A practicable alternative for now is to combine 3D printing, prefabricated functional components stored in a repository (e.g., CPUs, batteries, sensors, and actuators), and automated assembly. In such a system, the genotype specifies a number of 3D printable body parts with various shapes and sizes, the types, numbers and geometrical positions of the prefabricated body parts and the properties of an adequate software “brain” to control the given body. The production of a new robot can be done by industrial robot arms that retrieve the 3D printed body parts from the printers, collect the necessary prefabricated components from the storage, and assemble them into a working robot. After that, the software can be downloaded and installed on the CPU and the new robot can be activated.
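The selection–reproduction loop described above can be sketched in code. The genotype fields (limb lengths, sensor count, controller weights), the variation operators, and the population parameters below are illustrative assumptions for the purpose of this sketch, not the authors’ actual encoding:

```python
import random

# Illustrative genotype: printable body-part dimensions, a count of
# prefabricated components (sensors), and controller ("brain") weights.
def random_genotype(rng):
    return {
        "limb_lengths": [rng.uniform(0.05, 0.30) for _ in range(4)],
        "n_sensors": rng.randint(1, 4),
        "brain_weights": [rng.gauss(0.0, 1.0) for _ in range(8)],
    }

def mutate(genotype, rng, rate=0.1):
    # Copy first so parents stored in the population are never altered.
    child = {k: (list(v) if isinstance(v, list) else v) for k, v in genotype.items()}
    for i, w in enumerate(child["brain_weights"]):
        if rng.random() < rate:
            child["brain_weights"][i] = w + rng.gauss(0.0, 0.2)
    if rng.random() < rate:
        child["n_sensors"] = max(1, child["n_sensors"] + rng.choice([-1, 1]))
    return child

def recombine(a, b, rng):
    # Uniform crossover: each trait inherited from one of the two parents.
    return {k: (a if rng.random() < 0.5 else b)[k] for k in a}

def evolve(fitness, generations=20, pop_size=10, seed=0):
    rng = random.Random(seed)
    population = [random_genotype(rng) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[: pop_size // 2]   # better half gets to reproduce
        offspring = [
            mutate(recombine(rng.choice(parents), rng.choice(parents), rng), rng)
            for _ in range(pop_size - len(parents))
        ]
        population = parents + offspring    # survivors plus new robots
    return max(population, key=fitness)
```

In a physical implementation, the call to `fitness` would correspond to evaluating a constructed robot’s real-world behavior, and producing `offspring` would correspond to printing and assembling new robots in the reproduction facility.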
Examples of robot reproduction facilities. Photos of two (semi) automated robot reproduction facilities.
For practitioners, evolution offers a way to adapt robot designs on the fly in dangerous or inaccessible places [19], such as mines, nuclear power plants, or even extraterrestrial locations (see
Artist impression of evolving robots in space.
A key insight of this paper is that the science and technology of robot evolution are elevating the known concerns regarding AI and robotics to a new level by the phenomenon we call second order engineering: humans no longer design the robots directly, but design the evolutionary process that produces them.
The new ethical challenges related to robot evolution are rooted in the inherent inefficiency and unpredictability of the evolutionary process. Evolution proceeds through the generation of heritable variation (recombination and mutation) in combination with selection that favors more successful forms at the cost of large numbers of failures (
Whenever there is a technology that is not directly under human control–technologies without a “steering wheel”–and whenever the process is unpredictable, questions about risks and responsibilities arise (
It is hard to overstate the possible implications of the two key enabling features in evolving robots: self-replication and random change in robot form and behavior. First, self-replication allows robots to multiply without human intervention and thus would raise the need for control over their reproduction. Second, mutation or random evolutionary changes in the design of the robots could create undesired robotic behaviors that may harm human interests. Before developing any new technology with such potentially large ramifications, we should determine the acceptability of its consequences and identify ways to anticipate unwanted effects (
Several other fields of science have faced similar safety dilemmas during the development of new technologies and subsequent experimentation. In health sciences, biomedical ethical dilemmas are typically evaluated using a principle-based approach, built on the four principles of Beauchamp and Childress (
In evolutionary robotics all of these principles have clear relevance, but, most pressingly, the risk of harm and the question of responsibility need to be considered in more detail. These, in turn, are intimately related to the crucial issue of control and the potential loss of it. In order for a particular human being or group of human beings to be responsible for some process or outcome, it is usually thought that they need to have some degree of control of the process or outcome. Moreover, loss of control can be viewed as a form of harm, because it is typically seen as undermining human autonomy, and it may compromise other values, such as well-being, which depend to some extent on our ability to control what happens around us.
The issue of risk in the field of AI has previously been considered in relation to control concerns associated with the development of superintelligence (
In contrast, evolving physical robots need not possess human-level intelligence; animal-level intelligence in such robots could be sufficient to do significant harm because of their physical features. Even without much individual intelligence and power, the evolved robots could potentially collaborate efficiently and perform much more complex tasks together than they could on their own. In other words, similar to highly social animals such as ants and wasps in the natural world, the number of robots and the cooperation among them could be decisive factors. Therefore, the plausibility of a harmful scenario with evolving robots is anything but trivial, and issues of control and the potential loss of it should be considered.
The most difficult aspect in anticipating possible risks of evolving robots is that we would be dealing with an
The risks of harm associated with robot evolution as identified above all arise from the underlying evolutionary mechanisms of variation, selection, and reproduction. Possible control measures include 1) a centralized reproduction facility through which every new robot must pass, 2) predictive simulations (a “crystal ball”) that forecast the direction of the evolutionary process, and 3) a “kill switch” that can halt reproduction altogether.
These control measures, meaningful as they are, can leave humans vulnerable because of the very nature of evolving systems, in which change is inherent. Evolving robots represent a whole new breed of machines that can and will change their form and behavior. This implies that robots could adapt their behavior to escape the implemented control measures. Therefore, controlling evolving robots is different from controlling the production of fixed entities, such as cars. One would therefore need to continuously adjust the control measures to stay ahead of evolutionary escape routes, not unlike a co-evolutionary arms race (
First, the robots could develop ways to circumvent the technological safeguards that have been put in place. A very unlikely but conceivable escape route is the “Jurassic Park scenario,” in which the robots find an alternative way of reproducing outside the central reproduction facility. To mitigate this risk, additional reproductive constraints may be necessary, e.g., making viability depend on an ingredient whose supply humans control (
Second, while Bostrom [36] suggests “value loading” for robotic and AI systems, in the case of evolving robot populations it is important to realize that it would be risky to rely on the (current) features of individual robots. In an evolutionary process the robot’s features undergo change. This does not mean that creating certain features (such as values or goals) in the robots is without merit, but it should be combined with some form of verification that the goals/values continue to be present in the newly produced robots. This requires new technologies that effectively combine immutable values with adaptable robot features and protocols for a thorough screening of “newborn” robots before they are allowed to leave the reproduction facility.
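One way to combine immutable values with adaptable robot features, as a minimal sketch: keep a protected “values” segment of the genotype out of reach of the variation operators, and verify a cryptographic hash of that segment before any newborn robot leaves the reproduction facility. The field names and the specific values below are hypothetical:

```python
import hashlib
import json

# Design-time constant: the value segment every robot must carry, and its
# canonical hash. Mutation and recombination never touch this segment.
CANONICAL_VALUES = {"harm_avoidance": True, "obey_kill_switch": True}
CANONICAL_HASH = hashlib.sha256(
    json.dumps(CANONICAL_VALUES, sort_keys=True).encode()
).hexdigest()

def verify_values(genotype):
    """Return True only if the immutable value segment is intact."""
    segment = genotype.get("values")
    digest = hashlib.sha256(
        json.dumps(segment, sort_keys=True).encode()
    ).hexdigest()
    return digest == CANONICAL_HASH
```

A check like this only verifies that the stored values are unchanged; whether the robot’s evolved behavior actually honors those values is a separate, harder verification problem, which is exactly why the text calls for new technologies here.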
A third possibility for evolving robots to escape human control is non-technological, exploiting deep-seated emotional response patterns. Specifically, humans may grow fond of robots, developing feelings of “affection” towards them (
These sensibilities can be exploited if robots evolve features humans tend to like, such as, possibly, big eyes, certain locomotion patterns, or “lovely” sounds and gestures. Such features can increase attachment, undermine human controllers’ ability to remain objective, and provide an evolutionary advantage in the long run. For instance, a robot could entice a human into supplying it with extra energy or allowing it to reproduce. Similarly, a “lovable” robot could prevent a human from switching off the robot or using the “kill switch” to shut down the evolution of the whole robotic species. These scenarios illustrate how emotions could get in the way of strict human control and induce an evolutionary bias [cf. (
The above-mentioned considerations concern ways of controlling the process of robot evolution. But there are more conceptual–ethical–concerns as well. Being able to ascribe responsibility is always important when risks are involved, both from an ethical and a legal point of view. The relevant form of responsibility here does not only have a backward-looking component (who can be blamed when things have gone wrong?), but is also forward-looking and clarifies who should do what in order to maintain control, e.g., mitigating risks and taking precautions (
At this point it may be instructive to refer to recent work by Santoni de Sio and Van den Hoven (
The track-and-trace theory, understood as including the monitoring condition, looks promising from an ethical perspective for robot evolution. If the robot evolution is tracking human interests, if there are people who understand the process and its moral significance, and are able to monitor the robot evolution, then we can tentatively say that meaningful human control over this process has been achieved. If those conditions are fulfilled, that could help to fill any potential responsibility gaps.
The control solutions suggested above cover the “tracking” requirements from the track-and-trace theory to a significant extent. The centralized, externalized reproduction centers would allow humans to monitor the numbers and types of robots produced each day, while the crystal ball would give insight into the future directions of the evolutionary path of the robots. Being able to monitor robot development in these ways, the humans involved would be able to observe whether human interests are being tracked. If not, they could use the “kill switch.” The tracing part, however, would need to be developed further as, at the moment, we do not have an appropriate level of understanding or control of how the evolutionary process unfolds. At the same time, if studying these evolutionary processes in robots would deepen our scientific understanding of evolution, this could in effect help to also fulfil the tracing condition.
That being said, the big challenge here is, again, the inherent variability of an evolutionary system where new features emerge through random mutations and recombination of parental properties. Even though the whole system, specifically the genetic code (the robotic DNA), the mutation operators, and recombination operators are designed by humans, it is not clear to what extent these humans can be held responsible for the effects over several generations. On the positive side, let us reiterate that robots are observable, thus the genetic material and genealogy tree of an evolving population can be logged and inspected. In principle, it is possible to examine a newly created genotype (the robotic zygote) before the corresponding phenotype (the robot offspring) is constructed and destroy the genotype if it fails a safety test.
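Such a pre-construction screen, combined with the genealogy logging mentioned above, could in outline look like this; the safety criteria and record format are purely illustrative:

```python
# Sketch of a pre-construction screen: every new genotype (the "robotic
# zygote") is logged in an append-only genealogy record and must pass
# safety tests before the corresponding phenotype is built.
genealogy = []  # append-only log of (child_id, parent_ids, verdict)

def safety_test(genotype):
    # Example checks only; real criteria would come from domain experts.
    return (genotype.get("max_speed", 0.0) <= 1.5     # speed cap in m/s
            and genotype.get("n_actuators", 0) <= 8)  # complexity cap

def screen(child_id, genotype, parent_ids):
    """Log the zygote and decide whether its phenotype may be built."""
    verdict = "build" if safety_test(genotype) else "destroy"
    genealogy.append((child_id, tuple(parent_ids), verdict))
    return verdict == "build"
```

Because rejected genotypes are logged rather than silently discarded, the genealogy record also supports the backward-looking side of responsibility: it can later be inspected to reconstruct how a problematic lineage arose.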
In the sections above, our main concern was to protect the human race from evolving robots. However, the matter can be inverted if we conceive of robots that can evolve and learn as a form of artificial life.
First, these robots have the possibility of reproduction, and in biology the crucial difference between life and non-life is reproduction. In addition, these robots share other characteristics with other life forms, such as movement and energy consumption. Second, the robots are not only able to reproduce; they themselves have also evolved. In other words, these robots are not (just) the result of human design, but of an evolutionary process. If humans, generally, start to feel that these robots are
Second, it could be questioned whether certain control-interventions, such as the use of the “kill switch”, are ethical regarding such forms of artificial life. An essential question here is if terminating evolutionary robots should be seen as switching off a machine or as killing a living being (
Robot evolution is not science fiction anymore. The theory and the algorithms are available, and robots are already evolving in computer simulations, safely limited to virtual worlds. Meanwhile, the technology for real-world implementations is developing rapidly, and the first (semi-)autonomously reproducing and evolving robots are likely to arrive within a decade (
A key insight of this paper is that the practice of second order engineering, as induced by robot evolution, raises new issues outside the current discourse on AI and robot ethics. Our main message is that awareness must be created before the technology becomes mature and researchers and potential users should discuss how robot evolution can be responsibly controlled. Specifically, robot evolution needs careful ethical and methodological guidelines in order to minimize potential harms and maximize the benefits. Even though the evolutionary process is functionally autonomous without a “steering wheel” it still entails a necessity to assign responsibilities. This is crucial not only with respect to holding someone responsible if things go wrong, but also to make sure that people take responsibility for certain aspects of the process–without people taking responsibility, the process cannot be effectively controlled. Given the potential benefits and harms and the complicated control issues, there is an urgent need to follow up our ideas and further think about responsible robot evolution.
AE initiated the study and delivered the evolutionary robotics perspective. JE validated the biological soundness and brought in the evolutionary biology literature. GM and SN bridged the area of (AI) ethics and the evolutionary robotics context.
SN’s work on this paper is part of the research program Ethics of Socially Disruptive Technologies, which is funded through the Gravitation program of the Dutch Ministry of Education, Culture, and Science and the Netherlands Organization for Scientific Research (NWO grant number 024.004.031).
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
The handling Editor declared a past co-authorship with one of the authors (AE).
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
We do not consider evolutionary soft robotics here, because that field mainly focuses on actuators and sensors, not on fully autonomous, untethered (soft) robots.