From Military Robots to Self-driving Pizza Delivery


This chapter will give you a short introduction to roboethics and address some ethical issues raised by recent robot technology. The intention here is not to give definitive answers but to present the issues themselves.[^1]

Now, without further ado, here is what we will do in this chapter:
(1) In the first section, to get things going, we will have a brief cultural-historical look at our obsession with artificial creatures.
(2) Then, we will turn to roboethics and what it is concerned about.
(3) Next, we will address some ethical issues regarding current robot technology, in particular military robots, companion robots, care robots, and self-driving cars.

(1) A little history to begin with

Humans have been obsessed with artificial creatures for a long time. Just consider Talos from Greek mythology, a giant bronze automaton that is supposedly good at crushing your enemies (as early as 400 BC). Then, of course, there is the Golem of the Jewish tradition, a creature made out of non-organic material, such as clay, that comes to life through magic. Another example that is closer to robots comes to us from Leonardo da Vinci, who devised a mechanical humanoid knight (around 1490). Our obsession with artificial creatures generally, and mechanical automatons in particular, is nowhere more evident than in movies and literature. To name just two historic examples: there is E.T.A. Hoffmann’s famous story Der Sandmann (1816), which features an artificial woman named Olimpia, and there is the classic movie Metropolis (1927) by Fritz Lang, in which the artificial creature Maria stirs unrest. Of course, we could continue this list of examples until we arrive at the latest installments in pop culture, ranging from cute little robots like Wall-E (2008) to cunning murder machines like the one in Ex Machina (2014). So, taking into account our obsession with artificial creatures, it may not come as a surprise that we are at a stage of technical development where vacuum robots like Roomba clean our apartments, self-driving cars are likely to hit the streets in the near future, and care robots are deployed in hospitals and retirement homes.[^2]

(2) Roboethics

Before we come to roboethics, a quick word on the classification of robot technology. As one may expect, there are many ways to classify robot technology; Kopacek (2013), for example, offers one. For the purpose of this chapter, however, let us use a simpler classification that divides robots into industrial robots (which we will not address here) and non-industrial robots, and then divides the latter further into kinds such as military robots, companion robots, care robots, and autonomous vehicles.

Accordingly, roboethics can be split up into assistive roboethics, military roboethics, and so forth.

Now, what is roboethics? To answer this, we will first take a look at ethics generally and then shift to roboethics. Although in ordinary contexts ‘ethics’ and ‘morality’ are used interchangeably, it is customary, at least in philosophy, to distinguish the two. Morality refers to the collection of norms and values that people hold, whereas ethics is the investigation of and reflection on morality. Simply put, ethics reflects on right and wrong conduct. Importantly, ethics is also concerned with the justification of our conduct, that is, with giving reasons for or against something. So, for example, when we say ‘Hitting a child is wrong’, we pass a normative judgment. However, judging that something is good or bad, right or wrong, is not enough. We also have to provide reasons (that is, a justification) for why we think that this is the right or the wrong conduct.[^3]

Traditionally, ethics is concerned with the proper conduct towards other human beings and towards non-human living beings. However, ethics nowadays also includes reflecting on right and wrong actions regarding the environment, and recently it has come to include reflection on how we should treat our robots. We will come back to the behavior towards our own creations in the last section. For now, let us turn to roboethics.

Generally speaking, roboethics is concerned with the examination and analysis of the ethical issues associated with the design and use of robots, for example whether some robots should not be used because they are a threat to human wellbeing, or whether some robots infringe on values and human interests like privacy. And, again, ethical examination here is also concerned with providing reasons for positions and for certain actions. Please note that in this chapter we will focus on robots that possess a certain level of autonomy. Why the talk about autonomy here? Well, admittedly, all robots raise some ethical questions, but as a general rule, the more autonomy a robot has, the more moral sensitivity and scrutiny are required. So, the focus in the next section is on robot technology that exhibits a certain amount of autonomy or ‘intelligence’, that is to say, robots that are able to carry out certain tasks without human intervention. Keep in mind that autonomy in robots should not be confused with autonomy in humans, where it usually means conducting one’s life according to one’s own reasons.

Let us briefly summarize what we have addressed so far. First, we looked at our obsession with artificial creatures and robots. Then, we introduced a simple way of classifying robots. Most importantly, we addressed ethics and roboethics. In the next section, we will look at some ethical issues that arise in connection with particular robot technologies. Specifically, we concentrate on four types of robot technology: military robots, companion robots, care robots and, last but not least, autonomous vehicles.

(3) Robots and ethics

The most natural question on many people’s minds when it comes to robots is: how do we get robots to behave in a way that we deem appropriate? In his novels, the author Isaac Asimov presents an answer to this question. He puts forth the idea that robots may be programmed to behave according to moral rules or laws. So, for example, the robot could be programmed to do x, but not to do y (a small code sketch of this idea follows the list below). The rules that he introduced have come to be known as ‘Asimov’s laws of robotics’, and they are as follows:

  1. First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. Second Law: A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
  3. Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
  4. Zeroth Law (added later): A robot may not harm humanity or, through inaction, allow humanity to come to harm.
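
To make the idea concrete, here is a minimal sketch in Python of what programming a robot with such a strict priority ordering might look like. Everything here is hypothetical: the predicates like `harms_human` are placeholders for judgments that, as we will see shortly, are exactly where the approach runs into trouble.

```python
# A purely illustrative sketch of Asimov-style prioritized rules.
# An action is a dict of hypothetical flags; real robots have no
# reliable way to evaluate a predicate like "harms a human".

def permissible(action):
    """Check an action against a strict priority ordering of the laws."""
    if action.get("harms_humanity"):   # Zeroth Law (highest priority)
        return False, "Zeroth Law"
    if action.get("harms_human"):      # First Law
        return False, "First Law"
    if action.get("disobeys_order"):   # Second Law
        return False, "Second Law"
    if action.get("harms_self"):       # Third Law
        return False, "Third Law"
    return True, None

# A human order to harm someone is vetoed by the higher-priority First Law:
print(permissible({"harms_human": True}))  # -> (False, 'First Law')
print(permissible({"harms_self": True}))   # -> (False, 'Third Law')
print(permissible({}))                     # -> (True, None)
```

Note that in a trolley-style case (see below) both available actions would set `harms_human`, one through action and one through inaction, so a rule set like this can leave the robot with no permissible option at all.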

Now, at first glance, the idea of programming robots to behave according to a set of rules seems like a very reasonable thing to do. However, there are some well-known problems with this approach (for more on the shortcomings of Asimov’s laws and an alternative, see Murphy and Woods 2009). Asimov was well aware of these problems and used them as a device to propel the narrative of his science-fiction stories. One problem concerns the vagueness of the terms used in the laws. For example, it is not clear what the term ‘human’ means in the First Law, or what ‘robot’ and ‘doing harm’ mean precisely. Further, there is the issue of a bloat of rules. The world is a messy place, and we would need a lot of rules, plus rules for the exceptions to the rules, in order to address all the circumstances that a robot may find itself in. This, however, seems to be an impossible task. The most obvious problem, though, is that there are a lot of situations in which one rule will conflict with another. Consider the well-known trolley scenario, where an out-of-control trolley runs along a track on which there are five people. The trolley can be diverted onto another track. Unfortunately, there is a person on this other track. So, a decision needs to be made between diverting the trolley to the track where it will run over, and presumably kill, the one person, or letting it stay on track and run over the group of five. How should a robot in this situation behave, given that it is supposed to save human lives? (For that matter, how are humans supposed to act in such a situation?) Last but not least, another problem is that Asimov’s rules may not be feasible in some contexts. There may be contexts in which we expect the robot to harm a human being, for example. This brings us to the first robot technology that we will take a closer look at: military robots.

Military robots

Not surprisingly, the military is at the forefront when it comes to robot technology. Military robots are here, and they are here to stay. For example, in 2005 the New York Times reported on plans of the Pentagon to replace soldiers with robots, and only five countries backed a UN resolution to ban killer robots. It is worth pointing out here that fully autonomous weapons already exist (recall that autonomous here means that the robot goes about its task without human intervention). South Korea has an automatic machine gun that can identify and shoot targets without human commands. Another example comes from Russia, where the military uses autonomous tanks to patrol sensitive areas.

Just for the fun of it, here are two more examples: Dubai recently showcased one of its new Robocops, which is supposed to patrol the streets in the near future, and Russia is developing a Terminator look-alike that can actually fire a gun (or two guns, if necessary)!

Now, despite their ability to shoot people and their occasionally intimidating looks, using military robots could have some beneficial consequences that may be taken as reasons to ethically justify their deployment. For example, military robots may reduce casualties because you need fewer humans to fight your war. Of course, this advantage only applies to the side that has military robots. Further, robots are not subject to psychological stress like human beings. Given that a lot of soldiers suffer from PTSD (post-traumatic stress disorder) after returning from the battlefield, it seems to be a good idea to reduce this kind of suffering by using robots instead of humans. Another advantage is that robots do not give in to emotions and rage and, unlike human soldiers, blindly obey the commands given to them.

Despite these (potential) advantages, there are some crucial ethical concerns that need to be addressed. One of the pressing issues is whether military robots should be given the authority to fire at humans without a human in the loop. This is particularly important because we need to make sure that robots are sufficiently able to distinguish between combatants and civilians. Further, the availability of military robots may lower the threshold for armed conflict. After all, if you have a bunch of robots that can fight for you without human losses on your side (!), then the motivation to start an armed conflict may be higher. A related issue is that the potential ease of using robots may foster an attitude that takes military robots to be a ‘technical fix’ for problems, so that other, more peaceful, solutions drop out of sight. Also, there is the question of how responsibility is to be distributed, especially when a military robot harms people that it was not supposed to harm. How do we determine who is responsible for the behavior of military robots, particularly when they are autonomous? This issue is very complex because we have to take into account the multitude of players involved: the creators of the robot (including IT companies that provide the software, and other research institutions) and the military (for example, the people in the chain of command, like commanders and soldiers). Or maybe we can attribute responsibility to the robot itself? Now, it is not surprising that philosophers have a lot to say about this issue. Some authors have argued that it is impossible to attribute responsibility to any of the players when it comes to military robots (Sparrow 2007), whereas other authors have suggested ways of attributing responsibility (e.g., Schulzke 2013).[^4]

Companion robots

After the rather bleak topic of killer machines, let us now turn to more uplifting machines: companion robots. Usually, these robots are set up to allow some kind of interaction, such as speech or gestures. In short, companion robots are robots that, as one would expect from the name, keep people company at home, at work, and in hospitals and retirement homes. The classic example here is Paro, the fluffy robot seal that can be used in retirement homes to cognitively stimulate people with dementia or calm them down. Two more recent companion robots are Kuri and Buddy. These two are supposed to be all-round companions that can play music, remind people of tasks and duties, and, thanks to a built-in camera, be sent to specific places in your house to check something out.

There are some things that speak in favor of having companion robots. There is some indication that companion robots increase interaction and communication in autistic children (Scassellati, Admoni, & Matarić 2012). Companion robots may also ameliorate loneliness in some people, especially when they are elderly or socially isolated (Bemelmans et al. 2012). However, the cuteness and cuddliness of companion robots should not blind us to the ethical issues that need to be addressed. One of the problems concerns attachment and deception: should we really create things that have a high potential for attachment on the part of the user, but where this attachment ultimately rests on a deception? After all, the robot pretends to be something that it is not: a friend or companion. In other words, do the benefits that a companion robot may bring outweigh the cost that said benefits are achieved by deceiving a human into thinking that he or she has a reciprocal relationship with it (Sparrow & Sparrow 2006)? Another ethically relevant issue is data security, because people interact and talk with these companion robots in intimate settings like their homes. The information gathered in these interactions should be protected and stored securely, so as not to allow access by unauthorized third parties. Also, it is worthwhile to think about the ownership of the data that are gathered in these intimate contexts. Should the ownership of the data reside with the person who interacts with the companion robot, or is it legitimate that the company that produced the robot has ownership? (A similar concern can be raised regarding other technologies as well; for example, think of devices and services like Amazon’s Alexa or Microsoft’s Cortana.) Another ethical issue concerns the level of authority and autonomy that we give to our companion robots. Should a companion robot that is ‘tasked’ with keeping a young child company be able to intervene when the child is about to do something that she is not supposed to do, eating candy for example? Some of the ethical issues just addressed also apply to assistive or care robots, to which we will turn next.

Care Robots

Care robots are robots that fulfill crucial tasks in the care for other people, primarily the elderly or bodily disabled. Such tasks may include grasping and lifting objects, or carrying and feeding people. An example of a state-of-the-art care robot is the so-called Care-O-bot, developed by the Fraunhofer Institute, which is equipped with a tray for bringing things and a tablet interface for displaying websites. Further, the robot can remind its user to take medicine or call for help when the user has fallen and cannot get up.

There are clear advantages to care robots. Obviously, they can support elderly and ill people in their homes, which increases their independence and quality of life. Care robots could also promote mental welfare in that they may prevent feelings of loneliness. Further, they can potentially prevent danger and save lives when they are equipped with the capability to monitor the health and behavior of people. Lastly, the introduction of care robots may be a way to address the so-called care gap in an aging society, in that they take some burden off care personnel.

However, we should not be so careless as to neglect some crucial ethical issues when it comes to care robots. One of the most pressing issues is the potential conflict between the values of autonomy and freedom of choice on the part of the user and the level of interference in the life of the elderly. For example, how persistent should the robot be if a person refuses to take their medicine? Another obvious issue concerns data security. Care robots are used in a sensitive environment and may also have access to medical and other personal data of the owner, so it needs to be ensured that these data are safe and do not get into the hands of people who would exploit them. Further, care robots may lead to a decrease in social contact on the part of the elderly, because relatives may choose to deploy a robot instead of a human caretaker, or visit less frequently because grandma has a robot companion. Also, people who are cared for by robots may feel objectified by being handled by a machine. Further, as with companion robots above, the issue of deception lurks: it may be argued that care robots create the illusion of a relationship because they ‘deceive’ the user or patient by pretending to be a companion or friend although in reality they do not care. Ultimately, when it comes to care robots, there are also some broader societal issues that we have to take into account. We should ask ourselves in what kind of society we want to live. Do we want to hand the most vulnerable members of society over to the care of robots, and if so, to what extent exactly? The answer to questions like this should concern everyone and should not be left exclusively to the people who drive technological development. Speaking of driving, the last robot technology that we will have a closer look at is self-driving cars.

Autonomous vehicles

If you follow the media, you will be familiar with both Tesla’s and Google’s self-driving cars. However, given the price of a Tesla car, maybe a more relatable example is the self-driving pizza car that is being tested in a collaboration between Ford and the pizza chain Domino’s. This is how the self-driving pizza car is supposed to work: you order the pizza, and an employee puts it into the self-driving delivery vehicle. Then, the car finds its way to your house autonomously. When the car with the pizza arrives at your place, you take out the pizza and the car drives off to the pizza place again. It is likely that we will actually see self-driving pizza cars in the not-too-distant future, because other companies have entered the race: recently, Pizza Hut teamed up with Toyota to work on its own version of an autonomous pizza delivery vehicle.

Having your delicious pizza pie delivered by an autonomous vehicle has some well-known advantages that also apply to self-driving cars in general. Most traffic accidents are due to human error, and there are some estimates that self-driving cars could reduce traffic deaths by 90 percent. Saving lives is valuable, so that speaks in favor of self-driving cars. Also, self-driving cars could lead to fewer cars on the road and better traffic flow, because of the potential capability of these cars to connect to each other and communicate traffic data. This would benefit cities, the environment, and individuals, because it ultimately means fewer traffic-related pollutants, which are among the culprits in ailments such as asthma.

Nevertheless, despite the advantages of self-driving cars, some ethical issues need to be discussed. Similar to the military robot technology that we addressed above, there is the issue of responsibility ascription and distribution: whom should we hold responsible when a self-driving car causes an accident? A related issue concerns what kind of decision capabilities we want in a self-driving car. Think about a critical traffic situation, for example a version of the trolley scenario that we looked at in the section on Asimov’s laws. Imagine there is a group of people ahead, and a choice needs to be made between running over the group, steering to the left and running over one person, or steering to the right and crashing into a wall, possibly injuring the people in the car. Here the question naturally arises: based on which criteria is the autonomous car supposed to decide? One option is to give the car no decision power in these situations and leave it up to the driver. However, what if the driver is not attentive? Should the car then be allowed to decide on an option? Ultimately, we have to ask ourselves what risk we want to take as a society and whether the benefits of having self-driving cars on the street outweigh the dangers and risks. Another crucial and not-to-be-neglected ethical issue is the potential loss of jobs that comes with self-driving cars. According to the American Trucking Associations, there are 3.5 million truck drivers in the US. You would not need them anymore if trucks could drive autonomously. The same goes for our self-driving pizza delivery vehicle, because it eliminates the human element in pizza delivery. In the concluding section, we will see that robots may not only come for our jobs but also lay claim to rights of their own.
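
To see how the question of decision criteria becomes concrete, here is a minimal, purely illustrative sketch in Python. It encodes one possible criterion, minimizing expected casualties, for the scenario just described; all names and numbers are hypothetical, and the point is precisely that selecting such a criterion at all is an ethical decision, not a mere engineering detail.

```python
# Purely illustrative: a naive "minimize expected harm" rule for the
# trolley-style traffic scenario described above. The casualty
# estimates are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_casualties: float  # hypothetical estimate

def choose(options):
    # A utilitarian-style criterion: pick the option with the fewest
    # expected casualties. A rights-based ethic might instead forbid
    # some options outright rather than weighing them against each other.
    return min(options, key=lambda o: o.expected_casualties)

options = [
    Option("stay on course", 5.0),  # run into the group ahead
    Option("swerve left", 1.0),     # hit the one person
    Option("swerve right", 0.5),    # hit the wall, risking the passengers
]
print(choose(options).name)  # -> 'swerve right' under this criterion
```

Note that a different, equally defensible criterion (say, never actively endanger the car’s passengers, or never redirect harm onto a bystander) would select a different option, which is why the choice of criterion, and who gets to make it, is itself the ethical issue.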

Ethical treatment of robots?

Remember, at the beginning of this chapter we said that ethics not only deals with justifiable conduct regarding other people and non-human animals, but that ethics nowadays is also concerned with the right conduct towards artificial products. Consider this example: in October 2017, Saudi Arabia granted citizenship to the sophisticated humanoid robot called Sophia, the first robot in the world to receive citizenship. This incident suggests that we may want to start thinking about how we treat robots and what part they will play in our social world. Should we regard them as persons and grant them rights? After all, we regard companies as persons and grant them certain rights. Further, is it possible to treat robots in an unethical way (e.g., by harming them)? We will likely be confronted with these and similar questions in the future. All the more so because robots will likely reach a level of sophistication that will prompt us to rethink what it is that distinguishes us from them. So, we had better get a head start in thinking about these issues instead of trying to catch up with the technical development later.

[^1]: There are a lot of excellent introductions to the field of roboethics. The book ‘Robot Ethics: The Ethical and Social Implications of Robotics’ (2012, Eds. Lin, Abney & Bekey) covers the ethical and social issues of robotics in depth. Recently, Spyros Tzafestas (2016) has published an introduction to robot ethics that covers many relevant issues and is accessible to the general public.

[^2]: If you want to delve deeper into the history of automatons (what we today call robots), Kang in his book ‘Sublime Dreams of Living Machines: The Automaton in the European Imagination’ (2011) provides an intellectual history of mechanical beings.
The Czech author Karel Čapek was the first to introduce the term ‘robot’, in his play Rossum’s Universal Robots (1920). Interestingly, in this play the robots try to overpower their human masters. This is another example, like Maria in Fritz Lang’s movie or the humanoid robot in Ex Machina, of both our obsession with and our fear of our own creations.

[^3]: If you want to know more about the distinction between morality and ethics, the BBC has a web page devoted to the question ‘What is ethics?’. If you want more in-depth material on ethics and ethical theories, please visit the entry on ethics in the Internet Encyclopedia of Philosophy.

[^4]: Because of the risks and moral dilemmas involved in military robots, some people, including Stephen Hawking and Elon Musk, have called for a ban on ‘killer robots’.

Further readings

Bemelmans, R. et al. (2012). Socially Assistive Robots in Elderly Care: A Systematic Review into Effects and Effectiveness. Journal of the American Medical Directors Association, 13(2), 114–120.

Kang, M. (2011). Sublime dreams of living machines: The automaton in the European imagination. Cambridge, Mass.: Harvard University Press.

Kopacek, P. (2013). Development trends in robotics. Elektrotechnik und Informationstechnik (e&i), 2, 42–47.

Lin, P., Abney, K., & Bekey, G. A. (2012). Robot ethics: the ethical and social implications of robotics. Cambridge, Mass.: MIT Press.

Murphy, R., & Woods, D. D. (2009). Beyond Asimov: The Three Laws of Responsible Robotics. IEEE Intelligent Systems, 24(4), 14–20. https://doi.org/10.1109/MIS.2009.69

Scassellati, B., Admoni, H., & Matarić, M. (2012). Robots for Use in Autism Research. Annual Review of Biomedical Engineering, 14(1), 275–294.

Schulzke, M. (2013). Autonomous Weapons and Distributed Responsibility. Philosophy & Technology, 26(2), 203–219. https://doi.org/10.1007/s13347-012-0089-0

Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24, 62–77.

Sparrow, R., & Sparrow, L. (2006). In the hands of machines? The future of aged care. Minds and Machines, 16, 141–161.

Tzafestas, S. G. (2016). Roboethics: A Navigating Overview. Springer.

Related videos

Dr. Steffen Steinert - Roomba, Drones and Terminator - The ethical implications of robotic technology
https://www.youtube.com/watch?v=5tTEEGRAHsI&t=11s