AI and Robotics
The link between technology, artificial intelligence (AI) and robotics is probably so evident that it does not need much analysis. What should instead be at the centre of the present work are the ethical aspects peculiar to these two (often intertwined) spheres of inquiry: AI and robotics. Before proceeding further, we should thus clarify how we differentiate between the two.
AI refers to a more complex and theoretical concept (intelligence), and it is therefore in itself a more elusive concept that can span a number of fields of research. Most relevantly, however, we can place AI within computer science, where it represents a branch with a particular emphasis on the creation of machines as intelligently human-like as possible. There are a number of standard tasks that modern, widespread machines are already capable of performing thanks to their AI. For example, speech recognition allows our smartphones (remember that smart here means intelligent) to make a phone call or give us directions based on their capacity to recognize our voices, process what we are asking them and then provide us with an answer, or perform an action.
Hence, it is not surprising that society is currently pushing for the creation of a number of life-changing innovations, such as self-driving cars, which require enormous amounts of data to be processed. Machines can interact intelligently only if they are provided with ways of categorizing the world, a practice commonly defined as knowledge engineering. Equally important is machine learning: without supervision, learning requires clear patterns in streams of input, while when supervision is available, numerical regression and classification are involved.
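The contrast between the two learning regimes just mentioned can be illustrated with a minimal sketch in plain Python. The data, labels and function names below are invented for illustration and do not come from any real system:

```python
# A toy contrast between supervised and unsupervised learning,
# in plain Python; data, labels and function names are invented
# for illustration only.

def nearest_centroid_classify(train, labels, x):
    """Supervised learning: every training value comes with a label.
    We predict the label of x from the mean (centroid) of each class."""
    groups = {}
    for value, label in zip(train, labels):
        groups.setdefault(label, []).append(value)
    centroids = {lab: sum(vs) / len(vs) for lab, vs in groups.items()}
    return min(centroids, key=lambda lab: abs(x - centroids[lab]))

def two_means_cluster(points, iters=10):
    """Unsupervised learning: no labels are given, so we look for
    structure (here, two clusters via a bare-bones 1-D k-means)."""
    c1, c2 = min(points), max(points)  # crude initial guesses
    for _ in range(iters):
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1, c2 = sum(g1) / len(g1), sum(g2) / len(g2)
    return sorted([c1, c2])

# Supervised: labelled speeds (km/h) of pedestrians vs vehicles.
speeds = [4, 5, 6, 50, 60, 70]
labels = ["pedestrian"] * 3 + ["vehicle"] * 3
print(nearest_centroid_classify(speeds, labels, 55))  # → vehicle

# Unsupervised: the same numbers, without any labels, still split
# into two groups whose centres the algorithm recovers on its own.
print(two_means_cluster(speeds))  # → [5.0, 60.0]
```

The supervised routine needs the human-provided labels (the "knowledge"), while the unsupervised one finds the pattern in the input stream by itself, which is precisely the distinction drawn above.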
Robotics is the other side of the AI coin. Robots require (different degrees of) AI in order to successfully handle the jobs we assign to them.
In some instances, however, robots can also be developed with only partial “independent thinking”, relying not on artificial intelligence but on that of humans.
For example, a robotic arm used in a study conducted at Brown University (https://news.brown.edu/articles/2012/05/braingate2) relied specifically on the intelligence of the patients included in the study (in this case for therapeutic reasons, as they were tetraplegic).
Although such examples are defined as Brain-Computer Interfaces (BCIs), this type of computer use is more “mechanical”, and it assumes supervision, if not full engagement, of human intelligence, thereby making the distinction between AI and robotics more evident.
What kind of ethical problems relate to AI and Robotics?
The improvements related to the implementation of AI and robots are various: from the use of intelligent technology in medical contexts to the increased safety and personalization of many of our everyday gadgets, it would be pointless to deny that life can be made easier by these innovations.
Yet, many ethical issues arise from these new discoveries as well, and we should look into some of the most prominent ones.
Self-driving cars: who should AI let die?
Self-driving cars (or autonomous vehicles) are in the making. From BMW to Uber, major transport companies are investing in projects aimed at providing a full-scale autonomous, intelligent car able to drive without anyone directing it.
This would represent a huge revolution for transport: we would not have to worry about parking somewhere downtown, and if we had too much wine at dinner, the car would not be affected by it. What will affect the car's behaviour, however, is the surroundings through which it rides. Aside from the interaction with other vehicles, the most pressing question is: how would the car react to a foreseeable accident?
A very successful website put online by MIT (http://moralmachine.mit.edu/) has been collecting statistical data showing how we respond to relatively similar moral scenarios. Building on a famous philosophical thought experiment referred to as the “Trolley Problem” (which we will explain more in depth in the “Neuroscience and the Law” chapter), the website puts visitors in a position to choose what in their opinion would be the right behaviour for the car to have (or else, what they would do in such a situation). Would you rather run over two old ladies or a healthy young runner? Should one's social status and past behaviour be taken into account when deciding whose life should be spared in such extreme cases?
The website is very interesting because: a) it shows how inconsistent human choices can be, raising the suspicion that a case-by-case moral assessment of each scenario could produce injustice; and b) for the opposite reason, the experiment shows how a “human eye” might detect important, specific variables that could make an exception to the rule morally acceptable, or even morally required.
Robots and relationships
While waiting to disentangle how robots might drive in our streets, much research has gone into using them in ever more intimate parts of our lives, allowing robots to enter our homes with greater centrality and freedom.
This is the case of care robots: robots programmed to help a number of people in society who can finally rely on a constant helper by their side when needed. These groups include people with different forms of physical impairment and disability (be they temporary or permanent), as well as segments of the population that structurally need more attention: for now it is the elderly, but it is not unthinkable to imagine a kindergarten “roboteacher” in the near future.
Though probably moved by good intentions, this trend in dealing with human relationships poses a number of questions that are anything but banal. We will get back to those in a moment, but first it should be pointed out that a threat that often accompanies technology has perhaps been overlooked in the case of robots: they can be hacked (https://www.ft.com/content/1552b080-fe1c-11e6-8d8e-a5e3738f9ae4?mhq5j=e3) like any other technological gadget, raising serious concerns about how this possibility could be exploited by criminals.
For example, it would be sufficient to connect to the robot inside a house for a few moments to let robbers in and empty it (hopefully with no owner at home). Our increasing dependence on these new “collaborators” opens up new worries that should not be underestimated for the sake of preserving the hype for innovation and the “technology at all costs” climate we currently live in.
An even more specific ramification of robotics is represented by what are commonly referred to as sexbots (robots programmed to satisfy our sexual needs). Here too, if not more so, there is room for this technology to help some people in particular situations (for example, replacing sex workers in addressing this specific need for people unable to engage in “normal” relationships due to a particularly dramatic accident or physical impairment).
Still, despite the social value that such robots could have, there is concern about how this revolution in our sexual sphere could end up affecting society, and considerations of the impact of these intimate entertainments should abound.
To begin with, the availability of passive, programmable sexual partners could create a number of societal dysfunctions: starting from a unidirectional way of interacting with a “partner” in an intimate and very significant part of our lives, we could soon develop similar attitudes outside this relationship and behave accordingly when interacting with other human beings, within the sexual sphere but more broadly as well.
This repercussion could easily fuel an already problematic gradual detachment from “the other”, rendering our civilization poorer in terms of solidarity, empathy and other key values for a well-functioning society.
Robots and wars
In relation to the issue of preserving our human side, another kind of robot needs analysis. In recent years, there has been a gradual deployment of unmanned aircraft (drones) in wars. Remotely directed by pilots sitting in sophisticated labs, often on the other side of the planet, drones are now widely used in military operations by (mostly) Western countries, which base their moral legitimacy on the ground that drones limit the number of casualties and help preserve human lives (in this case, the lives of soldiers). Leaving aside the political analysis concerning the possible discrepancy in value between equal lives, we need to investigate the implications of this technology further: the issue is more complex than just described, and it would be too easy to see it only in a positive light.
Somehow related to the concern raised above, one should question the impact that this “detaching” technology could have (or already has) on our way of interacting with other human beings, and the way in which it can shape our behaviour beyond its supposed functionality.
For example, the pilots who manoeuvre drones from their home countries finish their shift (during which they may well have killed innocent civilians by accident) and are catapulted back into their “normal” lives. Paradoxically, this split way of living seems to produce more problems and imbalances than initially thought, and its effects need to be considered with attention.
Also important is to consider the increasingly realistic dystopian scenario of war robots allowed to make the call “on their own” as to whether or not to shoot. There is wide consensus that we should not aim at programming a robot able to shoot a human being without the green light from another human being, but it is realistic to imagine contexts in which such an option would not be seen as particularly problematic. Even more relevantly, we should bear in mind the “hackability” of these technologies and wonder about the potentially catastrophic effects of leaving them in the wrong hands.
Robots and automation
Lastly, it is inescapable to point out the socio-economic impact of robots. Politicians, academics and policy makers have begun to engage more closely with the phenomenon of increased industrial automation, as many see it as a threat to jobs.
In an attempt to clarify who is to blame in the case of an accident resulting from a mistake made by a robot, the European Parliament has recently proposed granting some rights to machines, so as to make them legal entities (http://www.europarl.europa.eu/news/en/press-room/20170210IPR61808/robots-and-artificial-intelligence-meps-call-for-eu-wide-liability-rules). Not surprisingly, this has generated an intense discussion among scholars as well as the general public, as many see this move as the first step towards the creation of additional competitors in a world already short of jobs and overpopulated. In line with the ever-increasing automation of our chains of production, sceptics of the “roborevolution” see the increase in the independence and consideration of robots as directly related to a decrease in the value of human beings, workers or otherwise.
Technoenthusiasts, instead, affirm that this is indeed the path towards greater social justice and individual growth: by allowing robots to independently deal with mechanical and alienating jobs, we will ensure more opportunities for human beings to follow their own creativity and tailor themselves a more unique profession. This optimistic view is of course very tempting, but the recent failure of the “internet experiment” (expected to guarantee a drastic increase in democracy and moral growth for humanity) demands that we be careful in assessing how to move next.