1. Chapter: Intelligent machines are on the rise: do we believe in technology as inherently representative of progress, or should we fear that a Terminator is on its way?


AI and Robotics

The link between technology, artificial intelligence (AI) and robotics is probably so evident that it does not need much analysis. What should instead be at the centre of the present work are the ethical aspects peculiar to these two (often intertwined) spheres of inquiry: AI and robotics. Before proceeding further, we should thus clarify how we differentiate between the two.

AI refers to a more complex and theoretical concept (intelligence), and it is therefore in itself a more elusive notion that spans a number of fields of research. Most relevantly, however, we can place AI within computer science, as a branch with a particular emphasis on creating machines that behave as intelligently (and as humanly) as possible. There are a number of standard tasks that modern, widespread devices are already capable of performing thanks to their AI. For example, speech recognition allows our smartphones (remember that smart here means intelligent) to make a phone call or give us directions based on their capacity to recognize our voices, process what we are asking and then provide us with an answer, or perform an action.

Hence, it is not surprising that society is currently pushing for the creation of a number of life-changing innovations (such as self-driving cars) which require an enormous amount of data to be processed. Machines can interact intelligently only if they are provided with ways of categorizing the world, in what is commonly defined as knowledge engineering. Equally important is machine learning: without supervision, learning requires detecting patterns in streams of inputs, while with supervision it typically involves numerical regression and classification.
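The supervised/unsupervised distinction above can be made concrete with a minimal, standard-library-only sketch. The data and the helper names (`fit_line`, `kmeans_1d`) are illustrative assumptions for this chapter, not a real machine-learning API:

```python
def fit_line(xs, ys):
    """Supervised learning: least-squares fit of y = slope*x + intercept
    from labelled (input, correct answer) pairs."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x


def kmeans_1d(points, c1, c2, steps=10):
    """Unsupervised learning: group unlabelled points around two centres.
    (This sketch assumes both clusters stay non-empty for the chosen
    starting centres.)"""
    for _ in range(steps):
        near_c1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        near_c2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        c1 = sum(near_c1) / len(near_c1)
        c2 = sum(near_c2) / len(near_c2)
    return sorted((c1, c2))


# Supervised: each input x comes with its "right answer" y (a label).
slope, intercept = fit_line([0, 1, 2, 3], [1, 3, 5, 7])

# Unsupervised: only raw inputs; the structure (two groups)
# must be discovered in the data itself.
centres = kmeans_1d([1.0, 1.2, 0.8, 9.0, 9.5, 8.8], c1=0.0, c2=10.0)
```

The contrast is exactly the one the paragraph draws: in the first case the "right answers" are supplied in advance; in the second, the algorithm must find regularities in the raw input stream on its own.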

Robotics is the other side of the coin of AI. Robots require (different degrees of) AI in order to successfully handle the jobs we assign to them.

In some instances, however, robots can be only partially developed in terms of "independent thinking", relying not on AI but rather on the intelligence of humans.

For example, a study conducted at Brown University (https://news.brown.edu/articles/2012/05/braingate2) used a robotic arm directed specifically by the intelligence of the patients included in the study (in this case for therapeutic reasons, as they were tetraplegic).

Although such examples are defined as Brain-Computer Interfaces (BCIs), this type of use of a computer is more "mechanical", and it assumes the supervision, if not the full engagement, of human intelligence, making the distinction between AI and robotics more evident.

What kind of ethical problems relate to AI and Robotics?

The improvements related to the implementation of AI and robots are numerous: from the use of intelligent technology in medical contexts to the increased safety and personalization of many of our everyday gadgets, it would be pointless to deny that life can be made easier by these innovations.

Yet, many ethical issues arise from these new discoveries as well, and we should look into some of the most prominent ones.

Self-driving cars: who should AI let die?

Self-driving cars (or autonomous vehicles) are in the making. From BMW to Uber, major transport companies are investing in projects aimed at providing a full-scale autonomous, intelligent car able to drive without anyone directing it.

This would represent a huge revolution for transport: we would not have to worry about parking somewhere downtown, or about having had too much wine at dinner: the car will not be affected by it. What will affect the car's behaviour, however, is the surroundings through which it rides. Aside from the interaction with other vehicles, the most pressing question is: how should the car react to a situation of foreseeable accident?

A very successful website put online by MIT (http://moralmachine.mit.edu/) has been collecting statistical data on how we respond to relatively similar moral scenarios. Building on a famous philosophical thought experiment referred to as the "Trolley Problem" (we will explain this more in depth in the "Neuroscience and the Law" chapter), the website puts the visitor in the position to choose what, in her/his opinion, would be the right behaviour for the car to have (or else, what s/he would do in such a situation). Would you rather run over two old ladies or a healthy young runner? Should one's social status and past behaviour be taken into account when deciding whose life should be spared in such extreme cases?

The website is very interesting because: a) it shows how inconsistent human choices can be, raising the suspicion that a case-by-case moral assessment of each scenario could produce injustice; and b) for the opposite reason, the experiment shows how a "human eye" might detect important, specific variables that could make an exception to the rule morally acceptable, or even morally required.
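To see why encoding a single consistent rule is both attractive and troubling, consider a deliberately oversimplified sketch. The `Group` class and both decision rules are purely hypothetical assumptions for discussion, not any real vehicle's policy:

```python
from dataclasses import dataclass


@dataclass
class Group:
    """A hypothetical group of people the car could spare."""
    size: int                # how many people are in this group
    crossing_legally: bool   # were they crossing with the light?


def utilitarian_choice(a, b):
    """Pure head-count rule: always spare the larger group."""
    return a if a.size >= b.size else b


def rule_with_exception(a, b):
    """Head-count rule, but an illegal crossing forfeits priority."""
    if a.crossing_legally and not b.crossing_legally:
        return a
    if b.crossing_legally and not a.crossing_legally:
        return b
    return utilitarian_choice(a, b)


two_jaywalkers = Group(size=2, crossing_legally=False)
one_pedestrian = Group(size=1, crossing_legally=True)

# The two rules disagree about the very same scenario.
spared_by_count = utilitarian_choice(two_jaywalkers, one_pedestrian)
spared_by_rule = rule_with_exception(two_jaywalkers, one_pedestrian)
```

The point is not the code itself, but that any fixed rule, once written down, will mechanically produce the same verdict in every superficially similar case, which is exactly what the "human eye" objection above targets.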

Robots and relationships

While we wait to see how robots might drive on our streets, much research has gone into using them in ever more intimate parts of our lives, allowing robots to enter our homes with increased centrality and freedom.

This is the case of care robots: robots programmed to help a number of people in society who can finally rely on a constant helper by their side when needed. These groups include people with different forms of physical impairments and disabilities (be they temporary or permanent), as well as segments of the population that structurally need more attention: for now it is the elderly, but it is not unthinkable to imagine a kindergarten "roboteacher" in the near future.

Though probably moved by good intentions, this trend in dealing with human relationships poses a number of questions that are anything but banal. We will get back to those in a moment, but first it should be pointed out that a threat that often comes with technology has perhaps been overlooked in the case of robots: they can be hacked (https://www.ft.com/content/1552b080-fe1c-11e6-8d8e-a5e3738f9ae4?mhq5j=e3) like any other technological gadget, raising serious concerns about how this option could be exploited by criminals.

For example, it would be sufficient to connect to a robot inside the house for a few moments to let robbers in and empty it (hopefully with the owner not at home). The increase in our dependence on these new "collaborators" opens new potential worries that should not be underestimated for the sake of preserving the hype for innovation and "technology at all costs" in which we currently live.

An even more specific ramification of robotics is represented by what are commonly referred to as sexbots (robots programmed to satisfy our sexual needs). Here too (if not more so) there is room for this technology to help some people in particular situations (for example, replacing sex workers in addressing this specific need for people unable to engage in "normal" relationships due to a particularly dramatic accident or physical impairment).

Still, despite the social value that such robots could represent, there is concern about how this revolution in our sexual sphere could end up affecting society, and considerations of the impact of these intimate entertainments should abound.

To begin with, the availability of passive, programmable sexual partners could create a number of societal dysfunctions: starting from a unidirectional way of interacting with a "partner" in an intimate and very significant part of our lives, we could soon develop similar attitudes outside that relationship and behave accordingly when interacting with other human beings, within the sexual sphere but more broadly as well.

This repercussion could easily fuel an already problematic gradual detachment from "the other", rendering our civilization poorer in terms of solidarity, empathy and other key values of a well-functioning society.

Robots and wars

In relation to the issue of preserving our human side, another kind of robot needs analysis. In recent years, there has been a gradual implementation of unmanned airplanes (drones) in wars. Remotely directed by pilots often sitting in sophisticated labs on the other side of the planet, drones are now widely used in military operations by (mostly) Western countries, which base their moral legitimacy on the grounds that drones limit the number of casualties and help preserve human lives (in this case, the lives of soldiers). Leaving aside the political analysis concerning the possible discrepancy in the value assigned to equal lives in this instance, we need to investigate the implications of this technology further, as the issue is more complex than just described: it would be too easy to see it only in a positive light.

Somewhat related to the concern raised above, one should question the impact that this "detaching" technology could have (or already has) on our way of interacting with other human beings, and the way in which it can shape our behaviour beyond its supposed functionality.

For example, the pilots manoeuvring drones from their home countries finish their shift (during which they might well have accidentally killed innocent babies) and are catapulted back into their "normal" lives. Paradoxically, this split way of living seems to produce more problems and imbalances than initially thought, and its effects need to be considered with attention.

Also important is the increasingly realistic dystopian scenario of war robots allowed to make the call "on their own" as to whether or not to shoot. There is wide consensus that we should not aim at programming a robot able to shoot a human being without the green light from another human being, but it is realistic to imagine contexts in which such an option would not be seen as particularly problematic. Even more relevantly, we should bear in mind the "hackability" of these technologies and wonder about the potentially catastrophic effects of leaving them in the wrong hands.

Robots and automation

Lastly, it is impossible not to point out the socio-economic impact of robots. Politicians, academics and policy makers have begun to engage more closely with the phenomenon of increased industrial automation, as many see it as a threat to jobs.

In an attempt to clarify who is to blame in the case of an accident resulting from a mistake made by a robot, the European Parliament has recently proposed granting certain rights to machines, so as to make them legal entities (http://www.europarl.europa.eu/news/en/press-room/20170210IPR61808/robots-and-artificial-intelligence-meps-call-for-eu-wide-liability-rules). Not surprisingly, this has generated an intense discussion among scholars as well as the general public, as many see this move as the first step towards the creation of additional competitors in a world already short of jobs and overpopulated. In line with the ever-increasing automation of our chains of production, sceptics of the "roborevolution" see the increase in the independence and standing of robots as directly related to a decrease in the value of human beings, workers or otherwise.

Technoenthusiasts, instead, affirm that this is indeed the path towards greater social justice and individual growth: by allowing robots to deal independently with mechanical and alienating jobs, we will ensure more opportunities for human beings to follow their own creativity and shape a more unique profession for themselves. This optimistic view is of course very tempting, but the recent failure of the "internet experiment" (expected to guarantee a drastic increase in democracy and moral growth for humanity) demands that we be careful in assessing how to move next.

Further readings

Pitsch, K. 2016. Limits and opportunities for mathematizing communicational conduct for social robotics in the real world? Toward enabling a robot to make use of the human’s competences. AI & Society 31(4), pp. 587-593.

Scheutz, M. & T. Arnold. 2016. Feats without Heroes: Norms, Means, and Ideal Robotic Action. Frontiers in Robotics and AI.

Veruggio, G., Solis, J. & M. Van der Loos. 2011. Roboethics: Ethics Applied to Robotics, IEEE Robotics & Automation Magazine 18(1), pp.21-22.

Weiss, S. 2013. Raunchy robotics: the ethics of sexbots.

About the tekethics category



We are about to start this first session: ready, steady, go!


Asimov’s 3 laws of robotics

Any thoughts?


Here we are again. Don’t feel shy to ask!


A robot that can love? Does that scare or intrigue you? Or both?


Till next time: 2 October (9am-10am)


I'm scared of the idea that robots could replace our partners in a relationship. Don't you want to discuss, to argue, to fight and then to make up again? I think all of that is part of love and being in a relationship; pure giving and following instructions is not.
Really interesting topic though!


The analysis makes it clear that there needs to be some kind of global regulation of these ethical issues. And while saying that, I immediately think that corporations, politicians and researchers would need to obey the decisions made by such committees - will they?

Regarding the invention of nuclear power and its double-edged usage potential (energy for life, weapons to kill), we have a historical precedent that makes me sceptical about the impact of regulations driven by ethicists.

On the other hand, the optimistic half of me certainly supports the ongoing efforts to find a clear position on AI and robotics as they become more and more a part of our society.

Contribution to Further readings

I would like to add two further readings that I stumbled upon recently:

As AI requires a huge amount of data, we need to be careful about what we build and what we store. I was happy to read that, right after the election of Trump, developers and the like in Silicon Valley spoke up and said they would refuse any collaboration on databases that could lead to the persecution of minorities in the future:


Thank you Sabrina, glad to know you're interested! The point you mention about relationships is, I think, really important and worthy of attention. Perhaps we could also change the perspective of your entry: instead of thinking about how robots could not cope with our "natural" predispositions towards other human beings, we might ask ourselves: in a society in which there is less and less time (or inclination) for long-term commitments, couldn't a robot represent a way to at least preserve some ways of expressing our emotions?


Thank you Alex for your thoughts and very relevant readings. I think the optimistic/pessimistic dualism is particularly prominent when we speculate over the moral acceptability of [certain?] technology, but AI certainly does carry an extra layer of "irreversibility" with it. As for the distinction between political guidelines and what is actually going to happen, I agree: much of it is grey at the moment, and ethicists surely don't appear to make the real call. Yet, it is our duty to attempt to provide guidelines for those who can…even through internet symposiums! :slight_smile:


I do not want to deny that there might be some benefits regarding robotics and (sexual) relationships. Indeed, if you are not able to have a relationship with a real person, no matter for what reason, it might be an alternative to replace a human partner with a robot. But it surely depends on what you expect from this kind of relationship. I am concerned about what this kind of relationship might change in the way a person loves and looks at (sexual) partners. There's a robot that does whatever you want it to. You control it. If you have that power for, let's say, a couple of years, can you fall in some kind of love with this robot? Do you forget or unlearn how to cope with human (love) relationships? Could it change the way you interact with people and partners, or the way you treat them?
This topic reminds me of Jonze's movie "her" from 2013, where the protagonist falls in love with a talking operating system. A very fascinating and at the same time bizarre story.


I see the self-driving car as one of the most relevant issues.
Two kinds of problems come to mind. 1) Quoting you: "How much independence and autonomy should we grant to machines, and how much do we want to rely on their judgment in certain situations?" I think we should keep in mind that the real issue isn't strictly related to leaving this "judgment" to a machine, not in terms of the right choice at least. We can make it drive like a benthamite-car or a deontological-car: the choice lies with us. Maybe the biggest problem in this scenario concerns the fact that we can't feed the machine with the key feature of human beings: I call it unpredictability. As human beings we can generate the unpredictable variable that shakes the equation. 2) The second problem is more ethical and directly linked to the first conclusion. What remains of human freedom in a world where humankind is relieved of the choice? If freedom and responsibility are strictly intertwined in a moral agent, maybe we are going to live in a scenario where the machines become the agents (adults), and mankind remains in a passive (infant) stage?


Surely interesting, Sabrina. Ultimately the question can end up being: are we modelling technology to our needs, or are we adapting our needs to what technology allows us to achieve? In terms of human relationships, a sexrobot does have the advantage that it could take away some issues affecting certain segments of society (i.e. the exploitation of prostitution), but it could also "unleash" an extremely unempathetic attitude towards potential [sexual] partners.


That's food for thought, Valerio…could you expand a little more on what you mean by being able to generate unpredictable variables in this context? For example, in chess humans are bound never to overcome computers again, because their calculating capacity is greater than ours, and ultimately that is what is key in chess. Other contexts, however, might be more challenging for a machine to process, and there humans do have (and will for longer have) an advantage. In the case of cars, it could be an option to have a machine in a sense "shaped" by its specific user…the problem there is of course coordination with the rest of society!
As for the second point you mention, it is not uncommon in the literature to find parallels between robots and infants, and I appreciate your challenge in this sense. Yet, one has to wonder: if we indeed restricted our need to make technical choices (last time you took a flight, did you feel that your freedom was restricted by a quasi-total automatic pilot?), wouldn't our liberty to express ourselves in other contexts be increased?


Very interesting topic and discussion. I would just like to share some thoughts that are not strictly related to one another but are very connected to the discussion.

  1. The choice of a robot vs the choice of a man. I think every choice we make is a mixture of rational and irrational behaviour. While on the one hand we try to control and sometimes block our irrational side, we also consider it what makes us human. There is a fear that a 100% rational computer would make choices based only on pure utilitarianism (for instance, killing the driver of the car to save two pedestrians, without considering whether the fault was the pedestrians' and so on). I tend to think that, paradoxically, there is an irrational part in a robot too, one we usually do not see, which comes from the irrationality its creators used when writing its algorithm or structure. For instance, some feminist movements have accused the AIs now being developed of being very aggressive because the majority of their algorithms were written by males. Eventually, I think the problem lies in the fact that the irrational part of a man evolves with experience, while we are not sure a machine will do the same.
  2. The second point regards machines and the workforce. It is true that machines will replace many jobs, not only manual and repetitive ones but also creative ones (mine included). There is anyway the point that, like any technology, AI will destroy some jobs but also create new ones, or give us the possibility to develop other activities such as art, music or bioethics. The real problem lies in a process that started long ago and is perfectly explained in this article http://strikemag.org/bullshit-jobs/. While I do not agree with all the points in the article, there is a basic topic which has to be tackled: the quality and the type of the new jobs that are created. Many distortions arise from the fact that most of the new jobs are not actually productive. Recently I had a discussion on the fact that it is highly unlikely that a salesman working for a technology company (even with great results) will create his own company or product. In my opinion the long-term problem is not the creation of an AI, but the model with too many sterile jobs that we are already living in.


I mean "unpredictable variables" not necessarily as advantages, and not from the perspective of a strife of humankind vs machines. E.g. a well-programmed car (with coordinates based on society and not on the single driver) can't handle a pedestrian's panic attack. Humankind represents the fallacy of the moral car. We know how the self-driving car will behave (today and forever…until the next update [excluding system errors]), but we don't know, even now, how a man would behave. We are their system error. Will the self-driving car end up saving the passenger or the pedestrian in the unexpected scenario? I honestly wouldn't like to walk or drive on those streets.
As for the second point, I really can't explain why I can't equate a car trip with a flight. I have to think about it. Maybe we don't really have a choice, because we don't know how to pilot a Boeing.


Thank you Francesco for the valuable contribution you brought in. Concerning your two points, I’d like to add some thoughts.

  1. Surely the "limited" way in which humanity can be said to have reached an "ethical consensus" on issues (supposedly, we have agreed to defend and foster the declaration of human rights, but there is still much discussion on how different groups of people should interpret it) does not help us support the claim that we can infuse morality into a robot. Just yesterday, an interesting article on these dynamics was published, worth looking at: http://www.bbc.com/news/magazine-41504285?ocid=socialflow_twitter
  2. As for the quality-of-work argument, I think it is surely very tempting to believe that, in time, we will manage to leave the less rewarding jobs to robots, while we humans concentrate on more gratifying activities. Yet, as you point out, the "individual quality" of a certain job (i.e. being able to work from anywhere on the planet as long as one has internet and a laptop) does not necessarily mean that we are actively contributing to shaping a future, sustainable job market, and that is of course problematic…


Fair enough. On the first point (following from your affirmation that you wouldn't want to walk on those streets), it'd be interesting to explore all the implications for a citizen living in the "affected" areas: your only real choice would be to leave the city, as everyone else will (literally) be crossing your path under the option you didn't subscribe to!
As for the second: maybe. Or maybe we somehow elevate the status of the plane to that of such a sophisticated machine that we question it less.
PS: we do have a choice: take the automatic train! :slight_smile: