Lovable Robot Companions and AI Ethics

It’s a popular sci-fi trope: a cutesy robot that the viewer develops warm feelings for as the machine proves itself capable of acting as a human would, even exhibiting some semblance of emotion. These robots are, of course, not human, yet we can’t help but feel sad if something were to happen to them. Why is that, and how close are we to that reality? Interestingly, the barrier between humans and robots seems to fade away when machines begin to exhibit certain ‘human’ traits that we look for in our own human companions. Stereotypically, the ‘lovable robot’ trope in sci-fi has three important features: compassion, loyalty, and helpfulness, usually with a bit of sass or robot sarcasm.

Of course, it’s easier to love a robot character that was written and designed by a human to reflect traits we would want in a human companion. Actual AI has been much harder to get attached to, and logically so: most of the ‘assistant’ AI we have today is largely devoid of personality and uniqueness. But as AI becomes more advanced and commonplace, we are beginning to form a genuine attachment to it. Alexa can read your children a bedtime story or tell you a joke, and some parents have even reported their kids speaking fondly of, and growing attached to, Amazon’s Alexa.

We haven’t reached a point where the AI we create has enough originality to warrant its own ‘personhood’, per se. If your family Alexa breaks, you can always buy a new one and it will replicate everything the previous one did. But what if we could create a machine capable of replicating human thought processes to the point of becoming truly human-like? If we managed to create an AI exhibiting free will and true ‘humanity’, would such a being not be worthy of the same respect we give our human peers?

As briefly mentioned above, humans tend to feel more compassion for tangible bodies with familiar faces. A robot with a face, capable of replicating certain human expressions, is far more likely to melt audiences’ hearts than a disembodied voice ever will. Should AI ever become so advanced that it is near-human in terms of consciousness, it would be easier to see such machines as our equals if they reflected us in physical form as well.

The debate regarding AI can quickly devolve into a long debate on what it means to be human in the first place. I would personally argue that we humans are fleshy bodies piloted by biological machines, although not everyone would agree. After all, our feelings are chemical responses to stimuli processed in our brains; even if we have managed to develop our own personalities and a sense of ‘humanity’, it can still be argued that our way of reasoning is not so different from an AI’s. And how different could an AI’s reasoning really be from ours, when we are its creators?

So, if we ever do manage to create robots so human-like that we effectively cannot tell the difference between them and real humans, would they deserve rights? It needs to be considered whether the destruction or irreparable damage of such a robot would constitute murder; after all, it would mean the destruction of a personality, an individual. Similarly, would an AI need to take responsibility for a crime it committed? In 2018, a self-driving car famously struck and killed a pedestrian. Of course, the AI that ‘drove’ the car has no moral compass and cannot feel remorse or guilt for its actions, but it is interesting to consider what might happen if it could.