It’s a very popular sci-fi trope: a cutesy robot that the viewer develops warm feelings for as the machine proves itself capable of acting as a human would, even exhibiting some semblance of emotion. These robots are, of course, not human, yet we can’t help but feel sad if something happens to them. Why is that, and how close are we to that reality? Interestingly, the barrier between humans and robots seems to fade away when they begin to exhibit certain ‘human’ traits that we look for in our own human companions. Stereotypically, the ‘lovable robot’ trope in sci-fi has three important features: compassion, loyalty, and helpfulness, usually with a bit of sass or robot sarcasm.
Of course, it’s easier to love a robot figure that was written and designed by a human to reflect traits that we would want in a human companion as well. Actual AI has been a lot harder to get attached to, and logically so: most of the ‘assistant’ AI we have today is largely devoid of personality and uniqueness. But as AI gets more advanced and commonplace, we are definitely beginning to sense a sort of attachment to real AI. Alexa can read your children a bedtime story or tell you a joke, and some parents have even reported their kids speaking fondly of and getting attached to Amazon’s Alexa.
We haven’t reached a point where the AI we create has enough originality to warrant its own ‘personhood’, per se. If your family Alexa breaks, you can always buy a new one, and it will replicate everything the previous one did. But what if we could create a machine capable of replicating human thought processes to the point of becoming truly human-like? If we managed to create AI that exhibits free will and true ‘humanity’, would such a being not be worthy of the same respect we give our human peers?
As briefly mentioned above, humans tend to feel more compassion for tangible bodies with familiar faces. A robot with a face, capable of replicating certain human expressions, is more likely to melt audiences’ hearts than a disembodied voice ever could. Should AI ever become so advanced as to approach human consciousness, it would be easier to see these machines as our equals if they reflected us in physical form as well.
The debate regarding AI can quickly devolve into a long debate on what it means to be human in the first place. I personally would argue that we humans are fleshy bodies piloted by biological machines, although not everyone would agree. After all, our feelings are chemical responses to stimuli processed in our brains. Even though we have developed our own personalities and a sense of ‘humanity’, it can still be argued that our way of reasoning is not so different from an AI’s. And how could an AI’s way of thinking be entirely different from ours, when we are its creators?
So, if we ever do manage to create a robot so human-like that we effectively cannot tell the difference between it and a real human, would it deserve rights? It needs to be considered whether the destruction or irreparable damage of such a robot would count as murder; after all, it would represent the destruction of a personality, an individual. Similarly, would an AI need to take responsibility for a crime it committed? In 2018, a self-driving car famously struck and killed a pedestrian. Of course, the AI that ‘drove’ the car has no moral compass and cannot feel remorse or guilt for its actions, but it is interesting to consider what might happen if it could.
Perhaps because of what is called the uncanny valley, I personally feel a little awkward seeing robots with human-like faces, to be honest. However, I think I would really love an AI robot that looks like R2D2! In my view, the appearance and interface of technologies have become more and more important today. Almost all technologies are already convenient enough, and most digital machines in use today satisfy human demands at a fundamental level (as long as you are not too particular about tech). Therefore, the overall satisfaction of the user experience should be the frontier, and I think this is what matters most for the development of AI from now on. Anyway, thanks for the interesting article!
Great article, Heloise! I have to say, I do see a future where AI will develop to the point where getting genuinely attached to it is feasible. Honestly, I notice myself doing some weird things in that regard already. Whenever I ask Google Assistant for something, for example, I always say thank you after she answers. Why? She’s already given me the answer, she’s no longer listening, and she wouldn’t care anyway. Yet every single time she tells me what time it is, I go: ‘Thanks!!’
I think the point where this will become something to actively discuss and ponder is when Artificial General Intelligence (AGI) is developed. AGI is essentially the type of AI that would allow for the very real and ‘human’ AI interactions we see in movies. Right now it’s likely still decades away from being achieved, but I can’t help but be curious about what life will be like when it arrives.