Can computers cry?

With the last lecture in mind, I felt inspired to dig deeper into the questions that were asked there: "Are computers having fun? Are computers smarter than us? Are they better than us?" I find these intriguing questions, because what would a world with self-conscious or humanlike computers look like? Various people working in computer science and robotics have commented on this topic, so for my last blog I am going to discuss their different viewpoints on conscious computers.

Can computers become conscious?

Can we compare humans with computers? In his article, Signorelli (2018) digs into these paradoxical and controversial questions. We currently still know little about the brain, let alone about a computational equivalent of it. What caught my interest in this article is that Signorelli (2018) explores a prototype theory of consciousness and classifies machines into categories within this framework. His analysis leads to a rather paradoxical conclusion. On the one hand, it implies that conscious technology built to beat humans will never fully exceed human capabilities. On the other hand, if it were possible, the machine would no longer be considered a computer; I think it would look more like a cyborg. He acknowledges in the conclusion that there are many misunderstandings when it comes to conscious computers, which shows the need for close interaction between biological sciences, such as neuroscience, and computer science. The article does offer an interesting starting point for a global framework on the foundations of conscious computation, one that aims to understand and connect brain properties to AI in a replicable and implementable way.

Figure 1: Types of cognitions and types of machines. (A) Emergent processes related to consciousness and types of cognitions defined from their relations. It is important to highlight that processes associated with moral thought are present in type 1 and type 2 cognition, but not necessarily in the other two types of cognition. (B) Types of machines and categories according to the different types of cognition, contents, and information processing stated above.

Will they ever understand?

According to Dirk Hovy, computers will never be able to understand human language. We already have technology that acts as if it understands human language, for example Google Home or Alexa. However, learning language is far more complicated than that, he says. Talking to a computer is not the same as talking in person. Things that are obvious to us are not that obvious to computers: they only pay attention to what we say, not to who says it, and he thinks this is problematic. A computer can easily translate our text or give us recommendations, but computers are not (yet) good at analyzing language, because of language change and dialects, for example. I find his connection between computers and language very interesting, because it ties in with emotions. We can say the same thing with different intentions or emotions, and a computer will most likely not (ever?) understand you.

Interacting with an emotionally unstable machine

To pick up where I left off with the last video, this next video tells you more about how we could teach emotions to technology. Raphael Arar (2018) sounds more positive about the idea of computers being able to make sense of emotions. One idea of his I found interesting was the uncanny valley: the creepy factor of technology that is close to human but slightly off. A lot gets lost in translation when computers try to understand our emotions, which can make AI very creepy at times. What he explains in his TED talk sounds very difficult, and it actually is! Especially because, if computers talked to us, it would sound somewhat scary and sad, as if they were emotionally unstable.

Is this what we really want (and need)?

"Technology we want to interact with", a comment Raphael made in his TED talk. Is this something we really want (or need)? The fact that we as humans can feel emotions, learn and understand language, and try to understand each other's emotions is what makes us human. Why would we let a computer do that for us? Talking to a computer is not the same as talking to a person. This kind of technology sounds very nice, but how do we make sure we put it to good use? Luckily, it will take decades before something like this is developed. It is very hard, especially with language constantly changing, but societies are changing too. As a result, the ways we look at technology and artificial intelligence are also changing: some people are super optimistic while others are pessimistic. So in the future it is important to keep talking about self-aware and conscious computers, because it might happen someday, even if it is far off.

Did you like this topic?

Then I recommend watching and/or reading these videos and articles:

  1. Consciousness in Humanoid Robots. (n.d.). Frontiers.
  2. Sickler, B. (2020). Why Computers Can Never Replace the Human Brain. Crossway.
  3. Signorelli, C. M. (2018). Can Computers Become Conscious and Overcome Humans? Frontiers in Robotics and AI, 5:121. doi: 10.3389/frobt.2018.00121
  4. TED. (2018). How We Can Teach Computers to Make Sense of Our Emotions | Raphael Arar [Video]. YouTube.
  5. TEDx Talks. (2018). Why Computers Can't Understand Us | Dirk Hovy | TEDxBocconiU [Video]. YouTube.
  6. Whitney, L. (2017). Are Computers Already Smarter Than Humans? Time.