With the last lecture in mind, I felt inspired to dive deeper into the questions that were asked there: “Are computers having fun? Are computers smarter than us? Are they better than us?” I find these very intriguing questions, because what would a world with self-conscious or humanlike computers look like? Various people working in computer science and robotics have commented on this topic, so for my last blog I am going to discuss their different viewpoints on conscious computers.
Can computers become conscious?
Can we compare humans with computers? In his article, Signorelli (2018) goes deeper into these paradoxical and controversial questions. Currently we still don’t know much about the (computational) brain. What caught my interest in this article is that Signorelli (2018) explores a prototype theory of consciousness and classifies machines into categories within this framework. His conclusion is itself paradoxical: on the one hand, it implies that conscious technology built to beat humans will never fully exceed human capabilities; on the other hand, if that were possible, the machine would no longer be considered a computer. I think it would look more like a cyborg. He acknowledges in the conclusion that there are a lot of misunderstandings when it comes to conscious computers, which shows the need for close interaction between biological sciences such as neuroscience and computer science. The article does offer an interesting starting point for a global framework for conscious computation, one that aims to understand brain properties and connect them to AI in a replicable, implementable way.
Will they ever understand?
According to Dirk Hovy, computers will never be able to understand human language. We already have technology that acts like it understands human language, for example Google Home or Alexa. However, learning language is far more complicated than that, he says. Talking to a computer is not the same as talking in person. Things that are obvious to us are not that obvious to computers: they only pay attention to what we say, not to who says it, and he thinks this is problematic. A computer can easily translate our text or give us recommendations, for example, but computers are not (yet) good at analyzing language, because of things like language change and dialects. I find his connection between computers and language very interesting, because it ties in with emotions: we can say the same thing with different intentions or emotions, and a computer will most likely not (ever?) understand you.
Interacting with an emotionally unstable machine
To pick up where I left off with the last video, this other video tells you more about how we could teach emotions to technology. Raphael Arar (2018) sounds more positive about the idea of computers being able to make sense of emotions. One idea I found interesting was his mention of the uncanny valley: the creepy factor of tech that is close to human but slightly off. A lot gets lost in translation when computers try to understand our emotions, which can make AI very creepy sometimes. What he explains in his TED talk sounds very difficult, and it actually is! Especially because, if computers talked to us, it would sound somewhat scary and sad, as if they were emotionally unstable.
Is this what we really want (and need)?
“Technology we want to interact with” is a comment Raphael made in his TED talk. Is this something we really want (or need)? The fact that we as humans can feel emotions, learn and understand language, and try to understand each other’s emotions is what makes us human. Why would we let a computer do that for us? Talking to a computer is not the same as talking to a person. This kind of technology sounds very nice, but how do we make sure we put it to good use? Luckily, it will take decades before something like this is developed. It is very hard, especially with language constantly changing, but societies are changing too. As a result, the ways we look at technology and artificial intelligence are also changing: some people are super optimistic while others are pessimistic. So in the future it is important to keep talking about self-aware and conscious computers, because it might happen someday.
Did you like this topic?
Then I recommend watching and/or reading these videos and articles:
- Consciousness in Humanoid Robots. (n.d.). Frontiers. https://www.frontiersin.org/research-topics/5781/consciousness-in-humanoid-robots
- Sickler, B. (2020). Why Computers Can Never Replace the Human Brain. Crossway. https://www.crossway.org/articles/why-computers-can-never-be-human/
- Signorelli, C. M. (2018). Can Computers Become Conscious and Overcome Humans? Frontiers in Robotics and AI, 5:121. doi: 10.3389/frobt.2018.00121
- TED. (2018). How we can teach computers to make sense of our emotions | Raphael Arar [Video]. YouTube. https://www.youtube.com/watch?v=hs-YuHv0vUk
- TEDx Talks. (2018). Why Computers Can’t Understand Us | Dirk Hovy | TEDxBocconiU [Video]. YouTube. https://www.youtube.com/watch?v=e6ggFdqsyEg
- Whitney, L. (2017). Are Computers Already Smarter Than Humans? Time. https://time.com/4960778/computers-smarter-than-humans/
Interesting article. I do think trying to humanise computers will probably be quite a slow process, precisely because of the uncanny valley, but it is a scary thought that one day we might turn to our computers for emotional support.
An interesting but scary thought is also that neuroscience has not yet precisely defined consciousness. We know it involves electrical activity in certain cells and so on, but the question of why that activity actually gives rise to a state of consciousness, rather than not, remains unanswered. I am tempted to draw a very scientifically uneducated parallel to electrical activity in technology: perhaps it too could lead to consciousness if it existed chemically?
This is a really interesting post, thank you for sharing your thoughts. The aspect I found most intriguing is the question: do we need this, and what would the benefits be? The AI I think of today seems rooted in maths and in aiding with scientific problems (aside from maybe AI art). However, a relatable and emotionally available computer does not seem immediately necessary in the contemporary context of AI, as it would no longer be a tool but an entity. The only thing I can think of to that end is how AI is represented in film and TV, full of personality and emotion, but then again the paradox arises: if a machine has a consciousness, is it still a machine?
Very informative post! When talking with ChatGPT, I am always trying to guide it to think about itself and about how it thinks differently from human beings. However, it seems that all words like “think”, “emotion”, “sensation”, “experience”, etc. are coded to be answered with “I am an artificial learning machine and not capable of doing so like humans”, even though it uses the word “I”. Here, the element of language jumps out. Since chat AI is based on the study of text and linguistic features, it feels like it is just imitating human speech. But does speaking necessarily imply thinking? This is the question that the philosophy of AI asks, and what I am interested in.