Weak AI

The discussion about artificial intelligence (AI) moves between the desire for smart machines that support us in various fields and the angst that we will one day lose control. In this essay, I will consider the angst that people feel when talking about machines.

In 2017, two Facebook chatbots appeared to be chatting with each other in a language incomprehensible to human beings. It seemed they had created their own language. As a result, Facebook shut the bots down. Small events like these in the field of AI might strengthen the angst that machines will control humans sooner or later.

The conversation looks very odd:

Bob: i can i i everything else . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i i can i i i everything else . . . . . . . . . . . . . .

Alice: balls have a ball to me to me to me to me to me to me to me

Bob: i . . . . . . . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

Bob: you i i i i i everything else . . . . . . . . . . . . . .

Alice: balls have 0 to me to me to me to me to me to me to me to me to

Bob: you i i i everything else . . . . . . . . . . . . .

Alice: balls have zero to me to me to me to me to me to me to me to me to

However, were these two robots actually able to think about one another's deviant sentences, and were they thereby capable of identifying their meaning in order to respond appropriately?

In other words, were they able to create their own thoughts and thereby capable of creating a new language? 

That would mean that they might be able to imagine taking control over the world. Such a thought might create a lust for power in the minds of the robots. As a result, they might band together and take action to control humans.

Thus, the angst of the people would be justified. 

But can machines really think in order to create devastation? 

In 1997, Deep Blue beat chess champion Garry Kasparov in a six-game match. Deep Blue consisted of 32 IBM supercomputers connected together. In addition, it had 220 special chess chips, allowing it to examine 100 million positions per second. No human could possibly come close to this performance. But could Deep Blue really think? Was it intelligent? The philosopher John Searle said no, it could not. He argued that it was not a competition between a machine and a human, but rather a competition between Kasparov and a team of engineers and programmers.
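Deep Blue's strength came from searching game trees at enormous speed, not from anything like understanding. The core idea can be sketched with the textbook minimax algorithm on a toy game tree; this is a minimal illustration only, and Deep Blue's actual search, with its custom chess chips, was vastly more sophisticated:

```python
def minimax(node, maximizing):
    """Exhaustively evaluate a toy game tree.

    Leaves are numeric evaluations of positions; internal nodes are
    lists of child positions. The machine mechanically compares
    numbers; no step involves "understanding" the game.
    """
    if isinstance(node, (int, float)):
        return node  # a leaf: the position's static evaluation
    values = [minimax(child, not maximizing) for child in node]
    return max(values) if maximizing else min(values)

# A two-ply tree: the mover picks a branch, the opponent replies.
tree = [[3, 5], [2, 9], [0, 7]]
print(minimax(tree, maximizing=True))  # -> 3
```

The opponent will always pick the reply worst for us, so each branch is worth its minimum (3, 2, and 0), and the mover takes the best of these, 3. Scaled up to millions of positions per second, this mechanical comparison is what beat Kasparov.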

Let's take an example that illustrates why machines might not have a thinking entity. In 1980, Searle introduced the Chinese Room thought experiment. Imagine a monolingual English speaker in a room who is given tools: a batch of Chinese symbols and a batch of instructions in English for assembling proper Chinese responses. Through a small hole, the English speaker receives a question on paper, written in Chinese. Using the tools, he then responds to this question in Chinese, indistinguishably from a native Chinese speaker. Yet he doesn't speak a word of Chinese; he understands neither the question nor the answer. The person is good at following instructions, not at speaking Chinese. He simply behaves like a computer: he receives input and, thanks to a certain "program", is able to produce an appropriate output.
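The room's mechanics can be caricatured in a few lines of code: a lookup table standing in for the rule book. The phrases below are illustrative placeholders of my own choosing, not part of Searle's original:

```python
# The "rule book": a mapping from input symbols to output symbols.
# To the person in the room, both sides are just meaningless shapes.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "Fine, thanks."
    "今天天气怎么样？": "今天天气很好。",   # "How is the weather?" -> "It is nice."
}

def chinese_room(question: str) -> str:
    # Pure symbol matching: no step here involves understanding Chinese.
    return RULE_BOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你好吗？"))  # -> 我很好，谢谢。
```

From the outside, the answers look fluent; on the inside, there is only rule-following, which is exactly Searle's point.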

In light of the thought experiment above, the angst that robots will take action by themselves to gain control over humans does not seem reasonable.

The thought experiment illustrates that machines are not able to create thoughts of their own and are thereby not capable of developing a feeling such as a lust for power, which would be the source of any desire to take control over humans. They don't have a mind; they merely process the input they receive to produce an appropriate output.

Thus, I personally think the angst that robots will take action by themselves to gain control is not reasonable.

However, the angst that humans might program robots with the aim of taking control over other humans is more plausible. It all depends on the program that we install in the robot.

So, basically it is in our hands how powerful robots will be! 
