Hi all! In this podcast interview, Assistant and I discuss artificial intelligence, its past and future developments, and its connection to games, based on Julian Togelius’ Playing Smart. Since we’ve been discussing AI so often, I thought it was time to hear what it had to say on the matter!
Those of you who already know ChatGPT will be aware that the Assistant cannot speak, only write. Therefore, I first carried out the interview in written form and then used a text-to-speech AI generator to voice Assistant’s responses. I hope you can forgive the monotonous voice: sadly, it is the best I could do with the free version. :’)
You will notice that its responses often reiterate mine and that they have a clear structural pattern (which reminded me of argumentative essay templates).
Let me know what else stands out to you! Hope you enjoy!
Sources:
Cheong, Yun-Gyung, Alaina K. Jensen, Elin Gudnadottir, Byung-Chull Bae, and Julian Togelius. “Detecting Predatory Behaviour in Game Chats.” IEEE Transactions on Computational Intelligence and Games (2015).
“Gaming Bots Learn from How We Play.” New Scientist, 24 September 2014.
Togelius, Julian. “In the Beginning of AI, There Were Games.” In Playing Smart (MIT Press, 2019): 1-10.
This is such a fascinating idea, to conduct an interview about AI with an AI itself! Also very nifty how you voiced the ChatGPT Assistant’s answers about AI with a text-to-speech AI: AI inception! I was surprised by how long and thorough the Assistant’s answers were, with many examples too. Overall a very creative and informative interview!
Thank you Lucia! I also enjoy this conceptual inception. The AI can surely “talk” extensively (although its arguments are at times a bit superficial if not guided further). It has been quite fun to make this podcast!
It is so creative to make a podcast combining theories and up-to-date empirical materials! Good job! My recent experience with ChatGPT demonstrates very paradoxical features. For example, I asked ChatGPT: do you think those texts you generated are true? It answered that “I am not capable of determining the truth or falsehood of the text that I generated”. However, as an assistant based on knowledge, it does speak as if what it presents were justified information bearing truth. It seems to be imitating human activities while following basic AI moral principles programmed by the people who designed it, in order to avoid the risk of being too much like a human. This, I guess, is why AI will cause fear and anxiety, along the lines of the uncanny valley principle.
Thank you @Augustina! I have also noticed this “carefulness” you talk about. When I asked Assistant if I could interview it, it remarked that it would try its best to answer my questions but reminded me that, as an AI agent, it did not have personal experiences and opinions. As you said, it basically processes the knowledge it has been fed into answers, and it learns from your questions! The chat also adds a bias disclaimer at the bottom of the page, if I am not wrong, and you can definitely “hear” the programmer’s voice in Assistant’s answers, if you know what I mean.