While I was wondering what to write about in my last blog post, I realized how much I actually enjoy writing. As I am considering a minor in journalism – and maybe even a master's – I searched online for what the future might hold if I were to become a journalist. Regrets. I should not have done that. I stumbled on an article by the NRC about a new AI development called GPT-3. Thanks to this text generator, writers might no longer be necessary in the future.
What is GPT-3?
Wait, what? Yes: you read that correctly. The AI writer GPT-3, short for Generative Pre-trained Transformer 3, developed by OpenAI, is the outcome of refining its two previous versions, the most recent of which appeared in 2019. GPT-3 is the largest AI neural network to date, containing 96 layers of neurons and 175 billion parameters. The network was trained on 45 terabytes of data, consisting of the English Wikipedia, a huge collection of texts available online between 2016 and 2019, and countless books. Thanks to this neural network and the broad range of texts, the AI writer is able to recognize certain text constructions.
What is so special?
So what is this robot capable of? Even though it still needs a lot of fine-tuning, it can already perform human writing tasks. As it does not have a human brain – well, of course – it is not capable of comprehending what words mean. However, thanks to its training, GPT-3 can calculate which words are probable to follow each other in a sentence and what a follow-up sentence could be, and these sentences together form a text. In addition, it is able to complete stories and correct grammar mistakes. It can answer language-related questions and, given only a few examples, translate words. Moreover, it can solve mathematical problems – to some extent, though. Until now, there has not been such a multifunctional language generator as GPT-3.
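To get a feel for what "calculating which words are probable to follow each other" means – on a vastly smaller scale than GPT-3, of course – here is a toy sketch in Python. It counts word pairs in a tiny made-up sample text and then picks the most likely follow-up word. The sample text and function name are my own inventions, purely for illustration; GPT-3 itself works very differently under the hood.

```python
from collections import defaultdict, Counter

# A tiny made-up training text, standing in for GPT-3's 45 terabytes.
text = "the cat sat on the mat . the cat ate the fish . the dog sat on the rug ."

# Count, for every word, which words follow it and how often (a bigram model).
follow_counts = defaultdict(Counter)
words = text.split()
for current_word, next_word in zip(words, words[1:]):
    follow_counts[current_word][next_word] += 1

def most_probable_next(word):
    """Return the follow-up word seen most often after `word` in training."""
    return follow_counts[word].most_common(1)[0][0]

print(most_probable_next("the"))  # "cat" follows "the" most often in the sample
print(most_probable_next("sat"))  # "on"
```

The point of the toy: the program never "understands" cats or rugs, it only tallies which words tend to come next – which is, very loosely, the statistical idea behind a language generator.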
As I mentioned already, there is still room for improvement. Even though it is provided with an incredible amount of data, GPT-3 is limited in processing human-like thoughts. More data means a better grasp of language, but it does not really make the system smarter. According to Gary Marcus, the founder of Robust.AI, GPT-3 has, just like its predecessor, problems with “biological, physical, psychological and social reasoning”. Therefore, the system is not reliable in many cases. Moreover, OpenAI included a section on gender, race and religion in their scientific article: GPT-3 is trained on texts that may be outdated in terms of stereotypes. It is therefore not surprising that the model reflects the stereotypes present in its training data.
In sum, GPT-3 is a very interesting development, but it is definitely not yet well-trained enough.
To go back to my future career: GPT-3 will not replace the journalist. Yet it is very possible that a future language generator – perhaps GPT-3’s ‘grandchild’ – will get closer to a human-like brain. Until then, we are still better writers than robots, and I will pursue my aspiration to follow some journalism courses. Thank you, Digital Media, Culture, and Society, for teaching me not only about the tech, but also about my (uncertain) future career.
What do you think of these developments?