If you have any artist friends, chances are they’ve recently mentioned AI-generated art. Chances also are that they’re concerned about it. Is AI-generated art really art? And what future do human artists have if algorithms can generate images that win art competitions?
Last April, OpenAI announced DALL·E 2, an AI tool that can create images in any style from a text-based description. It is a powerful tool, but its use was restricted to approved users. More recently, the open source Stable Diffusion was released, allowing anyone with a half-decent computer to generate their own images.
The results? Some art communities are banning AI-generated images, artists claim that such AI image generation tools are actively anti-artist, and others are concerned their original work will soon be very hard to find.
While these concerns are justified, we do have to ask whether there even is a competition between human artists and AI. You probably have an idea of what art is, and an algorithm certainly can’t make art. Or can it? Is AI-generated art really ‘art’?
What’s art anyway?
Well, the answer is probably different for each person. Personally, I think there are two aspects that are important for a work to be considered art, even if they don’t necessarily define it.
First, creating art is a process that combines influences and ideas from yourself and others into something new. We can ask whether an AI tool like DALL·E or Stable Diffusion generates images as part of a creative process. Does it combine influences and ideas? Certainly. Is it capable of creating something new, something original? No.
To understand why, we’ll briefly look at how these tools work. They use neural networks: layered algorithms that take some input and transform it step by step until a corresponding output is produced.
In a neural network like Stable Diffusion, the input is a description, and the output is an image. The trick is finding the right settings for the algorithms in the middle to take in a description, and end up with an image.
This is done by showing it lots of images, millions or even billions, each paired with a description, until it learns which words in a description correspond to which elements in an image.
The important takeaway here is that a tool using a neural network needs to be trained on existing artworks (sometimes without their authors’ consent, but that’s a discussion for a later blog post). That means every image it can produce is based on art that was made by humans: it can cut, paste, and blend influences from countless artists, but it won’t create something original.
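To make that concrete, here is a deliberately oversimplified toy sketch of the idea. This is not how Stable Diffusion actually works (real diffusion models are deep neural networks trained on pixels, not lookup tables), and the training data here is invented for illustration. But it shows the core point: a model trained on word/image associations can only recombine elements it has already seen.

```python
# Toy stand-in for a text-to-image model (assumed, illustrative only):
# it learns which named visual elements appear alongside which
# description words, then "generates" by recombining those elements.

# Hypothetical training data: (description, image) pairs, with each
# image reduced to a list of named visual elements for readability.
training_pairs = [
    ("red apple", ["red-blob", "round-shape"]),
    ("green tree", ["green-blob", "branch-texture"]),
    ("red car", ["red-blob", "box-shape", "wheels"]),
]

def train(pairs):
    """Associate every word in a description with that image's elements."""
    model = {}
    for description, elements in pairs:
        for word in description.split():
            model.setdefault(word, set()).update(elements)
    return model

def generate(model, prompt):
    """'Generate' an image by blending elements linked to the prompt words."""
    elements = set()
    for word in prompt.split():
        elements |= model.get(word, set())
    return sorted(elements)

model = train(training_pairs)

# A novel-sounding prompt still only recombines elements seen in training:
print(generate(model, "green car"))
# Words the model was never trained on produce nothing at all:
print(generate(model, "purple dragon"))  # -> []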
Another aspect of art is that creating it is a communicative act. Some artists put out their works to the public as an expression of themselves, their ideas, and their opinions. Others create art because they want to make statements, start discussions, or make people think.
Here, I think AI image generation tools have a better chance of being considered communicative in the way human-made art is.
The first and most obvious effect of these tools is that ‘art’ is cheaper and easier to make. For writers, worldbuilders, roleplaying gamers, and countless others, tools like these offer immense expressive power. If you wanted to add an image to help with the atmosphere of your story, you used to have to commission art from a (semi-)professional artist or learn to make art yourself, costing both time and money. Now, such an image is just a few words away.
Additionally, AI-generated images can be used by artists as part of their own creative process. Stock images and reference photos are already used by many artists, and these new tools can create images that are vastly more specific to an artist’s needs.
So, as a way of enriching art in other media, or as just another tool in an artist’s kit, AI image generation can help artists express themselves better.
So, good news?
Well, we’ve seen that AI image generation tools can empower artists and give them greater expressive power. On the other hand, artists have raised concerns about the impact of making art-making tools so easily available.
These tools are still developing, and the future might solve these issues, or introduce new ones. There are a couple of questions that might be interesting to think about:
- What is the economic impact of art that is this easy and cheap to make? Will being an artist be a viable job in the future?
- What rights do the artists whose images were used to train the neural networks have?
- Will human-made and AI-made art be distinguishable from each other? Does it matter?
Interesting topic! The questions you raised are definitely something to think about when real competitions like the one you mentioned are being won by these AI art generators.
While on the one hand I love how advanced this technology has become, and the fact that this stuff has gotten infinitely more convenient for the regular person to do, it is scary how one of the professions that seemed least susceptible to automation is beginning to be automated.
I think the recent problems surrounding AI-generated art could be valid concerns. However, similar to how the general public perceives a (somewhat) new technology like NFTs, understanding how the technology works is a critical factor in judging the extent of the problems it creates. What makes this hard is how differently we can perceive the technology: in this case it could be argued that an AI model is merely “inspired” by the data it is trained on, similar to human artists who create art based on their own experiences, which could be considered a form of “training” as well.
As for the other “requirement” for something to be art that you mentioned, I’m not sure we can say that AI cannot create something new. Maybe it just can’t YET. Hypothetically, if we train a model on the same concepts that have inspired us humans to “create something new”, for instance by training it to add some form of “sophisticated randomness” to a canvas, it would seem to fulfill that requirement. Then again, the model’s domain is inherently limited to what it is trained on, so you could argue that that, again, is not something “new”. But what about where our thoughts come from, aren’t those also merely limited to what we have perceived? Maybe not, since you could say that human emotion plays a role in that. So then we’d need to give AI morals/intentions. Seems dangerous. I don’t know, it’s not an easy problem :/
Explicitly regulating this is not easy, but one thing I’ve seen artists mention is that the data the models are trained on should be disclosed, and that it should be possible for artists to opt out. Seems a bit weird to me though, because if you’re publicly showing off your art and I can see it, then why is this AI that I made not allowed to see it?
I’m not an artist though, so I can only try to relate to their worries. In any case, this is a very nice read, and the questions at the end are very interesting.