There are parallels between how a human and a robot make art.
A professional artist today will have spent their life viewing hundreds of thousands of images. In addition to the ocean of photographic images that floods their eyeballs in publications, movies, and advertisements, there are the thousands of images of “art” they’ve seen in museums, galleries, art books, and online. Their brains scoop up all these images, each one composed and framed by a photographer, designer, or painter, and use them to influence and guide their own artistic work. Out of the hundreds of thousands of images they take in and process, some they like more than others. It is not uncommon for someone making art and images to begin by making work heavily influenced by their favorites. Qualities of their favorite images are reflected in their own art. This process is often unconscious, and may even be invisible to them. However, others may notice that their art reminds them of someone else’s. Over time, a new artist’s work may evolve to a point where it is distinctive on its own, but it is very rare for new art not to resemble anything before it. In this way, a new image is created from a pool of millions of previous images. The new image, which did not exist an hour before, is produced in relation to, and influenced by, all those millions of existing images, and some particular images will influence it more than others.
Today’s image-generating AIs, or machine learning systems, do the same. They view and digest millions of previously made images and then generate new images based on those existing ones. The exact process by which they create a new image is not “visible”; it is messy and hard to track, since there are millions of possibilities en route to the final image. But like a human artist, the system can be given biases. We can ask the AI to create a picture of a steam locomotive “in the style of Picasso,” and it will generate an image of a steam locomotive in the style of Picasso. It can do this because it has been trained to detect the kinds of patterns that paintings by Picasso exhibit: certain curves of line, the colors he commonly used, his usual textures, brush-stroke widths, and geometry. And it is trained to recognize the patterns that all steam locomotives share: cylindrical body, solid wheels, cab, piping, and so on. Of course, it has also detected patterns for the millions of other objects, scenes, and styles in all the images of the world. It conceptually maps all these patterns into an abstract location-space. The AI then “finds” an image that should exist in the region where Picasso and locomotive overlap, and then it perceives it, which means it sees it by rendering it. It imagines it, in the way a human artist would imagine a scene.
In both cases, human-generated art and machine-generated art alike, the process relies on the generator ingesting thousands and thousands of previous images to assist in the creation of a brand-new image.
In the traditional world of human art, if a person’s new art resided too close to the work of an existing artist, society would discount it. We’d say they were imitators. If the artist were trying to pass off the work as made by the famous artist, we’d call it fraud. But artists with well-rewarded careers make art that may be visibly “influenced” by another artist, or they may belong to a specific “school” of style, influenced by a group of artists with related patterns. Being visibly influenced by another artist is considered acceptable, even if the influence is never acknowledged.
Today, with AI art generators, it is trivially easy and amazingly precise to create an image in the style of X, or influenced by X. The exactitude of this influence is a marvel. The steam locomotive painted by Picasso absolutely looks like his work, and it can be created in seconds. We could call these fakes, since they are not created by the famous artist, yet can surely look like they were. But why stop there? We can ask an AI to paint the locomotive in the style of both Picasso and Andrew Wyeth, and get something strange, unique, and gloriously novel. We could ask the AI to make a picture of a bull in the style of Picasso but render it photo-realistically, and it will.
But now we come to a novel question. The AI art generator is indiscriminate in its input; it has been fed all the art and images possible. It has, in effect, studied all the art images that have ever been posted online, including all the artwork of living artists. There are a few living artists whose work is commonly used as an influence in AI generative art. Painters who work in a very realistic style are often invoked. It becomes trivially easy to create a fake image in their style, and sometimes the request is to render a subject that would make no sense for the original artist. The artist has no control over what happens with their style. The question is: should the original influencer artist have any say in what happens to their influence? Should they get any value if the image has value? Should they be able to remove their art from the training set? Can they withdraw from this new world, or be paid if they remain in it as a major influence?
It is clear that, in general, artists fear obscurity more than piracy. They prefer to be known and imitated rather than unknown. I am aware of no artists who have hidden their work from human students for fear of being imitated. They may go after an imitator by legal means, but they have not tried to hide their work. In the realm of machines, however, removing your work may be the only way to avoid becoming a generative-art prompt. There are legal, practical, and social hurdles that will make disappearing from the commons difficult. (One workaround that will be tried is to leave the images but remove the artist’s name, basically unlabeling the image so the name cannot be used as a prompt.)
There will be many reasons why participating in this commons would be beneficial, in a similar way that copyrighted material lives in a world of free copies. One idea is that every generative image should carry, embedded within it, the prompt that created it. If the prompt contained another artist’s name, there might be some way to carry forward the credit, or even to compensate the artist. One could imagine a blockchain method to track the prompts. Artists may want to be included so that the AI can assist them in creation. Depending on what they do, it might fill in details in their style, or even create works they can claim as their own. It might become a way for them to “extend their brand,” as they say. There could be some artists who, instead of teaching, work with the AIs to refine and evolve their ability to be prompted. That is, they could develop a style created primarily to act as a prompt for millions of non-artists.
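A minimal sketch of what such prompt-tracking could look like, assuming a hypothetical registry that keys each generated image by its content hash, stores the prompt alongside it, and flags any known artist names found in the prompt for credit (the artist list, function names, and record fields here are invented for illustration, not an existing system):

```python
import hashlib

# Hypothetical lookup table of artists eligible for credit; a real
# system would need a large, maintained registry.
KNOWN_ARTISTS = ["Picasso", "Andrew Wyeth"]

def register(image_bytes: bytes, prompt: str, ledger: dict) -> list:
    """Record an image's prompt keyed by its content hash, and return
    the list of known artists named in the prompt (the "credits")."""
    key = hashlib.sha256(image_bytes).hexdigest()
    credits = [a for a in KNOWN_ARTISTS if a.lower() in prompt.lower()]
    ledger[key] = {"prompt": prompt, "credits": credits}
    return credits

ledger = {}
credits = register(b"<image data>", "a steam locomotive in the style of Picasso", ledger)
# credits == ["Picasso"]; the ledger now ties the image's hash to its prompt
```

A blockchain variant would replace the in-memory dictionary with an append-only public ledger, but the bookkeeping idea is the same: bind the prompt, and therefore the credit, permanently to the image it produced.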
Training AIs to create art already shares some similarities with training humans to create art. This will complicate how we treat generative art machines and the new powers they bring.