Since a US district judge ruled that AI art is not copyrightable (the ruling is a touch more complex than that, but let that go for the moment, since the practical effect is to bar copyright on AI-created art), people who wish to replace paying artists with AI art have been arguing that AI art should be copyrightable even though the law is clear on this point. One of their favorite arguments is that AI is a tool like a camera, and therefore AI art is like photography.
This is silly.
A camera is a tool under the control of a human being. The person picks the specific thing to be photographed, chooses the angle of the shot, has input or control over the lighting, and makes a hundred different decisions about the composition before choosing to take the picture. The human controls the tool and thus controls the output.
Compare that to AI art. The imitative model copies an enormous amount of art created by human beings, then uses math to determine which pixels or words should come next in a given sequence. To get anything out of it, you prompt the model with something like “Give me a picture of two women staring at a sunset in the style of X,” where X is the artist you don’t want to pay. If you are lucky, you get a picture that is not ludicrous and does not contain enough copied elements to get you sued for plagiarism. This is self-evidently not the same thing as working with a camera. Well, I say self-evidently, but a lot of tech people seem to think it is. They are wrong.
The prompts are not doing creative work. The prompter is not making a meaningful contribution to the process. No matter how detailed the prompt, the machine is making the composition decisions. It calculates (yes, calculates, not decides) the ultimate positioning of the figures, the ultimate lighting, the ultimate color relationships, and the hundred other details that make a picture a picture. Arguing that the prompt meaningfully contributes to the creation is like arguing that a bride should hold the copyright on a photograph because she posed the bridesmaids on the steps of the church. It was still the photographer who made the photograph, through all of the other decisions that go into taking a picture.
Or better, it is like arguing that the results of your Google Maps directions are copyrightable. After all, “Get me to LaGuardia with no tolls, avoiding the Hudson Parkway, in forty-five minutes” is a prompt to a machine learning system that produces a unique output. But no one would seriously argue that the directions that result from the prompt should be copyrightable. Maps is just doing what imitative AI art systems do: taking a prompt and applying its math to produce a relevant output. And, frankly, those directions would be more a work of art than a lot of what the current imitative AI systems are capable of producing.
Imitative AI is not photography, not in the sense that the tech people want it to be. It is clearly not copyrightable because it is clearly not produced by human creativity, not in the sense that copyright law understands human creativity. Nor should it be.
I have said many times that so-called AI can be beneficial, if used correctly. But pretending that imitative AI art is the same as human-created art or photography is neither true nor beneficial. It only serves to help the tech companies recoup the money they have put into these systems in an exploitative fashion, instead of encouraging them to focus on uses that actually benefit the rest of us.
If they want pretty pictures that they can copyright, they can pay artists. Or learn to use a brush, a pen, or a camera. They cannot pretend that the functional equivalent of Maps directions deserves the same protections, and monetization, as human-created art.