Artists vs Tech: The battle over AI art and creativity
The world's greatest art heist and evolution of creativity
America! Land of the free, home of the brave, and where class-action lawsuits are as common as baseball and hot dogs. So it’s little surprise that artists have taken up arms in a culture war against big tech and its nefarious AI creations (and, you know, nothing says ‘freedom’ quite like suing a tech company).
Anyway. A group of artists just filed a class-action lawsuit against Stability AI (the company behind Stable Diffusion), Midjourney, and DeviantArt. The lawsuit (pdf), announced last week, alleges that the companies have infringed the rights of “millions of artists” by training their AI tools on “billions of images” scraped from the internet without the consent of the original artists.
But… why does this matter? And why now?
I’m not a lawyer. I’m not a copyright expert. With that said, I’ll stick to what I found generally noteworthy and what could play a key role in the media narrative, the culture wars, and the court of public opinion (more so than an actual court).
David vs. Goliath and copyright law
First – it’s pushing the AI industry further into the mainstream with a classic David vs. Goliath narrative. A story as old as time itself, along with the culture wars that generally follow.
Second – it raises a broader ethical question about the use of AI in creative industries and as a form of artistic expression (both commercial and otherwise), about how AI-assisted creative work should be treated under copyright law, and about what it means for artists.
We’ll explore the implications of both, so let’s start with context. The facts, and so on.
The suit claims that the defendants – Stability AI, Midjourney, and DeviantArt – have used copyrighted works from millions of artists without their consent to train their AI models and create new, "original" works.
They are accused of violating copyright law, engaging in unfair competition, and misappropriating the artists' names and identities. The suit also claims that the defendants' actions have caused irreparable harm to the artists, including dilution of the value of their work and of the marketability of their names and identities. The plaintiffs are seeking damages and an injunction to prevent further infringement of the artists' rights.
The bottom line? This matters because it could set a precedent for how AI companies obtain and use images to train their models, and for how artists (commercial and otherwise) are allowed to use the technology to… well… make things.
The lawsuit in layman’s terms
The companies, i.e. the defendants, are accused of scraping billions of images from the internet, including images uploaded by the plaintiffs and other artists, without permission. They then used these images to train their AI algorithms, which in turn generated new artworks similar to the original images.
The plaintiffs in the case are a group of professional artists who claim that the companies' actions have caused them financial harm and damage to their reputations. They claim that the companies are creating artworks that are so similar to their own that it is difficult for people to tell the difference. This has led to a decrease in the value of their own artworks and has also made it harder for them to sell their work.
The lawsuit also argues that the defendants' actions put millions of artists at risk by scraping and using their copyrighted works without permission or compensation, thereby devaluing and displacing the original artists' work in the market.
The plaintiffs are alleging several different legal violations, including copyright infringement, violation of the Digital Millennium Copyright Act, and violation of their right of publicity.
Overall, the plaintiffs argue that the companies' actions have caused them harm and that they should be held accountable for their actions. They are asking the court to award them monetary damages and to issue an injunction that would prevent the companies from continuing to use the images that they scraped without permission.
Asking one AI to defend another
Okay, so… I couldn’t help myself and fed the entire 46-page lawsuit to OpenAI via ChatGPT. Consider this an experiment more than anything else (I can’t determine how accurate or reasonable the output is, since I’m not a lawyer or domain expert), and if you’re an actual lawyer, I’d love to hear your take on this.
Here’s what ChatGPT came up with, assuming the role of a defense attorney (based only on the contents of the actual lawsuit):
The defendants did not intend to infringe on the plaintiffs' copyrights, as the AI algorithms were trained on a vast amount of publicly available images, and it would be impossible to obtain consent from every artist whose work was included in the training dataset.
The use of the plaintiffs' images in the training dataset constitutes fair use, as it was for the purpose of creating new, original works and did not harm the market for the original works.
The AI-generated works created by the defendants' algorithms should be considered transformative, as they add new meaning and expression to the original images and cannot be considered direct market substitutes.
The defendants' AI-generated works should be protected under the First Amendment as a form of free speech, as they are expressive works created by a machine, and not human.
The plaintiffs' claims of harm to their names and identities are overstated, as the AI-generated works are not perfect copies of their original works and do not cause confusion among consumers or dilute the value of their names and identities.
The defendants argue that the use of billions of photos to train their AI algorithms is a fair use under copyright law, as it is transformative and for the purpose of creating new, original works.
The defendants argue that their scraping of photos from the internet was done without the intent to infringe on any copyrights, and was done for the purpose of creating a diverse dataset for their AI algorithms to learn from.
The defendants did not violate the DMCA as the works were not protected by copyright management information, and their AI algorithms were not designed to circumvent any technological protection measures.
The defendants' use of the plaintiffs' names and identities is protected by the First Amendment, as it is for the purpose of identifying the style of the AI-generated works and not for commercial gain.
If the lawsuit is ruled in favor of the plaintiffs, it would set a dangerous precedent for the AI industry, chilling innovation and stifling the growth of AI-generated works as a new form of expression.
Reframed as closing arguments:
“Ladies and gentlemen of the jury, the plaintiffs in this case have made a number of serious allegations against my clients, the defendants. They have accused them of copyright infringement, vicarious copyright infringement, violation of the Digital Millennium Copyright Act, and violation of the common law right of publicity. But as we have seen throughout this trial, the evidence simply does not support these claims.”
As the defense attorney for the defendants in this lawsuit, I would first like to state that the use of AI-generated art and the training of these algorithms on copyrighted works is a complex and nuanced issue. The defendants in this case have made a good faith effort to abide by copyright laws, and have implemented measures to ensure that any use of copyrighted works is done with the appropriate permissions and licenses.
Additionally, the defendants have made it clear that their AI-generated art is not meant to replace or compete with the work of human artists, but rather to serve as a new and innovative form of artistic expression. The use of AI in art is a rapidly evolving field, and the defendants have taken great care to stay on the right side of the law and to be respectful of the rights of copyright holders.
Lastly, I would like to point out that the art generated by the defendants’ AI products is transformative and adds something new to the underlying copyrighted works, and is protected under fair use doctrine.
In conclusion, the defendants have acted responsibly and in good faith in their use of AI-generated art and have not infringed on the rights of the plaintiffs or any other copyright holders. We respectfully request that the court dismiss this lawsuit.
So, there are a couple of things at play here.
If the plaintiffs win the class-action lawsuit, it would mean that the court has found the defendants liable for the alleged violations of copyright law, common law rights of publicity, and so on. If so, the “millions of artists” who are part of the class would be entitled to compensation for any harm they have suffered. In addition, the ruling would establish a precedent that could have a significant impact on the AI industry (in particular on AI-generated images and art).
And… that’s if they (i.e. the artists) win. But that’s unlikely. See, less than 5% of class-action lawsuits actually go to trial. Most settle (i.e. the parties agree to a settlement amount and/or changes to business practices to resolve the case). In that case, there’s no binding legal precedent, and the terms are typically confidential.
The bottom line? The outcome of this lawsuit (and others happening in parallel) may shape how companies like Midjourney, OpenAI, and the like operate and serve their users and customers.
The court of public opinion
Artists have always had various tools at their disposal. Paint, canvas, cameras, software, and so on.
“In September 2022, New York resident Kris Kashtanova sought and received U.S. copyright registration for a comic book titled Zarya of the Dawn, featuring images generated by Midjourney. In December 2022, the U.S. Copyright Office revoked this registration, deeming the work ineligible for registration because it was generated by AI.” - Paragraph 142
As the quote above suggests, copyright law is… complicated. Generally speaking, however, if you straight up copy someone’s work, it’s plagiarism. Copy lots of people’s work… and it’s research. Imitate an artist and their style, however, and you’re generally in the clear, legally speaking.
And that’s a beautiful thing.
Artists have always been influenced by other artists and taken inspiration from the work of others to develop their own unique style – without consent or permission. But what happens when you train a computer to look for stylistic elements in the work of others and imitate their style? And where, exactly, do you draw the line?
Look, I get it. Artists are right to be angry.
This is a case of big tech fucking with a group of individuals who already struggle to make a living. I feel for them. There’s suddenly this new piece of groundbreaking technology that, to some extent, goes after their livelihood. But so, at one point, were digital cameras and affordable editing software – tools that ended up empowering artists.
Like Jason Allen, a Colorado resident who, in September 2022, used Midjourney to generate an image that he submitted to an art competition at the Colorado State Fair – and won.
Did this upset the other contestants and artists? For sure. Did he “cheat”? Well. He submitted his artwork to the “digitally manipulated photography” category after spending some 80 hours and more than 900 iterations on the final piece. The AI, he said, “is a tool, just like the paintbrush is a tool. Without the person, there is no creative force.”
So, it’s… complicated. And let’s, for argument’s sake, say that tech companies (big and small) are not allowed to train their image-generation AI models on your work without your explicit consent. Then what?
They’ll probably just proceed to train and develop their capabilities on properly licensed stock photography, public-domain images, or content they already own (like our, uhm, billions of photos on Instagram or Facebook).
The outcome is the same. Can't put the genie back in the bottle. But big tech needs oversight. And accountability.
When asked whether he sought consent from the creators of the Training Images, Holz said “There isn’t really a way to get a hundred million images and know where they’re coming from… There’s no way to find a picture on the internet, and then automatically trace it to an owner and then have any way of doing anything to authenticate it” […] “To my knowledge, every single large AI model is basically trained on stuff that’s on the internet. And that’s okay, right now. There are no laws specifically about that.” — Paragraphs 150, 146
The quote is from David Holz, CEO of Midjourney, and it’s the kind of statement that could get the company in serious trouble in both the short and the long term. “Everyone's doing it” as a rationale for widespread but less-than-ideal behavior is problematic – especially when coupled with “there’s no law about it” (yet, lol).
I’m not picking sides. In fact, it’s (at least partially) not even about AI (or big tech) vs. artists, but about “artists against AI” vs. artists who use AI to empower their workflow.
It’s more like painters vs. photographers
Only thing is… cameras have been around for the better part of two hundred years, and we still buy and invest in “analog” art. But that perspective doesn’t fit the David vs. Goliath storyline.
“The fear has sometimes been expressed that photography would in time entirely supersede the art of painting. Some people seem to think that when the process of taking photographs in colors has been perfected and made common enough, the painter will have nothing more to do.” – Henrietta Clopath, Brush & Pencil, 1901
Back in the early 1900s, photography was, in fact, considered a very real threat by artists and art critics alike. Some believed painters would have nothing left to do once the process of taking photographs in color had been perfected.
Interestingly enough, 1901 is also considered to be 19-year-old Pablo Picasso’s breakthrough year as an artist. And then there’s Salvador Dalí, Claude Monet, Matisse, Jackson Pollock, Frida Kahlo, and… well, you get my point.
The pick-and-shovel business
OpenAI is raising money at a reported $29 billion valuation, and they’re not alone. VC FOMO is coming back. Sequoia argues that Generative AI has the potential to generate trillions of dollars of economic value. And so on.
Hype? Sure. Short-sighted opportunism? Certainly. Charlatans with shiny object syndrome? No, no… not the investors, but the web3 “enthusiasts” – looking for a party to crash. Expect all of the above and then some. History has a tendency to repeat itself, but maybe, just maybe, it’ll be different this time around.
Here’s what I do know with certainty:
During a gold rush, it’s great to be in the pick-and-shovel business – and I can’t help but think that that’s, perhaps, at least to some extent, what these lawsuits are really about.
– Aron