A landmark legal case revealed this week marks the start of a battle between human artists and artificial intelligence companies over the value of human creativity.
On Monday, visual media company Getty Images filed a copyright claim against Stability AI, maker of a free image-generating tool, sparking an escalation in the global debate around intellectual property ownership in the age of AI.
The case is among the first of its kind and will set a precedent for how the UK legal system, one of the most restrictive in the world in terms of copyright law, will treat companies building generative AI — artificial intelligence that can generate unique images and text.
Getty, which holds more than 135mn copyrighted images in its archives and provides visual material to many of the world’s biggest media organisations, has filed its claim in the UK High Court.
The claim follows the January 2021 release by California-based OpenAI of Dall-E, a tool that can create realistic and beautiful imagery from simple text instructions alone.
An explosion of AI image tools, including Stability AI’s, soon followed, allowing users to generate visuals ranging from Bugs Bunny in a cave painting to Kermit the Frog as painted by Edvard Munch to a black hole in Bauhaus style, signifying a shift in how we view creativity.
Getty claims that Stability AI, which was recently valued at $1bn, had “unlawfully copied and processed millions of images protected by copyright . . . to benefit Stability AI’s commercial interests and to the detriment of the content creators”.
Although Getty has banned AI-generated images from its platform, it has licensed its image data sets to several other AI companies for training their systems.
“Stability AI did not seek any such licence from Getty Images and instead, we believe, chose to ignore viable licensing options and longstanding legal protections in pursuit of their standalone commercial interests,” the company said.
Stability AI said it took these matters seriously and added: “We are reviewing the documents and will respond accordingly.”
The landmark case will be watched closely by global businesses such as OpenAI and Google, said Sandra Wachter, professor of technology and regulation at the Oxford Internet Institute.
“It will decide what kind of business models are able to survive going forward,” she said. “If it’s OK to use the data, other companies as well can use it for their own purposes. If that doesn’t happen, you would need to find a new strategy.”
Text-to-image AI models are trained using billions of images pulled from the internet — including social media, ecommerce sites, blogs and stock image archives. The training data sets teach algorithms, by example, to recognise objects, concepts and artistic styles such as pointillism or Renaissance art, as well as connect text descriptions to visuals.
For instance, Dall-E 2, one of the most advanced generators built by OpenAI, is trained on 650mn images and their descriptive captions. The company, which released conversational AI system ChatGPT in December, is being courted by Microsoft for a $10bn investment, at a $29bn valuation.
Stability AI’s product, Stable Diffusion, was trained on 2.3bn images from a third-party website that pulled its training images from the web, including copyrighted image archives such as Getty and Shutterstock. At the core of the legal debate is whether this large-scale use of images created by human beings should count as an exception under current copyright laws.
“Ultimately, [AI companies] are copying the entire work in order to do something else with it — the work may not be recognisable in the output but it’s still required in its entirety,” said Estelle Derclaye, professor of intellectual property law at the University of Nottingham, who specialises in the fair use of data sets.
“It’s like the Napster case in ’99, cropping up again in the form of AI and training data,” she said, referring to the popular peer-to-peer file-sharing service with 80mn users that collapsed under copyright claims from musicians.
Lawsuits are piling up elsewhere for the industry.
This week, three artists filed a class-action suit in the US against Stability AI and two other companies, Midjourney and DeviantArt, for their use of Stable Diffusion, after the artists discovered their artwork had been used to train the companies’ AI systems.
Such products create an existential threat for creators and graphic designers, lawyers representing the artists said.
“The artists who have created the work being used as training data now find themselves in the position where these companies can take what they created, monetise it and then go to a marketplace to sell it in direct competition with the creators,” said Joseph Saveri, a lawyer representing the artists in the US class action.
A spokesperson for Stability AI said the allegations “represent a misunderstanding of how generative AI technology works and the law surrounding copyright” and that it intended to defend itself. Midjourney and DeviantArt did not reply to requests for comment.
Saveri’s law firm is also pursuing a case against GitHub, the code hosting website, its owner Microsoft and OpenAI to challenge the legality of GitHub Copilot, a tool that writes code, and a related product, OpenAI’s Codex, claiming they have violated open-source licences. GitHub has said it is “innovating responsibly” in its development of the Copilot product.
In the past year, photographers, publishers and musicians in the UK have also spoken about what they deem an existential threat to their livelihoods, in response to the UK government’s proposals to loosen IP laws. The criticism represents the tension between the UK’s desire to court technology companies and its responsibility to protect its £115.9bn creative industries.
Removing copyright protections for artistic images to train AI could have “harmful, everlasting and unintended consequences” for human creators, the Association of Photographers said in its submission to the government. It will lead “to a downward spiral in which human endeavour is disincentivised against a background of billions of AI-generated works”, it added.
Last week, a House of Lords report concluded that the government’s proposed changes to provide more flexibility to tech companies were misguided, warning that they “take insufficient account of the potential harm to the creative industries. Developing AI is important, but it should not be pursued at all costs.”
Ultimately, the outcome of the Getty Images case in the UK could set the tone for how other regimes, including within the European Union, interpret the law.
Professor Derclaye said: “It’s massive in terms of implications, because you are deciding the margin of manoeuvre of AI generators to continue what they are doing.”
Art and artificial intelligence collide in landmark legal dispute
Source: https://www.ft.com/content/d691d599-3cdb-48d8-9824-9b2784a17d90