The much-awaited decision in the lawsuit between stock photography giant Getty Images and AI developer Stability AI has now been released, see: Getty Images v Stability AI [2025] EWHC 2863 (Ch) (HTML version here). I’ve discussed this case before, but these are the facts as presented by Smith J in her introduction.
Getty Images is an image company that licenses high-quality photographs, videos, illustrations and related metadata through various platforms. Its images can be found online with their distinctive Getty Images or iStock watermarks, and Getty asserts extensive copyright and trade mark portfolios connected to this content. Stability AI is a UK-registered company founded in 2019 that produced Stable Diffusion, a generative AI image-synthesis model based on latent diffusion techniques initially developed by researchers at LMU Munich and Heidelberg. Early versions of the model were trained using cloud computing resources provided by Amazon Web Services outside the United Kingdom, and the underlying architecture and code were publicly released in 2022.
Stable Diffusion v1.x, v2.x and later “SDXL” models were trained on subsets of the LAION-5B dataset (formerly of this parish). Stability accepted from the start that the LAION dataset contained URLs pointing to images hosted on Getty’s websites, and therefore that some Getty Images content was included in the data scraped for training. Getty argued that this included millions of its copyright-protected works. Stability released Stable Diffusion to the public on platforms like Hugging Face, allowing users to download the model weights or to access the model via interfaces such as DreamStudio or Stability’s API platform. After release, the model gained wide adoption, with hundreds of thousands of downloads and millions of generated images.
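For readers who have never touched these tools, this is roughly what “downloading the model weights” looks like in practice. A minimal sketch, assuming the Hugging Face diffusers library and the publicly listed v1.4 model ID; the prompt and device are purely illustrative, not anything from the judgment:

```python
# A minimal sketch, assuming the `diffusers` library and the publicly
# listed Stable Diffusion v1.4 weights on Hugging Face; prompt and
# device are illustrative assumptions.
from diffusers import StableDiffusionPipeline

# Downloads the openly published model weights (a few GB) from Hugging Face.
pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
pipe = pipe.to("cuda")  # use "cpu" if no GPU is available, much slower

# Anyone holding the weights can generate images locally, no hosted API needed.
image = pipe("a photograph of a lighthouse at dawn").images[0]
image.save("output.png")
```

The point is that once the weights are public, generation happens entirely on the user’s machine, which is part of why questions about where training and copying took place become so central.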
Getty sued Stability AI in May 2023; the complaint alleged direct copyright infringement, secondary copyright infringement, trade mark infringement, passing off, and database right infringement. Using the input and output paradigm for analysing AI copyright cases, the direct copyright infringement was alleged to have happened during training: Getty alleged that Stability had unlawfully copied its images during the training process, and that it had also communicated those copies to the public. Getty also argued that Stable Diffusion was capable of outputting direct copies of protected works, and that by authorising users to generate those outputs, Stability had committed secondary infringement.
However, by the time of trial, Getty accepted that training had occurred outside the UK. It also accepted that the specific prompts capable of producing the potentially infringing outputs had been blocked, so Getty dropped the direct copyright infringement claims. Because these claims were integrally linked with several others, particularly the database right infringement, the trial ultimately dealt with three main issues: trade mark infringement, passing off, and secondary copyright infringement.
**What happened to direct infringement?**
Before we go into what the court decided, it is relevant to consider why the direct copyright infringement claims were dropped during trial. From the start, the defendant had argued that no training had taken place in the UK, a claim that was discussed by the court and almost derailed the lawsuit. You may be wondering, “wait, why would the place of training matter?” The thing that most people forget when it comes to copyright is that it is strictly national; as Smith J remarked in the 2023 ruling allowing the case to proceed, copyright “is a territorial right which confers protection on its holder only within the territory of the United Kingdom.”
Astute readers will point out that this could still be the case if the training on works owned by a UK rightsholder took place abroad, but the effects of said infringement were felt within the United Kingdom. This would be correct, but the core of the argument for direct infringement would rest on the understanding that training is an act of infringement in the place where it occurred, and that is still far from settled. There are dozens of lawsuits litigating this very question right now in the US, and we have only had a few hints. The prevailing position, given the various settlements and preliminary results, is that AI training will be assumed to be fair use because it is “exceedingly transformative”, unless shadow libraries were used, but that is a question for another time.
This could change, but UK rightsholders will have to contend with the fact that if a model was trained outside of the UK, then they won’t be able to sue for direct copyright infringement (or, as we will see, for secondary infringement). Getty Images decided not even to contest this question at trial, concentrating on the trade mark and secondary infringement claims. Let that sink in for a second. Because of the lack of a commercial TDM exception or a fair use equivalent, no AI developer is conducting any training in the UK at the moment; this means that all training happens abroad, which in turn means that no UK rightsholder can sue for direct infringement. It’s extremely ironic: on paper the UK has more protection because it has a much narrower TDM exception, but that is precisely why rightsholders can’t sue for direct copyright infringement.
The crazy thing is that this could be remedied by adopting more developer-friendly exceptions to bring training to the UK (and thus within reach of UK lawsuits), but I’m sure that activists won’t see that as an option.
**Secondary infringement**
Getty argued that the trained model itself is an “infringing copy” under UK law, because the model weights would have amounted to infringing copies of Getty works had the training occurred in the UK. Stability rejected this, arguing that the model does not contain copies of any images, only learned parameters. Smith J sided with Stability, and in a paragraph that launched a thousand social media posts, she comments:
“Getty Images’ claim of secondary infringement of copyright is dismissed. Although an “article” may be an intangible object for the purposes of the CDPA, an AI model such as Stable Diffusion which does not store or reproduce any Copyright Works (and has never done so) is not an “infringing copy” such that there is no infringement under sections 22 and 23 CDPA.”
I have to say that I completely agree with Smith J here, although this has quickly become the sole point of attack on a detailed decision, mounted by people who have clearly not even bothered to read it. The main objection appears to be memorisation. It’s been known for many years now that models can often memorise items in the training data, and we have all seen the endlessly paraded screenshots featuring Marvel or DC superheroes, scenes from movies, or famous photographs. That is indeed the case: models can memorise. But as I have been reminding people endlessly, memorisation is not an exclusive right of the author (otherwise memorising a poem could land you in court); reproduction is an exclusive right of the author. So if a model has memorised an image you own, and it is able to reproduce it as an output, then congratulations, you have a copyright infringement claim! But this is not the case for most works in the training data.
The problem here, and one that keeps eluding quite a few non-legal commentators, is that the fact that a model CAN memorise items does not mean that it contains copies of all training data. This is very important from a legal perspective, and a detail that Smith J clearly understood. Even considering models as lossy compression mechanisms, they do not store all of the training data. So what matters from a legal perspective is whether you can produce an infringing output: the model has remembered what Iron Man looks like and can reproduce him in an output. Good news if you’re Disney, bad news for almost everyone else.
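To make that concrete: proving memorisation of your particular work means showing a near-copy at the output stage, not pointing at memorisation in the abstract. A minimal sketch of how such a test might look, assuming the imagehash library and a folder of generated outputs (the file names and the distance threshold are mine, purely illustrative):

```python
# A hedged sketch of testing whether a model has output a near-copy of a
# specific work, using perceptual hashing; file names and the threshold
# of 8 are illustrative assumptions, not anything from the case.
import imagehash
from PIL import Image

# Perceptual hash of the work allegedly memorised.
original = imagehash.phash(Image.open("my_copyright_work.jpg"))

# Compare against a batch of generated outputs.
for i in range(100):
    candidate = imagehash.phash(Image.open(f"generated_{i}.png"))
    distance = original - candidate  # Hamming distance between the hashes
    # A small distance suggests a near-duplicate; for the overwhelming
    # majority of training images, no output will ever come close.
    if distance < 8:
        print(f"generated_{i}.png is a near-duplicate (distance {distance})")
```

This is exactly the sort of evidence that was missing here: as we will see below, Getty did not put forward any output shown to be derived from a specific Copyright Work.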
I recommend that those willing to criticise the ruling read the entire argument in the Secondary Infringement section. Smith J relied heavily on uncontested expert evidence arguing unequivocally that models do not store copies of the training data. One argument that continues to impress judges is the disparity between the size of the training data and the size of the weights; one expert expressed it like this:
“…the LAION-5B dataset is around 220TB when downloaded. In contrast, the model weights for Stable Diffusion 1.1-1.4 can be downloaded as a 3.44GB binary file. The model weights are therefore around five orders of magnitude smaller than a dataset which was used in training those weights.”
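For what it’s worth, the expert’s figure checks out; a quick back-of-the-envelope calculation using only the numbers in the quote:

```python
# Back-of-the-envelope check of the expert's "five orders of magnitude"
# figure, using only the numbers quoted above (~220 TB vs ~3.44 GB).
import math

dataset_bytes = 220e12   # ~220 TB of downloaded LAION-5B data
weights_bytes = 3.44e9   # ~3.44 GB binary file of v1.x model weights

ratio = dataset_bytes / weights_bytes
print(f"ratio ≈ {ratio:,.0f} to 1, ~{math.log10(ratio):.1f} orders of magnitude")
# Prints: ratio ≈ 63,953 to 1, ~4.8 orders of magnitude — roughly five.
```

A compression ratio of roughly 64,000 to 1 is far beyond anything a lossy image format can achieve, which is why the “models store copies” argument struggles once the numbers are on the table.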
The ruling does tackle the question of memorisation, and it is uncontested that under some circumstances models can memorise works contained in their training data. But this is why looking at the detail is important: it doesn’t matter that models can memorise in theory, you have to demonstrate that they memorised your particular work. Smith J comments:
“However, notwithstanding this evidence about memorization, it is important to be absolutely clear that Getty Images do not assert that the various versions of Stable Diffusion (or more accurately, the relevant model weights) include or comprise a reproduction of any Copyright Work and nor do they suggest that any particular Copyright Work has been prioritised in the training of the Model. There is no evidence of any Copyright Work having been “memorized” by the Model by reason of the Model having been over-exposed to that work and no evidence of any image having been derived from a Copyright Work.”
This is going to be a vital part of the ruling going forward, and although it won’t have an effect on the armies of reply guys and armchair quarterbacks, it should at least act as important guidance on how memorisation should be dealt with under copyright. I’ve been commenting since early on in the copyright wars that memorisation would be mostly irrelevant unless it could lead to infringing outputs. I’m delighted that this view is finally gaining traction.
There is a fascinating discussion on statutory interpretation, as well as a copyright nerd’s heaven where Smith J goes through concepts such as “article” and “infringing copy” in detail. The judge concludes that an article within the meaning of the CDPA can be an intangible copy stored in electronic form, with which I also agree. Smith J also concludes in this part of the discussion that an infringing copy must still be a copy, and, referring to the arguments presented above, there is no actual copy in the model. Interestingly, the judge has acknowledged that her interpretation may be wrong, which will probably be the basis of any future appeal.
**Trade mark and passing off**
Getty argued that Stable Diffusion can generate synthetic images containing imitation Getty Images or iStock watermarks, which it said constituted trade mark infringement and misrepresentation. Getty was partially successful on this front. The judge concluded:
“the question of trade mark infringement arises only: a) in respect of the generation of Getty Images watermarks* and iStock watermarks* by v1.x Models (in so far as they were accessed via DreamStudio and/or the Developer Platform); b) in respect of the generation of Getty Images watermarks* by v2.x Models. iii) There is no evidence of a single user in the UK generating either Getty Images or iStock watermarks* using SD XL and v1.6 Models. Thus no question of trade mark infringement arises in respect of these Models and that claim, in so far as it relates to them, is dismissed.”
The court’s trade mark findings were, in short, a win in name only. Getty Images argued that Stable Diffusion was spitting out fake Getty and iStock watermarks, but by the time the evidence was in, only a handful of examples survived scrutiny. Smith J ruled that most versions of the model never produced anything close to a recognisable watermark, and that many of the supposed “infringing” marks were just fuzzy AI artefacts, in other words, algorithmic ghosts rather than deliberate branding. The threshold was high: Getty had to show that the models actually generated something a real UK user would mistake for one of its marks. It failed for the newer SDXL and v1.6 models, leaving only the older v1.x and v2.x to fight over.
Even there, Getty’s victories were microscopic. The court found limited infringement under sections 10(1) and 10(2) of the Trade Marks Act, essentially a couple of stray images bearing a credible “iStock” watermark and one Getty-style artefact that could, in theory, confuse an average consumer. Everything else was thrown out. The judge dismissed the broader claims of reputational harm under section 10(3), finding no evidence that the occasional AI-generated smudge had tarnished Getty’s brand. The result is a symbolic but hollow win on the trade mark front, enough for Getty to say it proved a point, but not enough to redraw the law on AI-generated trade marks.
Edited: Just for completeness, it seems clear that Getty was never close to meeting the standard of misrepresentation required for passing off, so Smith J did not feel the need to treat it as a valid claim.
**Concluding**
This is a fantastic decision that will be the subject of endless discussion for months to come. It is also likely to be appealed, and we will see what happens then. I would like to commend the judge on an extremely careful and judicious analysis. I’ve been following this case with interest since day one, and I’ve found her rulings astute and her questions direct and to the point; she has been capable of removing all of the superfluous nonsense from a very important subject. This would have been a frightening prospect for any judge, but I felt that she understood the technical aspects really well and managed to convey the reasoning in a concise and competent manner.
I can claim, however, that I can prove the ruling wrong on one point. I present evidence that at least one Stable Diffusion user in the UK managed to produce images with the Getty logo. And that user is me:

Getty, call me.