The UK Government’s Copyright and AI Report: what it means for the creative sector
The UK Government has published its long-awaited Report on Copyright and Artificial Intelligence (the “Report”). Running to over 120 pages, it covers a lot of ground, from training data and transparency to digital replicas and the future of copyright protection for AI-generated works.
The Report is, to put it plainly, one of the most consequential documents for the UK creative industries in some time.
Here is what it says, what it means, and, importantly, what it does not do.
Background: How we got here
The Report follows a public consultation that ran from December 2024 to February 2025 and received over 11,500 responses. It is published under a statutory obligation created by the Data (Use and Access) Act 2025 and is accompanied by a government economic impact assessment.
The consultation asked a fundamental question: should UK copyright law be changed to make it easier for AI companies to train their models on protected creative works?
The creative industries answered with a resounding no.
The four options on the table
The government put forward four possible approaches:
Option 0 - Do nothing. Keep copyright law as it is.
Option 1 - Strengthen copyright, so that AI developers must obtain a licence in every case.
Option 2 - Introduce a broad data mining exception, allowing AI training on any lawfully accessible content with no ability to opt out.
Option 3 - Introduce a data mining exception with an opt-out, meaning AI companies could use protected work unless the rights holder had actively taken steps to reserve their rights in a machine-readable format.
Option 3 was the Government's originally preferred position. It proved deeply unpopular.
Eighty-one percent of consultation respondents supported Option 1, i.e. keeping and strengthening existing copyright protections. Only three percent supported Option 3. Option 2 received the lowest support of all.
The Government's decision: A deliberate delay
Having received that feedback, the Government has stepped back from its earlier preference. A broad copyright exception with opt-out is no longer its preferred approach.
That said, the Government has not adopted Option 1 either.
Instead, it has chosen to wait. It will gather further evidence, monitor international developments, and keep all options under review. No legislative reform to copyright law will be introduced until the Government is confident it can meet its dual objective: protecting the creative industries while supporting AI development.
For creatives, this is meaningful but incomplete progress. The law as it stands, which requires AI developers to obtain licences for the use of copyright works, remains in place. That is a better outcome than a broad exception would have been. But without the enforcement infrastructure to give those rights practical effect, the status quo still leaves many creators exposed.
What the Report actually commits to
Amid the caution, there are concrete proposals worth noting.
Transparency: The Government agrees that AI developers should disclose what they have trained on. It proposes to work with industry to develop best practice standards on input transparency. Legislation may follow, but that is not confirmed. Over ninety percent of consultation respondents wanted mandatory disclosure. The Government has stopped short of that, for now.
Output labelling: The Government supports the principle of labelling AI-generated content so that audiences know what they are looking at. It proposes to develop best practice with industry, monitor international approaches, and work towards common standards. Again, no mandatory rules yet.
Computer-generated works: This is the most concrete reform proposed. Under section 9(3) of the Copyright, Designs and Patents Act 1988, works created entirely by a computer — without a human author — currently receive copyright protection. The Government proposes to remove that protection (noting that it “departs from the core rationale for copyright, which is to encourage and reward human creativity”). If you are a creator, this is good news: it reinforces the principle that copyright exists to reward human creativity, not to protect machine output. AI-assisted works, where a human makes creative choices, will continue to be protected.
Digital replicas: The Government acknowledges that existing law does not fully protect performers and individuals against unauthorised AI-generated replicas of their voice or likeness. It proposes to explore options for new protection, including the possible introduction of a dedicated digital replica right or personality right. This is still at the exploratory stage, but the direction of travel is significant for performers, voice artists, musicians, and anyone whose identity has commercial value.
The Getty Images v Stability AI Judgment
One of the most important legal developments covered in the Report is the November 2025 High Court judgment in Getty Images v Stability AI, which was the first UK court ruling directly addressing copyright and AI training.
The Court found that a trained AI model could, in principle, constitute an infringing copy under UK copyright law, even if it was trained abroad. However, on the specific facts, there was no evidence that the Stable Diffusion model actually contained copies of Getty's works, so the secondary infringement claim did not succeed at that stage.
The ultimate outcome of the case (the judgment is being appealed) will, of course, be pivotal. If the appellate courts confirm and extend the reasoning, it could significantly strengthen the hand of rights holders seeking to challenge the use of their works in AI training, even by models developed and trained overseas.
The International Picture
The Report contains a detailed comparative analysis of how other countries are approaching these evolving issues. The snapshot version is as follows:
The EU has already introduced a data mining exception with an opt-out (under the Digital Single Market Directive), alongside transparency obligations under the AI Act, which came into force in August 2025. The EU's approach continues to be tested in its courts.
The USA operates on a fair use basis. Over seventy copyright cases involving AI are currently before US courts. The outcome of that litigation will shape the global landscape.
Japan has a broad exception, but with important limits — it does not apply where the purpose is to generate content similar to the works used in training.
Singapore has the broadest exception of any major jurisdiction.
India is considering a statutory licensing model.
Australia has confirmed it will not introduce a TDM exception.
The UK's comparative position is clear: it currently sits among the countries with the strongest protections for rights holders and the least flexibility for AI developers. Whether that changes depends on what happens in Washington and Brussels in the months ahead.
What does all this mean for participants in the creative industries?
Whilst it is probably too early to draw firm conclusions (and certainly too early to take comfort), a few significant factors are emerging:
Creators’ existing rights remain intact. The law has not been weakened. If an AI company is using protected work to train a model without a licence, that remains a copyright infringement under UK law. The challenge (and it is a real one) is practical enforcement.
Transparency is the key battleground. Almost every practical protection for creators depends on knowing which works have been used and by whom. The Government's decision to encourage disclosure rather than mandate it leaves a gap that the creative industries will need to continue pressing on.
Licensing is becoming a real market. The Report notes that the AI licensing market is growing. Some publishers and image libraries are already concluding licensing deals with AI developers. If you are a creator, engaging with collective licensing bodies and understanding what your representative organisations are negotiating on your behalf is increasingly important.
The opt-out infrastructure matters even without a statutory exception. Even in the absence of a legislative opt-out scheme, technical tools such as robots.txt, metadata standards, and rights reservation protocols can signal to AI developers that your work is not available for training. Using them is fast becoming essential practice, rather than merely a technical nicety.
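By way of illustration, a website's robots.txt file can ask known AI training crawlers not to collect its content. The user-agent names below are real examples publicised by the relevant providers at the time of writing, but crawler names change and the list is not exhaustive — anyone relying on this approach should check each provider's current documentation:

```
# Illustrative robots.txt entries asking named AI crawlers
# not to access any part of the site.
# Not exhaustive: crawler names change over time.

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /

User-agent: CCBot
Disallow: /
```

Note that robots.txt is advisory only: it signals a rights reservation, but its practical effect depends on the crawler choosing to honour it, which is precisely why the enforcement gap discussed above matters.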
Digital replicas and personality rights are moving up the agenda. If a person’s voice, likeness, or performance has commercial value, the proposed exploration of new personality rights is worth watching closely. This is an area where the law is clearly developing, and where early engagement with specialist advisers can help protect what may become a significant income stream, or prevent unlicensed exploitation of it.
Our View
The Report is a careful, comprehensive document. It also leaves significant areas open, at least on the central question of whether AI training should require a licence.
For the creative industries, that outcome is better than it might have been. The Government has listened. It has walked away from a policy that would have shifted the burden of protection onto creators and given AI companies a broad licence to train on protected works without consent.
Needless to say, "we have not made it worse" is not the same as "we have made it better": the enforcement gap remains; transparency obligations remain voluntary; and the commercial reality, i.e. that much AI training takes place overseas, under different legal regimes, beyond the effective reach of UK copyright law, has not changed.
Finally…
The next phase of this debate, which will be shaped significantly by appellate judgments, US litigation outcomes, EU regulatory developments, and a government that has promised to come back with further proposals, will be at least as consequential as this one.
We will be watching it closely, and we will keep you informed.
DISCLAIMER: Please note that this content is for informational purposes only; it does not constitute, and should not be construed as constituting, legal advice. Whilst care is taken to ensure the content is accurate at the time it was produced, it may no longer be. You should seek specific legal advice in respect of particular legal issues or concerns. No liability or responsibility is accepted in respect of the content, or any actions taken based on the content.