Insights & Events
March 18, 2026

Deepfakes in the UK: are AI replicas a threat or an opportunity?

The use of AI-related technologies to create digital replicas (often labelled “deepfakes”) is rapidly reshaping how the names, images and likenesses of celebrities and other high-profile individuals are exploited and monetised.

These issues have been thrust into the spotlight by recent global developments, ranging from the Hollywood “deepfake fight” involving Tom Cruise and Brad Pitt to lucrative AI image licensing deals in sport and entertainment. Together, they demonstrate how fast-moving changes in technology are presenting both legal challenges and emerging commercial opportunities.

The UK government has recently announced plans for a framework aimed at identifying gaps in the detection of deepfakes, signalling that a combination of legal and technological measures is the path forward to ensuring adequate protection in this space.

The challenge: unauthorised and damaging AI replicas

Hollywood panic: Brad Pitt and Tom Cruise deepfake fight

A realistic AI‑generated video showing Hollywood stars Brad Pitt and Tom Cruise fighting and discussing Jeffrey Epstein went viral in February 2026, created with ByteDance’s Seedance 2.0 model. The clip was produced from a prompt as short as two lines and has been publicly condemned by unions and industry associations (including SAG-AFTRA and the Motion Picture Association), which warn that the unauthorised use of actors’ faces, voices and likenesses threatens livelihoods and erodes creative control.

These unauthorised digital replicas are becoming increasingly realistic and difficult to distinguish from legitimate content, highlighting how quickly the misuse of deepfakes and AI-generated performances is becoming a systemic problem.

Previous high-profile UK targets 

The recent deepfake episode involving Brad Pitt and Tom Cruise follows several other high-profile incidents involving public figures. In the UK, prominent examples include Stephen Fry, whose voice was cloned from his Harry Potter audiobook narration and used in a documentary without his consent. Martin Lewis, founder of the MoneySavingExpert.com website, has also been a victim, with a computer-generated likeness of him appearing in an online scam video promoting a purported investment scheme.

These incidents both undermine investment in creative content and risk deceiving the UK public, particularly in an age where social media and influencer content play such a pivotal role in shaping consumer behaviour.

UK image rights protection: a patchwork system under pressure?

In the UK, there is currently no standalone or codified legal framework governing the use and exploitation of image rights. Instead, a patchwork of existing legal regimes (such as IP rights, privacy and data protection laws, advertising regulations, defamation, and contractual arrangements) controls the use (or, more often, the misuse) of these rights.

So, what legal tools can a celebrity deploy to combat the use of these unauthorised digital replicas and maximise the chances of successful enforcement?

  • Passing off: This is a well-trodden legal path requiring goodwill, misrepresentation and damage, with high-profile UK examples including Formula One driver Eddie Irvine and music icon Rihanna both successfully preventing the unauthorised use of their image to falsely endorse goods/services.
  • Registered trade marks: A well-constructed portfolio of registered trade marks can be an effective method of image rights protection, particularly when following takedown procedures on social media and online content-sharing platforms. Many celebrities have built strong trade mark portfolios: Oscar-winning actor Matthew McConaughey is a good recent illustration, having registered various marks covering his voice, image and signature, including a sound mark for his iconic phrase "alright, alright, alright", to help stop AI misuse.
  • Copyright: Whilst copyright offers broad protection for creative artistic works (including photographs and videos), in practical terms it may offer limited protection to an individual subject to a deepfake, as: (i) the deepfake content may not actually copy an existing copyright work (unless it recreates a scene or sound recordings from existing audio-visual content); and (ii) in any case, the copyright in such content would typically be owned by a third party that created it (e.g. the photographer or the film studio). There is also an exception to infringement in the UK for “parodies”, which may limit the ability to prevent deepfakes created for the purpose of humour and mockery.
  • Privacy: The laws of privacy can offer safeguards to prevent misuse of a person’s image in certain (albeit fairly narrow) situations. Recent UK legislation has sought to bolster this protection in the context of deepfakes, including the Online Safety Act 2023 (OSA) which includes provisions regarding sharing of a person's intimate images without their consent.

There are, therefore, a number of existing legal protections that can be asserted against those creating and exploiting deepfake content, and recent legislation, such as the OSA and the EU AI Act (which requires deployers of an AI system that generates deepfake content to disclose that the content has been artificially generated or manipulated), is introducing specific obligations to combat misuse of these systems.

However, the increased sophistication of deepfake technology (and the pace at which content can be disseminated across social media) is putting the UK’s current legal framework for image rights protection under the spotlight more than ever. Perhaps more importantly, it highlights the need for robust technological measures to work alongside these legal tools to limit the adverse social and economic consequences caused by these AI systems – something which the UK government is actively looking to address.

The opportunity: licensed AI replicas as a new commercial frontier

Amid these evident challenges and risks, AI replicas are also unlocking new commercial opportunities for rights holders:

  • MLB players secure AI‑licensing protections: Major League Baseball players, through MLB Players Inc (the business entity of the players' association), have recently agreed a deal for tech company Genies to create AI characters of the players that can interact with fans. This is one of the first major examples of collective bargaining over AI replicas, demonstrating how performers can secure compensation and control rather than being replaced.
  • Khaby Lame’s AI “digital twin” deal: TikTok sensation Khaby Lame, who has more than 160m followers on the platform, has reportedly secured a transaction valued at USD 975m with Rich Sparkle Holdings, covering the use of his face, voice and behavioural models for “AI digital twin development” – the creation of multilingual, cross-time-zone livestream e-commerce content.
  • Indiana Jones and the digital de-aging: Digital replica technology has already been successfully and consensually deployed across the film industry, including Harrison Ford being de‑aged through digital technology for ‘Indiana Jones and the Dial of Destiny’, while Carrie Fisher was digitally reconstructed for prior Star Wars entries. These are controlled, studio‑authorised uses demonstrating how AI can enhance storytelling without undermining performers’ rights.

These examples show the upside of AI likeness technology:

  • New revenue channels
  • Expanded creative possibilities
  • Global reach without physical presence
  • Continuity of franchises and characters

What next?

It is clear that digital replication technology is here to stay, and when governed, licensed and controlled properly, AI clones have the potential to become valuable commercial assets rather than threats.

However, deepfake technologies are proliferating faster than traditional legal remedies can respond. The potential harm that unauthorised and deceptive deepfakes cause to consumer trust (e.g. through scams and false endorsements), together with the financial and reputational damage that celebrities and the creative industries may suffer through IP exploitation, emphasises the need for tighter control.

The recent UK government announcement of a framework aimed at identifying gaps in the detection of deepfakes is a positive step towards tackling these issues at a technological level, rather than leaving rights holders to shoulder the burden of legal enforcement alone.

Authors