30 October 2025

The Rising Threat of AI Deepfakes: An expert’s guide to the implications for electoral integrity and reputation law in the UK

October is National Cybersecurity Awareness Month, a timely moment to reflect on how technology can be both a force for innovation and a source of risk. Among the most pressing concerns today are AI deepfakes, which can threaten personal reputations, public trust, and even democratic processes. Mark Jones, defence and investigations expert, and Hanna Basha, privacy and reputation expert, discuss the rise of AI deepfakes in the UK.

The exponential rise of artificial intelligence (AI) has brought both innovation and risk. Among the most concerning developments is the proliferation of “deepfakes” – hyper-realistic synthetic images, videos, or audio created through AI. While once dismissed as humorous online parodies, deepfakes have become powerful tools for deception, exploitation, abuse, and political manipulation. As the UK approaches another general election cycle, the threat to electoral integrity, personal reputation, and the rule of law has never been greater.

The Legal Response: The Online Safety Act

The Online Safety Act 2023 (the “Act”) represents a landmark in UK digital regulation. Although much attention has focused on the obligations imposed on social media platforms, the Act also introduces a series of new criminal offences targeting harmful communications, many of which directly address emerging risks, such as deepfakes and online abuse.

Part 10 of the Act, now in force, introduced several new offences, including:

  1. False communications offence (Section 179) – Sending false information to cause harm, punishable by up to 51 weeks’ imprisonment, a fine or both.
  2. Threatening communications offence (Section 181) – Sending threats of death or serious harm, carrying up to 5 years’ imprisonment.
  3. Epilepsy trolling (Section 183) – Sending flashing images to induce seizures, punishable by 5 years’ imprisonment.
  4. Encouraging or assisting serious self-harm (Section 184) – Criminalising the promotion of self-harm, even if no injury occurs, with a 5-year maximum sentence.
  5. Unsolicited sexual image offences (Sections 187–188) – Including the creation and distribution of deepfake sexual content, punishable by up to 2 years’ imprisonment and a fine.

These provisions replaced and strengthened prior legislation, which addressed “revenge porn” and “cyber flashing.” The aim was that these new offences would offer prosecutors more effective tools to tackle modern forms of digital abuse.

Deepfakes and the Law: A Global First

In a further step, the UK government announced in late 2024 that the creation of a sexually explicit deepfake without consent – even if not shared – will constitute a criminal offence, carrying up to 2 years’ imprisonment.

Deepfakes are not confined to sexually explicit material; they now represent a broader societal and legal challenge. While initially seen as a novelty, their use has expanded to include blackmail, harassment, identity theft, and political misinformation. Data from Home Security Heroes revealed that 98% of deepfake videos online are pornographic, with a 3,000% increase in cyber fraud in 2023 involving impersonation and identity misuse.

Cybercrime increasingly involves children. Sexually explicit deepfakes are parents’ worst nightmare, and cyber-bullying, sextortion, and sexual exploitation are all on the rise. The National Crime Agency recently issued alerts to schools warning of a sharp increase in online sextortion targeting young people.

Lawful but Awful: The Broader Harm of Online Content

The harm caused by online content extends beyond what is strictly illegal. Much of it falls into the category of “lawful but awful” – material that, while not criminal, can still inflict profound psychological and reputational damage. Online misinformation, manipulative political content, and fabricated imagery can distort public debate, erode trust in democratic institutions, and devastate personal reputations.

Children and young people are particularly vulnerable. Ofcom’s 2023 report found that 83% of 16–24-year-olds consume news online, often filtered through algorithms that amplify sensational or harmful content. Prolonged screen time increases exposure to false narratives, extremist material, and sexualised deepfakes.

Reputations and even democratic stability can be undermined by a single manipulated image. The speed at which deepfakes spread means that legal remedies, while improving, often struggle to keep pace with the technological harm they cause.

The Role of Tech Companies and Regulatory Enforcement

The Online Safety Act places clear duties on tech platforms to protect users from online harms. Companies must conduct risk assessments, implement age-verification tools, and restrict access to harmful material, including pornography and content promoting suicide or self-harm. Failure to comply could result in significant penalties from Ofcom, the regulator responsible for enforcing the Act.

Whilst the Act goes some way towards addressing the sharing and posting of sexually explicit deepfakes, the only lasting solution is to target their creators. It remains to be seen whether an overstretched police force has sufficient resources to investigate such offences and bring perpetrators before the courts.

Deepfakes, Democracy, and Reputation

The implications of AI-generated deception reach far beyond personal harm. In the context of elections, deepfakes have the potential to distort political discourse, manipulate voters, and undermine trust in institutions. The speed and realism of synthetic content make traditional fact-checking almost obsolete.

For high-profile individuals, families, and corporations, reputation management has become a front-line issue. As deepfakes grow more sophisticated, distinguishing truth from fabrication becomes increasingly complex — a challenge both for the law and for society.

In summary

The UK has taken significant legislative strides to address the harms caused by digital manipulation and online abuse. However, the rise of AI-driven misinformation and deepfakes represents a moving target for lawmakers, regulators, and the courts.

Protecting electoral integrity, personal dignity, and the truth itself will require ongoing vigilance – not only from government and law enforcement, but also from technology companies, the media, and the public.

As AI’s creative potential expands, so too does its power to deceive. The challenge for the UK – and for democracies everywhere – is ensuring that innovation does not come at the expense of integrity, privacy, or trust.


For further information, please get in touch with Hanna Basha or Mark Jones. Alternatively, telephone us on 020 7465 4300.

Key Contacts
Mark Jones
Hanna Basha