Key Highlights
- Approximately eight million synthetic media files circulated across the United Kingdom over the past year, a nearly 300% increase on 2023 levels
- The betting and wagering industry experienced a 73% surge in fraudulent activity between 2022 and 2024, with AI-generated content enabling identity verification circumvention
- A 2025 study concluded that British law enforcement agencies lack adequate resources to combat artificial intelligence-driven fraud schemes
- Internal Meta documents revealed that approximately $16 billion in advertising revenue during 2024 originated from fraudulent schemes and prohibited merchandise
- Forthcoming UK regulations on synthetic media under the Online Safety Act are being implemented, but critical enforcement powers over fraudulent advertising won’t take effect until 2027 at the earliest
The United Kingdom confronts an unprecedented wave of artificial intelligence-generated synthetic media fraud, and existing regulatory mechanisms appear ill-equipped to address the challenge. Mounting evidence demonstrates that fraudulent operations utilizing deepfake technology have reached industrial scale, with the internet gambling sector bearing particularly severe consequences.
According to data from the Home Office’s Accelerated Capability Environment, approximately eight million AI-generated fraudulent media items circulated throughout the UK during the past year. This represents nearly a 300% increase compared to figures documented in 2023.
In a 2026 assessment, the AI Incident Database characterized this category of fraud as having reached “industrial” scale. Fred Heiding, a Harvard University researcher focused on AI-enabled fraud schemes, issued a stark warning: “the worst is yet to come.”
The internet wagering industry has experienced disproportionate damage. Research from Gambling IQ, an industry intelligence organization, documented a 73% escalation in sector-specific fraud occurring between 2022 and 2024.
Fraudsters deploy deepfake technology to circumvent Know Your Customer verification protocols and execute widespread bonus exploitation across betting platforms. The technology enables criminals to generate convincing voice replicas and video impersonations of real individuals.
Police Forces Lack Adequate Resources
The Alan Turing Institute published findings in 2025 concluding that UK law enforcement remains “inadequately equipped to deal with AI-fuelled fraud.” Joe Burton, Professor of Security and Protection Science at Lancaster University, authored the assessment.
Burton delivered an unambiguous evaluation. “AI-enabled crime is already causing serious personal and social harm and big financial losses,” he stated.
He advocated for providing police forces with enhanced capabilities to dismantle criminal operations. Without such improvements, he cautioned, criminal exploitation of artificial intelligence technologies will proliferate at an accelerating rate.
The UK Gambling Commission currently places primary responsibility for preventing criminal activity on platform operators. Companies must implement their own fraud detection policies and protective measures.
However, as AI capabilities advance at unprecedented speed, platforms operating independently cannot adequately address the threat. Numerous AI-facilitated scams targeting the gambling sector originate entirely outside regulated platform environments.
Social networking services play a pivotal role in distributing these fraudulent schemes. Platform recommendation algorithms can amplify deceptive content because they optimize for user engagement rather than information accuracy.
Reuters disclosed in November 2025 that Meta’s internal documentation indicated roughly 10% of its 2024 revenue stream — approximately $16 billion — derived from advertisements connected to fraudulent operations and prohibited merchandise.
Just last week, Reuters found that Meta had failed to remove scam content from its UK operations in more than 1,000 instances during a single seven-day period. The fraudulent material included unlicensed online gambling operations employing deepfake technology to recruit participants.
Legislative Action Proceeds at Sluggish Pace
Ofcom has initiated development of new regulatory standards addressing synthetic media under both the 2023 Online Safety Act and the 2025 Data Use and Access Act. Yet the regulator’s published guidance reveals significant limitations within the existing framework.
Certain AI conversational systems escape regulatory jurisdiction entirely, because they function as standalone systems and do not qualify as search services or as platforms facilitating user-to-user interaction.
Though the Online Safety Act commenced enforcement in March 2025, authority to address paid fraudulent advertising has been postponed until at least 2027. This leaves enforcement reliant upon voluntary compliance from corporations like Meta.
Neither the Financial Conduct Authority nor Ofcom currently possesses direct jurisdiction to intervene against these advertisements. Content that a service generates itself rather than sourcing externally, including synthetic images and videos, frequently evades oversight unless it meets particular criteria.
The financial and social burden of deepfake fraud continues to fall on platforms and end users, even though the systems generating these threats operate beyond their control.
