Technology companies in Britain will soon have just 48 hours to remove abusive or intimate images shared without consent — or risk massive fines and even service bans.
Under the new 48-hour rule, platforms that fail to act could face penalties of up to 10% of their qualifying worldwide revenue. In serious cases, their services could also be blocked in the UK.
The move is part of the government’s wider effort to strengthen online safety, especially for women and girls.
Why the Law Is Being Tightened
Sharing non-consensual intimate images is already illegal in the UK. However, many victims say they struggle to get platforms to remove such content permanently once it spreads.
With AI tools now able to create realistic explicit deepfakes in seconds, the problem has grown faster than existing safeguards.
Prime Minister Keir Starmer said the online world has become a major battleground in the fight against violence against women and girls.
The government believes that putting direct responsibility on tech firms — not just individuals — is necessary to stop the harm from spreading.
What Platforms Will Be Required to Do
The proposed amendment to the Crime and Policing Bill will create a legal duty for major platforms to:
- Remove illegal intimate images within 48 hours of being reported
- Ensure victims only need to report the content once
- Remove identical copies across services
- Prevent re-uploads of the same material
Fines would be based on a company’s Qualifying Worldwide Revenue, a measure used by Ofcom that includes income generated globally from regulated services.
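As a rough illustration of how the penalty ceiling works (a sketch with invented figures, not official guidance), the maximum fine is simply 10% of that revenue measure:

```python
# Hypothetical figures for illustration only: a platform with
# £10 billion of Qualifying Worldwide Revenue would face a
# maximum penalty of £1 billion under the 10% cap.
qualifying_worldwide_revenue = 10_000_000_000  # £, invented for this example
max_fine = 0.10 * qualifying_worldwide_revenue
print(f"Maximum fine: £{max_fine:,.0f}")  # -> Maximum fine: £1,000,000,000
```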
Ofcom’s Role and Hash-Matching Technology
Britain’s media regulator, Ofcom, is moving quickly to enforce stronger protections.
It is fast-tracking a decision on new hash-matching rules, which would require platforms to use technology that detects and blocks illegal intimate images at the point of upload.
This system works by creating a digital fingerprint (hash) of an image. If someone tries to upload the same content again, it can be automatically blocked.
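To make the mechanism concrete, here is a minimal sketch in Python of how hash-based blocking works. It uses an exact cryptographic hash (SHA-256) and a hypothetical in-memory blocklist for illustration; production systems rely on perceptual hashing (PhotoDNA-style fingerprints) and shared hash databases so that resized or re-encoded copies are still caught.

```python
import hashlib

# Hypothetical in-memory blocklist; a real deployment would query a
# shared, persistent database of fingerprints of known illegal images.
blocked_hashes: set[str] = set()

def fingerprint(image_bytes: bytes) -> str:
    """Return a digital fingerprint (hash) of the image's raw bytes.

    SHA-256 only catches byte-identical re-uploads; perceptual hashes
    are used in practice so that slightly altered copies still match.
    """
    return hashlib.sha256(image_bytes).hexdigest()

def register_illegal_image(image_bytes: bytes) -> None:
    """Add a reported image's fingerprint to the blocklist."""
    blocked_hashes.add(fingerprint(image_bytes))

def allow_upload(image_bytes: bytes) -> bool:
    """Reject any upload whose fingerprint matches a known illegal image."""
    return fingerprint(image_bytes) not in blocked_hashes

# Once an image is reported and its hash registered,
# any identical re-upload is automatically blocked.
reported = b"...raw bytes of a reported image..."
register_illegal_image(reported)
assert not allow_upload(reported)          # exact copy is blocked
assert allow_upload(b"some other image")   # unrelated content passes
```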
A final decision is expected in May, with the related Illegal Harms Code measures likely to take effect in summer 2026.
Further online safety decisions — including how companies should respond to sudden spikes in harmful content — are expected in autumn.
Wider Online Safety Debate
The crackdown comes amid a broader debate in Britain about protecting young users online.
Ministers are reviewing whether to restrict social media access for under-16s, similar to measures introduced in Australia. There are also proposals to make livestreaming safer for children by limiting harmful interactions and preventing abuse.
Campaigner Elena Michael from #NotYourPorn welcomed the announcement but warned that enforcement will be key.
She said that focusing only on the original perpetrator is not enough, as harmful content can be shared and reshared many times across different platforms.
Her point highlights a shift in policy — from targeting individuals alone to holding tech platforms accountable for how content spreads.
What This Means Going Forward
If passed, the new rules will mark one of the strongest regulatory steps taken by the UK against the online sharing of abusive images.
Technology firms will face strict deadlines, heavy financial penalties, and closer regulatory scrutiny.
For victims, the government says the aim is simple: faster removal, fewer re-uploads, and stronger digital protections in an age where private images can go viral within minutes.