India has formally stepped into regulating AI-generated content, but the real shift isn’t just about labelling deepfakes. It’s about speed, pressure, and who gets to decide what stays online.
Under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, notified through G.S.R. 120(E), platforms are now racing against the clock. For certain lawful government orders, intermediaries have just three hours to act, down from the earlier 36 hours.
The government’s stated goal is clear: curb the spread of deceptive AI content, such as deepfake videos, synthetic audio, and altered visuals that can convincingly pass as real. For the first time, the law formally defines this category as synthetically generated information (SGI).
Any audio, visual, or audio-visual content that is created or altered using computer resources and realistically depicts people or events now falls under scrutiny.
Routine edits—colour correction, noise reduction, compression, translation—are exempt, provided they don’t distort the original meaning. Research papers, presentations, training material, and hypothetical drafts using illustrative content are also excluded.
Platforms become gatekeepers, not just hosts
The heavier compliance burden falls on large social media platforms like Facebook, Instagram, and YouTube.
Before a user even uploads content, platforms must now ask them to declare whether it is AI-generated. But self-declaration alone isn’t enough: platforms are also required to deploy automated tools to cross-verify content based on its format, source, and nature.
If identified as synthetic, the content must carry a clear, prominent disclosure. These labels must be permanent—unalterable and non-removable.
Failure to act despite awareness could be treated as a lapse in due diligence, potentially exposing platforms to liability.
The hidden risk: over-correction
What worries digital rights experts is not the intent, but the incentive structure.
With response windows shrinking—from 15 days to 7, and from 24 hours to 12—platforms may increasingly choose the safest legal option: take content down first, ask questions later.
The fear is that legitimate satire, parody, commentary, and experimental AI use could get swept up in aggressive moderation, especially when platforms have just hours to decide what counts as “truth.”
Criminal law enters the picture
The rules also draw a direct line between synthetic content and criminal statutes. SGI linked to child sexual abuse material, obscenity, false electronic records, explosives-related content, or deepfakes impersonating real individuals now intersects with laws such as the Bharatiya Nyaya Sanhita, POCSO Act, and the Explosive Substances Act.
Platforms must also periodically warn users, at least once every three months, about penalties for misusing AI-generated content, in English or any language listed in the Eighth Schedule of the Constitution.
What was dropped—and why it matters
Notably, the government has dropped an earlier proposal that would have mandated watermarks covering at least 10% of AI-generated visuals. Industry bodies, including IAMAI and its members, had flagged the requirement as impractical.
The final rules retain labelling but avoid rigid watermark sizing—offering flexibility, but also leaving room for uneven enforcement.
The bigger picture
While the government has assured intermediaries that action taken under these rules won’t jeopardise safe harbour protection under Section 79 of the IT Act, the operational reality is more complex.
With tighter deadlines and higher liability, platforms may err on the side of silencing content—even when it’s legitimate.
In trying to control deepfakes, India’s new AI rules may end up reshaping how freely—and how cautiously—content is shared online.