Government’s new IT rules make AI content labelling mandatory; give Facebook, Instagram and other platforms 3 hours for takedowns

New government rules now mandate clear labelling for all AI-generated content, including deepfakes and synthetic audio, starting February 20. Social media platforms must verify user declarations on AI content and embed traceable metadata. Takedown timelines have been drastically reduced to as little as three hours for certain violations, with platforms also required to warn users about penalties.

The government has brought AI-generated content, from deepfake videos and synthetic audio to altered visuals, under a formal regulatory framework for the first time by amending India’s IT intermediary rules. Notified via gazette notification G.S.R. 120(E) and signed by Joint Secretary Ajit Kumar, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, take effect from February 20.

The core ask is simple. Platforms must label all synthetically generated information (SGI) prominently enough for users to spot it instantly. They must also embed persistent metadata and unique identifiers so the content can be traced back to its origin. And once those labels are in place, they can’t be modified, suppressed or stripped away.
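The rules describe the obligation, not a technical format, so what "persistent metadata and unique identifiers" look like in practice is up to platforms. As a purely illustrative sketch, here is how an AI-content label and a traceable identifier could be attached to a generated PNG using Python's Pillow library; the field names (ai-label, provenance-id) are hypothetical, not taken from the rules.

```python
# Hypothetical illustration only: attaching an SGI label and a traceable
# identifier to a generated PNG via Pillow's text-chunk metadata.
import uuid

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_synthetic(src_path: str, dst_path: str) -> str:
    """Embed an AI-content label and a unique provenance ID, then save."""
    provenance_id = str(uuid.uuid4())  # stand-in for a platform-issued ID
    meta = PngInfo()
    meta.add_text("ai-label", "synthetically-generated")  # hypothetical key
    meta.add_text("provenance-id", provenance_id)         # hypothetical key
    with Image.open(src_path) as img:
        img.save(dst_path, pnginfo=meta)
    return provenance_id

# Reading the label back:
# Image.open("labelled.png").text  ->  {'ai-label': ..., 'provenance-id': ...}
```

Note that plain text chunks like these can be dropped by simply re-encoding the file, which is exactly what the rules prohibit; meeting the "can't be stripped" bar would in practice point towards cryptographically signed provenance of the kind industry standards such as C2PA provide.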

What the government defines as AI-generated content

Indian law now has a formal definition of “synthetically generated information” for the first time. It covers any audio, visual or audio-visual content created or altered using a computer resource that looks real and depicts people or events in a way that could be passed off as genuine.

But not everything with a filter qualifies. Routine editing, such as colour correction, noise reduction, compression and translation, is exempt, as long as it doesn’t distort the original meaning. Research papers, training materials, PDFs, presentations and hypothetical drafts using illustrative content also get a pass.

Instagram, YouTube, Facebook face tighter compliance bar

The heavier lifting falls on big social media platforms, Instagram, YouTube and Facebook among them. Under the new Rule 4(1A), before a user hits upload, the platform must ask: is this content AI-generated? But it doesn’t end at self-declaration. Platforms must also deploy automated tools to cross-verify, checking the content’s format, source and nature before it goes live.

If flagged as synthetic, the content needs a visible disclosure tag. If a platform knowingly lets violating content slide, it is deemed to have failed its due diligence.

The government also quietly shelved an earlier proposal from its October 2025 draft. That version wanted watermarks covering at least 10% of screen space on AI visuals. IAMAI and its members, Google, Meta and Amazon among them, pushed back, calling it too rigid and hard to implement across formats. The final rules keep the labelling mandate but ditch the fixed-size watermark.
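Again, the rules specify the outcome rather than the mechanism. A minimal sketch of what a Rule 4(1A)-style upload check could look like follows, combining the user’s self-declaration with an automated verification pass; every name and threshold here (classifier_score, review_upload, 0.8) is invented for illustration.

```python
# Minimal, hypothetical sketch of a Rule 4(1A)-style upload check.
# All names and thresholds are illustrative, not from the rules.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    media_type: str          # e.g. "video", "audio", "image"
    user_declared_ai: bool   # the self-declaration the platform must collect

def classifier_score(upload: Upload) -> float:
    """Stand-in for a platform's automated SGI detector, which would
    inspect the content's format, source and nature. Stubbed to 0.0 here."""
    return 0.0

def review_upload(upload: Upload, threshold: float = 0.8) -> dict:
    # Treat content as synthetic if the user declares it, or if the
    # automated cross-check is confident enough on its own.
    is_synthetic = upload.user_declared_ai or classifier_score(upload) >= threshold
    return {
        "content_id": upload.content_id,
        # If synthetic, a visible disclosure label must be attached before
        # publication; knowingly skipping it breaches due diligence.
        "attach_disclosure_label": is_synthetic,
    }

print(review_upload(Upload("abc123", "video", user_declared_ai=True)))
# -> {'content_id': 'abc123', 'attach_disclosure_label': True}
```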

Three hours to act, not 36

Response windows have been slashed. Platforms now get three hours to act on certain lawful orders, down from 36. The 15-day window is now seven days. The 24-hour deadline has been halved to 12.

The rules also draw a direct line between synthetic content and criminal law. SGI involving child sexual abuse material, obscene content, false electronic records, explosives-related material, or deepfakes that misrepresent a real person’s identity or voice now falls under the Bharatiya Nyaya Sanhita, the POCSO Act and the Explosive Substances Act.

Platforms must also warn users at least once every three months, in English or any Eighth Schedule language, about penalties for misusing AI content. On the flip side, the government has assured intermediaries that acting against synthetic content under these rules won’t strip them of safe harbour protection under Section 79 of the IT Act.


