• Source: JND

AI has been at the center of many controversies lately, and over the past few months it has increasingly become a tool for violating privacy and security. This comes against the backdrop of the internet being flooded with explicit images and videos of women. Very recently, Elon Musk-owned X (formerly Twitter) came under fire because explicit deepfake photos of women were being shared on the platform without their consent, raising an obvious question: where does individual privacy stand in this ever-growing age of AI? AI is integrating itself into our daily lives and is now easily accessible, which makes it even riskier: the mass dissemination of illicit pictures and videos is easier than ever before. And it does not stop there. AI-generated material is spreading across other kinds of content as well, flooding the internet and profoundly reshaping content creation online.


Experts are of the opinion that current recommendation algorithms favor this hyperrealistic AI content because it drives engagement. But what does that mean for real content? Since platforms such as Meta's Instagram and Facebook are built around engagement, the chances of such content going viral are even higher. Here no one is spared; even political leaders like Italy's PM Giorgia Meloni and Pakistan's Azma Bukhari have fallen prey to these insidious uses of AI. This has raised fresh questions about the harassment of women and children in the digital age.

Keeping this in mind, the central government has taken steps to curb such activities and has notified the IT Amendment Rules 2026, which come into effect today, February 20, 2026. The rules include stricter provisions on matters such as the labeling of AI content and the removal of deepfakes and fake news within three hours.

Provisions Under Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026

The new IT law stands out because it directly targets the growing misuse of AI-generated content, especially deepfakes, with clear and time-bound rules.

One of the biggest changes is mandatory disclosure. Any AI-generated video must carry a visible watermark, and AI-generated audio clips must include a disclaimer at the very start. Platforms are also required to maintain detailed metadata, including where a file originated, so that the source of manipulated content can be traced more easily.

For the first time, Indian law officially recognizes “synthetic content,” defining it as any audio, video, or image that has been created or altered using algorithms in a way that makes it appear real. This gives authorities a clear legal framework to act against deepfakes.

The law also places responsibility on platforms. They must use automated AI-driven systems to block illegal uploads, including child sexual abuse material, revenge porn, and fabricated electronic records targeting celebrities. The idea is to stop harmful content before it spreads widely.

Users are not exempt either. Anyone uploading content must declare whether it is AI-generated. Platforms will then use their own verification tools to check authenticity.

Most importantly, the law emphasizes speed. Content declared illegal by a court or the government must be taken down within three hours. Deepfake nudity and other sensitive material must be removed within two hours, and user complaints must be resolved within seven days.


These rules are a step in the right direction, aiming to curb the rising misuse of AI-generated content and the threats it poses to privacy and security. Going forward, further amendments and continued flexibility may be required to shape a safe digital landscape for every individual.
