The Indian Ministry of Electronics and Information Technology (MeitY) issued a revised advisory on March 15, 2024, significantly altering the regulatory landscape for Artificial Intelligence (AI) platforms in the country. The update supersedes the previous advisory issued on March 1, 2024.


The advancement of AI technologies, particularly Large Language Models (LLMs), has raised concerns globally about their potential misuse and societal impact. Governments, including India, have been actively engaged in formulating regulations to address these concerns while promoting responsible AI development and deployment.


1. Removal of permission requirement: The revised advisory eliminates the earlier requirement that intermediaries and platforms seek explicit government approval before deploying “under-tested” or “unreliable” AI models and tools in India. This change signals a shift towards a more flexible regulatory approach, aiming to foster innovation while preserving accountability.

2. Emphasis on content labeling: The updated advisory underscores the importance of labeling AI-generated content, particularly content susceptible to deepfake manipulation. Platforms are now urged to implement robust labeling mechanisms that distinguish AI-generated content from authentic sources, thereby enhancing transparency and mitigating the spread of misinformation.
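As a minimal illustration of such a labeling mechanism (the label wording, function name, and `model_name` parameter below are assumptions, not anything prescribed by the advisory), a platform might prepend a plain-language disclosure to AI-generated text before it is published:

```python
def label_ai_text(text: str, model_name: str) -> str:
    """Prepend a human-readable disclosure so readers can tell
    AI-generated text apart from human-authored text.

    The label format is illustrative only; the advisory does not
    prescribe specific wording.
    """
    return f"[AI-generated content | model: {model_name}]\n{text}"


labeled = label_ai_text("A short summary of today's headlines.", "example-model-v1")
print(labeled)
```

A real deployment would of course extend this idea to other media types (e.g. visible watermarks on images or audible disclosures in audio), but the principle is the same: the disclosure travels with the content.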

3. Compliance with existing IT Rules: While removing the requirement for government approval, the revised advisory maintains obligations for intermediaries to comply with existing Information Technology (IT) Rules. Platforms are instructed to exercise due diligence in implementing measures to tackle unlawful content, including AI-generated content susceptible to manipulation.

4. Identification of deepfakes and misinformation: MeitY emphasizes the need for platforms to develop mechanisms for identifying deepfakes and misinformation, incorporating unique metadata or identifiers across all forms of content, whether audio, visual, text, or audio-visual. This directive aims to enhance the ability to detect and prevent the dissemination of deceptive content.
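One straightforward way to realize "unique metadata or identifiers" is to attach a provenance record keyed by a cryptographic hash of the content. The `make_provenance_record` helper and its field names below are illustrative assumptions, not a format mandated by MeitY:

```python
import hashlib
import json
from datetime import datetime, timezone


def make_provenance_record(content: bytes, model_name: str, media_type: str) -> dict:
    """Build a metadata record that uniquely identifies a piece of
    AI-generated content via its SHA-256 hash.

    Field names are illustrative; the advisory does not define a schema.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),  # unique identifier
        "media_type": media_type,   # e.g. "text", "audio", "image", "audio-visual"
        "generator": model_name,    # which model produced the content
        "ai_generated": True,
        "labeled_at": datetime.now(timezone.utc).isoformat(),
    }


record = make_provenance_record(b"sample AI-generated text", "example-model-v1", "text")
print(json.dumps(record, indent=2))
```

Because the identifier is derived from the content itself, the same record can later be used to check whether the content has been altered since it was labeled.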


With the revised advisory in place, AI platforms operating in India should adapt their strategies to align with the updated regulatory framework. Key considerations include:
– Reviewing internal policies and procedures to ensure compliance with the revised advisory and existing IT Rules.
– Implementing robust content labeling mechanisms to differentiate AI-generated content from authentic sources and combat misinformation.
– Developing advanced detection technologies to identify and mitigate the spread of deepfakes and other forms of deceptive content.
– Collaborating with regulatory authorities and industry stakeholders to address emerging challenges and promote responsible AI deployment in India.
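To make the detection point concrete: if a platform records a hash-based identifier when content is labeled, downstream services can later recompute the hash to check whether the content has been altered. The sketch below is a hypothetical illustration of that idea, not a mechanism prescribed by the advisory:

```python
import hashlib


def verify_content_identifier(content: bytes, recorded_sha256: str) -> bool:
    """Recompute the content's SHA-256 hash and compare it against the
    identifier recorded at labeling time; a mismatch indicates the
    content was altered after it was labeled."""
    return hashlib.sha256(content).hexdigest() == recorded_sha256


original = b"AI-generated clip bytes"
recorded = hashlib.sha256(original).hexdigest()

print(verify_content_identifier(original, recorded))         # True: unchanged
print(verify_content_identifier(b"tampered bytes", recorded))  # False: altered
```

A hash check like this only detects modification of labeled content; detecting unlabeled deepfakes requires separate classification or forensic techniques, which is why the advisory pairs identifiers with broader detection mechanisms.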