– India received criticism for its AI advisory
– The Ministry of Electronics and IT no longer requires firms to seek government approval before deploying AI models
– Firms are advised to label under-tested or unreliable AI models and to ensure their models do not share unlawful content or permit bias
India recently received criticism for its AI advisory, prompting the Ministry of Electronics and IT to update the guidelines. The new advisory no longer requires firms to seek government approval before launching or deploying AI models in the South Asian market. Instead, companies are advised to label under-tested and unreliable AI models to inform users of potential issues.
The revision follows backlash from high-profile individuals, including Martin Casado of venture firm Andreessen Horowitz, who called India’s initial move “a travesty.” The episode marks a significant shift from India’s earlier hands-off approach to AI regulation, under which the government had emphasized the sector’s importance to the country’s strategic interests.
The new advisory, like the original, has not been published online, but TechCrunch has reviewed a copy. It stresses that AI models must not be used to share unlawful content or to permit bias, discrimination, or threats to the electoral process. Intermediaries are advised to use mechanisms such as consent popups to inform users about the potential unreliability of AI-generated output.
The ministry also stresses the need to make deepfakes and misinformation easy to identify, urging intermediaries to label content or embed it with unique metadata. The advisory no longer requires firms to identify the “originator” of specific messages. Overall, the Indian government is focused on promoting responsible AI use while also supporting innovation in the AI sector.