India is walking back a recent AI advisory after criticism from many local and global entrepreneurs and investors.
The Ministry of Electronics and Information Technology shared an updated AI advisory with industry stakeholders on Friday, which no longer required them to obtain government approval before releasing or deploying an AI model to users in the South Asian market.
Under the revised guidelines, companies are instead advised to label under-tested and unreliable AI models to inform users of their potential fallibility or unreliability.
The revision follows sharp criticism of India’s IT ministry earlier this month from several high-profile figures. Martin Casado, a partner at venture firm Andreessen Horowitz, had called India’s move a “travesty”.
The March 1 advisory also marked a reversal of India’s previous approach to AI regulation. Less than a year ago, the ministry had refused to regulate the development of artificial intelligence, identifying the field as vital to India’s strategic interests.
The new advisory, like the original one earlier this month, has not been published online, but TechCrunch has reviewed a copy of it.
The ministry said earlier this month that while the advisory was not legally binding, it signaled the “future of regulation” and that the government was demanding compliance.
The advisory highlights that AI models should not be used to share illegal content under Indian law and should not allow bias, discrimination or threats to the integrity of the electoral process. Intermediaries are also advised to use “consent pop-ups” or similar mechanisms to explicitly inform users about the unreliability of AI-generated results.
The ministry maintained its emphasis on ensuring that counterfeits and disinformation are easily identifiable, advising intermediaries to label or embed content with unique metadata or identifiers. The revised advisory no longer requires companies to devise a technique to identify the “creator” of any particular message.