IT Ministry Eliminates Pre-Approval Process for Launching New AI Models

Ministry of Electronics and Information Technology

Three key points covered in this article:

  • MeitY updates AI deployment guidelines for social media firms.
  • New advisory removes prior government approval for “under-tested” AI.
  • Emphasis on clear labeling, legal compliance, and user protection.

What is MeitY?

The Ministry of Electronics and Information Technology (MeitY) is the Indian government ministry responsible for formulating and implementing policies related to information technology and electronics. It was carved out of the Ministry of Communications and Information Technology as an independent ministry on July 19, 2016. Its primary role is to craft policies that drive the growth of the electronics sector and shape the country’s IT strategy. One of its notable initiatives is the “Northeast Heritage” web portal, a Government of India platform that offers information about Northeast India in a range of languages, including Assamese, Meitei (Manipuri), Bodo, Khasi, and Mizo, alongside Hindi and English, reflecting the ministry’s emphasis on inclusivity and accessibility through language diversity.

Government Advisory Update

The Ministry of Electronics and Information Technology (MeitY) has updated its recent advisory to leading social media companies operating in the country on the use of artificial intelligence. The most notable change is the removal of the requirement for intermediaries and platforms to obtain government approval before launching AI models deemed “under-tested” or “unreliable.”

According to the IT ministry, the new advisory, issued on March 15, replaces the previous one issued on March 1. Notably, the earlier stipulation requiring platforms to seek explicit permission from the Centre before deploying AI models labeled as “under-testing/unreliable” has been dropped from the updated advisory.

According to a Moneycontrol report, which obtained the updated advisory from the IT ministry, the document states, “This advisory supersedes the previous one dated March 1, 2024.”

Changes in Language and Compliance

In the latest version, intermediaries are no longer required to submit a report on actions taken and their status. However, immediate compliance is still expected. While the obligations outlined in the updated advisory remain the same, the language has been softened.

The advisory sent to digital intermediaries on March 1 had instructed platforms to obtain prior approval before introducing any AI product in India, to identify AI models still under trial, and to ensure that no illegal content was hosted on their platforms. It also warned of penalties for non-compliance.

Clear Labeling for Unreliable AI Products

The updated advisory states that AI products which have not been thoroughly tested, or which may be unreliable, should carry a clear label informing users that the results they produce may not be fully dependable.

According to Moneycontrol, the advisory says that under-tested or potentially unreliable AI models, software, or algorithms should be made available in India only if they are properly labelled to warn users that the output they generate may not be completely accurate.

Focus on Legal Compliance and Integrity

The latest advisory emphasizes that AI models must not generate or share content that violates Indian law. Platforms must ensure that their AI algorithms remain fair and do not pose a threat to the integrity of elections. They are also expected to use consent pop-ups to caution users about potentially unreliable information.

Spotting Misinformation and Deepfakes

According to reports, the updated advisory also focuses on identifying deepfakes and misinformation. Platforms are directed to label content or embed it with distinctive markers, whether it is audio, visual, textual, or a combination of these, so that potential misinformation or deepfakes can be identified more easily, even though the term “deepfake” is not explicitly defined.

MeitY also mandates the use of labels, embedded metadata, or unique identifiers to indicate that content has been artificially generated or modified using the intermediary’s computer resource. Furthermore, if a user makes any changes to such content, the metadata should make it possible to identify that user or their computer resource.
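To make the idea concrete, here is a minimal sketch, purely for illustration and not drawn from the advisory itself, of how a platform could attach such provenance metadata to generated content. The function name, field names, and identifiers below are hypothetical assumptions.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical helper: the advisory does not prescribe any schema or API;
# the field names and structure here are illustrative assumptions only.
def tag_ai_content(content: bytes, generator: str, modified_by: str | None = None) -> dict:
    """Build provenance metadata for a piece of AI-generated content."""
    metadata = {
        "ai_generated": True,                                   # marks synthetic origin
        "generator": generator,                                 # computer resource that created the content
        "created_at": datetime.now(timezone.utc).isoformat(),   # creation timestamp
        "content_sha256": hashlib.sha256(content).hexdigest(),  # unique marker tying the record to the content
    }
    if modified_by is not None:
        metadata["modified_by"] = modified_by                   # records who altered the content
    return metadata

# Example: labelling a generated image before serving it, then recording an edit.
meta = tag_ai_content(b"<image bytes>", generator="example-genai-model")
meta = tag_ai_content(b"<edited image bytes>", generator="example-genai-model",
                      modified_by="user:12345")
print(json.dumps(meta, indent=2))
```

As reported, the advisory describes the desired outcome, identifiable provenance, rather than a specific technical mechanism, so the actual labelling scheme is left to each platform.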

Recipient Platforms and Stakeholder Feedback

The advisory was sent to eight major platforms: Facebook, Instagram, WhatsApp, Google/YouTube (specifically for Gemini), Twitter, Snap, Microsoft/LinkedIn (for OpenAI), and ShareChat. Notably absent from the list of recipients were Adobe, Sarvam AI, and Ola’s Krutrim AI.

Feedback on the advisory issued on March 1 was less than favorable among AI startups, with many founders voicing concerns about its potential impact on generative AI companies.

Aravind Srinivas, CEO of Perplexity.AI, didn’t mince words, calling India’s decision a “bad move.”

Likewise, Pratik Desai, founder of Kissan AI, a platform providing AI-driven agricultural assistance, found the move to be demoralizing.

In response to the concerns raised about the advisory, Minister of State for Electronics and Information Technology Rajeev Chandrasekhar emphasized the importance of responsibility when deploying experimental AI systems online. He noted that platforms must adhere to their existing legal obligations under IT and criminal law, particularly so that they do not cause harm or facilitate unlawful content. Chandrasekhar suggested that clear labeling and explicit user consent are effective protective measures, and he recommended that major platforms seek government permission before launching potentially flawed systems.

The minister affirmed the country’s enthusiasm for AI, recognizing its potential to enhance India’s digital landscape and foster innovation.

The Ministry of Electronics and Information Technology (MeitY) has updated its guidelines for social media companies on deploying AI in India, removing the need for government approval of “under-tested” AI models. The revision, issued on March 15, eases compliance by dropping the prior-approval requirement and the action-taken report. Key directives include clear labeling of AI products that lack thorough testing, measures to prevent the dissemination of illegal content, and safeguards against misinformation, including deepfakes. Minister Chandrasekhar has stressed responsible AI deployment, advocating clear labels and explicit user consent. Despite a negative reception among some AI startups, the government recognizes AI’s potential to advance India’s digital landscape, promising innovation with legal compliance and user protection at the forefront.

SA Team
