The Union government has clarified that its AI advisory, issued ahead of the general elections on the use of generative artificial intelligence services, was aimed at major platforms, not startups, and does not constitute a legal framework. The clarification follows criticism of the government for requiring AI platforms, particularly those still in the testing phase, to obtain permission prior to deployment, a move perceived as an attempt to restrict AI tools.
Minister of state for electronics and IT Rajeev Chandrasekhar further explained that the requirement for permissions from MeitY would apply only to large platforms, excluding startups. He emphasised that the AI advisory was intended to protect platforms from potential legal action by consumers. Although the clarification has eased concerns, questions about regulatory implications remain.
AI Advisory explained
With the Lok Sabha elections approaching, the IT ministry last Friday advised generative AI companies such as Google and OpenAI, along with other platforms using the technology, to ensure that their services do not produce content that violates Indian law or threatens the integrity of the electoral process. The government requires these companies to clearly label AI-generated content, including articles and videos that could be mistaken for genuine material. Additionally, the advisory mandates that companies inform the government and obtain explicit permission before publicly releasing AI tools that are still under development.
The advisory’s emphasis on untested AI platforms has also sparked debate. Critics argue that defining and identifying such platforms is complex and subjective: it remains unclear how much testing an AI model must undergo before it counts as tested. Concerns have also been raised that the requirement could stifle innovation, since startups and smaller companies may lack the resources for extensive testing before deployment.
This advisory was issued shortly after an incident in which Google’s Gemini platform, asked whether Prime Minister Narendra Modi is a fascist, replied affirmatively, citing his policies and the ruling party’s actions. The response prompted significant backlash and led to the advisory. The AI community, however, has warned that the directive could hinder innovation and harm a burgeoning industry, calling it detrimental to both innovation and the public interest.
Legal challenges to the government’s advisory are mounting, with critics pointing out the absence of a clear legal basis for regulating generative AI companies under current technology laws.
The government has maintained that the advisory is not intended to be a rigid legal framework but rather a due diligence measure for online intermediaries under the Information Technology Rules, 2021. However, critics point out that these rules do not explicitly address large language models, raising questions about the legal basis for applying them to generative AI companies. This lack of clarity has led to calls for a more transparent and legally sound approach to regulating AI in India.
Impact of the clarification
The clarification suggests the advisory was more about political messaging than legal consequence, aimed at shielding users from potential harm. Even so, the clarity and legal foundation of the government’s stance remain in dispute. Given India’s scale, regulatory oversight is a daunting task, yet the seriousness of potential violations should not be underestimated, which makes the apprehension of the advisory’s targets understandable.
The clarification also carves out exceptions: the advisory will not apply to platforms in the healthcare or agriculture sectors and, for now, targets social media platforms.
Policymakers face a delicate balance. Mandating government approval for AI platforms with potentially hazardous data or algorithms could result in arbitrary regulatory decisions. However, leaving the industry completely unregulated is not a viable solution either. The concern over AI-generated deepfakes and their impact on elections is substantial as major democracies, including India, conduct polls. The effects on election outcomes will become more apparent with time.
There is a need for clarity on defining large platforms versus startups, criteria for tested models, responsible parties for model approval, and applicability of such a regulatory framework to rapidly advancing technology.
While generative AI presents significant risks, as OpenAI chief executive Sam Altman has highlighted, the Indian government’s abrupt and broad directives are unlikely to manage the challenges posed by AI effectively. Effective policymaking will require balancing regulatory measures with stakeholder interests.