
Artificial intelligence: TRAI urges collaborative efforts for responsible regulation

India's new AI mission targets tier-II/III cities, looking to equip farmers, doctors, and teachers with future skills.

As artificial intelligence continues to evolve at breakneck speed, government agencies are struggling to find ways to regulate it. India's telecommunications regulator has asked the government to collaborate with international agencies and other countries to form a global agency for AI regulation. In a consultation paper titled 'Leveraging Artificial Intelligence and Big Data in Telecommunication Sector', the Telecom Regulatory Authority of India (TRAI) has recommended the formation of a statutory authority to regulate AI.

TRAI has recommended that proactive steps be taken to ensure the responsible use of artificial intelligence, and that the regulatory framework be drawn up after taking stock of potential risks. TRAI's recommendation comes hot on the heels of concerns raised by Sam Altman, the co-founder and chief executive of OpenAI, the company behind ChatGPT. Earlier, Altman had called for an international regulatory body for AI, akin to the one governing nuclear power.


In the wake of rising concerns about AI, the IT ministry is also working to include provisions for regulating AI, viewed through the lens of user harm, in its draft Digital India Bill.

Need for regulating AI

Artificial intelligence has become the talking point for technologists, policymakers, and activists as it becomes clear that AI can leave a permanent mark on the future of industries such as entertainment and journalism, and on jobs across sectors.

The gravity of the situation becomes clearer when a creator grows afraid of his own creation. Sam Altman testified before the US Senate that specific areas of concern need foremost attention, chief among them the possibility that AI could go badly wrong. He also warned that AI could be deployed to spread targeted misinformation, especially as major democracies around the globe, including India and the United States, head into elections soon.

The risks of AI have caught the attention not just of the ChatGPT maker's chief but of some 350-odd people, including Altman; Mira Murati, the chief technology officer of OpenAI; Kevin Scott, Microsoft's chief technology officer; top executives from Google's AI units; leaders from Skype and Quora; and a former United Nations High Representative for Disarmament Affairs. The statement they have collectively put out says that mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

What has TRAI recommended

It is a no-brainer that, as the world's largest democracy and most populous country, India should safeguard its citizens from the harms of emerging technology. To that end, TRAI has recommended the adoption of a regulatory framework that is applicable across sectors. This also means that specific AI use cases which can directly impact humans would be regulated through legally binding obligations. TRAI has also called for the setting up of an independent statutory body, the Artificial Intelligence and Data Authority of India (AIDAI), to oversee responsible AI and the regulation of use cases in India.

The proposed nodal agency will not only oversee all issues related to data digitisation, data sharing, and data monetisation in the country, but also define the principles of responsible AI and their applicability to AI use cases. TRAI also said AIDAI must ensure that the principles of responsible AI apply at each phase of the AI lifecycle, including design, development, validation, deployment, monitoring, and refinement.

Furthermore, to prevent the concentration of power, a multi-stakeholder body (MSB) needs to be created to act as an advisory agency to the AIDAI. The MSB will have members from the Department of Telecommunications, the Ministry of Information and Broadcasting, the Ministry of Electronics and Information Technology, the Department for Promotion of Industry and Internal Trade (DPIIT), the Department of Science and Technology, and the Ministry of Home Affairs, along with legal and cybersecurity experts.

Challenges to AI regulation

AI allows machines to behave in ways that would be called intelligent if a human behaved the same way; it is, in essence, human-like reasoning displayed by computer systems. The central problem with AI regulation is plain: how does one regulate an entity that is ever evolving and ever learning?

Another issue with regulating AI is the larger question of why companies that are making massive profits from it would want to temper their machines. Nvidia, a semiconductor company at the heart of the artificial intelligence revolution, recently became a trillion-dollar company. Nvidia provides the chips and software for the computing-intensive demands of generative AI.

Take the case of the Ukraine war. The Geneva Conventions prohibit the targeting of civilians and non-combatants, but what happens if a killer drone starts picking its own targets? US senators have also pointed out that deciding what to define, and how to define the technology, is itself an important policy dilemma.

Balancing technological development with human interests has been a concern in every era of significant scientific discovery. This time is no different.
