AI regulation is needed, but fear of the unknown should not block human progress

The calls for AI regulation and concerns about its threat stem from fears about the impact on jobs, data privacy and individual rights.

AI regulation: As the evolution of artificial intelligence gathers momentum, governments across the world are struggling to regulate the applications of this revolutionary technology. None of the existing legal frameworks is prepared for machines and programs that can mimic human thought. Regulating AI is among the top priorities of policymakers, especially after the emergence of Microsoft-backed OpenAI's ChatGPT and Google's Bard.

Leading jurists and technology experts favour tight regulation of the development of artificial intelligence and its applications. Some say further work in the field must be paused until an authoritative forum can come up with rules and safeguards. However, no one is sure whether halting AI progress would amount to blocking the progress of the human race.


Artificial intelligence is widely used by industry and academia to solve problems in fields such as healthcare, finance, education, entertainment, cybersecurity, and marketing. AI is also used in content creation by leading media and research organisations such as Reuters and Bloomberg. Other uses of AI in the content space include translation, audio description, captioning, idea generation, and copywriting.

An open letter by the Future of Life Institute, signed by thousands of scientists, technologists, business leaders, academics, and others, including OpenAI co-founder Elon Musk, has called for a six-month pause in the further development of large language models. The reigning sentiment is one that reverberates through science fiction novels and films alike: that uncontrolled development of such models may someday create human-competitive intelligence and hence pose a grave risk to humanity.

The internet is also filled with stories about how AI is advancing human priorities such as the switch to clean energy. A recent article in the Journal of Petroleum Technology said AI has been improving the efficiency and cost-effectiveness of clean energy development, as it can analyse large data sets from satellite imagery, sensor networks, and other sources to identify the most promising locations for renewable energy projects. These mixed feelings about AI's uses and threats hinder efforts to regulate it. The question is who gets to decide whether a technology is harmful or beneficial.

Every scientific advance has found naysayers throughout history. The Italian astronomer Galileo Galilei was persecuted and punished by the Catholic Church for sharing his finding that the earth is not the centre of the universe. Scientific progress has always clashed with morality, religion, and ethics. There are numerous other instances of new inventions facing paranoia and opposition, including Johannes Gutenberg's printing press, invented around 1440.

The concerns about the threats posed by AI stem from fears about its adverse impact on jobs, data privacy, and individual rights. There are also concerns about AI algorithms discriminating against certain groups of people and being deployed to spread false information or propaganda. Analysts of the AI space agree that if these systems are not properly secured, they may be vulnerable to cyber-attacks that could cause significant damage.

Efforts in AI regulation

Globally, governments are waking up to the possibility of AI misuse and are devising their own approaches to regulating it. However, no regulation can be framed without proper research and study. In view of this, the European Consumer Organisation (BEUC) is calling on EU consumer protection agencies to investigate the technology and its potential harm to individuals. The Australian government recently sought advice on how to respond to AI and is considering its next steps. The UK has decided to split responsibility for governing AI among its existing regulators for human rights, health and safety, and competition, rather than set up a new body.

The Indian government, meanwhile, is not considering a law to regulate the growth of artificial intelligence in the country. IT and telecom minister Ashwini Vaishnaw recently said that while the government acknowledges the ethical concerns and risks around AI, it has already begun efforts to promote and adopt best practices.

Policymakers are also realising that instead of issuing individual guidelines, they must collectively find solutions to the AI dilemma. Japan's Digital Transformation Minister, Taro Kono, recently said the G7 must discuss AI technologies, including ChatGPT, and issue a unified G7 message. The need of the hour is not a complete halt to AI progress but a collaborative effort to address its problems. To ensure that, however, scientists, developers, policymakers, and governments will have to rise above their respective interests and focus on the welfare of all nations.