Split wide open on AI regulation
The world has entered a new phase of accelerated development and application of artificial intelligence (AI) with the unveiling of DeepSeek, a Chinese-built rival to ChatGPT and other large language models. Alongside it, the debate on AI regulation has gathered steam. The subject dominated the recent AI Summit in Paris, a meeting of political leaders, diplomats and CEOs of technology companies co-chaired by India and France. Far from facilitating a consensus on the principles of AI regulation, the summit exposed the fissures on the subject, reflecting the changed political situation after the return of US President Donald Trump.
Noting that AI was different from past technological milestones in terms of its impact, Prime Minister Narendra Modi called for “collective global efforts to establish governance and standards that uphold shared values, address risks and build trust”.
As a first step towards such collective action, the summit adopted a diplomatic declaration on ‘inclusive and sustainable AI’. However, two major players in AI development, the US and the UK, refused to sign it. US Vice-President JD Vance said the world needed an AI regulatory regime that fostered innovation rather than strangled it, while the UK felt that the statement did not address AI’s likely impact on national security. As the summit ended, it was obvious that the world was divided over AI safety and regulation.
Artificial intelligence has been around for many years now, but recent advances have made general-purpose AI a reality. General-purpose AI tools can perform a wide variety of tasks and are being deployed by technology companies for many consumer and business purposes. The International AI Safety Report, prepared by independent experts, including some from India, was released ahead of the Paris summit. It pointed out that sophisticated AI agents would be able to act autonomously and use computers to complete complicated projects, bringing both additional benefits and new risks.
We already know the risks posed by AI systems: scams, non-consensual intimate imagery, child sexual abuse material, bias against groups of people or opinions, unreliable outputs, privacy violations and so on. The additional risks listed in the AI safety report include large-scale disruption of the labour market, AI-enabled hacking and biological attacks. Tests of certain AI models found that they could give instructions for reproducing known biological and chemical agents and facilitate the design of novel toxic compounds. General-purpose AI also makes it easier to generate persuasive content at scale, helping people or groups seeking to manipulate public opinion. Earlier, such impacts of AI were believed to be decades away, but new capabilities, such as scientific reasoning and programming, mean these risks are not so distant.
Regulation of AI would necessarily require an assessment of the risks and a pathway to mitigate as well as monitor them. This is going to be a Herculean task because the possible uses and applications of general-purpose AI systems are very broad: the same system can be used for anything from medical advice to writing software and generating images. Even AI developers and users do not fully know the possible applications of such systems, making any regulation of the technology challenging. The pace of advancement in general-purpose AI, therefore, creates an ‘evidence dilemma’ for decision-makers and regulators, as the report points out. Moreover, technology companies share only partial information with non-industry players. Little headway can be made on regulation while such a gap persists in the understanding of the capabilities of AI systems.
Artificial intelligence has also reignited the old debate of regulation versus innovation. The US made it clear in Paris that it does not favour any regulation of the technology that hinders innovation. This is in line with the stand of the technology giants leading the AI race in America. India broadly sided with this view, with a slight tweak, as reflected in the Prime Minister’s statement that “governance was not just about managing risks but also about promoting innovation and deploying it for the global good”. The US, which is rooting for fossil fuels under Trump, did not sign the sustainability statement either. This matters because AI development and operation are power-guzzling affairs, and large-scale deployment of AI systems would undermine efforts to wean the world off fossil fuels. That links AI to another contentious global issue: climate change.
Regulation has always played catch-up with technology. This has happened with many technological developments in the past and continues with new advances such as stem cell research, cloning, xenotransplantation, the Internet and social media. AI, however, is different. It is not one technology but a bundle of technologies and applications. The first task is to decide what is to be regulated. Countries or companies wishing to boost their ‘leading’ position in AI might, hence, apply a very broad interpretation of which applications fall under the term. Another vital question is who is to be regulated: the entity that develops the technology, the one that builds applications on it or the one that deploys them. All these lines are blurred, given the nature of this technology and the limited information available from technology companies. It is somewhat like the content-and-carriage dilemma that regulators currently face in regulating content on Internet-based services.
AI has made many revisit the three laws of robotics proposed by science fiction writer Isaac Asimov in 1942. The first said, “A robot may not injure a human being or, through inaction, allow a human being to come to harm.” The second stated, “A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.” The third specified that “a robot must protect its own existence as long as such protection does not conflict with the First or Second Law”. Taking these as guiding principles, it would be prudent to evolve a broad set of principles for AI governance and regulation while constantly assessing emerging risks and benefits.