
AI regulation needs a balanced pathway

Instead of patchwork regulation or inconsistent policy advisories on AI, we need an all-inclusive public discourse on the policy approach as well as regulation.


Roadmap: India needs to develop a robust policy and regulatory framework that focuses not just on the potential of artificial intelligence but on the risks as well. iStock



Dinesh C. Sharma

Science commentator

THE brouhaha over Gemini, Google’s artificial intelligence (AI)-driven chatbot, has brought into focus several important issues relating to technology, regulation and the role of the state. The AI tool became a point of discussion on social media after some users highlighted its reply to a question on the Prime Minister and fascism. The Minister of State for Information Technology quickly responded by saying that the AI-generated reply to a particular question about the PM was biased and amounted to a violation of IT rules and other laws relating to the sharing of unlawful content by intermediaries or platforms. The minister’s statement was followed by an official advisory that the release of ‘under-testing or unreliable’ AI models and software to users would require the ‘explicit permission of the Government of India’.

In the absence of a comprehensive regulatory pathway and framework on AI and other disruptive technologies, the minister’s response and the subsequent advisory can at best be considered a knee-jerk reaction.

Governments have had a poor track record of dealing with new technology, starting with the Directorate General of Technical Development over half a century ago. It acted as a gatekeeper for foreign technology and was responsible for rejecting applications from American corporations to start electronics production in India in the 1960s. When Texas Instruments started exporting software from Bangalore via a satellite link (a novelty then) in the 1980s, the Ministry of Home Affairs wanted a printout of all that was being sent to make sure no state secrets were transmitted. All it got was a room full of printouts filled with 0s and 1s. In the 1990s, a clerk in the Industries Ministry sent the licence application of a vaccine firm to the heavy engineering department for approval because it mentioned genetic engineering as the method of production. Given how bureaucracy works, one can just imagine the fate of Google, Meta, Apple or OpenAI sharing their AI algorithms with the clerical staff in the IT Ministry for prior approval.

Instead of patchwork regulation or inconsistent policy advisories on AI, we need an all-inclusive public discourse on the policy approach as well as regulation. The discussion on AI so far has been limited to its transformative potential and the role of innovation and startups in furthering it. Companies want to bring new products, or new features in existing products, to market as quickly as possible. If users get a mobile phone app that deploys AI for image recognition, they will go for it without bothering to read the Terms of Service, which may mention the use of data generated by consumers. Technology firms complain that any regulation would adversely affect innovation. Following the Gemini controversy, the minister concerned has clarified that the AI advisory would not apply to startups.

Such a skewed discourse leaves out larger questions about AI and its potential use in different sectors. Do we need applications that follow actual use or felt needs (what some experts call ‘human-centred AI’), or should we develop applications that demand changes in our behaviour or disrupt human and social norms? It is feared that if AI tools are designed without the context of the end user in mind, their development may be based on flawed or biased assumptions about the ultimate users and the socio-economic context or practice the AI tool is seeking to augment or replace.

Several such ethical, social, privacy and transparency issues need to be addressed while designing regulatory frameworks. AI technologies are being rapidly adopted in many sectors without public understanding of these issues, owing to a lack of research and awareness. Even government agencies are adopting AI technologies with no regard for privacy, transparency and the fundamental rights of people. The use of drone surveillance and facial recognition technologies during the ongoing farmers’ agitation is an example.

In the race between technology and regulation, it is more often regulation that is in ‘catch-up’ mode. Governments and regulators all over the world are trying hard to stay in this race with AI. In America, President Joe Biden has issued an executive order on AI that focuses on security, safety, privacy and discrimination. To manage risks, the order emphasises increased transparency and the use of testing, tools and standards. It suggests stress-testing new technologies for potential holes or safety oversights. Companies developing AI models that could pose a serious risk to national security or public health will have to notify the government and share the results of safety tests.

The AI Act of the European Union provides for different rules for different risk levels. In the high-risk category are AI systems that negatively affect the safety or fundamental rights of citizens. AI systems posing an ‘unacceptable risk’ are those considered a threat to people and will not be permitted. They include cognitive behavioural manipulation of people or specific vulnerable groups, social scoring (classifying people based on behaviour, economic status and so on), biometric identification and categorisation of people, and real-time and remote biometric identification systems such as facial recognition. General-purpose and generative AI systems like the currently popular ChatGPT will have to comply with transparency requirements, ensure their design prevents the generation of illegal content, and publish summaries of the copyrighted data used for training. Limited-risk AI systems will be subject to lighter transparency norms.

India needs to do three things to deal with AI. First, develop a robust policy and regulatory framework that focuses not just on the potential of AI but on its risks as well. This needs to be done in an open, transparent manner, involving all stakeholders: technology firms, social scientists, civil society, legal experts, consumer organisations and policy think tanks. The framework should be based on a set of clearly defined guiding principles such as safety, transparency and fairness. Second, develop an agile, flexible and modern regulatory system based on these guiding principles. Third, build the governance and regulatory capacity necessary to deal with emerging technologies. It is time to meaningfully engage with AI, not reject it.


