
Go beyond quick fix to fight deepfakes

India needs a comprehensive regulatory framework to deal with all aspects of AI
Challenge: Regulating AI is proving to be a nightmare for governments around the world. iStock


THE Central government has published draft rules for the regulation of Artificial Intelligence (AI)-generated digital content. The draft extends the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules issued in 2021 to cover ‘synthetically generated information’, defined as content that is artificially or algorithmically created, generated, modified or altered using a computer resource in a way that makes it appear reasonably authentic or true.


The rules have been framed specifically to address the menace of deepfake videos, audio and synthetic media, which have flooded social platforms in recent months and demonstrated the potential of generative AI to create convincing falsehoods. Such content depicts individuals performing acts or making statements they never did, such as a recording of a purported telephonic conversation between US President Trump and India’s Prime Minister Modi or a fake clip of the popular show Kaun Banega Crorepati, hosted by Amitabh Bachchan.


AI-generated deepfakes are often used to spread misinformation, damage reputations or commit financial fraud. Political parties can use them to influence voters during elections. The response of government agencies so far has been to issue advisories to social media platforms and ‘significant social media intermediaries’ (SSMIs) against what the government deems ‘objectionable’ content. The SSMIs are supposed to have a compliance system to deal with complaints and regulatory requirements.

The new set of rules seeks to provide a legal basis for labelling, traceability and accountability related to synthetically generated information. It mandates that all AI-generated content be labelled through embedded metadata so that synthetically generated or modified information can be distinguished from authentic content. Social media intermediaries will be accountable for verifying and flagging synthetic information.

The proposed measures thus cover not only social media platforms and intermediaries but also AI companies and tools that enable the creation or modification of synthetically generated information. AI tool companies will have to ensure that such information is labelled or embedded with permanent, unique metadata or an identifier.
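What ‘embedding metadata’ could look like in practice is sketched below in Python, using the Pillow imaging library to write a synthetic-content declaration into a PNG file’s text chunks. This is only a minimal illustration, not the mechanism the draft prescribes: the field names are hypothetical, since the draft rules specify no format.

    # A minimal sketch of metadata-based labelling using Pillow.
    # The keys "SyntheticallyGenerated" and "Generator" are hypothetical;
    # the draft rules do not prescribe any metadata format.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    def label_synthetic(src: str, dst: str, generator: str) -> None:
        """Re-save an image with an embedded AI-generation declaration."""
        img = Image.open(src)
        meta = PngInfo()
        meta.add_text("SyntheticallyGenerated", "true")  # hypothetical key
        meta.add_text("Generator", generator)            # tool that created it
        img.save(dst, pnginfo=meta)

    def read_label(path: str) -> dict:
        """Read the text chunks back, as a platform verifying a declaration might."""
        return dict(Image.open(path).text)

Note that plain text chunks are trivially stripped by a screenshot or re-encode, which is why a ‘permanent’ identifier would in practice require a tamper-evident provenance standard such as C2PA content credentials.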


Like the statutory health warning on tobacco products, the label on synthetic content will have to be displayed prominently, covering at least 10 per cent of the surface area of a visual display or the initial 10 per cent of the duration in the case of audio content. Social media platforms will be required to ask users to declare whether the information they are uploading is synthetically generated, and to deploy ‘reasonable and proportionate’ technical measures to verify such declarations. Online platforms that remove or block access to AI-generated or other problematic content in good faith will not lose the legal protection granted under the Information Technology Act.
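To make the thresholds concrete, here is a back-of-the-envelope sketch in Python. The draft does not spell out how ‘surface area’ is to be measured, so the pixel-based reading below is an assumption.

    # Hedged sketch of the draft's 10 per cent thresholds; measuring
    # "surface area" in pixels is an assumed interpretation.

    def min_label_area_px(width_px: int, height_px: int) -> int:
        """Label must cover at least 10% of the visual display's surface area."""
        return (width_px * height_px) // 10

    def audio_label_window_s(duration_s: float) -> float:
        """Label must span the initial 10% of an audio clip's duration."""
        return duration_s * 0.10

    print(min_label_area_px(1920, 1080))   # 207360 px for a full-HD frame
    print(audio_label_window_s(60.0))      # first 6.0 seconds of a 60 s clip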

Regulating any kind of AI is proving to be a nightmare for regulators and governments around the world because of the very nature and organisation of this technology. The AI content food chain is complex and multi-layered. It involves not just AI tools like ChatGPT, Gemini or DALL-E but also several intermediaries and platforms that enable, permit or facilitate the creation or sharing of AI-generated content: AI art generators, voice-cloning tools and deepfake apps; tools embedded in social media or video platforms where users upload AI-generated material; and chatbots or content-creation tools that produce AI text or visuals. Except for major social media players like Facebook, Instagram, X and LinkedIn, which have a physical presence in India, most players in the AI food chain are based elsewhere, and it may be hard to subject them to Indian regulations.

The rules on deepfakes appear to have been drafted in a hurry. They look more like a knee-jerk reaction to the recent spate of deepfake videos involving political leaders than a well-thought-out regulatory step to protect the online rights of citizens. The focus of the rules is entirely on the ‘taking down’ of deepfakes considered objectionable and on holding social media platforms responsible for such content. Instead, the regulation must be designed to protect everyone’s right to their physical features as well as their voice. Film stars in India are specifically seeking this right to prevent the unauthorised AI representation of their faces and voices.

Regulators the world over are responding to the deepfake challenge in different ways. Denmark proposes to treat a person’s unique likeness as intellectual property for all citizens, not just celebrities. A proposed law in the US prohibits the deliberate distribution of materially deceptive AI-generated audio or visual material about candidates in federal elections. France is working on a comprehensive online safety law. The UK has amended its online safety law to include deepfakes and intimate image abuse. The most comprehensive is the European Artificial Intelligence Act. All such regulatory steps are being taken after due public discourse, involvement of civil society and parliamentary debates.

The draft regulation in India has been proposed without any public debate. The Central government has given the public barely a fortnight to comment on the draft rules. Such an important piece of regulation needs wider, multi-stakeholder discussion to be effective. The discussion should also cover the effectiveness of the digital media ethics code in force since 2021, so that lessons can be drawn and incorporated into the new rules.

There is always the fear of misuse by government agencies and the selective takedown of content. Senior government and political leaders have been found using fake and AI-generated content, either deliberately or inadvertently. Handles aligned with political parties often spread AI-generated misinformation. The elephant in the room is the dubious stance of tech companies and social media platforms on anonymous and bot-operated accounts.

India needs a comprehensive regulatory framework that deals with all aspects of AI, not just deepfakes, much like the EU AI Act, rather than a fragmented, patchwork approach. This must go hand in hand with the promotion of digital literacy, education about online safety and the protection of consumer rights.

Dinesh C Sharma is a science commentator.
