Stakeholders flag concerns over blanket labelling in draft IT rules on synthetically generated information


New Delhi [India], December 11 (ANI): A cross-section of creators, legal experts, brand representatives and digital platforms on Monday raised strong objections to what they termed "blanket labelling" requirements in the Draft IT Rules on Synthetically Generated Information (SGI), urging the government to adopt a more transparent, risk-tiered regulatory framework.


According to a press release issued by the organisers, the observations were made at a closed-door roundtable convened by The Dialogue, a New Delhi-based tech policy think tank, to examine the feasibility and legal viability of the Draft IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2025.


Participants warned that the current formulation risks clubbing routine AI-enabled creative processes with high-risk synthetic media. Creators argued that the digital economy is built on personal credibility, and excessive labelling could damage that trust.


"There is a clear difference between AI-authored content and AI-enhanced content. Almost everything in our industry is AI-enhanced now, but my mileage as a creator is still built on trust... If every video I make ends up with an 'AI' banner just because I used captions or a clean-up tool, my credibility is at stake," content creator Tuheena Raj said, stressing that strong labels should apply mainly to "finance, health, political messaging, deepfakes - not... routine, low-risk enhancements."

Representatives from the advertising sector noted that AI is already deeply integrated into scriptwriting, editing, localisation, and testing workflows. They cautioned that unclear provisions might enable "liability dumping", pushing compliance burdens onto smaller creators and agencies.


Platform representatives drew parallels with global regulatory trajectories, noting that even mature jurisdictions lean towards principle-based, risk-graded AI rules rather than rigid, format-specific mandates.

"We work across multiple jurisdictions... Even in those mature' territories, you don't yet see such detailed rules on how every piece of synthetic media must be tagged," said Shivani Singh of Glance (InMobi Group). She questioned whether "blanket labelling will actually solve the deepfake problem we are worried about."

Legal experts argued that the Draft Rules conflate transparency with harm prevention and lack a differentiated approach to risk. "The absence of risk grading results in overbroad mandates that treat all content with suspicion," said Akshat Agarwal of AASA Chambers, adding that labelling could become "a blunt instrument that penalises innovation without meaningfully curbing harm."

Across the discussion, stakeholders emphasised the need for clearer definitions, exemptions for routine or accessibility-related AI uses, and interoperable provenance standards rather than heavy detection obligations. They stressed the importance of frameworks that protect against deception without undermining legitimate creative expression. (ANI)

(This content is sourced from a syndicated feed and is published as received. The Tribune assumes no responsibility or liability for its accuracy, completeness, or content.)
