THE tightening of rules governing social media places greater responsibility on digital companies and platforms for deepfakes and content generated through artificial intelligence. The latest changes in the law follow complaints about sexualised images produced by an AI bot. Effective from February 20, it will be mandatory to place prominent labels on AI-generated content. Major platforms will also have to obtain a declaration from users confirming whether the content being shared is AI-generated. It’s a step in the right direction. Further, any illegal or misleading AI content will have to be removed or blocked within three hours, down from the earlier limit of 36 hours. The complex logistics of this mandate apart, the question of who determines what is objectionable online, or what passes muster and on what basis, remains a point of contention. There are legitimate concerns about the suppression of free expression.
As artificial intelligence makes rapid strides towards digital domination, framing effective regulatory controls is a tough task. For the AI industry itself, consistently upholding ethical frameworks as its technologies scale in a fiercely competitive market is a demanding ask. The resignation of a safety researcher at a leading company, citing disagreement with the unchecked acceleration of controversial projects, points to the pitfalls of rapid technological advancement. The moral dilemma is stark, and there are no easy answers.
In the US, a landmark trial examining the mental health effects of Instagram and YouTube has begun. The world’s largest social media companies stand accused of creating addiction machines. ‘We’re seeing more and more young people who experience not just psychological distress, but physical distress, when their devices are taken away,’ a leading expert remarked. The observation finds resonance across the globe. That is the measure of the challenge posed by the tech offensive.