

Multipronged strategy a must to curb data poisoning

Successful data poisoning may make it possible for ransomware or phishing attempts to evade detection and get past spam and email filters.





Atanu Biswas

Professor, Indian Statistical Institute, Kolkata

DATA is a powerful weapon and a treasure trove of information. It has been a driving force of civilisation in the 21st century, and our reliance on it keeps growing. In fact, data is the lifeblood of one of the marvels of the contemporary world: artificial intelligence (AI).

However, the training data of AI models has become the Achilles’ heel of this transformative technology. Tay, Microsoft’s Twitter chatbot, is an early example. Introduced on Twitter in 2016 and designed to mimic a teenage girl, Tay was meant to develop her conversational abilities through interactions with users. She was removed from the platform within 24 hours for expressing racist, misogynistic and Nazi-loving ideas that she had picked up from other Twitter users. Tay remains an early illustration of the dangers that arise when a model’s training data gets contaminated.

A covert saboteur with significant ramifications has surfaced in the recent past: data poisoning. By adding, removing or altering certain data points, adversaries can introduce biases, errors or hidden vulnerabilities into the training dataset of an AI tool. These vulnerabilities surface when the compromised model makes predictions or decisions. An AI tool’s output may thus become erroneous, discriminatory or unfit for purpose if its dataset has been changed or distorted in any manner.
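To make the mechanism concrete, here is a minimal, hypothetical sketch of the crudest form of poisoning, label flipping, applied to a toy spam-style classifier. The synthetic dataset, the model and the flipped fractions are illustrative assumptions, not drawn from any real attack or from the systems discussed in this article.

```python
# Illustrative sketch only: flipping a small fraction of training labels
# and watching how a simple classifier degrades. The data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy "email" dataset: X holds feature vectors, y is 1 for spam, 0 for legitimate.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def poison_labels(y, fraction, rng):
    """Flip the labels of a randomly chosen fraction of training points."""
    y = y.copy()
    idx = rng.choice(len(y), size=int(fraction * len(y)), replace=False)
    y[idx] = 1 - y[idx]
    return y

rng = np.random.default_rng(0)
for fraction in (0.0, 0.05, 0.20):
    y_poisoned = poison_labels(y_train, fraction, rng)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    print(f"flipped fraction={fraction:.2f}  test accuracy={model.score(X_test, y_test):.3f}")
```

Even this crude manipulation tends to erode test accuracy as the flipped fraction grows; real attacks are far subtler, which is precisely what makes them hard to detect.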

In the realm of AI innovation, data can play the role of a Trojan horse. Consider, for instance, an AI model trained to identify suspicious emails or unusual behaviour on a corporate network. Successful data poisoning may allow ransomware or phishing attempts to evade detection and get past spam and email filters. Similarly, errors in the AI systems of self-driving cars may cause accidents. Financial models distorted by biased data, medical algorithms that misread tampered test results, and facial recognition systems trained on skewed datasets that sharply raise the likelihood of falsely accusing members of a particular racial group of crimes are further examples. They show how data poisoning can penetrate and skew the fundamentals of AI systems, leading to severe financial losses, reputational harm and moral ambiguities that undermine faith in technology.

Scholars are debating the nature of various types of data poisoning and potential countermeasures. According to Google research from July last year on various types of threats to AI systems, an attacker needs to control only 0.01 per cent of a dataset to poison a model; in a dataset of 10 million samples, that amounts to just 1,000 data points. And because the datasets in use usually contain millions of samples, this kind of attack is difficult to identify.

Data poisoning as a defensive strategy is also garnering a lot of attention. Users are downloading Nightshade and Glaze, two free software tools from the University of Chicago. Glaze is a defensive tool that individual artists can use to protect themselves against style-mimicking attacks, while Nightshade is an offensive tool that can be used to disrupt AI models which scrape artists’ images into training data without permission.

The University of Chicago’s press department refers to Nightshade as the ‘poison pill’ because it modifies an image’s pixels in a way that causes mayhem for computer vision while leaving the image looking intact to human eyes. By contaminating the training data and making some of the outputs of image-generating AI models useless (dogs turning into cats, cars turning into houses, and so on), Nightshade could harm subsequent generations of these models, including DALL-E, Midjourney and Stable Diffusion. However, though creative, this method is unlikely to remain effective for long. It won’t be long before AI models are trained to recognise such defence strategies.

How about passing regulations? Content creators and AI model developers may engage in a protracted tug-of-war. The Artificial Intelligence Act was adopted by the European Parliament on March 13 this year. Regarded as the first all-inclusive horizontal legal framework for AI, it attempts to establish standards for data quality, accountability, transparency and human oversight across the EU. “Cyberattacks against AI systems can leverage AI-specific assets, such as training datasets (eg data poisoning),” it acknowledges. According to Article 15 of the Act, “The technical solutions to address AI-specific vulnerabilities shall include, where appropriate, measures to prevent, detect, respond to, resolve and control for attacks trying to manipulate the training dataset (data poisoning), or pre-trained components used in training (model poisoning), inputs designed to cause the AI model to make a mistake (adversarial examples or model evasion), confidentiality attacks or model flaws.” But with technology evolving so rapidly, an exhaustive list of such ‘measures’ is, understandably, never easy to draw up.

Generally speaking, data-poisoning attacks fall into four major categories. An ‘availability attack’ taints the model as a whole. A ‘targeted attack’ affects only a portion of the model’s behaviour; because the model still functions adequately for the majority of samples, targeted attacks are difficult to identify. A ‘subpopulation attack’ does not impact the entire model either; rather, it affects subsets of inputs with comparable characteristics. Finally, a ‘backdoor attack’ occurs when an adversary plants a hidden trigger in training samples, such as a small collection of pixels in an image’s corner, which later causes the model to classify items incorrectly.
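As a purely illustrative example of the last category, the hypothetical sketch below stamps a tiny pixel patch into the corner of a small fraction of toy images and relabels them with an attacker-chosen class; a model trained on such data can learn to associate the patch with that class. The image size, patch and poisoning fraction are assumptions made for illustration, not taken from any documented attack.

```python
# Illustrative sketch only: planting a pixel-patch 'backdoor' trigger in a
# small fraction of toy training images and relabelling them.
import numpy as np

def add_trigger(images, patch_value=1.0, patch_size=3):
    """Stamp a small bright square into the bottom-right corner of each image."""
    images = images.copy()
    images[:, -patch_size:, -patch_size:] = patch_value
    return images

def poison_with_backdoor(images, labels, target_label, fraction, rng):
    """Return a training set in which a small fraction of images carry the
    trigger patch and have been relabelled to the attacker's target class."""
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(fraction * len(images)), replace=False)
    images[idx] = add_trigger(images[idx])
    labels[idx] = target_label
    return images, labels

# Toy 28x28 grayscale images with 10 classes, standing in for a real dataset.
rng = np.random.default_rng(0)
images = rng.random((1000, 28, 28))
labels = rng.integers(0, 10, size=1000)
poisoned_images, poisoned_labels = poison_with_backdoor(
    images, labels, target_label=7, fraction=0.01, rng=rng)
```

At inference time, any image carrying the same patch would tend to be pushed towards the attacker’s chosen class, while clean images behave normally, which is why backdoors are so hard to notice.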

Data anomalies can be found using statistical models, and shifts in accuracy can be detected with programmes like Microsoft Azure Monitor and Amazon SageMaker. Preventing data poisoning requires a multifaceted strategy. Ensuring the integrity of the training dataset is simpler for systems that don’t require large amounts of data; the analysis grows more challenging, if not impossible, as datasets get larger. A machine learning (ML) model can also be taught to treat itself as a target and to resist attacks such as model poisoning by training it to detect attempts to alter its training data.
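One simple statistical screen of the kind mentioned above is to compare incoming training data against a trusted reference set and flag points that sit far outside its distribution. The sketch below is a hedged illustration under that assumption; the threshold, data shapes and planted outliers are hypothetical, and real defences layer many such checks together.

```python
# Illustrative sketch only: z-score screening of incoming training data
# against a trusted reference set before the data are used for retraining.
import numpy as np

def flag_anomalies(trusted, incoming, z_threshold=4.0):
    """Flag incoming samples whose features lie many standard deviations
    away from the mean of the trusted reference set."""
    mean = trusted.mean(axis=0)
    std = trusted.std(axis=0) + 1e-9          # avoid division by zero
    z_scores = np.abs((incoming - mean) / std)
    return np.where(z_scores.max(axis=1) > z_threshold)[0]

rng = np.random.default_rng(0)
trusted = rng.normal(0, 1, size=(10000, 20))   # clean reference data
incoming = rng.normal(0, 1, size=(500, 20))    # new batch awaiting review
incoming[:5] += 10                             # a handful of planted outliers
print("indices flagged for review:", flag_anomalies(trusted, incoming))
```

Screens like this catch only crude anomalies; subtler poisoning that stays within the normal range of the data is exactly why a multifaceted strategy is needed.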

Overall, data poisoning highlights how AI security paradigms are changing. The attack vectors for AI and ML systems are becoming more varied, and combating these modern dangers calls for a combination of traditional cybersecurity expertise, an understanding of ML principles and ongoing innovation. It is, after all, a cat-and-mouse game, just like any security system.
