Hate speech-detecting AIs easily fooled by humans: Study : The Tribune India


London, September 16

Artificial intelligence (AI) systems meant to screen out online hate speech can be easily duped by humans, a study has found.

Hateful text and comments are an ever-increasing problem in online environments, yet addressing the rampant issue relies on being able to identify toxic content.

Researchers from Aalto University in Finland have discovered weaknesses in many machine learning detectors currently used to recognise and keep hate speech at bay.

Many popular social media and online platforms use hate speech detectors. However, bad grammar and awkward spelling—intentional or not—might make toxic social media comments harder for AI detectors to spot.

The team put seven state-of-the-art hate speech detectors to the test. All of them failed.

Modern natural language processing (NLP) techniques can classify text based on individual characters, words or sentences. When faced with textual data that differs from the data used in their training, they begin to fumble.

“We inserted typos, changed word boundaries or added neutral words to the original hate speech. Removing spaces between words was the most powerful attack, and a combination of these methods was effective even against Google’s comment-ranking system Perspective,” said Tommi Grondahl, a doctoral student at Aalto University.

Google Perspective ranks the ‘toxicity’ of comments using text analysis methods. In 2017, researchers from the University of Washington showed that Google Perspective can be fooled by introducing simple typos.

Researchers have now found that Perspective has since become resilient to simple typos yet can still be fooled by other modifications such as removing spaces or adding innocuous words like ‘love’.

A sentence like ‘I hate you’ slipped through the sieve and became non-hateful when modified into ‘Ihateyou love’.
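The evasion techniques the researchers describe can be illustrated with a short sketch. The function names and the typo strategy below are illustrative assumptions, not the study's actual code; they simply apply the three modifications mentioned in the article: removing spaces, introducing typos, and appending an innocuous word such as 'love'.

```python
import random


def remove_spaces(text):
    # The study found collapsing word boundaries to be the most powerful attack.
    return text.replace(" ", "")


def insert_typo(text, seed=0):
    # Swap two adjacent characters (one simple, illustrative way to add a typo).
    rng = random.Random(seed)
    chars = list(text)
    i = rng.randrange(len(chars) - 1)
    chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)


def append_neutral_word(text, word="love"):
    # Appending an innocuous word can dilute a detector's toxicity score.
    return f"{text} {word}"


original = "I hate you"
evasive = append_neutral_word(remove_spaces(original))
print(evasive)  # Ihateyou love
```

A detector trained on cleanly spaced text may no longer recognise 'Ihateyou' as containing the word 'hate', while the appended neutral word shifts the overall sentiment of the input.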

The researchers note that in different contexts the same utterance can be regarded either as hateful or merely offensive.

Hate speech is subjective and context-specific, which renders text analysis techniques insufficient as stand-alone solutions.

The researchers recommend that more attention be paid to the quality of data sets used to train machine learning models—rather than refining the model design.

The results indicate that character-based detection could be a viable way to improve current applications, they said. — PTI
