New system to predict which online conversations may turn toxic

NEW YORK: Scientists have developed a computer programme that can predict when civil, constructive conversations on the internet may take a toxic turn and degenerate into personal attacks - a tool which could help regulate online trolls.

The internet offers the potential for constructive dialogue and cooperation, but online conversations often become aggressive.

After analysing hundreds of exchanges on Wikipedia, researchers developed a computer programme that scans for red flags - such as repeated, direct questioning and use of the word “you” in the first two posts - to predict which initially civil conversations would go awry.

Early exchanges that included greetings, expressions of gratitude, hedges such as “it seems,” and the words “I” and “we” were more likely to remain civil, the study found.
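As a rough illustration of the linguistic cues described above, the sketch below counts some of the red flags and civility markers in a single opening post. The cue lists, function name and scoring here are assumptions made for illustration; the Cornell study's actual feature set is defined differently.

```python
import re

# Illustrative cue lists drawn from the article's description; the study's
# real features are richer and more carefully defined.
GREETINGS = {"hi", "hello", "hey", "greetings"}
GRATITUDE = {"thanks", "thank", "appreciate"}
HEDGES = ["it seems", "perhaps", "maybe", "i think"]

def first_post_cues(post):
    """Count red flags and civility markers in one opening post (hypothetical)."""
    words = re.findall(r"[a-z']+", post.lower())
    text = " ".join(words)
    return {
        "second_person": words.count("you"),   # early "you": a red flag
        "direct_questions": post.count("?"),   # repeated direct questioning
        "greeting": any(w in GREETINGS for w in words),
        "gratitude": any(w in GRATITUDE for w in words),
        "hedging": sum(text.count(h) for h in HEDGES),  # e.g. "it seems"
        "first_person": words.count("i") + words.count("we"),
    }

print(first_post_cues("Why did you remove my edit? You clearly never read it."))
print(first_post_cues("Hi! Thanks for the cleanup. It seems we still need a source."))
```

On these two toy posts, the first scores high on second-person pronouns and direct questions, while the second registers the greeting, gratitude and hedging cues the study associates with conversations that stay civil.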

“There are millions of these discussions, and you can’t possibly monitor all of them live. This system might help human moderators better direct their attention,” said Cristian Danescu-Niculescu-Mizil, an assistant professor at Cornell University in the US.

“We as humans have an intuition of how to detect whether something is going bad, but it’s just a suspicion. We can’t do it 100 per cent of the time,” Danescu-Niculescu-Mizil said.

“Therefore, we wonder if we can build systems to replicate this intuition, because humans are expensive and busy, and we think this is the type of problem where computers have the potential to outperform humans,” he said.

The computer model, which also drew on Google’s Perspective, a machine-learning tool for evaluating “toxicity,” was correct around 65 per cent of the time. Humans guessed correctly 72 per cent of the time.
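To make the prediction step concrete, here is a toy sketch of training a derailment classifier over cue counts of the kind above plus a per-post toxicity score such as Perspective produces. The data, feature columns and choice of logistic regression are all hypothetical; only the idea of combining conversational cues with a toxicity signal comes from the article.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical feature columns per conversation opening:
# [second_person, direct_questions, gratitude, hedging, toxicity_score]
civil = rng.normal([0.5, 0.5, 1.0, 1.0, 0.1], 0.3, size=(50, 5))
derailed = rng.normal([2.0, 2.0, 0.2, 0.2, 0.4], 0.3, size=(50, 5))

X = np.vstack([civil, derailed])
y = np.array([0] * 50 + [1] * 50)  # 1 = later degenerated into attacks

clf = LogisticRegression().fit(X, y)
print("training accuracy on toy data:", clf.score(X, y))
```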

People can test their own ability to guess which conversations will derail with an online quiz.

The study analysed 1,270 conversations that began civilly but degenerated into personal attacks, culled from 50 million conversations across 16 million Wikipedia “talk” pages, where editors discuss articles or other issues.

The researchers examined exchanges in pairs, comparing each conversation that ended badly with one on the same topic that stayed civil, so the results were not skewed by sensitive subject matter such as politics.
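A minimal sketch of that matched-pair evaluation, assuming some risk function that scores a conversation’s opening: the model is counted correct when it assigns the higher risk to the conversation that actually derailed. The risk function below is a deliberately crude stand-in, not the study’s model.

```python
def paired_accuracy(pairs, risk):
    """pairs: (derailed, civil) openings on the same topic; risk: text -> float."""
    correct = sum(risk(derailed) > risk(civil) for derailed, civil in pairs)
    return correct / len(pairs)

# Crude stand-in risk score: how often "you" appears in the opening.
toy_risk = lambda text: text.lower().split().count("you")

pairs = [
    ("Why did you revert this? You never explain your edits.",
     "Hi, thanks for the revert. It seems we still disagree on the source."),
]
print(paired_accuracy(pairs, toy_risk))  # 1.0 on this single toy pair
```

Under this paired setup, guessing at random would score 50 per cent, which is the baseline against which the model’s 65 per cent and the human 72 per cent should be read.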

The researchers hope this model can be used to rescue at-risk conversations and improve online dialogue, rather than for banning specific users or censoring certain topics.

Some online posters, such as nonnative English speakers, may not realise they could be perceived as aggressive, and warnings from such a system could help them self-adjust.

“If I have tools that find personal attacks, it’s already too late, because the attack has already happened and people have already seen it,” said Jonathan P Chang, from Cornell.

“But if you understand this conversation is going in a bad direction and take action then, that might make the place a little more welcoming,” said Chang. PTI
