
Robots can lie and deceive like humans, reveals study

The robots were used in medical, cleaning and retail work
Photo for representational purpose only.

Just like humans, robots can lie and deceive, according to a study released on Thursday that shows how emerging technologies like generative AI can be used to manipulate users.

The team from George Mason University in the US aimed to explore “an understudied facet of robot ethics” to understand mistrust towards emerging technologies and their developers.

To determine if people can tolerate lying from robots, the team asked nearly 500 participants to rank and explain various forms of robot deception.


“I think we should be concerned about any technology that is capable of withholding the true nature of its capabilities because it could lead to users being manipulated by that technology in ways the user (and perhaps the developer) never intended,” said lead author Andres Rosero, a doctoral candidate at the University.

“We’ve already seen examples of companies using web design principles and artificial intelligence chatbots in ways that are designed to manipulate users towards a certain action. We need regulation to protect ourselves from these harmful deceptions.”


The findings, published in the journal Frontiers in Robotics and AI, examined three types of robot deception: external state deceptions, hidden state deceptions, and superficial state deceptions.

The scenarios placed robots in medical, cleaning and retail work: a robot that lied about the world beyond itself, a housecleaning robot with an undisclosed camera, and a robot working in a shop.

Participants were asked whether they approved of the robot's behaviour, how deceptive they found it, and whether it could be justified. Most participants disapproved of the hidden state deception, which they considered the most deceptive.

They also disapproved of the superficial state deception, in which a robot pretended to feel pain. Participants tended to blame these deceptions, particularly hidden state deceptions, on robot developers or owners.

The researchers cautioned that the study involved a limited number of participants and does not by itself constitute conclusive evidence, and said it should be extended with experiments that better model real-life reactions, such as videos or short role plays.
