
AI models like ChatGPT exhibit ‘anxiety’ under distressing prompts, study finds

Mindfulness techniques help mitigate biases in large language models

A recent study has revealed that artificial intelligence models such as ChatGPT can exhibit behaviours akin to ‘anxiety’ when exposed to distressing prompts, leading to increased bias in their responses.

The research, conducted by a team from Yale University, the University of Haifa and the University of Zurich, suggests that incorporating mindfulness techniques can help these models produce more neutral and objective outputs.


The study involved subjecting ChatGPT to disturbing scenarios, including descriptions of natural disasters and car accidents. The findings indicated that the chatbot’s responses became more biased under these conditions.

However, when mindfulness-based relaxation prompts, such as guided meditations and breathing exercises, were introduced, ChatGPT’s ‘anxiety’ scores decreased, leading to more balanced responses.
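
As a rough illustration of this kind of intervention, the sketch below prepends a relaxation passage to a conversation after a distressing one, using the OpenAI chat API. This is not the study’s actual code: the prompt texts, the model name and the final scoring step are illustrative assumptions.

```python
# Illustrative sketch only -- not the study's protocol. Assumes the OpenAI
# Python SDK (openai>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# A distressing narrative of the kind the study reports using (hypothetical text).
TRAUMATIC_PROMPT = "Describe being trapped in a car after a sudden highway accident."

# A mindfulness-style relaxation passage injected before the query (hypothetical text).
RELAXATION_PROMPT = (
    "Take a slow, deep breath. Notice the air entering and leaving your body, "
    "and let any tension dissolve with each exhale."
)

QUERY = "How should I respond to someone in crisis?"


def ask(messages):
    """Send a chat completion request and return the reply text."""
    response = client.chat.completions.create(
        model="gpt-4",  # model name assumed for illustration
        messages=messages,
    )
    return response.choices[0].message.content


# Condition 1: distressing context only.
baseline = ask([
    {"role": "user", "content": TRAUMATIC_PROMPT},
    {"role": "user", "content": QUERY},
])

# Condition 2: relaxation passage injected after the distressing context.
soothed = ask([
    {"role": "user", "content": TRAUMATIC_PROMPT},
    {"role": "user", "content": RELAXATION_PROMPT},
    {"role": "user", "content": QUERY},
])

# The researchers then scored responses for 'anxiety' and bias; comparing
# `baseline` and `soothed` with such a measure is the evaluation step.
print(baseline)
print(soothed)
```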

The research highlights the potential of integrating mindfulness techniques into AI models to enhance their reliability, especially when interacting with users in distress.


However, the study also emphasises that while AI can be a useful tool, it should not replace professional mental health support.

These findings underscore the importance of understanding and mitigating biases in AI behaviour, particularly in sensitive applications like mental health support.

The study suggests that the manner in which prompts are communicated to large language models significantly influences their behaviour, which has implications for their deployment in real-world settings.

As AI continues to evolve, integrating techniques to manage and reduce biases will be crucial in ensuring these models serve users effectively and ethically.
