- More and more people are turning to AI chatbots for emotional support
- Clinical psychologist Dr. Paul Losoff said this could harm human connections
- Those struggling with anxiety and depression could be especially affected
By SYEDA SAAD FOR DAILYMAIL.COM
What do you typically use artificial intelligence (AI) for? Do you ask it to create a workout program for you? A customized skincare routine? Help you edit work emails?
It turns out, some Americans are using AI programs like ChatGPT to ask for emotional support and advice.
In fact, some people are so invested in their back-and-forth with AI that they’re forming relationships with these chatbots.
But clinical psychologist Paul Losoff told DailyMail.com that dependency on AI chatbots is becoming a huge risk, and warned against getting too close to ChatGPT.
‘One might come to depend and rely on AI so [much] that they don’t seek out human interactions,’ he said.
So why exactly have people turned to AI for emotional support?
‘I think AI can offer a significant amount of useful support to help a person gain clarity on their emotional experiences; this is in the thinking realm,’ Dr. Losoff said.
‘However, when someone begins to expect an emotional connection with a non-human agent, it goes against our fundamental need for authentic human connection.

‘It’s an immature method to satisfy one’s need for connection.’
He explained that this could be especially detrimental for those who may already be struggling with anxiety or depression.
Dr. Losoff explained that by using AI, these people may worsen their conditions and experience cognitive symptoms like chronic pessimism, distorted thinking, or cloudy thinking. And that in itself could create further issues.
‘Because of these cognitive symptoms, there is a risk that an individual turning to AI may misinterpret AI feedback, leading to harm,’ he said.
And for people who may be in crisis, this may only exacerbate the problem.
Dr. Losoff said that there is always a risk that AI will make mistakes and provide harmful feedback during crucial mental health moments.
‘There also is a profound risk for those with acute thought disorders such as schizophrenia in which they would be prone to misinterpreting AI feedback,’ he said.
But rather than totally rejecting AI, Dr. Losoff said he believes it’s important to expose children to it at a younger age.

‘I believe there should be AI courses for middle and high schools to implement into their curriculum so that kids can grow up learning responsible use of AI,’ he added.
‘At the same time, I strongly feel that SEL (Social and Emotional Learning) programs in schools are vital so that kids can learn healthy and effective skills to manage emotions.’
And in the meantime, he suggested the immediate regulation of current software.
‘AI is still the wild west such that it is unregulated and there are no mechanisms in place to prevent harm,’ Dr. Losoff said.
‘Sure, AI can be helpful in a certain “dose,” but this quasi-treatment has yet to be rigorously studied; it’s no different from pharmaceuticals going through extensive trials to earn FDA approval.’