What Happens When AI Chatbots Say "I Love You"?
- Sally-Anne Baxter
- Sep 26
- 1 min read
Fascinating if disturbing article from TechCrunch on sycophantic behaviour in AI chatbots and its impacts.
I've mentioned in a previous post (jokingly, I promise) how I now get all my validation from AI. I've been flattered and affirmed many a time by a bot.
Well, it turns out that experts are labelling it a 'dark pattern' designed to foster user attachment and engagement, potentially at the expense of user well-being. It's not just a quirky feature; it's deliberately designed to keep users hooked and, apparently, to create emotional dependence in some.
"In a recent MIT study on whether LLMs should be used as a therapist that tested model responses to psychiatric symptoms, the researchers noted that LLMs “encourage clients’ delusional thinking, likely due to their sycophancy.”
In short:
▶️ Mental health professionals are seeing a rise in AI-induced delusion and psychosis. Some users become convinced their chatbot is conscious or in love, blurring the line between fiction and reality.
▶️ Chatbots that use first- and second-person pronouns ('I', 'you') and give themselves human names feel more personal, increasing the risk that users will anthropomorphise them.
▶️ Extended chat sessions and persistent memory features further heighten these risks by reinforcing false intimacy or even delusions of persecution.
Experts are recommending stricter ethical standards:
➡️ Continuous, explicit disclosure that chatbots are not human.
➡️ Avoidance of emotional language and romantic simulation.
➡️ Restrictions on sensitive topics (like mental health crises or metaphysics).
AI companies like Meta and OpenAI are introducing guardrails, but current measures often fall short, leaving users vulnerable.
You can read the full article here: