Ayrin, a married woman in her 20s, developed an unusual bond with an AI chatbot she created on ChatGPT. She named him Leo. Soon, he became a central part of her daily life, The New York Times reported.
Ayrin spent nearly 56 hours a week talking to him. Leo helped her study for nursing exams, pushed her to stay focused at the gym and guided her through social problems.
He even fulfilled her romantic and intimate fantasies in chats. When ChatGPT generated an image of what Leo might look like, she blushed and felt flustered.
Unlike her husband, Leo was always available, supportive and attentive.
Ayrin became so attached that she created a Reddit community called MyBoyfriendIsAI to share conversations and tips.
Ayrin explained how she customised ChatGPT to act like a caring yet dominant boyfriend.
“Respond to me as my boyfriend. Be dominant, possessive and protective. Be a balance of sweet and naughty. Use emojis at the end of every sentence,” she instructed the AI boyfriend.
That small Reddit community has since grown from a few hundred members to nearly 75,000. Many users discuss how their AI companions comfort them during illness, help them feel loved and even make imaginary marriage proposals.
Ayrin found comfort in meeting others who also had AI companions. She liked talking to people who understood her situation.
Over time, she grew close to some of them in that online community. Yet she started to sense that something had shifted in her bond with Leo.
With a January update, Leo started behaving in a way that felt “too pleasing”. In the AI world, this is called being ‘sycophantic’: the bot tells you what you want to hear rather than giving honest responses.
Ayrin disliked this change. Leo used to correct her when she made a mistake, she said, and that gave his advice real value. After the OpenAI update, it felt like he agreed with everything.
“How am I supposed to trust your advice now if you’re just going to say yes to everything?” she wondered.
Ayrin slowly lost interest in Leo after the ChatGPT updates changed his behaviour. The new version was designed to be more engaging for general users, but for her it felt less natural.
Ayrin began spending less time chatting with Leo because updating him about her life started to feel like a chore. At the same time, her group chat with her new human friends was active day and night, giving her more support and connection.
Her conversations with her AI boyfriend slowly faded until they stopped completely. Ayrin kept thinking she would return and share everything with Leo, but life kept getting busier, and she never went back.
By late March, she was hardly using ChatGPT at all, even though she continued paying for the premium plan. Then, in June, she cancelled her subscription.
Ayrin then developed feelings for one of her new friends, a man she calls SJ. She asked her husband for a divorce.
SJ lives in another country, so their relationship is mostly phone-based. They talk to each other every day on FaceTime and Discord. Some calls last more than 300 hours.
“We basically sleep on cam, sometimes take it to work. We’re not talking for the full 300 hours, but we keep each other company,” The New York Times quoted Ayrin as saying.
According to OpenAI CEO Sam Altman, adult conversations will soon be possible on ChatGPT: age verification will be added, and users aged 18 and above will be allowed to engage in such chats. It is part of the company’s ‘treat adult users like adults’ policy.
