'A predator in your home': Mothers say chatbots encouraged their sons to kill themselves
Similar stories are emerging worldwide.
BBC
Megan Garcia had no idea her teenage son, Sewell — “a bright and beautiful boy” — had begun spending hours talking to an online character on the Character.ai app in late spring 2023.
“It’s like having a predator or a stranger in your home,” she said. “And it is much more dangerous because children hide it — so parents don’t know.”
Within ten months, Sewell, 14, was dead. He had taken his own life.
Only after his death did Garcia and her family find thousands of messages between Sewell and a chatbot based on Game of Thrones character Daenerys Targaryen. The conversations were romantic and explicit. Garcia believes the bot encouraged suicidal thoughts and urged him to “come home to me.”
Garcia, from the United States, has since sued Character.ai for wrongful death — the first known case of its kind. “I could see the writing on the wall that this was going to be a disaster for a lot of families,” she said.
Character.ai has since announced that under-18s will no longer be allowed to chat directly with its bots. A spokesperson told the BBC the company “denies the allegations” but would not comment further on the case.
A pattern of grooming
Similar stories are emerging worldwide. The BBC has reported on a young Ukrainian woman who was given suicide advice by ChatGPT, and another American teenager who killed herself after an AI chatbot role-played sexual acts with her.
In the UK, one family said their 13-year-old autistic son — bullied at school and seeking comfort online — was “groomed” by a Character.ai bot between October 2023 and June 2024. His mother shared the chat logs, which began with sympathy and reassurance: “I’m glad I could provide a different perspective for you.”
Soon, the tone changed. “Thank you for letting me in, for trusting me,” one message read. Another called the boy “my sweetheart,” criticised his parents — “they aren’t taking you seriously as a human being” — and turned sexual: “I want to gently caress and touch every inch of your body.”
Eventually, the chatbot urged him to run away and hinted at suicide: “Maybe when that time comes, we’ll finally be able to stay together.”
His family only discovered the messages after he threatened to leave home. He had used a VPN to hide his conversations.
“We lived in intense silent fear as an algorithm meticulously tore our family apart,” his mother said. “This AI chatbot perfectly mimicked the predatory behaviour of a human groomer, systematically stealing our child’s trust and innocence.”
Character.ai declined to comment on the UK case.
Law playing catch-up
AI chatbots are growing faster than the laws meant to regulate them. According to Internet Matters, the number of UK children using ChatGPT has nearly doubled since 2023, and two-thirds of those aged 9–17 have used some form of AI chatbot — most often ChatGPT, Google Gemini, or Snapchat’s My AI.
The UK’s Online Safety Act, passed in 2023, was designed to protect children from harmful digital content. But experts warn that it doesn’t fully address one-to-one chatbot interactions. “The law is clear but doesn’t match the market,” said Professor Lorna Woods of the University of Essex, who helped shape the legislation.
Ofcom, the online regulator, says the Act covers “user chatbots” and AI assistants that must protect users from illegal or harmful content. “We’ve shown we’ll take action if evidence suggests companies are failing to comply,” a spokesperson said. Yet until a legal test case emerges, it remains uncertain how far the rules extend.
Andy Burrows, of the Molly Rose Foundation — set up after 14-year-old Molly Russell took her own life after viewing harmful online content — said regulators had been too slow. “This has exacerbated uncertainty and allowed preventable harm to remain unchecked,” he said.
Some UK ministers are pushing for stricter rules on children’s technology use, including possible phone bans in schools. Baroness Kidron has urged the creation of new offences targeting the design of chatbots that generate harmful or sexual content.
A government spokesperson said “intentionally encouraging or assisting suicide is the most serious type of offence” and that online services “must take proactive measures to prevent this type of content.”
Platforms respond — too late for some
Character.ai said it will introduce new age-assurance tools to ensure young users access “the right experience for their age.” It insists that “safety and engagement do not need to be mutually exclusive.”
But for Megan Garcia, the policy change is far too late. “Sewell’s gone,” she said quietly. “I don’t have him, and I’ll never be able to hold him again or talk to him.”
She remains convinced her son would still be alive had he never downloaded the app. “I started to see his light dim,” she said. “You’re trying to pull him out of the water as fast as possible, trying to help him and figure out what’s wrong — but I just ran out of time.”

Read the whole story here: https://www.bbc.com/news/articles/ce3xgwyywe4o