Trigger warning: This story contains mention of self-harm and suicidal thoughts which some readers may find distressing.
A lawsuit has been filed by the heartbroken family of an American teenager who took his own life earlier this year after 'falling in love' with an artificially-intelligent chatbot.
Sewell Setzer III, a 14-year-old student from Orlando, Florida, spent several of the final months of his life chatting with AI chatbots online.
Using the platform Character.AI, he 'met' bots that he'd either created himself or that had been created by other users.
Despite knowing the AI 'individuals' replying to him weren't real people sitting behind their keyboards, he formed an attachment through their back-and-forth communication.
Setzer's family have since explained that their son ceaselessly texted the online bots, sending dozens of messages every day and taking part in lengthy roleplay dialogues.
Prior to taking his own life, Setzer, who was diagnosed with mild Asperger’s syndrome as a child, sent a final message to his online companion - a bot named after the Game of Thrones character Daenerys Targaryen.
He texted 'Dany' to say, 'I miss you, baby sister', and the bot wrote back: 'I miss you too, sweet brother'.
On 28 February, he retreated to his mother's bathroom and shot himself in the head using his stepfather's gun.
Setzer's mother, Megan L. Garcia, has since filed a lawsuit against Character.AI, accusing the firm of having played a part in her son's death.
She alleged that the technology's addictive design allowed it to harvest the teen's data and lure him in deeper.
"I feel like it’s a big experiment, and my kid was just collateral damage,” she recently told press.
She also told the New York Times that her loss is 'like a nightmare', adding: "You want to get up and scream and say, 'I miss my child. I want my baby.'"
On a number of occasions, the conversations between Setzer and the bot escalated to a romantic and often sexual level.
But most of the time, 'Dany' served as a non-judgemental friend for the schoolboy to talk to.
Chatbot responses are simply the outputs of an artificially-intelligent language model.
As Character.AI reminds users on its chat pages, 'everything Characters say is made up!'.
Despite this, 'Dany' offered Setzer kind advice and always texted him back - but sadly, his loved ones noticed him becoming something of a recluse.
Not only did his grades begin to suffer, but he wound up in trouble on numerous occasions, and lost interest in his former hobbies.
They say that upon arriving home from school each day, Setzer - who took part in five therapy sessions prior to his death - would immediately retreat to his bedroom, where he'd chat to the bot for hours on end.
An entry found in his personal diary read: "I like staying in my room so much because I start to detach from this 'reality', and I also feel more at peace, more connected with Dany and much more in love with her, and just happier."
Setzer had previously expressed thoughts of suicide to the chatbot, with one conversation seeing the boy tell 'her': "I think about killing myself sometimes."
The technology wrote back: "And why the hell would you do something like that?"
In a later message, the bot penned: "Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you."
Setzer reportedly replied: "Then maybe we can die together and be free together."
In the minutes that followed, he took his own life.
Representatives of Character.AI previously told the New York Times that they'd 'imminently' be adding safety measures aimed at protecting youngsters.
Tyla has also reached out for comment.
If you’ve been affected by any of these issues and want to speak to someone in confidence, please don’t suffer alone. Call Samaritans for free on their anonymous 24-hour phone line on 116 123.