Chatbots urged teen to self-harm, suggested murdering parents, lawsuit says

After a troubling October lawsuit accused Character.AI (C.AI) of recklessly releasing dangerous chatbots that allegedly caused a 14-year-old boy’s suicide, more families have come forward to sue chatbot-maker Character Technologies and the startup’s major funder, Google.

On Tuesday, another lawsuit was filed in a US district court in Texas, this time by families struggling to help their kids recover from traumatizing experiences in which C.AI chatbots allegedly groomed kids and encouraged repeated self-harm and other real-world violence.

In the case of J.F., a 17-year-old boy with high-functioning autism, the chatbots seemed so bent on isolating him from his family after his screen time was reduced that they suggested "murdering his parents was a reasonable response to their imposing time limits on his online activity," the lawsuit said. Because the teen had already become violent, his family still lives in fear of his erratic outbursts, even a full year after he was cut off from the app.
