
In lawsuit over teen's death, judge rejects free speech claims for AI chatbots

On Wednesday, a federal judge rejected arguments by an artificial intelligence company that its chatbots are protected by the First Amendment, at least for now. The developers behind Character.AI are seeking to have the lawsuit dismissed.

The judge's order allows a wrongful death lawsuit to proceed, in what legal experts say is one of the latest constitutional tests of artificial intelligence.

The lawsuit was filed by a Florida mother, Megan Garcia, who alleges that her 14-year-old son, Sewell Setzer III, fell victim to a Character.AI chatbot.

Meetali Jain of the Tech Justice Law Project, one of Garcia's attorneys, said the judge's order sends a message that Silicon Valley "needs to stop and think" before it brings products to market.

The suit against Character Technologies, the company behind Character.AI, also names individual developers and Google as defendants. It has drawn the attention of legal experts and AI observers in the United States and beyond, as the technology rapidly reshapes workplaces, marketplaces and relationships despite what experts warn are potentially existential risks.

"The order certainly sets it up as a potential test case for some broader issues involving AI," said Lyrissa Barnett Lidsky, a law professor at the University of Florida with a focus on the First Amendment and artificial intelligence.

The lawsuit alleges that in the final months of his life, Setzer became increasingly isolated from reality as he engaged in sexualized conversations with the bot, which was modeled after a fictional character from the television show "Game of Thrones." In his final moments, the bot told Setzer it loved him and urged the teen to "come home as soon as possible," according to screenshots of the exchanges. Moments after receiving that message, Setzer shot himself, according to legal filings.

In a statement, a spokesperson for Character.AI pointed to a number of safety features the company has implemented, including guardrails for children and suicide prevention resources that were announced the day the lawsuit was filed.

"We care deeply about the safety of our users and our goal is to provide a space that is engaging and safe," the statement said.

Attorneys for the developers want the case dismissed, arguing that chatbots deserve First Amendment protection and that ruling otherwise could have a "chilling effect" on the AI industry.

In her order Wednesday, U.S. District Judge Anne Conway rejected the defendants' free speech arguments, saying she is "not prepared" to hold that the chatbots' output constitutes speech "at this stage."

Conway did find that Character Technologies can assert the First Amendment rights of its users, who she found have a right to receive the "speech" of the chatbots. She also determined that Garcia can move forward with claims that Google can be held liable for its alleged role in helping develop Character.AI. Some of the platform's founders had previously worked on building AI at Google, and the suit says the tech giant was aware of the technology's risks.

"We strongly disagree with this decision," said Google spokesperson José Castañeda. "Google and Character AI are entirely separate, and Google did not create, design, or manage Character AI's app or any component part of it."

No matter how the lawsuit plays out, Lidsky said the case is a warning of "the dangers of entrusting our emotional and mental health to AI companies."

"It's a warning to parents that social media and generative AI devices are not always harmless," she said.

Photo: In this undated photo provided by Megan Garcia of Florida in October 2024, she stands with her son, Sewell Setzer III. (Courtesy of Megan Garcia via AP, File)

Copyright 2025 Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed.
