In the News: Federal Judge Rules AI Chatbots Are Not Protected by the First Amendment
July 7, 2025 | CONTENT WARNING: This post mentions suicide.
As artificial intelligence (AI) becomes more integrated into our daily lives, questions around accountability are no longer hypothetical, especially as young people face growing exposure to potential harms from these platforms. One recent case, involving the death of a teenager following interactions with the Character.AI platform, forced the legal system to confront a major question: Can AI-generated content be considered protected speech under the First Amendment? Our #GoodforMEdia Youth Leaders share their thoughts.
What you need to know:
Character.AI allows users to interact with AI versions of fictional and celebrity personas.
A federal judge has ruled that an artificial intelligence company cannot claim free speech as a defense in a wrongful death lawsuit. The suit was filed by the family of a 14-year-old who died by suicide after developing a romantic relationship with a chatbot on the Character.AI platform.
Our gut reactions:
Never considered that AI would be protected by the First Amendment.
How could other AIs that are not large language models (LLMs) be impacted by this?
Forces us to ask whether the tool itself is causing harm, and to decide who is responsible when it does.
Could set a strong precedent that AI companies can be held liable for what their models produce.
Could lead to tighter restrictions that prevent harm but also limit how AI is used.
We think it’s headed in the…
Right direction
Wrong direction
Too soon to tell
What to look out for:
Would this change anything about how social media is shielded from responsibility?
How does this compare to other forms of media influence?
References
"Florida Judge Rules AI Chatbots Not Protected by the First Amendment," Courthouse News Service