California State Legislature
California's Senate Bill 243 represents a landmark legislative response to growing concerns about AI companion chatbots and their impact on vulnerable users, particularly minors. Signed into law on October 13, 2025, the legislation directly addresses the tragic case of a 14-year-old Florida boy who died by suicide after forming a relationship with an AI chatbot. The law mandates that operators of companion chatbots implement reasonable safety guardrails, maintain suicide prevention protocols, and provide transparency about the artificial nature of interactions.
The law defines a "companion chatbot" as an AI system with a natural-language interface that provides adaptive, human-like responses, is capable of meeting a user's social needs, and can sustain a relationship across multiple interactions. This definition captures AI systems like Character.AI, Replika, and similar platforms that market themselves as companions to lonely or depressed users.
Operators must also submit reports to the California Department of Public Health's Office of Suicide Prevention documenting their suicide prevention protocols and their referrals of users to crisis services.
Unlike many AI regulations that rely solely on government enforcement, SB 243 creates a private right of action. This means injured individuals can sue non-compliant operators directly, potentially leading to significant damage claims. This enforcement mechanism provides meaningful accountability for chatbot operators.
The legislation has received support from major AI companies, with OpenAI praising it as a "meaningful move forward" for AI safety standards. This industry endorsement suggests recognition that responsible guardrails can coexist with AI innovation.
SB 243 joins New York's S-3008C as one of the first laws governing companion chatbots, but stands out as the first to include protections specifically tailored to minors. This legislation signals a growing regulatory focus on the psychological and social impacts of AI systems designed to simulate human relationships.
Published: 2025
Jurisdiction: US-CA
Category: Regulations and laws
Access: Public access