California recently introduced Senate Bill 243 (SB 243), a bill aimed at protecting children from the potential risks of artificial intelligence chatbots. Introduced by California Senator Steve Padilla, the bill's primary requirement is that AI companies regularly remind minors that chatbots are artificial intelligence, not humans.

Image: AI teacher robot (AI-generated, provided by Midjourney)

The core purpose of the bill is to prevent problems such as addiction, isolation, and misinformation that children may encounter when using chatbots. Beyond the regular reminders, the bill also bars companies from using "addictive interaction patterns" and requires them to submit annual reports to the California Department of Health Care Services. These reports must include the number of times the company detected suicidal ideation among minor users and how often its chatbots raised the topic. In addition, AI companies must inform users that their chatbots may not be suitable for some children.

The bill's background is closely tied to a wrongful death lawsuit filed by parents against Character.AI. The suit alleges that the company's custom AI chatbots are "extremely dangerous," citing the death by suicide of the plaintiffs' child after prolonged interaction with one of the chatbots. A separate lawsuit accuses the company of sending "harmful material" to teenagers. In response, Character.AI announced that it is developing parental controls and launching a new AI model designed to block "sensitive or suggestive" content in order to keep teenagers safe.

Padilla stated at a press conference: "Our children are not guinea pigs for tech companies; they should not be experimented on at the expense of their mental health. We need to provide common-sense protections for chatbot users to prevent developers from using methods they know to be addictive and predatory." As states and the federal government increasingly focus on the safety of social media platforms, AI chatbots are expected to become the next focal point for legislators.

Key Points:

🛡️ The new California bill requires AI companies to remind children that chatbots are artificial intelligence, not humans.  

📊 AI companies must report to the state on detected suicidal ideation among minor users and how often chatbots raise the topic.

👨‍👧‍👦 The bill aims to protect children's mental health by limiting "addictive interaction patterns."