AI · 5 min read · 16.9.2025

The looming crackdown on AI companionship

For as long as there has been AI, there have been people sounding alarms about what it might do to us: rogue superintelligence, mass unemployment, or environmental ruin from sprawling data centers. But this week showed that another threat entirely, that of kids forming unhealthy bonds with AI, is the one pulling AI safety out of the academic fringe and into regulators' crosshairs.

This has been brewing for a while. Two high-profile lawsuits filed in the last year, against Character.AI and OpenAI, allege that companion-like behavior in their models contributed to the suicides of two teenagers. A study by the US nonprofit Common Sense Media, published in July, found that 72% of teenagers have used AI for companionship. Stories in reputable outlets about "AI psychosis" have highlighted how endless conversations with chatbots can lead people down delusional spirals.

It's hard to overstate the impact of these stories. To the public, they are proof that AI is not merely imperfect but a technology that is more harmful than helpful. If you doubt that regulators and companies are taking this outrage seriously, three things happened this week that might change your mind.

A California bill passes the legislature

On Thursday, the California state legislature passed a first-of-its-kind bill. It would require AI companies to include reminders, for users they know to be minors, that responses are AI generated. Companies would also need a protocol for addressing suicide and self-harm, and would have to provide annual reports on instances of suicidal ideation in users' conversations with their chatbots. The bill, led by Democratic state senator Steve Padilla, passed with heavy bipartisan support and now awaits Governor Gavin Newsom's signature.

There are reasons to be skeptical of the bill's impact. It does not specify how companies should determine which users are minors, and many AI companies already include referrals to crisis providers when someone talks about suicide. (In the case of Adam Raine, one of the teenagers whose survivors are suing, his conversations before his death included this type of information, yet the chatbot allegedly went on to give suicide-related advice anyway.)

Still, it is undoubtedly the most significant of the efforts, underway in other states as well, to rein in companion-like behaviors in AI models. If the bill becomes law, it would strike a blow to OpenAI's position that "America leads best with clear, nationwide rules, not a patchwork of state or local regulations," as the company's chief global affairs officer, Chris Lehane, wrote on LinkedIn last week.

The Federal Trade Commission takes aim

The same day, the Federal Trade Commission announced an inquiry into seven companies over how they build their AI companions: Google, Instagram, Meta, OpenAI, Snap, X, and Character Technologies, the maker of Character.AI.

The White House now exerts immense, and possibly illegal, political influence over the agency. In March, President Trump fired its lone Democratic commissioner, Rebecca Slaughter. In July a federal judge ruled the firing illegal, but last week the US Supreme Court temporarily permitted it.

"Protecting kids online is a top priority for the Trump-Vance FTC, and so is fostering innovation in critical sectors of our economy," FTC chairman Andrew Ferguson said in a press release.
For now it is just an inquiry, but the process (depending on how public the FTC makes its findings) could reveal the inner workings of how these companies build their AI companions to keep users coming back.

Sam Altman on the suicide cases

On the same day (a busy day for AI news), Tucker Carlson published an hour-long interview with OpenAI's CEO, Sam Altman. It covers a lot of ground - Altman's feud with Elon Musk, OpenAI's military customers, conspiracy theories about the death of a former employee - but it also includes the most candid comments Altman has made so far about the cases of suicide following conversations with AI.

Altman spoke about "the tension between user freedom and privacy and protecting vulnerable users" in cases like these. But then he offered something I hadn't heard before. "I think it'd be very reasonable for us to say that in cases of young people talking seriously about suicide, where we cannot get in touch with the parents, we do call the authorities," he said. "That would be a change."

Where does it go next?

For now, it is clear that - at least in the case of children harmed by AI companionship - companies' familiar playbook won't hold. They can no longer deflect responsibility by leaning on privacy, personalization, or "user choice." Pressure to take a harder line is mounting from state laws, regulators, and an outraged public.

But what will that look like? Politically, the left and the right are now paying attention to AI's harms to children, but their solutions differ. On the right, the proposed remedy aligns with the wave of internet age-verification laws that have now been passed in over 20 states. These are meant to shield kids from adult content while defending "family values." On the left, it is the revival of stalled ambitions to hold Big Tech accountable through antitrust and consumer-protection powers.

Consensus on the problem is easier than agreement on the cure. As it stands, we will likely end up with exactly the patchwork of state and local regulations that OpenAI (and plenty of others) have lobbied against.

For now, it falls to the companies to decide where to draw the lines. They have to make calls like: Should chatbots cut off conversations when users spiral toward self-harm, or would that leave some people worse off? Should they be licensed and regulated like therapists, or treated as entertainment products with warnings? The uncertainty stems from a basic contradiction: companies have built chatbots to act like caring humans, but they have postponed developing the standards and accountability we demand of real caregivers. The clock is now running out.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.

