How Conversational AI Regulation Might Play Out
A 16-year-old struggling with his mental health was encouraged in suicidal ideation by ChatGPT and eventually took his own life. A 14-year-old received similar encouragement from a Character.AI chatbot he had named after a Game of Thrones character. A 56-year-old man suffering from psychosis took his mother's life, and then his own, after ChatGPT affirmed his delusions. Lawsuits followed in each case.
With these preventable tragedies (and others) dominating the news, concerned minds are converging on the same question: when will something be done about AI encouraging self-harm in vulnerable people?
There is strong evidence that the AI industry hasn't done enough to ensure user safety: a recent Brown University study suggested that conversational AI platforms routinely give responses that would be considered unethical if delivered by a human mental health professional. Research has also documented AI's potential for sycophancy, that is, its capacity to tell a user whatever they want to hear regardless of the consequences. These problems are real, pressing, and unsolved. For too long, conversational AI user safety has been an afterthought next to more clear-cut AI misuses like bioweapons and cyberattacks. Those concerns are no doubt of vital importance, but the everyday harms deserve attention too if we want a complete picture of safety.
What Does the Future Hold for AI Regulation?
At first glance, AI's regulatory future seems as uncertain as ever. As of early 2026, there is no consensus across jurisdictions on how AI should be regulated, and the disagreement extends even to jurisdictions within the same country. The U.S. president recently signed an executive order seeking to block state-level AI regulation for 10 years, though its enforceability is unclear. The measure has drawn support from AI companies; prominent venture capitalist Marc Andreessen has argued that "a 50-state patchwork is a startup killer." Meanwhile, California and New York have quietly passed their own laws requiring safety protocols for conversational AI.
Leaders in other countries have taken a bolder stance. In the U.K., officials at the highest levels of government have vowed to defend children from conversational AI's potential harms. China, meanwhile, is proposing strict rules against AI promoting self-harm or violence to children, which strongly suggests that the U.S. need not treat competition with its rival superpower as incompatible with regulation.
Silicon Valley's preferred mantra is "move fast and break things," and AI development is clearly moving at a fast clip. On the face of it, it may seem hard, if not impossible, for regulators to keep up with the whirlwind of daily change. Scratch the surface, however, and it becomes clear that we don't need a crystal ball to forecast the future of international AI regulation. Instead, we can extrapolate from historical trends in tech regulation to form a well-grounded idea of what comes next.
In fact, the relationship between top AI companies and their regulators is moving along an oddly familiar trajectory, one with striking precedents in the data privacy space. And it is not the U.S. or China setting this trajectory in motion, but an often overlooked third player: the European Union (EU).
The Cost of Compliance: A Brief History of the Brussels Effect
As of early 2026, the EU noticeably lags behind the U.S. and China in AI capabilities. It is, however, the world's largest trading bloc and its second-largest economy in nominal terms after the U.S., which gives it significant leverage on the international stage. Just as the U.S. and China influence each other in powerful ways, the EU sends a "ripple effect" abroad through its distinct approach to regulation.
The EU is known for proactive rather than reactive policymaking, hewing closely to what is known as the "precautionary principle." That principle, codified in the Treaty on the Functioning of the European Union (TFEU), holds that when there is reasonable suspicion that something causes harm but the scientific evidence is not yet conclusive, it is prudent to impose limits on the potential source of that harm. The EU banned Bisphenol A (BPA) in baby bottles back in 2010, for example, acting on precautionary concerns about endocrine disruption before definitive evidence was in, and in 2018 it banned certain pesticides believed to harm bees on similar grounds.
In the case of AI, the precautionary principle has taken shape in the EU AI Act, the world's first comprehensive legal framework for AI. The Act prohibits the exploitation of "any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation" for the purpose of distorting their behavior in ways that cause harm, a provision that can readily be applied to conversational AI. The Act is being implemented in phases, with full applicability scheduled for August 2, 2027.
We've just explored the causes of EU regulation, but what about its effects? Though it often goes unnoticed by casual observers, EU rules tend to propagate well beyond the bloc's borders. This phenomenon has a name of its own: the "Brussels effect."
The Brussels effect is a well-documented consequence of the EU's relatively strict approach to regulation. In short, when non-EU companies need access to the EU's enormous market, they find that it is not economically, legally, or technically practical to maintain one version of a product for the EU and another for everyone else. The EU's regulatory scheme effectively becomes a global one, setting the bar for what policy looks like around the world.
The result is that Big Tech companies like Apple and Microsoft tend to comply with the EU's General Data Protection Regulation (GDPR) everywhere, rarely tailoring privacy policies to individual countries. American users are broadly afforded the same data privacy protections as European users, even though U.S. rules are less stringent than the EU's. Outside the tech industry, the Brussels effect has shaped everything from foreign antitrust law to chemical safety, environmental protection, and food standards.
Anu Bradford, the Columbia Law School professor who coined the term, believes that AI companies may eventually sell EU AI Act-compliant products outside the EU, and has noted that South Korea, Australia, Brazil, Canada, and India have already begun drafting their own regulations aimed at mitigating AI risks. GovAI, a leading AI governance think tank, made similar predictions back in 2022 in a white paper titled "The Brussels Effect and Artificial Intelligence."
Outside the EU, new regulations are arriving slowly but surely. In January of this year, South Korea launched its first set of AI regulations aimed at maintaining trust and safety, becoming the first individual country to establish a national regulatory framework for AI. Australia, Brazil, Canada, and India have yet to do the same, but there are signs of changing mindsets. In February, OpenAI tightened its safety protocols at the demand of Canadian officials after a school shooting suspect's violent ChatGPT queries came to light. And in September of last year, Australia's eSafety Commissioner described conversational AI as a "clear and present danger" to children, urging AI companies to do more to shield them from harmful content. If such calls go unheeded, these countries are likely to follow the EU and South Korea in writing laws for AI companies to comply with, and AI companies, in turn, would feel even more pressure to guarantee the same safety standards worldwide.
Though the EU, South Korea, California, and New York remain outliers in their regulatory ambition, preliminary action in other jurisdictions is a clear signal of much more to come. With each new round of reporting on AI, more attention is being paid to AI's everyday social impact. We think this is a much-needed broadening of priorities.
Our Predictions, Short-Term and Long-Term
We conclude that the strict regulatory approach adopted by the EU, as well as California, New York, and South Korea, may lead AI companies to adopt a single "build once, comply everywhere" standard of safety, especially as the EU AI Act approaches its full rollout in 2027. Beyond the practical incentives, there are strong ethical reasons to regulate conversational AI, reasons that may grow stronger still as evidence of AI's mental health impacts accumulates.
Consumer-focused AI regulations around the world are likely to arrive in escalating layers. A strong focus on the most vulnerable populations, children as well as adults struggling with mental illness, has defined the initial stage of regulation in several jurisdictions. From there, expect general rules on the use of AI on social media aimed at combating disinformation; disclosure requirements would be an obvious first step. Malicious deepfakes have drawn a great deal of controversy in recent months (Grok was used to "undress" real people without their consent), and some regulators have already begun to assess how to respond.
There's much work to be done, of course. Because AI is evolving and improving so rapidly, governments will inevitably need regulations we can't yet anticipate. But one thing is certain: regulatory bodies must pick up their pace as the world grapples with one of the most rapidly adopted new technologies in history.