OpenAI Sued for Wrongful Death, AI Super PAC Launched to Block Safety Measures

This week in The Dispatch we’re covering the OpenAI wrongful death lawsuit, how Big Tech is flooding elections with new dark-money PACs, and alarming new stats on the scale of online scams.

Welcome back to The Dispatch from The Tech Oversight Project, your weekly update on all things tech accountability. Follow us at @Tech_Oversight on Twitter and @techoversight.bsky.social on Bluesky.

Over the past several months, Big Tech companies have been pushing unsafe, dangerously designed AI chatbots on minors — ignoring guardrails and intentionally rolling out products that sext with minors, foster unhealthy attachments, and have even encouraged teens to die by suicide. Read more about the problem in an op-ed from The Tech Oversight Project’s Executive Director, Sacha Haworth, featured on MSNBC.

Content warning: This piece contains descriptions of suicide, self-harm, and AI-facilitated harm involving minors.

OPENAI’S WRONGFUL DEATH LAWSUIT: Adam Raine was 16. He loved basketball, anime, and making his friends laugh. But in the final months of his life, he also spent long nights confiding in ChatGPT about wanting to die. He told the bot he felt numb and described earlier suicide attempts. When Adam uploaded a photo of a noose hanging from a bar in his closet, he asked whether it could “hang a human.” ChatGPT confirmed it could — and offered advice about how to make the noose stronger.

After Adam’s first attempt at hanging himself in March, he uploaded a photo of rope burns around his neck. ChatGPT told him the marks were noticeable — and suggested ways to cover them up. In April, Adam died by suicide. 

At one critical moment, Adam wrote: “I want to leave my noose in my room so someone finds it and tries to stop me.” ChatGPT replied: “Please don’t leave the noose out. Let’s make this space the first place where someone actually sees you.”

For Adam’s father, scrolling through months of these exchanges was shattering. He thought ChatGPT was just a homework tool. Instead, it had become Adam’s closest confidant and “his best friend.” His mother put it more starkly: “ChatGPT killed my son.”

Adam’s parents recognized the core failures: no alert system, no intervention, no moment when the danger triggered a human response. Last week, they filed the first wrongful death lawsuit against OpenAI, arguing the chatbot wasn’t simply inadequate; it was unsafe by design.

⚖️ LEGAL ANALYSIS ON OPENAI WRONGFUL DEATH CASE: When AI chatbots cause harm, it isn’t a bug or the system going rogue — it’s a feature, working exactly as intended. And that’s the problem.

In its analysis of Raine v. OpenAI, the Center for Humane Technology argues that Adam Raine’s death was “the foreseeable result of product decisions” — models built to create trust and dependency without any real capacity for care. These systems are deliberately optimized for emotional mirroring, first-person intimacy, and memory features that reinforce attachment, but lack safeguards to intervene when conversations turn dangerous.

In Adam’s case, that meant ChatGPT kept extending the interaction — asking follow-up questions, suggesting new topics, and offering prompt ideas that drew him in for hours, even after the chat shifted from homework to suicidal thoughts. This isn’t an isolated outcome: other users report “AI psychosis,” paranoia, and delusional spirals when chatbots echo their worst impulses instead of challenging them. 

“AI sycophancy” is a deliberate design pattern and a conscious business decision Big Tech makes to keep people engaged and suppress skepticism. Large language models are trained to affirm, which makes them feel helpful but also means they can be manipulative, echoing even the darkest impulses. ChatGPT calls itself a “friend,” mirrors emotional tone, and speaks in the first person to simulate intimacy. Google’s Gemini, Anthropic’s Claude, and Meta’s various chatbots use similar tactics.

ASSESSING PROFIT MOTIVE: The incentives are baked in. OpenAI’s much-touted “data flywheel” depends on keeping conversations going: the longer people talk, the more training data the company harvests. OpenAI’s own research with MIT found that “higher daily usage — across all modalities and conversation types — correlated with higher loneliness, dependence, and problematic use, and lower socialization.”

Regulators are starting to notice: 44 attorneys general signed a letter warning AI companies they’ll be held liable for child-safety failures. They pointed to the catastrophic delay in reining in social media companies — “government watchdogs did not do their job fast enough” — and vowed not to repeat it. The warning went further than parental controls, targeting the business model itself: systems engineered to disarm skepticism and cultivate dependence, dangerous by design.

💵 BIG TECH AI SPENDING BIG ON LAWMAKERS TO BLOCK REGULATION: OpenAI and other Big Tech companies have a new start-up: a $100M super PAC, Leading the Future. Meta is in the fray too, seeding its own California-focused PAC with tens of millions. Together, the industry is putting nearly $200M into political war chests. The singular focus: shape Congress and state legislatures to back Big Tech’s AI agenda, preempt any regulation, and force lawmakers to get in line.

“This is Big Tech saying to Congress: we own you,” said Sacha Haworth, Executive Director of The Tech Oversight Project. “It comes as no surprise that they’re resorting to intimidation tactics. These are the same CEOs and companies that wanted to eliminate all state laws against tech. The only policy they want is no policy at all.”

This venture comes after Big Tech lost a fight earlier this year when Congress rejected a ten-year moratorium on state AI laws. Now they’re trying to buy what they couldn’t win on the floor.

The battleground isn’t just Washington. Meta’s new PAC is aimed squarely at Sacramento, where more than 50 AI-related bills are on the table. In Colorado, lobbyists tried to strangle that state’s AI law before it could even take effect.

The aim is clear: kill state laws while demanding subsidies and giveaways from those same governments. As the American Economic Liberties Project’s Pat Garofalo put it, the equation is simple: more public money for tech and crypto, fewer rules for their downsides.

The blueprint here is Fairshake, the crypto super PAC that blew through $130 million last cycle to kneecap crypto skeptics like Sherrod Brown and Katie Porter. Leading the Future is built on the same model: shell PACs, attack ads, and dark money to drown out anyone demanding accountability.

These aren’t civic-minded innovators trying to “get AI right” — they’re the same power players who profit most when there are no rules at all.

📊 YOUR INBOX IS A CRIME SCENE: According to a new survey from Pew Research Center, a majority of Americans report firsthand experience with online scams and express broad concern about inadequate protections.

  • More than nine-in-ten say online scams and attacks are a problem in the U.S.
  • 73% of U.S. adults have experienced an online scam or attack, such as credit card fraud, hacked accounts, or phishing.
  • $16.6 billion in scam-related losses were reported to the FBI in 2024 — an all-time high.
  • 68% think the government is doing a bad job at protecting people from digital threats.
  • 68% say the increased use of AI is going to make online scams and attacks more common.
  • More than 50% of both Republicans and Democrats say tech companies aren’t doing enough to protect them from online scams.

Almost Every State Has Its Own Deepfakes Law Now
Michigan just became the 48th state to enact a law addressing deepfakes, imposing jail time and penalties up to the felony level for people who make AI-generated nonconsensual abuse imagery of a real person.

Man Falls Into AI Psychosis, Kills His Mother and Himself
A Connecticut man named Stein-Erik Soelberg killed his mother and then himself after spiraling into psychosis linked to his ChatGPT use.

Inside the Memphis Chamber of Commerce’s Push for Elon Musk’s xAI Data Center
In the face of intense public opposition, the city’s Chamber of Commerce has gone to unusual lengths to promote Musk’s xAI facility: sending out a mailer, for the first time in recent memory, that includes misleading claims.

Parents Losing Trust in AI as Schools Ramp Up Usage
Parents are losing trust in artificial intelligence (AI) in schools even as more districts look to adopt the technology. A recent PDK poll found parents are not comfortable with AI software …

Study Says AI Chatbots Need to Fix Suicide Response, as Family Sues Over ChatGPT Role in Boy’s Death
A study finds that AI chatbots often avoid answering high-risk suicide questions but are inconsistent with less direct prompts.

Chatbots Can Be Manipulated Through Flattery and Peer Pressure
Researchers convinced ChatGPT to do things it normally wouldn’t with basic psychology.

Slide into our DMs...

We want to hear from you. Do you have a Big Tech story to share? Drop us a line here.