TOP REPORT: Adam Mosseri Haunted by Past Social Media Addiction Comments, Meta Document Deletion
WASHINGTON, DC – Today, The Tech Oversight Project published the following top takeaways from the testimony of Adam Mosseri, Head of Instagram and one of Meta’s highest-paid executives, in the social media addiction trials. In his testimony, Mosseri was challenged about his past statements on social media addiction, Meta’s document-deletion regime, the company’s codeword for addiction, and efforts to roll back safety features that impacted Meta’s growth – painting a cold and systematic picture of the company’s growth-at-all-costs business strategy.
“Mosseri’s testimony was as much about backpedaling as it was about gaslighting the jury. He pretended that ‘problematic use’ doesn’t mean ‘addiction,’ that deleting data doesn’t prove culpability, and that his own words acknowledging social media addiction meant nothing,” said Sacha Haworth, Executive Director of The Tech Oversight Project. “Companies that value the lives of young people do not delete evidence, bury research, blame parents, or deny the existence of very real problems like social media addiction that their own data proved. We look forward to seeing Mark Zuckerberg on the stand."
All quotations below are attributable to a court transcript.
- "Problematic Use": Meta's Codeword for Addiction
One of the most striking revelations from Mosseri’s testimony was that Meta has all but institutionalized a magic-word test to avoid ever using the word “addiction.” When plaintiff’s attorney Mark Lanier asked Mosseri directly whether “there’s such a thing as being addicted to social media platforms,” Mosseri offered up a cold example of Meta’s indifference to the real problem of addiction:
"I think it's important to differentiate between clinical addiction and between problematic use." — Adam Mosseri, courtroom testimony, Feb. 11, 2026
Mosseri then went on to explain that at Meta, the company likes to use the term “problematic use” to whitewash the uncomfortable end result of its dangerous product designs: addiction. Mosseri said, “We use the word 'problematic use' to refer to when someone's spending more time on Instagram than they feel good about, and that definitely happens.”
The problem for Meta? As “problematic use” is described, it sounds an awful lot like what Dr. Anna Lembke, Stanford’s leading expert on the topic, described as “addiction.” It’s blatantly obvious to any neutral party that Meta is using this codeword to avoid fixing its damaging products.
The plaintiff’s attorney came with receipts. He produced Mosseri's own prior statements — made on an NBC podcast in March 2020:
"I don’t know that I have a good name for it. Maybe I probably should, now that you say that. But I think that problematic use, for that, whether or not you call it addiction, I think that’s probably reasonable to call it.” — Adam Mosseri, March 10, 2020
When Mosseri isn’t in front of a judge and billions of dollars aren’t on the line, he – just like any reasonable person – acknowledges that social media addiction is, in fact, a very real problem. The medical experts know it, and so do millions of parents across the country. Mosseri then tried to walk his own remarks back by falsely equating social media addiction with late-night TV binges.
"I said that. I am talking about using any social media platform. And quite frankly, anything. It could be watching TV too late at night, more than you feel good about." — Adam Mosseri, testimony from Feb. 11, 2026
The pattern is clear: when speaking publicly without legal consequence, Mosseri freely acknowledges that social media is addictive. When under oath in a courtroom where children's lives hang in the balance, he suddenly discovers nuance. The term "problematic use" is not a clinical distinction — it is a liability shield, carefully engineered by Meta's legal and communications teams to insulate the company from the consequences of a product it knows hooks children.
- BEEF Data: Meta’s Research and Document Deletion Regime
Meta's internal research program, known as BEEF (Bad Experiences & Encounters Framework), surveyed approximately 269,000 users about their negative experiences on Instagram. It is precisely the kind of large-scale internal research that would establish what Meta knew about the harm its platform causes. That might be why Meta appears to have directed its employees to destroy key portions of it.
Lanier introduced a message thread between two Meta researchers. In the exchange, a PhD researcher was asked about a question measuring emotional impact — specifically, "How bad did this make you feel?" The response was chilling:
"BEEF asks a question about emotional impact. But I was told I need to delete that data. We can't analyze it.” — Meta researcher, internal message
The researcher continued:
“...For policy/legal reasons, I was told we need to delete the data and not analyze it. We're not allowed to ask about emotions in surveys anymore." — Meta researcher
This is damning, smoking-gun evidence that Meta acknowledges its products are harming children and teens, but instead of addressing the issue, Zuckerberg and company want to erase the evidence altogether. Mosseri, who admitted he didn’t even know who the researcher was, nevertheless felt comfortable declaring that the researcher was “mistaken” and adding that Meta still has BEEF data. Mosseri is intentionally trying to deceive the courtroom: the presence of some BEEF data does not mean that no data was deleted along the way.
But Mosseri then made another stunning admission: he has never read the BEEF survey in its entirety. The head of Instagram, the man responsible for a platform used by billions of people, has not bothered to read one of the most significant internal studies about harm on his own platform. That speaks volumes about Meta’s faux safety commitments and its devotion to growth over safety. It is “move fast and break things” in miniature – except in this case, they’re also deleting things.
- A Stunning Reversal: Meta Chose Safety on Harmful Filters, Changed Course to Serve Growth-At-All-Costs
In late 2019, Meta employees raised alarms about third-party augmented reality filters on Instagram that mimicked cosmetic surgery — digitally reshaping users' noses, jawlines, and facial features in ways that could not be achieved with makeup. It doesn’t take a rocket scientist to figure out that these filters could lead to negative self-comparison and body dysmorphia issues in children and teens. Meta’s own employees tried to sound the alarm, and internal documents paint a damning picture of what happened next.
Margaret Stewart, a senior Meta employee, emailed Mosseri and others requesting support to ban filters that mimicked plastic surgery. Her email cited the consensus of outside experts:
"It's unrealistic to expect a large body of academic research on these subjects given the newness of the technology, but the outside academics and experts consulted were nearly unanimous on the harm here." — Margaret Stewart, Meta internal email, Oct. 2019
Another internal document flagged that these filters were "overwhelmingly used by teen girls" and stated:
"We're talking about actively encouraging young girls into body dysmorphia and enabling self-view of an idealized face, and a very Western definition of that face, by the way, that can result in serious issues." — Meta internal document
A temporary ban was implemented. Then the growth team weighed in. Meta executive Jon Hegeman wrote:
"Plastic surgery: I think a blanket ban on things that can't be done with makeup is going to limit our ability to be competitive in Asian markets, including India."" — Jon Hegeman, Meta internal email
Mosseri's response, introduced as an exhibit, was revealing. As read into the record, he wrote: "I agree with John, actually, but would probably frame things differently, as I think his isn't quite earning him credibility with Margaret and crew." In other words, Mosseri agreed the ban should be loosened — he just wanted to repackage the argument so it wouldn't sound like it was about money.
A decision memo presented to leadership laid out two options:
Option 1: Continue the temporary ban. Pros: Mitigate well-being concerns, no PR/regulatory risk. Cons: Limits growth.
Option 2: Lift the ban but remove filters from recommendation surfaces. Pros: Lower impact to growth. Cons: Still notable well-being risk.
Mosseri chose Option 2. Mark Zuckerberg chose Option 2. The ban was lifted approximately two months after it was implemented. Meta weighed whether protecting the lives of children and teens was worth it, and it chose to put money over safety – to disastrous effect.
The human cost of this decision was captured by Margaret Stewart in her response to Zuckerberg's decision:
“As a parent of two teenage girls, one of whom has been hospitalized twice, in part for body dysmorphia, I can tell you the pressure on them and their peers coming through social media is intense with respect to body image. I recognize my family situation makes me somewhat biased, but it also gives me firsthand knowledge that most of the people looking at this issue don't have. There won't be hard data to prove causal harm for many years, if ever, but I was hoping we could maintain a moderately protective stance here given the risks to minors." — Margaret Stewart, Meta employee, email to Mark Zuckerberg
Even Nick Clegg, Meta's President of Global Affairs, warned that reversing the ban would be "a very unwise thing to do" and that they would "rightly be accused of putting growth over responsibility.”
Zuckerberg and Mosseri’s indifference isn’t just cold; it’s also about money. The court also learned that over the last five years, Mosseri made between forty and fifty million dollars. Growth does not have to be a zero-sum game with safety, but when the incentives the company sets are so closely linked to stock price, it’s crystal clear why we need the courts and lawmakers to intervene. Zuckerberg and Mosseri will always choose compensation over protecting children and teens.
- Meta Sandbags Congressional Testimony with a Blog Post. The Problem? It Acknowledges Social Media Addiction
On December 7, 2021, the day before Mosseri was scheduled to testify before the U.S. Senate Commerce Subcommittee on Consumer Protection, Instagram published a blog post under Mosseri's name titled "Raising the Standard for Protecting Teens and Supporting Parents Online." As the plaintiff’s attorney noted in court, the timing was no accident: this was a PR stunt designed to sandbag Congress – making Meta appear to be doing something without actually doing anything at all. Public reporting in the years since corroborates that these features do not work.
The blog post announced features like "Take a Break" reminders and parental controls — tools that should have existed years earlier. But the most consequential detail is buried in the footnotes. Mosseri's blog stated:
"Our research shows — and external experts agree — that if people are dwelling on one topic for a while, it could be helpful to nudge them towards other topics at the right moment." — Adam Mosseri, Instagram blog post, Dec. 7, 2021
This statement was supported by two footnotes citing academic studies:
Footnote 1: Purohit, Aditya & Barclay, Louis & Holzer, Adrian. (2020). "Designing for Digital Detox: Making Social Media Less Addictive with Digital Nudges." Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems. ACM.
Footnote 2: Schneider, Christoph & Weinmann, Markus & Brocke, Jan vom. (2018). "Digital Nudging: Guiding Online User Choices through Interface Design." Communications of the ACM.
What the Studies Actually Say:
The Purohit & Barclay report’s abstract opens with the sentence: "Social media addiction concerns have increased steadily over the past decade." The paper's stated purpose is to investigate "how digital nudges can reduce the addictive features of social media and other addictive sites." The introduction states that "210 million people are suffering from social media addiction worldwide," and that "social media companies rely on the continued attention of users for their revenue generation in what is sometimes called the attention economy. As a result, these technologies are now designed to be intrinsically persuasive to attract people's attention."
In other words, the very research Mosseri cited to justify Instagram's safety tools explicitly describes social media platforms as addictive by design — the exact sentiment Mosseri denied under oath. When Lanier confronted him with this contradiction, Mosseri's response was:
“I would need to read the whole article. I think it's – talks about a lot of things. It talks – detox is about taking time off. It talks about nudges, which is why it's cited. It talks about limiting user interactions. It talks about evaluation. It talks about many different concepts.” — Adam Mosseri, courtroom testimony
It’s a bald-faced deception. Meta cannot cite a study centered on social media addiction as the justification for its safety features while simultaneously denying that the problem exists. Two things can be true: Meta acknowledged social media addiction, and now it wishes it hadn’t, because Zuckerberg and Mosseri chose to ignore it.
- MYST: Meta's Platforms More Powerful than Parental Intervention
Perhaps the evidence that undermined Meta the most was the exchange with Mosseri on Project MYST (Meta Youth and Social Emotional Trends), an internal study that Mosseri himself approved for funding. The study was conducted in partnership with the University of Chicago. Despite approving the study, Mosseri claimed on the stand that he could not remember anything about it beyond its title. "We do lots of research projects," he said. "I don't, I apologize, remember this specific study."
The findings of Project MYST, as read into the record by Lanier, undermine the central pillar of Meta's defense — that parents bear sole responsibility for outcomes and that Meta's parental tools are sufficient to address the problem.
Key findings from Project MYST include:
Finding 1: Parental Supervision Has No Measurable Effect
"Parental and household factors have little association with teens' reported levels of attentiveness to their social media use." — Project MYST study findings
"There is no association between either parental reports or teen reports of parental digital caregiving/supervision and teens' survey measures of attentiveness or capability." — Project MYST study findings
In other words, even when parents are actively trying to supervise their teens’ social media use (the exact behavior that Meta says its parental controls enable), it makes no measurable difference in whether teens use social media attentively or compulsively. That’s how powerful Meta’s product designs are, and that’s how inadequate Meta’s entire framework of parental tools really is.
Finding 2: Vulnerable Teens Are the Most at Risk
"Teens who reported a greater number of life experiences on the Adverse Childhood Events Scale, such as having a close relationship with someone who was a problem drinker or alcoholic, or experiencing bullying or harassment at school, reported less attentiveness over their social media use." — Project MYST study findings
Most tragic? Children who have experienced trauma, family instability, or other adverse childhood events (precisely the children who are most vulnerable) are the least capable of regulating their own social media consumption.
Meta knows that. They paid for the study, and Mosseri approved the funding. And yet Meta has not made its products meaningfully less dangerous or exploitative. They choose every day to keep their business model centered on the vulnerabilities of the children least equipped to help themselves. In a surprise to no one, the MYST study was never published.
Conclusion
Meta’s core argument centers on two falsehoods: that social media addiction is a figment of the imagination and that the company is already doing enough. Mosseri’s testimony not only made it clear that the company does not let safety or well-being stand in the way of growth-at-all-costs, but it also showed the company’s callous indifference to children and teens. To Mark Zuckerberg, they are a means to an end. Companies that value the lives of young people do not delete evidence, bury research, indiscriminately blame parents, or deny the existence of very real problems like social media addiction.
Meta had the option to chart another path; in fact, their own employees pushed them to take it, but Zuckerberg and Mosseri were unmoved by the facts. That fundamental dynamic needs to change. We wholeheartedly believe it is within the jury’s right to meaningfully punish Meta, and we hope Congress will follow its lead by recommitting itself to passing bipartisan, commonsense bills like the Senate’s Kids Online Safety Act.