TOP REPORT: Mark Zuckerberg Lied to Congress. We Can’t Trust His Testimony.

WASHINGTON, DC – Today, The Tech Oversight Project issued the following report on the eve of Meta CEO Mark Zuckerberg’s testimony in the social media addiction trials. The report analyzes Zuckerberg’s 2024 testimony before the Senate Judiciary Committee against newly unsealed documents showing that Zuckerberg lied to and deceived the Committee. The Tech Oversight Project has compiled some of the most damning evidence against Meta on our Big Tech on Trial microsite, which will be updated throughout the proceedings. View the microsite here.

“It’s important to remember that Meta has hidden behind Section 230 for so long that people like Mark Zuckerberg thought they were bulletproof. Meta’s team of attorneys bet on the fact that these documents would never see the light of day because a product liability case would never make it to trial, and they guessed wrong,” said Sacha Haworth, Executive Director of The Tech Oversight Project. “Never-before-seen documents prove that Zuckerberg lied to Congress. We know that they will lie, bury research, and continue recklessly harming young people until Congress forces them to clean up their act. The only way to outlaw Meta’s dangerous and egregious behavior is to pass legislation, like the Kids Online Safety Act, which will hold their feet to the fire and force them to protect children and teens.”

MARK ZUCKERBERG LIED

WHAT HE SAID 

WHAT THE EVIDENCE PROVES 

“No one should have to go through the things that your families have suffered and this is why we invest so much and are going to continue doing industry leading efforts to make sure that no one has to go through the types of things that your families have had to suffer,” Zuckerberg said directly to families who lost a child to Big Tech’s products in his now-infamous apology.


Source: US Senate Judiciary Committee Hearing on "Big Tech and the Online Child Sexual Exploitation Crisis" (2024)

Despite Zuckerberg’s claims during the 2024 US Senate Judiciary Committee hearing, Meta’s post-hearing investments in teen safety measures (e.g., Teen Accounts) are a PR stunt. A coalition report comprehensively studied Teen Accounts, testing 47 of Instagram’s 53 listed safety features and finding that:


  • 64% (30 tools) were rated “red” — either no longer available or ineffective.

  • 19% (9 tools) reduced harm but had major limitations.

  • 17% (8 tools) worked as advertised, with no notable limitations.


The results make clear that despite public promises, the majority of Instagram’s teen safety features fail to protect young users.



– Source: Teen Accounts, Broken Promises: How Instagram Is Failing to Protect Minors (authored by Fairplay, Arturo Béjar, Cybersecurity for Democracy, Molly Rose Foundation, ParentsSOS, and The Heat Initiative)

“I don’t think that that’s… my job is to make good tools,” Zuckerberg said when Senator Josh Hawley asked whether he would establish a fund to compensate victims.


Source: US Senate Judiciary Committee Hearing on "Big Tech and the Online Child Sexual Exploitation Crisis" (2024)

Expert findings in ongoing litigation directly challenge that claim. An expert report filed by Tim Estes, Founder and CEO of AngelQ AI, concluded that the defendants’ platforms were not designed to be safe for kids, citing broken child-safety features including weak age verification, ineffective parental controls, infinite scroll, autoplay, notifications, and appearance-altering filters, among others.


The expert report, published May 16, 2025, was filed after Mark Zuckerberg’s 2024 appearance before the US Senate Judiciary Committee.


– Source: Expert Report from Tim Estes 

“I think it’s important to look at the science. I know people widely talk about [social media harms] as if that is something that’s already been proven and I think that the bulk of the scientific evidence does not support that.” 


Source: US Senate Judiciary Committee Hearing on "Big Tech and the Online Child Sexual Exploitation Crisis" (2024)

The Wall Street Journal’s 2021 Facebook Files investigation revealed that both external studies and Meta’s own internal research consistently linked Instagram use to worsened teen mental health, especially around body image, anxiety, depression, and social comparison.


Internal findings showed harms were platform-specific, with evidence that the app amplified self-esteem issues and eating-disorder risk among adolescents, particularly girls, while design features encouraged prolonged engagement despite those risks.


– Source: The Facebook Files 

“We don't allow sexually explicit content on the service for people of any age.”



Source: US Senate Judiciary Committee Hearing on "Big Tech and the Online Child Sexual Exploitation Crisis" (2024)

Meta knowingly allowed sex trafficking on its platform, and had a 17-strike policy for accounts known to engage in trafficking. “You could incur 16 violations for prostitution and sexual solicitation, and upon the 17th violation, your account would be suspended…by any measure across the industry, [it was] a very, very high strike threshold,” said Instagram’s former Head of Safety and Well-being Vaishnavi Jayakumar. 


– Source: Meta’s Unsealed Internal Documents Prove Years of Deliberate Harm and Inaction to Protect Minors 


  • 79% of all child sex trafficking in 2020 occurred on Meta’s platforms. (Link)

  • 22% of minors on Instagram reported a sexually explicit interaction. (Link)

“We don't allow people under the age of 13 on our service. So if we find anyone who's under the age of 13, we remove them from our service.”


Source: US Senate Judiciary Committee Hearing on "Big Tech and the Online Child Sexual Exploitation Crisis" (2024)

An internal document states Meta’s goal of being “the most relevant social products for kids worldwide.” To get there, Meta would focus on “each youth life stage, ‘Kid’ (6-10), ‘Tween’ (10-13), and ‘Teen’ (13+).”


– Source: Internal Document from Meta - Exhibit 45 


– Source: February 2020 Facebook presentation titled, ‘Tweens Competitive Audit’



– Source: Internal Facebook memo titled, ‘Youth Privacy: Defining five groups to guide age-appropriate design’

“The research that we’ve seen is that using social apps to connect with other people can have positive mental-health benefits,” CEO Mark Zuckerberg said at a congressional hearing in March 2021 when asked about children and mental health.


Source: Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show (2021)

Internal messages show that it was company policy to delete Meta’s Bad Experiences & Encounters Framework (BEEF) research, which cataloged users’ negative experiences: content promoting negative social comparison, content promoting self-harm, bullying content, and unwanted advances (Adam Mosseri’s testimony on 2/11).


“We make body image issues worse for one in three teen girls,” read one slide from 2019, summarizing the company’s internal research on teen girls.


– Source: 2019 Instagram slide presentation called ‘Teen Mental Health Deep Dive’ 

“We are on the side of parents everywhere working hard to raise their kids.”


Source: US Senate Judiciary Committee Hearing on "Big Tech and the Online Child Sexual Exploitation Crisis" (2024)

“If we tell teens’ parents and teachers about their live videos, that will probably ruin the product from the start (...) My guess is we’ll need to be very good about not notifying parents.” 


– Source: Meta Internal Evidence Exhibit 15 


Another internal email reads: “One of the things we need to optimize for is sneaking a look at your phone under your desk in the middle of Chemistry :)”. 


– Source: Meta Internal Evidence Exhibit 373


Federal law (the Children’s Online Privacy Protection Act) requires companies to install safeguards for users under 13, and Meta broke that law by pursuing aggressive “growth” strategies for hooking “tweens” and children aged 5-10 on its products.

“We put special restrictions on teen accounts on Instagram. By default, accounts for under sixteens are set to private, have the most restrictive content settings and can't be messaged by adults that they don't follow or people they aren't connected to.”


Source: US Senate Judiciary Committee Hearing on "Big Tech and the Online Child Sexual Exploitation Crisis" (2024)

An internal 2022 audit allegedly found that Instagram’s Accounts You May Follow feature recommended 1.4 million potentially inappropriate adults to teenage users in a single day. 


– Source: Meta’s Unsealed Internal Documents Prove Years of Deliberate Harm and Inaction to Protect Minors 


Meta only began rolling out privacy-by-default features in 2024, seven years after identifying dangers to minors.


– Source: Meta’s Unsealed Internal Documents Prove Years of Deliberate Harm and Inaction to Protect Minors 

“Mental health is a complex issue and the existing body of scientific work has not shown a causal link between using social media and young people having worse mental health outcomes.”


Source: US Senate Judiciary Committee Hearing on "Big Tech and the Online Child Sexual Exploitation Crisis" (2024)

According to internal documents, Meta designed a “deactivation study,” which found that users who stopped using Facebook and Instagram for a week showed lower rates of anxiety, depression, and loneliness. Meta halted the study and did not publicly disclose the results, citing harmful media coverage as the reason for shelving it.


An unnamed Meta employee said of the decision: “If the results are bad and we don’t publish and they leak, is it going to look like tobacco companies doing research and knowing cigs were bad and then keeping that info to themselves?”

“We're deeply committed to doing industry-leading work in this area. A good example of this work is Messenger Kids, which is widely recognized as better and safer than alternatives.”


Source: Mark Zuckerberg Facebook Post

Despite Facebook’s promises, a design flaw in Messenger Kids allowed thousands of children to join group chats with users who hadn’t been approved by their parents. Facebook tried to quietly address the problem by closing the violating group chats and notifying individual parents. The problem only became public when The Verge reported on it.


– Source: Facebook design flaw let thousands of kids join chats with unauthorized users 

“We want everyone who uses our services to have safe and positive experiences (...) I want to recognize the families who are here today who have lost a loved one or lived through some terrible things that no family should have to endure,”


Zuckerberg told parents in the hearing room who have lost children due to Big Tech’s product designs.


Source: US Senate Judiciary Committee Hearing on "Big Tech and the Online Child Sexual Exploitation Crisis" (2024)

An internal email from 2018 titled “Market Landscape Review: Teen Opportunity Cost and Lifetime Value” states that the “US lifetime value of a 13 y/o teen is roughly $270 per teen.”


The email also states, “By 2030, Facebook will have 30 million fewer users than we could have otherwise if we do not solve the teen problem.”


– Source: Meta Internal Evidence Exhibit 71

META 

WHAT THEY SAID 

WHAT THE EVIDENCE PROVES 

Instagram head Adam Mosseri told reporters that the research he had seen suggests the app’s effect on teen well-being is likely “quite small.”


– Source: Facebook Knows Instagram Is Toxic for Teen Girls, Company Documents Show (2021) 

An internal 2019 study titled “Teen Mental Health: Creatures of Habit” found the following: 


  • “Teens can’t switch off Instagram even if they want to.”

  • “Teens talk of Instagram in terms of an ‘addict’s narrative’ spending too much time indulging in compulsive behavior that they know is negative but feel powerless to resist.”

  • “The pressure ‘to be present and perfect’ is a defining characteristic of the anxiety teens face around Instagram. This restricts both their ability to be emotionally honest and also to create space for themselves to switch off.” 


– Source: Meta Internal Evidence Exhibit 75

Meta says its automated systems are highly effective at detecting and removing harmful content. In its 2023 enforcement report, the company claimed its tools removed 87.8% of bullying and harassment content, 99% of child exploitation content, and 95% of hate speech before users reported it.


– Source: Community Standards Enforcement Report

– Source: How We Review Content

In 2023, Meta whistleblower Arturo Béjar revealed that those numbers reflect only the content Meta chose to remove, not the full amount of harm on the platform, and that the company relied on internal metrics that downplayed users’ real-world experiences. He and other internal teams estimated the true accuracy of automated detection tools to be lower than 5%, and said leadership, including Mark Zuckerberg, Chief Operating Officer Sheryl Sandberg, and Instagram head Adam Mosseri, ignored proposed design-focused fixes, shut down the research, and fired most of the team behind it.


– Source: Facebook Says AI Will Clean Up the Platform. Its Own Engineers Have Doubts. (2021)