Big Tech Takes Election Day Off, OpenAI Logs Into Wall Street, and ICE Logs Everything Else

Welcome back to The Dispatch from The Tech Oversight Project, your weekly update on all things tech accountability. Follow us on Twitter at @Tech_Oversight and @techoversight.bsky.social on Bluesky.

🗳️ ELECTION DAY IN AMERICA: Ballots are being cast and counted. Wear your “I Voted” sticker with pride — kind of like Big Tech platforms used to, before they decided “civic” wasn’t the kind of engagement they were looking for, and stopped pretending to care about anything but profit margins and political cover for their lawless monopolies.

Unfortunately, over the past year, the Big Tech information ecosystem that contextualizes our elections has become less stable and more toxic. Meta shut down its fact-checking partnerships in January. X has spent the last year suing states like New York and Minnesota over laws that require social media companies to disclose how they monitor foreign political interference and AI-generated election content. YouTube rolled back its moderation policies to favor reliance on Community Notes. Federal and academic programs that once coordinated rapid responses — including CISA’s foreign interference unit and the Election Integrity Partnership — have been shut down, or slowed by lawsuits and threats from lawmakers accusing them of censorship.

These were all deliberate choices, made by unaccountable CEOs and monopolies. The very predictable result? Platforms are now easier for bad actors like China, Iran, and Russia to exploit, and especially ill-equipped to handle coordinated surges. This can lead to what University of Maryland researcher Caroline Orr Bueno calls “moderation sabotage” — a deliberate effort to overload and delay trust-and-safety systems so falsehoods outlast the truth. She describes it as the “damage-acceleration lens” that makes other information-warfare tactics work:

  • Feedback Loop Coups — high-velocity engagement bursts that would normally be moderated before spreading; sabotage ensures they stay up long enough to trigger algorithmic boosts.
  • Reverse Algorithmic Capture — every takedown becomes fuel for outrage, used to demand policy rollbacks or government intervention.
  • Algorithmic Red-Line Testing — coordinated actors test phrasing to see what survives the overload; those survivors become templates for the next wave of posts.

“In short,” she writes, “sabotage is the pressure that breaks the defenses. The coup is what floods the chamber once the wall is open.”

The same methods foreign actors have been using for the past decade — coordinated bursts and false amplification — can now scale through AI-generated videos, cloned voices, and automated translation. In fact, this is already playing out around the world. It’s most effective when platform moderation is weakest — nights, weekends, holidays, Election Week. What once took troll farms months now takes a few operators and a script.

Tonight, as ballots are being counted, social feeds may start drawing conclusions long before official results come in. It doesn't have to be that way. Rather than protecting the democracy that underpins our everyday lives, Big Tech chooses to jeopardize our country's social fabric for profit. We need Congress to force the issue and protect our collective interests because unaccountable Big Tech platforms never will.

💸 NON-PROFIT STATUS REVOKED FOR LACK OF SINCERITY: Right before announcing a half-trillion-dollar power grab disguised as a restructuring, OpenAI published what it misleadingly called a “safety report” revealing that hundreds of thousands of ChatGPT users a week show signs of suicidal ideation, mania, or psychosis — and presented it like a product milestone.

The tone was weirdly self-congratulatory: OpenAI “worked with 170 clinicians,” it said, to “reduce undesired responses” by 52 percent. The numbers describe something closer to collapse than progress — hundreds of thousands of users in visible crisis every week — yet the company framed it as proof of improvement. It didn’t ask how a chatbot became a refuge and accelerator for despair; it just measured the despair and added it to the quarterly wins — a move that seems born of psychosis itself.

Sam Altman sold OpenAI as the antidote to corporate greed — as a safeguard against the monopolization of AI. Now he’s building the very monopoly he warned against, and exacerbating a societal mental health crisis in the process.

This week made clear what was always true: the OpenAI mission mattered only until it became inconvenient to profit and power. As always, do not listen to what Big Tech CEOs say. Watch what they do.

📚 SYLLABUS PENDING BIG TECH APPROVAL: Big Tech has found a new frontier for expansion — public universities. What started as a years-long campaign to buy off academia and sanction friendly research has become a full-blown “A.I. partnership” takeover — Amazon running student “bootcamps,” OpenAI “selling” campuswide ChatGPT licenses, and Google and Microsoft pushing “workforce readiness” programs.

At Cal State, administrators paid $16.9 million to give students access to ChatGPT Edu, made Amazon the face of an “A.I. summer camp,” and invited tech executives onto advisory boards to help decide what skills California students should learn — a generation trained on proprietary tools built to serve the companies that profit from them.

The deals follow a familiar playbook: no-bid contracts, opaque pricing, revolving-door influence. Faculty have already protested the OpenAI deal, noting that a system facing steep budget cuts shouldn’t spend millions on an untested product. Meanwhile, California’s community college network struck a nearly identical arrangement with Google — for free.

Education researchers call it “corporate colonization.” Big Tech gains tens of thousands of captive users and the moral credibility that comes with them. The logic is always the same: cash-strapped institutions take the deal, and the companies decide what “A.I. literacy” looks like.

The real A.I. revolution in education should be happening in classrooms, but it’s happening in contracts. Public universities are becoming distribution channels for private technology, turning teaching into onboarding and students into long-term customers. The institutions built to question power are now teaching it how to replicate itself.

The same corporations shaping the future of work are now shaping the people meant to enter it — with little proof their products improve learning, and every incentive to expand control over the public sphere.

👁️ WELCOME TO ICEBOOK: ICE is creating its own social media network — one that doesn’t connect people; it tracks them. And no, this is not a drill.

Through an AI surveillance platform called Zignal Labs, the agency has access to a system built to scan billions of posts a day for faces, text, patches, locations, and tone. Every protest photo, TikTok, or Facebook video becomes searchable intelligence. A living map of dissent.

ICE is expanding the system fast — staffing new monitoring hubs in Vermont and California, with contractors working in shifts to watch the feed and pull data not just from targets but from their families and friends. What it’s building isn’t an enforcement tool; it’s a domestic intelligence network hiding inside public life. And the legality of it is murky at best.

It’s not just that ICE will use the platform to sweep for illegal content — they already do that. It’s that they’ll use it to hunt for “negative sentiment” in real time, powered by machine learning.

The irony: Big Tech spent the last 18 months gutting public access to its platforms — 86-ing transparency tools and sunsetting valuable programs that kept the public informed about what’s actually happening on them. And in a surprise to not a single living soul, Big Tech companies like Meta, Google, X, and TikTok seem to have dropped their transparency objections — but only to help stamp out free speech.

“This is another example of Big Tech CEOs partnering with an increasingly authoritarian federal government as part of Trump’s ongoing attempts to clamp down on free speech,” said Sacha Haworth, Executive Director of The Tech Oversight Project. “This should terrify and anger every American.”

What began as immigration enforcement is mutating into something else entirely — a government learning to police through surveillance, not borders — all with the help of Big Tech.

How NDAs keep AI data center details hidden from Americans

Natalie Kainz, NBC

Major tech companies launching huge AI data center projects across the country are asking land sellers and public officials to sign NDAs limiting discussion of project details — in exchange for morsels of information and the potential of economic lifelines. That often leaves neighbors searching for answers about the future of their communities.

Want an AI data center in your city? Sure, but first sign this non-disclosure agreement.

Peyton Haug, Minnesota Reformer

City councilors, mayors and other elected officials across the state have been aware for months, even years, of various massive AI data center proposals, while sharing little to no information publicly until the projects are set in motion — often bound by secrecy agreements. Jane Kirtley, a professor of media law and ethics at the University of Minnesota, said the phenomenon of elected officials signing secrecy agreements isn’t entirely surprising: “Many of them are willing to sign their lives away in order to lure the businesses to their area.”
OpenAI’s new agreement with Microsoft gives the AI startup more freedom on its quest to raise billions

In the new agreement, Microsoft gets a 27% stake in OpenAI’s for-profit business, the OpenAI Group PBC, worth around $135 billion.

Some Louisiana residents bristle at rising energy costs and unwelcome construction for massive Meta AI data center

Louisiana approved Meta’s $10 billion project in August, saying it would bring “hope” for economic growth, but some experts say the center’s power demands will raise customers’ power bills statewide.

Judge sides with online publishers in Google ad tech antitrust case

A New York federal judge ruled in favor of news publishers who alleged Google monopolized the digital advertising market.

Google pulls AI model after senator says it fabricated assault allegation

Senator Marsha Blackburn accused Google of anti-conservative bias.

Character.AI to block romantic AI chats for minors a year after teen’s suicide

Character.AI said it will phase out open-ended chats for minors, including romantic dialogues with AI chatbots, by Nov. 25.

Amazon Inks $38 Billion Deal With OpenAI for Nvidia Chips

Amazon.com Inc.’s cloud unit has signed a $38 billion deal to supply a slice of OpenAI’s bottomless demand for computing power. Amazon shares jumped.

Tech giants announce $7B data center, Michigan’s first hyperscale campus

Michigan is poised to receive its first hyperscale data center after three tech giants revealed themselves Thursday as the developers behind a proposed 1-gigawatt-plus AI project on farmland in Saline Township.