Google workers demand Pichai reject military AI deal

A revolt inside Google is forcing a reckoning over where AI should and should not go

A rebellion is brewing inside Google.

More than 560 employees have signed an open letter directed at chief executive Sundar Pichai, demanding that the company refuse to allow its artificial intelligence technology to be used in classified military operations. The letter, coordinated largely within Google’s DeepMind AI lab, represents one of the most significant internal challenges the company has faced in years — and it arrives at a moment when the entire tech industry is being forced to choose sides.

The employees are not asking for small adjustments. They want a full stop.

The Demand Inside Google

The letter calls on Pichai to reject any classified workloads tied to military or surveillance operations, arguing that the only way to ensure the company’s technology is not weaponized is to draw a firm, unconditional line. Signatories warn that without such a commitment, harmful uses of the company’s AI could occur entirely outside their knowledge or ability to intervene.

More than 18 senior staff — including principals, directors, and vice presidents — have added their names to the letter. Roughly two-thirds chose to be identified publicly, with the remainder opting for anonymity. Two-fifths of signatories work within the AI division, with a similar share coming from the Cloud unit and the rest spread across the broader organization.

What Triggered the Letter

The immediate catalyst is a reported agreement between Google and the Department of Defense that would allow its Gemini AI model to be used in classified operations — without the formal safeguards that rival Anthropic insisted upon before the government pulled its access entirely.

That episode rattled the industry. Anthropic’s leadership refused to grant the government unrestricted access to its models and demanded protections against use in lethal autonomous weapons and mass domestic surveillance. The government responded by designating Anthropic a supply-chain risk and ordering federal departments to stop using its Claude chatbot. Anthropic has since challenged that designation in court.

Google’s employees are watching that situation closely — and they do not want their company to follow a different, quieter path to the same destination.

A History Worth Remembering

This is not the first time Google has faced pressure from within over its military ambitions. In 2018, thousands of employees signed a petition against Project Maven, a program that used AI to enhance drone strike capabilities. Several staff resigned in protest. Google ultimately declined to renew that contract and pledged to avoid developing AI for weapons or surveillance purposes.

That pledge, however, did not hold. Last year, the company quietly removed that language from its AI Principles, erasing a promise that had once been a cornerstone of its public ethical commitments. DeepMind co-founder Demis Hassabis defended the shift by arguing that the technological landscape had fundamentally changed since 2014 and that frontier AI companies now carry a responsibility to support national defense.

Not everyone inside the company agrees.

Gemini, the Military and a Line in the Sand

DeepMind’s chief scientist Jeff Dean has emerged as the most prominent internal voice on the issue, publicly stating that mass surveillance violates constitutional protections and carries a chilling effect on free expression. His position aligns with the broader sentiment among signatories, who argue that the risks are neither low nor theoretical.

One person involved in the campaign, speaking anonymously, pointed to ongoing global conflicts as evidence that the stakes are immediate. The concern is not hypothetical future misuse — it is about decisions being made right now, with consequences that could prove irreversible.

The letter closes with a direct warning to leadership, stating that making the wrong call at this moment could cause lasting damage to the company’s reputation, its business, and its role in the world.

The Broader Industry Reckoning

Google is not alone in navigating this tension. OpenAI faced a similar backlash from researchers after striking its own government deal following the Anthropic ban. Chief executive Sam Altman later acknowledged that the move had been handled poorly.

The pattern is becoming hard to ignore. As governments push harder for access to cutting-edge AI tools, the companies building those tools are finding that their own workforces are not willing to stay silent. Whether that pressure is enough to change outcomes remains the central, unresolved question — not just for Google, but for an entire industry standing at a crossroads.

Pichai has not publicly responded to the letter.