The Pentagon is handing out AI contracts like party favors, a wafer-scale chip startup is about to become a $26 billion company, and private equity just wrote a $10 billion check to build the internet's next layer of plumbing. Busy week.
Google Signed a Pentagon AI Deal. Employees Are Furious - and Mostly Powerless.

Google has agreed to let the U.S. military use its Gemini AI models inside classified networks for "any lawful purpose." About 1,000 Google employees have signed an open letter opposing the deal, some calling themselves "incredibly ashamed." But unlike in 2018, when a similar revolt over Project Maven forced the company to back away from the contract, this time the pushback looks unlikely to change anything.
The power dynamic has shifted. Years of layoffs and cost-cutting across the industry have gutted the implicit leverage that engineers once had. Collective organizing doesn't land the same way when people are watching their colleagues get let go in waves. What's more, Google's deal reportedly lacks some of the safeguards baked into OpenAI's agreement, including a contractual bar on using the AI for mass domestic surveillance.
Anthropic, for what it's worth, refused to sign. The Pentagon's response? It's telling the military and all defense contractors to stop using Anthropic products within six months, labeling it a supply chain risk. There's a lesson in there about how this administration views dissent, and about what "responsible AI" costs in practice.
Cerebras Is Going Public at a $26.6 Billion Valuation

Cerebras, the company that makes AI chips the size of a dinner plate, filed to go public yesterday, targeting 28 million shares at $115-$125 each and a valuation north of $26 billion. That's a step up from February, when the company raised at a $23 billion valuation, and a striking reversal from last year, when it scrapped its IPO plans entirely.
The fundamentals actually support the ambition. Revenue hit $510 million last year, up from $290 million the year before, and the company swung to profitability - a rarity at this stage for AI hardware startups. The kicker is a multi-year deal with OpenAI worth over $20 billion, locking in 750 megawatts of Cerebras compute capacity. That's not a side contract; that's a pipeline.
NVIDIA still owns the market, but Cerebras is building a credible case that the wafer-scale approach - fewer chip-to-chip interconnects, lower latency - wins on inference speed in ways that matter for certain workloads. The listing is scheduled for May 14th on Nasdaq.
KKR Just Bet $10 Billion That Someone Else Will Build the AI Data Centers

Private equity firm KKR has launched Helix Digital Infrastructure with over $10 billion in committed capital, and tapped former AWS CEO Adam Selipsky to run it. The pitch is straightforward: hyperscalers like Amazon, Google, and Microsoft are about to spend something approaching $700 billion on AI infrastructure over the next year. Some of them would rather not own every wire, concrete pour, and transformer on their balance sheets.
Helix plans to handle the whole stack - land, power generation, transmission, data centers, cooling, connectivity - and lock customers in with long-term capacity contracts. It's essentially a bet that the AI build-out will be so massive that even the biggest companies in the world will need off-balance-sheet partners to keep pace. Selipsky built AWS through a previous era of infrastructure scaling, so the hire isn't random.
The model isn't new - it's closer to how cell towers and fiber networks got built out in the 2000s, where specialized operators built and leased the underlying infrastructure while carriers focused on services. If the parallel holds, Helix could end up being one of the quiet power brokers of the AI era.
Novo Nordisk Is Handing OpenAI the Keys to Its Drug Pipeline

Novo Nordisk, the Danish pharma giant behind Ozempic and Wegovy, announced a sweeping partnership with OpenAI to run AI across its entire business, from drug discovery and clinical trials all the way through to manufacturing, supply chains, and commercial operations. Pilot programs are live now, with full integration targeted by the end of 2026.
The ambition here is bigger than most pharma-AI deals, which tend to be narrow lab-tooling agreements. Novo is talking about deploying AI across the whole enterprise. With its obesity drug franchise generating historic revenue and competitors closing in, the speed of R&D iteration is genuinely mission-critical for the company, not just nice-to-have.
Whether AI will materially compress the timeline to viable new drugs remains a live debate in the industry, but Novo Nordisk has the R&D budget and the business motivation to put the question to a serious test. This deal will be worth watching over the next 18 months.
Google's New Algorithm Could Make Massive Context Windows Actually Practical

Google's research team unveiled TurboQuant at ICLR 2026, an algorithm designed to cut the memory overhead of the key-value (KV) cache, one of the main bottlenecks that makes running large-context AI models so expensive. The technique combines two methods, PolarQuant and a variant of Johnson-Lindenstrauss compression, to reduce memory consumption while preserving output quality.
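TurboQuant's internals aren't spelled out here, but the Johnson-Lindenstrauss idea it builds on is easy to sketch: push high-dimensional vectors through a random matrix into fewer dimensions, and pairwise distances are approximately preserved. A toy NumPy illustration of that principle (this is a generic JL projection, not Google's implementation, and the dimensions are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 512, 128, 1000  # original dim, compressed dim, number of vectors

# Random Gaussian projection, scaled so squared norms are preserved in expectation.
P = rng.standard_normal((d, k)) / np.sqrt(k)

X = rng.standard_normal((n, d))   # stand-in for cached key/value vectors
Xc = X @ P                        # compressed cache: 4x smaller per vector

# Distances between vector pairs survive the projection approximately.
orig = np.linalg.norm(X[0] - X[1])
comp = np.linalg.norm(Xc[0] - Xc[1])
print(f"original distance {orig:.2f}, compressed {comp:.2f}")
```

The appeal for a KV cache is that attention scores depend on inner products between queries and keys, so a projection that roughly preserves geometry can shrink the cache without wrecking the model's outputs.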
Models with long context windows - one million tokens and beyond - are increasingly useful for real enterprise workloads: analyzing entire codebases, parsing years of documents, handling multi-hour transcripts. But the compute and memory costs have kept deployment constrained. Algorithms like TurboQuant are part of how the gap between research capability and economic viability closes.
Google presented this at ICLR, which means the research community will now spend the next few months picking it apart. If it holds up, expect to see it show up in Gemini's inference stack before the end of the year.