On February 25, 2026, Google CEO Sundar Pichai confirmed in an internal email that the DeepMind team, part of Google for 12 years, would be disbanded and all personnel merged into Google Brain. Within two hours of the email going out, Alphabet’s stock price dropped 4.7%, wiping out $32 billion in market value.

This was no ordinary personnel reshuffle. After Google acquired DeepMind for roughly $500 million in 2014, the team ignited a global AI frenzy with AlphaGo. Now, just as their breakthrough AlphaFold3 appeared in Nature, they abruptly became history. An anonymous DeepMind researcher wrote on social media: “We were like the Bell Labs of our time, except we didn’t even get to keep our dignity in the end.”

Large Models Devoured Everything
Over the past 18 months, Google allocated 90% of its AI R&D budget to large language models. Of the seven protein structure prediction patents DeepMind filed last year, five were labeled “non-core business” by the legal department. Internal data showed that Google Brain, with just one-third of DeepMind’s headcount, achieved 2.5 times its citation count, all of it from large-model research.

Microsoft Research’s transformation was even more drastic. It shut down its 15-year-old machine learning theory group and instead assembled a 200-person “prompt engineering team” in Singapore. The group’s former lead, Chris Bishop, updated his LinkedIn title to “Very Large Model Service Architect.”

A Violent Shift in Research Paradigms
Cambridge University’s AI lab recently dismantled an £800,000 robotic testing platform. “These precision robotic arms now look like steam engines,” admitted director Fumiya Iida. “We spent three years proving robots could fold clothes via reinforcement learning, but GPT-6’s vision-action joint modeling achieved it in a week.”

This shift was foreshadowed. In 2025, the NeurIPS Best Paper Award went to a two-page paper whose authors replicated 83% of that year’s SOTA results by “tuning seven hyperparameters of a pretrained model.” The awards committee chair admitted during the ceremony: “We’re witnessing end-to-end learning deliver a crushing, asymmetric blow to traditional AI research.”
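The kind of study that paper describes boils down to a plain grid search over a handful of knobs on a frozen pretrained model. A minimal sketch, assuming a hypothetical `evaluate` function and made-up hyperparameter names standing in for a real benchmark run:

```python
import itertools

def evaluate(temperature, top_p, repetition_penalty):
    """Hypothetical stand-in: a real study would score the pretrained
    model on a benchmark suite here. This toy surface has one peak,
    purely so the search has something to find."""
    return 1.0 - abs(temperature - 0.7) - abs(top_p - 0.9) - abs(repetition_penalty - 1.1)

# The handful of hyperparameters being swept (values are illustrative).
grid = {
    "temperature": [0.3, 0.7, 1.0],
    "top_p": [0.8, 0.9, 1.0],
    "repetition_penalty": [1.0, 1.1, 1.2],
}

best_score, best_cfg = float("-inf"), None
for values in itertools.product(*grid.values()):
    cfg = dict(zip(grid.keys(), values))
    score = evaluate(**cfg)
    if score > best_score:
        best_score, best_cfg = score, cfg

print(best_cfg)  # → {'temperature': 0.7, 'top_p': 0.9, 'repetition_penalty': 1.1}
```

The point of the award was precisely that this loop, requiring no new theory or architecture, matched most of a year’s published results.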

The Survival Paradox of Corporate Labs
DeepMind’s plight reveals a harsh reality: corporations can no longer afford teams that do “science for science’s sake.” OpenAI continues burning cash because it has a clear commercialization path: every new model directly drives API revenue growth. Meanwhile, however precise DeepMind’s protein structure predictions were, there was no way to monetize them the way ChatGPT bills per token.
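The asymmetry is easy to make concrete. Per-token billing maps every single request to revenue; a minimal sketch, with invented rates that are illustrative rather than any provider’s actual pricing:

```python
def api_cost(prompt_tokens, completion_tokens,
             price_in_per_1k=0.01, price_out_per_1k=0.03):
    """Cost of one request under illustrative per-1k-token rates:
    input and output tokens are billed at different prices."""
    return (prompt_tokens / 1000) * price_in_per_1k \
         + (completion_tokens / 1000) * price_out_per_1k

# One chat request: 1,200 prompt tokens in, 400 completion tokens out.
cost = api_cost(1200, 400)
print(f"${cost:.3f}")  # → $0.024
```

A protein structure, by contrast, is delivered once; there is no metered stream of tokens to bill against, which is the gap the paragraph above describes.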

Tech historian Margaret O’Mara noted this mirrors Xerox PARC’s fate in the 1980s: “They invented the GUI and Ethernet, but Apple and Cisco made the money.” The difference is that the AI era moves 20 times faster—companies no longer have the patience for “future tech reserves.”

Basic Research Is Fading
Stanford’s AI Index Report showed a 42% year-over-year decline in corporate-funded basic research projects in 2025. Meanwhile, the average lifespan of AI startups shrank from 4.3 years to 2.1 years. Venture firm a16z put it bluntly in a recent memo: “We now only invest in teams that can immediately plug into existing large model ecosystems.”

The impact on academia is subtler. MIT Media Lab’s graduate placement data revealed that while 37% of PhDs chose academic positions in 2023, by 2025, that number fell to 9%. Of the rest, 68% joined “applied innovation departments” at large model companies, primarily designing better RLHF workflows.

What We’ve Lost
While everyone fine-tunes model parameters, few noticed the finding in DeepMind’s last independent paper: existing large models have fundamental flaws in dynamic system modeling. The second author revealed they had planned to develop a new architecture, but the project was axed at the prototype stage. “Leadership said this direction wouldn’t show commercial value within three years.”

Perhaps this is the most dangerous signal: when capital dictates research direction through quarterly earnings, we may be missing the next AI revolution. Like the 1960s shift away from neural networks toward symbolic logic, history punishes the shortsighted. Only this time, the reckoning will come much faster.