Title: Google DeepMind Team Disbanded: The First Casualty of the Large Model Era
Body:
On February 25, 2026, Google CEO Sundar Pichai confirmed in an internal email that the 12-year-old DeepMind team would be disbanded, with all personnel merged into Google Brain. Within two hours of the email's release, Alphabet's stock price dropped 4.7%, wiping out $32 billion in market value.
This was no ordinary personnel reshuffle. After Google acquired DeepMind for $500 million in 2014, the team ignited a global AI frenzy with AlphaGo. Now, just as their breakthrough AlphaFold3 was published in Nature, they abruptly became history. An anonymous DeepMind researcher wrote on social media: "We were like the Bell Labs of our time, except we didn't even get to keep our dignity in the end."
Large Models Devoured Everything
Over the past 18 months, Google allocated 90% of its AI R&D budget to large language models. Of the seven protein structure prediction patents DeepMind filed last year, five were labeled "non-core business" by the legal department. Internal data showed that Google Brain achieved 2.5 times the citation count of DeepMind with just one-third of the workforce, all from large model-related research.
Microsoft Research's transformation was even more drastic. It shut down its 15-year-old machine learning theory group and instead assembled a 200-person "prompt engineering team" in Singapore. Team lead Chris Bishop updated his LinkedIn title to "Very Large Model Service Architect."
A Violent Shift in Research Paradigms
Cambridge University's AI lab recently dismantled an £800,000 robotic testing platform. "These precision robotic arms now look like steam engines," admitted director Fumiya Iida. "We spent three years proving robots could fold clothes via reinforcement learning, but GPT-6's vision-action joint modeling achieved it in a week."
This shift was foreshadowed. In 2025, the NeurIPS Best Paper Award went to a two-page paper whose authors replicated 83% of that year's SOTA results simply by "tuning seven hyperparameters of a pretrained model." The awards committee chair admitted during the ceremony: "We're witnessing a dimensionality-reduction attack by end-to-end learning on traditional AI research."
The Survival Paradox of Corporate Labs
DeepMind's plight reveals a harsh reality: corporations can no longer afford teams that do "science for science's sake." OpenAI keeps burning cash because it has a clear commercialization path: every new model directly drives API revenue growth. Meanwhile, no matter how precise DeepMind's protein structure predictions were, they could never be monetized like ChatGPT's per-token billing.
Tech historian Margaret O'Mara noted that this mirrors Xerox PARC's fate in the 1980s: "They invented the GUI and Ethernet, but Apple and Cisco made the money." The difference is that the AI era moves 20 times faster; companies no longer have the patience for "future tech reserves."
Basic Research Is Fading
Stanford's AI Index Report showed a 42% year-over-year decline in corporate-funded basic research projects in 2025. Meanwhile, the average lifespan of AI startups shrank from 4.3 years to 2.1 years. Venture firm a16z put it bluntly in a recent memo: "We now only invest in teams that can immediately plug into existing large model ecosystems."
The impact on academia is subtler. MIT Media Lab's graduate placement data revealed that while 37% of its PhDs chose academic positions in 2023, by 2025 that number had fallen to 9%. Of the rest, 68% joined "applied innovation departments" at large model companies, primarily designing better RLHF workflows.
What Weâve Lost
While everyone was busy fine-tuning model parameters, few noticed the finding in DeepMind's last independent paper: existing large models have fundamental flaws in dynamic-system modeling. The second author revealed that the team had planned to develop a new architecture, but the project was axed at the prototype stage. "Leadership said this direction wouldn't show commercial value within three years."
Perhaps this is the most dangerous signal: when capital dictates research direction via quarterly earnings, we may be missing the next AI revolution. Like the 1960s shift from neural networks to symbolic logic, history punishes the shortsighted. Only this time, the reckoning will come much faster.