Reflection AI, a startup founded last year by two former Google DeepMind researchers, has raised $2 billion at an $8 billion valuation, roughly a 15x jump from the $545 million valuation it held just seven months ago. The company originally focused on autonomous coding agents, but has since repositioned itself as an open source alternative to closed frontier labs like OpenAI and Anthropic, and as a Western counterpart to Chinese AI companies like DeepSeek.
The startup was founded in March 2024 by Misha Laskin, who led reward modeling for DeepMind’s Gemini project, and Ioannis Antonoglou, co-creator of AlphaGo, the AI system that famously defeated the world champion of the board game Go in 2016. Their experience building these advanced systems is central to their pitch: that the right AI talent can build frontier models outside the established tech giants.
In addition to the new round, Reflection AI announced that it has hired a team of top talent from DeepMind and OpenAI to build an advanced AI training stack that it promises to make available to everyone. Perhaps most importantly, Reflection AI says it has “identified a scalable commercial model that aligns with our open intelligence strategy.”
Reflection AI currently has a team of about 60 people, mostly AI researchers and engineers working on infrastructure, data training, and algorithm development, according to Laskin, the company’s CEO. The company has secured a compute cluster and hopes to release a frontier language model trained on “tens of trillions of tokens” next year, he told TechCrunch.
In a post, Reflection AI said: “We have directly confirmed the effectiveness of our approach when applied to the important area of autonomous coding. With this milestone unlocked, we are now bringing these techniques to general agentic reasoning.”
MoE, or mixture of experts, refers to the architecture that powers today’s frontier LLMs. Until now, training MoE models at that scale has only been possible inside large, closed AI labs. DeepSeek had a breakthrough moment when it figured out how to train these models at scale in the open, followed by Qwen, Kimi, and other models out of China.
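For readers unfamiliar with the term: a mixture-of-experts layer routes each token to a small subset of specialized “expert” sub-networks instead of running every parameter on every token, which is what makes frontier-scale training and inference tractable. The sketch below is a minimal, illustrative top-k MoE layer in PyTorch; the dimensions, expert count, and routing scheme are placeholder assumptions for illustration, not Reflection AI’s actual design.

```python
# Minimal, illustrative mixture-of-experts (MoE) layer.
# Sizes and routing are hypothetical; this is NOT Reflection AI's architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    def __init__(self, d_model=512, d_hidden=2048, num_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, num_experts)  # gating network
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (batch, seq, d_model)
        gate_logits = self.router(x)                         # (B, S, num_experts)
        weights, idx = gate_logits.topk(self.top_k, dim=-1)  # pick top-k experts per token
        weights = F.softmax(weights, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., slot] == e                   # tokens routed to expert e in this slot
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out

# Usage: y = TopKMoE()(torch.randn(2, 16, 512))
```

A full MoE transformer would interleave layers like this with attention blocks and add load-balancing terms for the router; the appeal is that only a fraction of the model’s parameters are active for any given token.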
“DeepSeek and Qwen and all these models are a wake-up call to us, because if we don’t do anything, the de facto global standard of intelligence will be built by someone else,” Laskin said. “It’s not something America will build.”
Laskin added that this puts the United States and its allies at a disadvantage, since companies and sovereign states are often unwilling to adopt Chinese models because of potential legal implications.
“So you can either choose to live at a competitive disadvantage or you can choose to rise to the occasion,” Laskin said.
U.S. technologists have mostly praised Reflection AI’s new mission. “It’s great to see more open source AI models in the US,” White House AI and Crypto Czar David Sacks wrote on X. “A meaningful segment of the global market will prefer the cost, customization, and control that open source offers. We want the US to win here too.”
“This is really great news for open source AI in America,” Clem Delangue, co-founder and CEO of Hugging Face, an open collaborative platform for AI builders, told TechCrunch about the round. Delangue added, “The challenge going forward is to demonstrate how quickly open AI models and datasets can be shared (similar to what we’re seeing from the labs that currently dominate open source AI).”
Reflection AI’s definition of “open” appears to focus on access rather than development, similar to Meta’s strategy with Llama and Mistral’s approach. Laskin said Reflection AI plans to make the model weights, the core parameters that determine how the AI system behaves, publicly available, while keeping most of the datasets and the full training pipeline proprietary.
“Actually, it’s the model weights that are most impactful, because anyone can use them and start tinkering with them,” Laskin said. “Only a select few companies can actually use the infrastructure stack.”
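In concrete terms, releasing the weights means anyone can download the trained parameters and run or fine-tune the model on their own hardware, even though the data and training pipeline behind them stay private. As a rough illustration, this is what pulling an open-weights model from the Hugging Face Hub typically looks like; the model name below is a placeholder, since Reflection AI has not yet published any weights.

```python
# Illustrative only: loading and running an open-weights model from the Hugging Face Hub.
# "example-org/open-weights-model" is a placeholder, not a real Reflection AI release.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/open-weights-model"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Once the weights are local, inference, fine-tuning, and cost control all happen
# on infrastructure the user owns rather than behind a closed API.
prompt = "Open weights let anyone"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

That ability to run and modify a downloaded checkpoint locally is what underpins the cost and customization case Laskin makes below.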
This balance also supports Reflection AI’s business model. Laskin said researchers will be able to use the models for free, while revenue will come from large enterprises building products on top of Reflection AI’s models and from governments developing “sovereign AI” systems, meaning AI models developed and controlled by individual nations.
“When you get into the realm of large enterprises, you want an open model by default,” Laskin said. “You want something that you own. You can run it on your own infrastructure, you can control its cost, and you can customize it for different workloads. We’re paying exorbitant amounts of money for AI, so we want to be able to optimize it as much as possible. That’s really the market we’re serving.”
Laskin said Reflection AI has not yet released its first model, which will be primarily text-based, with multimodal capabilities planned for the future. The company will use the funding from this latest round to acquire the computing resources needed to train these models, the first of which it aims to release early next year.
Investors in Reflection AI’s latest round include Nvidia, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan, Eric Schmidt, Citi, Sequoia, CRV, and more.