Reflection AI, a company established just last year by two ex-Google DeepMind scientists, has secured $2 billion in funding, bringing its valuation to $8 billion—a staggering 15-fold increase from its $545 million valuation only seven months prior. Initially, the startup concentrated on autonomous coding agents, but it now aims to serve as both an open-source counterpart to closed AI labs like OpenAI and Anthropic, and as a Western rival to Chinese AI companies such as DeepSeek.
Reflection AI was founded in March 2024 by Misha Laskin, who previously led reward modeling for DeepMind's Gemini project, and Ioannis Antonoglou, co-creator of AlphaGo, the AI that famously defeated the world Go champion in 2016. Their core message is that top-tier AI talent can build cutting-edge models outside the realm of major tech corporations.
With its latest funding, Reflection AI revealed it has assembled a team of leading experts from DeepMind and OpenAI and developed a sophisticated AI training platform that it pledges will be accessible to everyone. Most notably, the company claims to have “found a scalable business model that fits our open intelligence vision.”
Currently, Reflection AI employs around 60 people, mainly AI researchers and engineers working on infrastructure, training data, and algorithm design, according to CEO Laskin. The company has acquired a computing cluster and plans to launch a frontier language model next year, trained on “tens of trillions of tokens,” Laskin shared with TechCrunch.
“We’ve accomplished what was once believed possible only in the world’s leading labs: a large-scale LLM and reinforcement learning system capable of training massive Mixture-of-Experts (MoE) models at the frontier level,” Reflection AI stated on X. “We witnessed the power of our approach firsthand when applying it to the crucial field of autonomous coding. With this achievement, we’re now extending these techniques to broader agentic reasoning.”
MoE refers to an architecture in which a router sends each token through only a small subset of specialized “expert” sub-networks, so just a fraction of the model’s parameters is active per token—a design behind many advanced LLMs that, until recently, only large, private AI labs could train at scale. DeepSeek made a significant breakthrough by training such models openly, followed by Qwen, Kimi, and other Chinese projects.
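The routing idea at the heart of MoE can be sketched in a few lines. Below is a minimal, illustrative NumPy version—not Reflection AI's or DeepSeek's actual implementation, and the names (`MoELayer`) and the use of plain linear maps as stand-ins for expert feed-forward networks are assumptions for the sketch. The point it demonstrates is the sparsity: for each token, a learned router scores all experts, and only the top-k experts actually run.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MoELayer:
    """Toy Mixture-of-Experts layer: a router picks top-k experts per token."""

    def __init__(self, d_model, n_experts, k=2, seed=0):
        rng = np.random.default_rng(seed)
        self.k = k
        # Router: produces one score per expert for each token.
        self.router = rng.normal(0, 0.02, (d_model, n_experts))
        # Each "expert" here is a simple linear map (a stand-in for an FFN).
        self.experts = rng.normal(0, 0.02, (n_experts, d_model, d_model))

    def __call__(self, x):  # x: (tokens, d_model)
        scores = x @ self.router                          # (tokens, n_experts)
        topk = np.argsort(scores, axis=-1)[:, -self.k:]   # top-k expert ids
        out = np.zeros_like(x)
        for t in range(x.shape[0]):
            # Gate weights are renormalized over just the chosen experts.
            gates = softmax(scores[t, topk[t]])
            for gate, e in zip(gates, topk[t]):
                out[t] += gate * (x[t] @ self.experts[e])
        return out

layer = MoELayer(d_model=16, n_experts=8, k=2)
tokens = np.random.default_rng(1).normal(size=(4, 16))
y = layer(tokens)
print(y.shape)  # each token activated only 2 of the 8 experts
```

Because only k of n experts run per token, total parameter count can grow far beyond the compute spent per token—which is why the architecture is attractive at frontier scale, and why the training infrastructure (routing, load balancing, expert parallelism across clusters) is the hard part the quote above refers to.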
“DeepSeek, Qwen, and similar models have been a wake-up call for us, because if we don’t take action, the global benchmark for intelligence will be set by others,” Laskin remarked. “It won’t be established by America.”
He further noted that this situation puts the U.S. and its partners at a disadvantage, as businesses and governments are often reluctant to use Chinese models due to possible legal issues.
“So your choice is to either accept a competitive disadvantage or step up to meet the challenge,” Laskin commented.
Reflection AI’s new direction has been widely welcomed by American tech leaders. David Sacks, the White House’s AI and Crypto Czar, wrote on X: “It’s encouraging to see more American open source AI models. A significant portion of the global market will value the affordability, flexibility, and control that open source provides. We want the U.S. to lead in this area as well.”
Clem Delangue, co-founder and CEO of Hugging Face, an open and collaborative AI platform, told TechCrunch about the funding round, “This is truly positive news for American open-source AI.” Delangue added, “The next challenge will be to maintain a rapid pace of sharing open AI models and datasets, similar to what we’re seeing from the leaders in open-source AI.”
Reflection AI’s interpretation of “open” seems to focus on accessibility rather than collaborative development, much like Meta’s approach with Llama or Mistral. Laskin explained that while Reflection AI will make its model weights—the essential parameters that define an AI system—publicly available, it will keep most datasets and the full training process proprietary.
“In practice, the most significant asset is the model weights, since anyone can use and experiment with them,” Laskin said. “The infrastructure stack, however, is only usable by a select few companies.”
This approach also shapes Reflection AI’s revenue strategy. According to Laskin, researchers will have free access to the models, but the company will generate income from large organizations building products on Reflection AI’s models and from governments creating “sovereign AI” systems—AI developed and managed by individual countries.
“When you reach the scale of a major enterprise, you naturally want an open model,” Laskin explained. “You want something you can own, operate on your own infrastructure, manage costs, and tailor for different needs. Since you’re investing heavily in AI, you want to optimize it as much as possible, and that’s precisely the market we aim to serve.”
Reflection AI has yet to launch its first model, which will primarily handle text but is expected to gain multimodal abilities in the future, according to Laskin. The company intends to use the new funding to acquire the computing power needed to train these models, with the first release targeted for early next year.
Investors in Reflection AI’s most recent funding round include Nvidia, Disruptive, DST, 1789, B Capital, Lightspeed, GIC, Eric Yuan, Eric Schmidt, Citi, Sequoia, CRV, and several others.