A New Robotics Startup Just Raised $105 Million. Here's What That Means.

Martin Holloway · Published 12h ago · 4 min read · Based on 2 sources

Genesis AI, a robotics company founded in December 2024, announced in July 2025 that it had raised $105 million in early funding. The company aims to build what it calls a universal foundation model—a single artificial intelligence system capable of controlling many different kinds of robots across different industries and tasks.

The funding round places Genesis AI among the largest early-stage investments in robotics to date. Two major venture firms, Eclipse and Khosla Ventures, led the round; a check of that size signals that investors see real opportunity in the company's approach.

Who Started Genesis AI, and What Are They Trying to Do?

Zhou Xian, one of the co-founders, completed a PhD in robotics at Carnegie Mellon University. His co-founder, Théophile Gervet, previously worked as a research scientist at Mistral, the French AI company. Together, they are combining deep knowledge of how robots work with expertise in building large-scale AI models.

The company's core goal is straightforward: build a single AI model that can power many different robots, rather than writing custom software for each robot's specific job. Today, if you want a robot to sort packages in a warehouse, or assist in a hospital, or clean a room, engineers write specialized software for that particular task. Genesis AI wants to create a more flexible system—one AI model that could adapt to different robots and jobs.

Think of it like the difference between owning a car that only drives on highways, versus one that can handle highways, city streets, and gravel roads equally well. The universal model is the second option.

The Challenge: Getting AI to Work in the Physical World

Building this kind of universal robotics model is harder than it sounds. AI systems that generate text—like ChatGPT—have been remarkably successful at learning from vast amounts of written data on the internet. But robots need to work in the real world. A robot that drops a fragile item has actually broken it; the mistake cannot simply be edited away later.

Robots also need to respond instantly to what their cameras, sensors, and touch receptors are telling them, while controlling multiple moving parts at once. And different robots have different numbers of arms, joints, and capabilities, so the system has to work across that variation.

Training an AI model for robotics also requires different kinds of data. Text models learn from human writing freely available online. Robotics models need sensor and movement data—information about what a robot sees, feels, and does—and this has to be gathered through actual physical testing or detailed computer simulations. Collecting enough diverse and high-quality data to train a general-purpose model remains a major obstacle.

Why This Matters, and What Happens Next

Other companies are working on similar ideas. Boston Dynamics, Figure AI, and Physical Intelligence are all pursuing versions of this approach. What they share is the belief that if a foundation model works well enough in software, it can eventually do the same work in the physical world—controlling robots doing real tasks.

The robotics industry today is fragmented. Each type of robot—for warehouses, surgery, manufacturing, or homes—has been developed largely on its own, with specialized software for each job. A successful universal model could change that by allowing companies to adapt existing robots to new tasks more quickly and cheaply.

However, turning a powerful AI model into a working robot in a real workplace is not simple. Unlike software updates that can be deployed instantly to millions of computers, robots require safety testing, on-site customization, and often physical modifications before they can be trusted with real work.

The broader context here is that we have seen this pattern before. When cloud computing emerged two decades ago, investors poured money into startups that promised to simplify how businesses managed their computing infrastructure. Some succeeded, but only those that actually reduced costs and simplified deployments in concrete ways. Technical elegance alone was not enough. Genesis AI will face a similar test: can its model deliver real, measurable advantages compared to robots purpose-built for specific tasks?

The Timeline and the Road Ahead

Building and refining a foundation model typically takes years. Adding robotics on top of that—with all its safety and integration requirements—adds further delay. Genesis AI's $105 million gives the company resources to hire researchers, build testing facilities, and acquire expensive computing power. The question is whether it will be enough to reach a point where the model produces robots that customers actually want to buy.

Success would suggest that the industry is moving toward more general-purpose, adaptable robots. Failure would indicate that the challenges of physical AI remain harder than current enthusiasm suggests, and that robots built for specific jobs will continue to dominate for the foreseeable future.
