Nvidia announced new AI infrastructure and models on Monday as it works to create the backbone technology for physical AI, including robots and autonomous vehicles that can sense and interact with the real world.
The semiconductor giant announced Alpamayo-R1, an open reasoning vision language model for autonomous driving research, at the NeurIPS AI conference in San Diego, California. The company claims this is the first vision language action model focused on autonomous driving. Vision language models can process text and images together, allowing vehicles to “see” their environment and make decisions based on what they perceive.
This new model is based on Nvidia’s Cosmos Reason model, a reasoning model that thinks through decisions before responding. Nvidia initially released the Cosmos model family in January 2025. Additional models were released in August.
Technology like the Alpamayo-R1 is crucial for companies that want to reach Level 4 autonomous driving, which means full autonomy in a defined area and under specific conditions, Nvidia said in a blog post.
Nvidia hopes this type of reasoning model will give self-driving vehicles the “common sense” to better approach nuanced driving decisions like humans do.
Alpamayo-R1 is available on GitHub and Hugging Face.
Alongside the new vision model, Nvidia also uploaded new step-by-step guides, inference resources, and post-training workflows to GitHub — collectively called the Cosmos Cookbook — to help developers better use and train Cosmos models for their specific use cases. The guide covers data curation, synthetic data generation, and model evaluation.
These announcements come as the company pushes full speed ahead on physical AI as a new avenue for its advanced AI GPUs.
Nvidia co-founder and CEO Jensen Huang has repeatedly said that the next wave of artificial intelligence is physical AI. Bill Dally, Nvidia’s chief scientist, echoed that sentiment in a conversation with TechCrunch over the summer, emphasizing physical AI in robotics.
“I think eventually robots are going to be a huge player in the world, and we want to basically make the brains of all robots,” Dally said at the time. “To do that, we need to start developing the core technologies.”
