When Bill Dally joined Nvidia Research in 2009, the lab employed only about a dozen people and focused on ray tracing, a rendering technique used in computer graphics.
That once-small research lab now employs more than 400 people, who have helped transform Nvidia from the video game startup of the 1990s into the $4 trillion company powering the artificial intelligence explosion.
Now, the company’s research lab is gearing up to develop the technology required to power robotics and AI, and some of the lab’s work is already showing up in products. On Monday, the company unveiled a new AI model, plus libraries and other infrastructure for robotics developers.
Dally, now Nvidia’s chief scientist, began consulting for the company in 2003 while working at Stanford. When he was ready to step down as chair of Stanford’s computer science department a few years later, he was planning to take a sabbatical. Nvidia had a different idea.
David Kirk, who ran the research lab at the time, and Nvidia CEO Jensen Huang thought a more permanent position at the lab was a better idea. Dally told TechCrunch that the pair put on a “full-court press” about why he should join Nvidia Research, and they eventually persuaded him.
“It turned out to be a perfect match for my interests and my talents,” Dally said. “I think everybody is always looking for the place in life where they can make the biggest contribution to the world, and I think for me it’s definitely Nvidia.”
When Dally took over the lab in 2009, his first priority was expansion. Researchers quickly began working in areas beyond ray tracing, including circuit design and VLSI, or very-large-scale integration, a process that combines millions of transistors onto a single chip.
The research lab has not stopped expanding since.
“We’re trying to figure out what will make the most positive difference for the company, because we’re constantly seeing exciting new areas, but some of them, you know, people do great work in, but it’s hard to say whether [we’ll be] wildly successful in them,” Dally said.
One such moment came when the lab set out to build better GPUs for artificial intelligence. Nvidia was early to the coming AI boom, and began kicking around the idea of running AI on GPUs in 2010, more than a decade before the current AI frenzy.
“We said, this is amazing, this is going to completely change the world,” Dally said. “We have to start doubling down on this, and Jensen believed it when I told him. We started specializing our GPUs for it and developing a lot of software to support it, engaging with researchers around the world who were doing it, long before it was clearly relevant.”
A natural focus on physical AI
Now, as Nvidia holds a commanding lead in the AI GPU market, the company has begun looking for new sources of demand beyond AI data centers. That search led Nvidia to physical AI and robotics.
“I think ultimately robots are going to be a huge player in the world, and we want to essentially make the brains of all robots,” Dally said. “To do that, we need to start, you know, developing the basic technologies.”
That’s where Sanja Fidler, vice president of AI research at Nvidia, comes in. Fidler joined Nvidia’s research lab in 2018. When she told Huang about what she and her fellow researchers were working on, he was interested.
“I couldn’t resist the offer,” Fidler told TechCrunch in an interview. “It’s just, you know, such a big topic and such a good fit, and at the same time it was also such a great culture.”
She joined Nvidia and got to work building a research lab in Toronto focused on Omniverse, Nvidia’s platform for building physical AI simulations.


The first challenge in building these simulated worlds was finding the necessary 3D data, Fidler said. That included sourcing a sufficient volume of usable images and building the technology needed to convert those images into 3D renderings that simulators could use.
“We invested in this technology called differentiable rendering, which essentially makes rendering amenable to AI, right?” Fidler said. “Rendering goes [from] 3D to an image or video, right? And we wanted to go the other way.”
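The inverse-rendering idea Fidler describes can be illustrated with a minimal sketch (this is a toy example, not Nvidia’s implementation; the scene, renderer, and parameters below are invented for illustration). If the forward renderer is a smooth function of the scene parameters, gradients can flow backward from pixel error to the scene:

```python
import numpy as np

# Toy "renderer": draws a soft blob of light centered at mu onto a
# 32-pixel 1D image strip. Because the output is a smooth function
# of mu, we can differentiate through rendering.
xs = np.linspace(0.0, 1.0, 32)
WIDTH = 0.2  # assumed blob width for this toy scene

def render(mu):
    """Forward pass: scene parameter -> pixels."""
    return np.exp(-((xs - mu) ** 2) / (2 * WIDTH ** 2))

target = render(0.7)  # image produced by an unknown scene parameter

# Inverse rendering: recover mu from the image alone by gradient
# descent on the pixel-wise squared error.
mu = 0.4  # initial guess
for _ in range(200):
    diff = render(mu) - target
    # Chain rule: analytic derivative of the blob w.r.t. mu.
    d_render = render(mu) * (xs - mu) / WIDTH ** 2
    grad = 2 * np.mean(diff * d_render)
    mu -= 0.05 * grad

print(round(mu, 2))  # converges close to the true parameter, 0.7
```

Real differentiable renderers handle full 3D geometry, materials, and occlusion, but the principle is the same: the gradient of image error with respect to scene parameters tells you how to edit the 3D world to match what the camera saw.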
World models
The lab released GANverse3D, the first version of its model for turning images into 3D models, in 2021. It then got to work applying the same procedure to video. Fidler said the lab used videos from self-driving cars and robots to create these 3D models and simulations through its Neural Reconstruction Engine, which the company first announced in 2022.
She added that these technologies became the backbone of the Cosmos family of world AI models, which was announced at CES in January.
Now, the lab is focused on making these models faster. When you play a video game or run a simulation, you want the technology to respond in real time, Fidler said, and for robots the lab is working to make reaction times even faster than that.
“The robot doesn’t need to watch the world at the same rate, in the same way, that humans do,” Fidler said. “It can watch it, like, 100x faster, so if we can make this model significantly faster than it is today, it will be extremely useful for robotics or physical AI applications.”
The company continues to make progress toward that goal. At the Siggraph graphics conference on Monday, Nvidia announced a fleet of new world AI models designed to create synthetic data that can be used to train robots. Nvidia also announced new libraries and infrastructure software aimed at robotics developers.
Despite the progress, and the current hype around robots, especially humanoids, the Nvidia research team remains realistic.
Both Dally and Fidler said the industry is still at least a few years away from having a humanoid in your home, with Fidler comparing the hype and timeline to that of autonomous vehicles.
“We’re making tremendous progress, and I think, you know, AI has really been the enabling factor here,” Dally said. “Starting with visual AI for the robots’ perception, and then, you know, generative AI, which has proven enormously valuable for task planning and manipulation.”
