The first news from this year’s Automate conference comes via Alphabet X spinout Intrinsic. The company announced at the event in Chicago on Monday that it is integrating a number of Nvidia offerings into its Flowstate robotics platform.
This includes Isaac Manipulator, a collection of foundation models designed to create workflows for robotic arms. The offering debuted at GTC in March, with some of the biggest names in industrial automation already on board. The list includes Yaskawa, Solomon, PickNik Robotics, Ready Robotics, Franka Robotics and Universal Robots.
The collaboration is specifically focused on grasping (picking up and moving objects), one of the key tasks in automating both manufacturing and fulfillment. The systems are trained on large datasets, with the goal of performing tasks that work across hardware (i.e., hardware-agnostic) and with different objects.
That is, the grasping skills can be transferred to different settings, rather than each system having to be trained for each scenario. Once a human understands how to pick something up, that action can be adapted to different objects in different settings. For the most part, robots can't do that, at least not yet.
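Neither Intrinsic nor Nvidia has published details of Flowstate's interfaces, but the idea of a transferable, hardware-agnostic skill can be illustrated with a toy sketch. All names here (`grasp_skill`, `RobotAdapter`) are hypothetical: the skill maps an object pose to a gripper target with no robot-specific logic, and per-robot details live only in a thin adapter.

```python
from dataclasses import dataclass

@dataclass
class Pose:
    x: float   # position, meters
    y: float
    z: float
    yaw: float  # orientation, radians (simplified to one axis)

def grasp_skill(obj_pose: Pose, obj_width: float) -> tuple[Pose, float]:
    """A hardware-agnostic 'skill': map an object pose to a gripper
    target pose and aperture. No robot-specific logic appears here."""
    approach = Pose(obj_pose.x, obj_pose.y, obj_pose.z + 0.10, obj_pose.yaw)
    return approach, obj_width + 0.01  # 1 cm of clearance around the object

class RobotAdapter:
    """Per-robot driver: the only place hardware details live."""
    def __init__(self, name: str, max_aperture: float):
        self.name, self.max_aperture = name, max_aperture

    def execute(self, target: Pose, aperture: float) -> bool:
        # Stand-in for real motion execution: just check gripper limits.
        return aperture <= self.max_aperture

# The same skill output drives two different arms unchanged.
target, aperture = grasp_skill(Pose(0.4, 0.1, 0.05, 1.2), obj_width=0.06)
for robot in (RobotAdapter("arm-a", 0.085), RobotAdapter("arm-b", 0.10)):
    print(robot.name, robot.execute(target, aperture))
```

The point of the separation is that retraining or improving `grasp_skill` benefits every arm behind an adapter, which is the transferability the companies are describing.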
“In the future, developers will be able to use ready-made universal skills like these to significantly speed up their development processes,” Intrinsic founder and CEO Wendy Tan White said in a post. “For the wider industry, this development shows how foundational models could have a profound impact, including making it easier to manage today’s robot programming challenges, creating applications that were previously unattainable, reducing development costs and increasing flexibility for end users.”
Early Flowstate testing was performed in Isaac Sim — Nvidia’s robotics simulation platform. Existing customer Trumpf Machine Tools has been working with a prototype of the system.
“This universal grasping capability, trained with 100% synthetic data in Isaac Sim, can be used to create sophisticated solutions that can perform adaptive and flexible object grasping tasks in sim and real,” says Tan White of Trumpf’s work with the platform. “Instead of hard-coding specific grippers to grasp specific objects in a specific way, efficient code is automatically generated for a specific gripper and object to get the job done using the foundation model.”
Intrinsic is also working with fellow Alphabet-owned DeepMind to crack pose estimation and path planning — two other key aspects of automation. For the former, the system was trained on more than 130,000 objects. The company says the systems are able to determine the orientation of objects in “a few seconds,” an important part of being able to grasp them.
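The article doesn't say how the DeepMind-trained system estimates pose, but a classical baseline helps show what "determining the orientation of an object" means in practice. This sketch (assumptions: a planar object observed as a 2D point cloud) recovers orientation via PCA: the principal axis of the centered points gives the object's dominant direction.

```python
import numpy as np

def estimate_orientation(points: np.ndarray) -> float:
    """Estimate a planar object's orientation in radians from an
    (N, 2) point cloud via PCA on the centered points."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    major = eigvecs[:, np.argmax(eigvals)]  # principal axis
    # A PCA axis has a 180-degree ambiguity; fold angles into [0, pi).
    return float(np.arctan2(major[1], major[0]) % np.pi)

# Synthetic check: sample a 20 cm x 6 cm rectangle, rotate it 30 degrees,
# and see whether the estimator recovers the rotation.
rng = np.random.default_rng(0)
rect = rng.uniform([-0.10, -0.03], [0.10, 0.03], size=(500, 2))
theta = np.deg2rad(30)
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
rotated = rect @ R.T
print(round(np.rad2deg(estimate_orientation(rotated)), 1))
```

Learned systems like the one described generalize this to full 6-DoF poses of novel objects from camera data, which is what makes the "a few seconds" figure notable.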
Another key part of Intrinsic’s work with DeepMind is the ability to operate multiple robots simultaneously. “Our teams have tested this 100% ML-built solution to seamlessly orchestrate four separate robots working on a scaled-down simulation of an automotive welding application,” says Tan White. “Motion plans and trajectories for each robot are generated automatically, collision-free and surprisingly efficient – performing ~25% better than some traditional methods we’ve tested.”
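Tan White's quote doesn't describe the planner's internals, so as a rough illustration of multi-robot, collision-free coordination, here is a minimal prioritized planner on a grid: robots are planned one at a time with a breadth-first search in (cell, time) space, and each finished path is reserved so lower-priority robots route around it. This is a textbook technique, not Intrinsic's ML-based method.

```python
from collections import deque

def plan(start, goal, blocked, reserved, size=5, max_t=20):
    """BFS in (cell, time) space. `reserved[t]` holds cells occupied
    by higher-priority robots at step t, so collisions are avoided."""
    frontier = deque([(start, 0, [start])])
    seen = {(start, 0)}
    while frontier:
        (x, y), t, path = frontier.popleft()
        if (x, y) == goal:
            return path
        if t >= max_t:
            continue
        # A robot may wait in place or move to a 4-connected neighbor.
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if not (0 <= nxt[0] < size and 0 <= nxt[1] < size):
                continue
            if nxt in blocked or nxt in reserved.get(t + 1, set()):
                continue
            if (nxt, t + 1) not in seen:
                seen.add((nxt, t + 1))
                frontier.append((nxt, t + 1, path + [nxt]))
    return None

def plan_fleet(tasks, blocked=frozenset()):
    """Plan robots in priority order, reserving each path as we go."""
    reserved, paths = {}, []
    for start, goal in tasks:
        path = plan(start, goal, blocked, reserved)
        if path is None:
            raise RuntimeError("no collision-free plan found")
        for t, cell in enumerate(path):
            reserved.setdefault(t, set()).add(cell)
        paths.append(path)
    return paths

# Two robots crossing the same corridor: the planner makes one of
# them detour so they never occupy a cell at the same timestep.
paths = plan_fleet([((0, 0), (4, 0)), ((4, 0), (0, 0))])
```

The ~25% figure in the quote is measured against "traditional methods" like this one: decoupled planners are fast but can produce conservative detours, which is where a learned joint planner can gain efficiency.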
The team is also working on systems that use two arms at the same time — a setup more in line with the emerging world of humanoid robots. It’s something we’re going to see a lot more of in the next couple of years, humanoid or not. Going from one-handed to two-handed opens up a whole world of additional applications for these systems.