Figure founder and CEO Brett Adcock on Thursday unveiled a new machine learning model for humanoid robots. The news, which arrives two weeks after Adcock announced the Bay Area robotics company's decision to move away from its OpenAI collaboration, focuses on Helix, a "generalist" Vision-Language-Action (VLA) model.
VLAs are a new phenomenon in robotics, using vision and language commands to process information. The best-known example of the category to date is Google DeepMind's RT-2, which trains robots through a combination of video and large language models (LLMs).
Helix works in a similar fashion, combining visual data and language prompts to control a robot in real time. Per Figure: "Helix displays strong object generalization, being able to pick up thousands of novel household items with varying shapes, sizes, colors, and material properties never encountered before in training, simply by asking in natural language."
In an ideal world, you could simply tell a robot to do something and it would just do it. That is where Helix comes in, according to Figure. The platform is designed to bridge the gap between vision and language processing. After receiving a natural language voice prompt, the robot visually assesses its environment and then carries out the task.
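That pipeline — a language prompt plus a camera observation going into one policy that emits motor actions every control tick — can be sketched in a few lines. This is a toy illustration of the VLA interface, not Figure's actual API; all class and field names here are invented for the example.

```python
# Toy sketch of a VLA-style control loop (illustrative only; not Figure's code).
# A real VLA model fuses camera frames and a language instruction through a
# vision-language backbone; here a stand-in policy just shows the interface.

from dataclasses import dataclass
from typing import List


@dataclass
class Observation:
    image: List[List[int]]  # stand-in for a camera frame
    prompt: str             # natural-language instruction


@dataclass
class Action:
    joint_targets: List[float]  # one target per controlled joint


class ToyVLAPolicy:
    """Maps (image, prompt) -> action. A real model would run a neural
    network; this stub emits a deterministic pseudo-action instead."""

    def __init__(self, num_joints: int = 7):
        self.num_joints = num_joints

    def act(self, obs: Observation) -> Action:
        # Hash the prompt into a repeatable value so the loop is
        # deterministic for demonstration purposes.
        seed = sum(ord(c) for c in obs.prompt) % 100
        return Action(joint_targets=[seed / 100.0] * self.num_joints)


def control_loop(policy: ToyVLAPolicy, obs: Observation, steps: int) -> List[Action]:
    """Query the policy once per control tick, as a VLA controller would."""
    return [policy.act(obs) for _ in range(steps)]
```

The point of the sketch is the single entry point: one policy consumes both modalities, so new instructions need no task-specific code path.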
Figure offers examples such as "Hand the bag of cookies to the robot on your right" or "Take the bag of cookies from the robot on your left and place it in the open drawer." Both examples involve a pair of robots cooperating. That is because Helix is designed to control two robots at once, with one helping the other perform various household chores.
Figure is showcasing the VLA by highlighting the work the company has done with its 02 humanoid robot in home settings. Homes are notoriously difficult for robots, as they lack the structure and consistency of warehouses and factories.
Difficulties with learning and control are the big obstacles standing between complex robots and the home. These issues, along with five- to six-figure price tags, are why the home has not been a priority for most humanoid robotics companies. The general approach is to build robots for industrial clients first, improving reliability and driving down costs before tackling the home. Domestic work is a conversation for several years down the road.
When TechCrunch toured Figure's Bay Area offices in 2024, Adcock showed off a home-like setting where some of the company's humanoid robots were staged. At the time, the project did not appear to be a priority, as Figure was focused on workplace pilots with companies such as BMW.


With Thursday's Helix announcement, Figure is making it clear that the home should be a priority in its own right. It is a difficult and complex environment in which to test these training models. Teaching robots to perform complex tasks in the kitchen, for example, opens them up to a wide range of actions across different settings.
"In order for household robots to be useful, they need to be able to generate intelligent new behaviors on demand, especially for objects they have never seen before," Figure says. "Teaching robots even a single new behavior today requires substantial human effort: either hours of PhD-level expert work or thousands of demonstrations."
Manual programming will not scale for the home. There are simply too many unknowns. Kitchens, living rooms, and bathrooms vary dramatically from one to the next. The same goes for the tools used for cooking and cleaning. On top of that, people leave messes, rearrange furniture, and prefer different ambient lighting. This method takes too much time and money — though Figure certainly has plenty of the latter.
The other option is demonstrations — and lots of them. Robotic arms trained to pick and place objects in labs often use this method. What you don't see are the hundreds of hours of repetition required to make a demo robust enough to handle highly variable tasks. To get something right the first time, a robot needs to have done it hundreds of times before.
Like so much in humanoid robotics at the moment, the work on Helix is still at a very early stage. Viewers should keep in mind that a lot of behind-the-scenes work goes into producing the kinds of short, well-polished videos seen in this post. Thursday's announcement is, in essence, a recruiting tool designed to bring more engineers on board to help develop the project.