This week, the Massachusetts Institute of Technology showcased a new model for training robots, aimed at a core weakness of imitation learning: it can break down when confronted with even minor variations. Researchers noted that imitation learning may falter under changed lighting, a different environment, or new obstacles, because the robot simply does not have enough data to adapt.

The team took inspiration from models like GPT-4, which gain their robustness from pretraining on vast and varied data. They introduced a new architecture called Heterogeneous Pre-trained Transformer (HPT), which consolidates information from different sensors and environments into a shared representation. That aggregated data is then fed into a transformer for training, and the larger the transformer, the better the output.
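The core idea, heterogeneous sensor readings mapped into a common token space that one shared model can consume, can be sketched in a few lines. This is a minimal illustration, not MIT's actual code: the names, dimensions, and the per-modality "stem" functions are assumptions, and a simple pooling function stands in for the transformer trunk.

```python
# Illustrative sketch of the heterogeneous-input idea behind HPT.
# Each sensor modality gets its own "stem" that projects raw readings of
# any width into fixed-size tokens; a single shared trunk then processes
# all tokens regardless of which sensor they came from.

TOKEN_DIM = 4  # width of the shared token space (chosen for illustration)

def make_stem(input_dim):
    """Return a stem: a fixed linear map from input_dim -> TOKEN_DIM."""
    # Toy deterministic weights; a real system would learn these.
    weights = [[(i + j + 1) / (input_dim * TOKEN_DIM)
                for j in range(input_dim)] for i in range(TOKEN_DIM)]
    def stem(reading):
        return [sum(w * x for w, x in zip(row, reading)) for row in weights]
    return stem

# One stem per sensor modality; input widths differ per robot and sensor.
stems = {
    "camera": make_stem(8),         # e.g. pooled image features
    "proprioception": make_stem(3), # e.g. joint angles
}

def trunk(tokens):
    """Stand-in for the shared transformer trunk: pool tokens into one vector."""
    n = len(tokens)
    return [sum(t[d] for t in tokens) / n for d in range(TOKEN_DIM)]

# Heterogeneous readings become same-width tokens...
tokens = [
    stems["camera"]([0.5] * 8),
    stems["proprioception"]([0.1, 0.2, 0.3]),
]
# ...so one shared trunk can consume them together.
policy_features = trunk(tokens)
```

The point of the sketch is the shape of the pipeline: once every modality is projected into `TOKEN_DIM`-wide tokens, adding a new sensor only requires a new stem, while the expensive shared trunk is pretrained once across all of them.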


Users can input the robot's design, configuration, and the tasks they want it to accomplish, then use the new model to train it. Researchers said this approach could deliver breakthroughs in robotic policies, much as scaling did for large language models.

Part of this research was funded by the Toyota Research Institute. Last year, the Toyota Research Institute first demonstrated a method to train robots overnight at TechCrunch Disrupt. Recently, the company has established a landmark partnership to integrate its robot learning research with Boston Dynamics hardware.

David Held, an associate professor at Carnegie Mellon University, said: "Our dream is to have a universal robot brain that you could download and use without any training. Although we are still in the early stages, we are going to keep pushing hard, hoping that scaling leads to a breakthrough in robotic policies, like it did with large language models."