US scientists may have developed the first robot syllabus that allows machines to transfer skills without human intervention

(Image credit: Pixabay.com © Computerizer CC0 Public Domain)


  • Robots struggle to learn from each other and rely on human instruction
  • New research from UC Berkeley shows that the process could be automated
  • This could remove the need to manually retrain each robot from scratch

Even as robots are increasingly integrated into real-world environments, one of the major challenges in robotics research is ensuring the devices can adapt to new tasks and settings efficiently.

Traditionally, teaching a robot to master a specific skill requires large amounts of data and a training pipeline tailored to each robot model - but to overcome these limitations, researchers are now focusing on computational frameworks that enable skills to transfer across different robots.

A new development in robotics comes from researchers at UC Berkeley, who have introduced RoVi-Aug - a framework designed to augment robotic data and facilitate skill transfer.

The challenge of skill transfer between robots

To ease the training process in robotics, learned skills need to transfer from one robot to another, even when the machines differ in hardware and design. This capability would make it easier to deploy robots across a wide range of applications without retraining each one from scratch.

However, many current robotics datasets suffer from an uneven distribution of scenes and demonstrations. A few robots, such as the Franka and xArm manipulators, dominate these datasets, making it harder to generalize learned skills to other platforms.
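To see what that imbalance looks like in practice, a quick tally over a dataset's episode metadata is enough to expose the skew. The snippet below is a minimal Python sketch - the metadata layout and robot names are hypothetical stand-ins, not drawn from any particular dataset:

```python
from collections import Counter

# Toy stand-in for a robot-learning dataset's episode metadata;
# real datasets hold thousands of entries in formats of their own.
episodes = [
    {"robot": "franka", "task": "pick_cube"},
    {"robot": "franka", "task": "open_drawer"},
    {"robot": "franka", "task": "stack_blocks"},
    {"robot": "xarm", "task": "pick_cube"},
    {"robot": "ur5", "task": "pick_cube"},
]

# Count demonstrations per embodiment to expose the skew.
counts = Counter(ep["robot"] for ep in episodes)
total = sum(counts.values())
for robot, n in counts.most_common():
    print(f"{robot}: {n} demos ({100 * n / total:.0f}%)")
```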

To address the limitations of existing datasets and models, the UC Berkeley team developed the RoVi-Aug framework, which uses state-of-the-art diffusion models to augment robotic data. The framework works by producing synthetic visual demonstrations that vary in both robot type and camera angle, allowing researchers to train robots on a wider range of demonstrations and enabling more efficient skill transfer.
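RoVi-Aug's diffusion models are purpose-trained, and the paper's exact pipeline is not reproduced here, but the general flavor of diffusion-based image augmentation can be sketched with an off-the-shelf image-to-image pipeline from the Hugging Face diffusers library. The checkpoint name, prompt, and file paths below are illustrative assumptions only:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Any public image-to-image checkpoint illustrates the idea; RoVi-Aug
# uses its own fine-tuned diffusion models rather than this one.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "stable-diffusion-v1-5/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

frame = Image.open("demo_frame.png").convert("RGB")  # one frame of a demo video

# Repaint the frame toward a different scene while preserving its layout:
# `strength` controls how far the output may drift from the input frame.
result = pipe(
    prompt="a UR5 robot arm grasping a cube on a table",
    image=frame,
    strength=0.6,
    guidance_scale=7.5,
).images[0]
result.save("augmented_frame.png")
```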

The framework consists of two key components: the robot augmentation (Ro-Aug) module and the viewpoint augmentation (Vi-Aug) module.

The Ro-Aug module generates demonstrations involving different robotic systems, while the Vi-Aug module creates demonstrations captured from various camera angles. Together, these modules provide a richer and more diverse dataset for training robots, helping to bridge the gap between different models and tasks.
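In functional terms, the two modules compose: one recorded demonstration can be re-rendered with a different robot, a different camera angle, or both. Here is a rough Python sketch of that composition, with hypothetical stand-in functions in place of the real diffusion-based synthesis models:

```python
from dataclasses import dataclass, replace

@dataclass
class Demo:
    frames: list      # RGB frames of the demonstration video
    actions: list     # recorded action sequence (unchanged by augmentation)
    robot: str        # embodiment shown in the frames
    viewpoint: str    # camera pose label

def ro_aug(demo: Demo, target_robot: str) -> Demo:
    """Stand-in for the Ro-Aug module: repaint the source robot as
    `target_robot` in every frame (diffusion synthesis in the real system)."""
    frames = [f"{f}|robot={target_robot}" for f in demo.frames]
    return replace(demo, frames=frames, robot=target_robot)

def vi_aug(demo: Demo, new_view: str) -> Demo:
    """Stand-in for the Vi-Aug module: re-render frames from a new camera angle."""
    frames = [f"{f}|view={new_view}" for f in demo.frames]
    return replace(demo, frames=frames, viewpoint=new_view)

# One recording fans out into a grid of robot x viewpoint variants.
seed = Demo(frames=["f0", "f1"], actions=["a0", "a1"],
            robot="franka", viewpoint="front")
augmented = [
    vi_aug(ro_aug(seed, robot), view)
    for robot in ("ur5", "xarm")
    for view in ("left", "overhead")
]
print(f"{len(augmented)} synthetic demos from one recording")
```

The fan-out is the point: every real recording becomes several synthetic ones, which is what lets a policy trained on the augmented set generalize across robot types and camera angles.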

"The success of modern machine learning systems, particularly generative models, demonstrates impressive generalizability and motivated robotics researchers to explore how to achieve similar generalizability in robotics," Lawrence Chen (Ph.D. Candidate, AUTOLab, EECS & IEOR, BAIR, UC Berkeley) and Chenfeng Xu (Ph.D. Candidate, Pallas Lab & MSC Lab, EECS & ME, BAIR, UC Berkeley), told Tech Xplore.

