Teaching a robot a new skill used to require programming expertise. But a new generation of robots could potentially learn from just about anyone.

Engineers are building robotic helpers that can “learn from demonstration.” This more intuitive training approach lets a person guide a robot through a task, typically in one of three ways: via remote control, such as operating a robot from afar with a joystick; by physically moving the robot through the motions; or by performing the task themselves while the robot watches and mimics.

Robots trained this way typically learn from just one of these three demonstration techniques. MIT engineers, however, have now developed a three-in-one training interface that lets a robot learn a task through any of the three methods. The interface takes the form of a handheld, sensor-equipped tool that can attach to many standard collaborative robotic arms. A person can use the attachment to teach a robot to carry out a task by remotely controlling it, physically manipulating it, or demonstrating the task themselves, whichever style best suits the task and the teacher.

The MIT team tested the new tool, which they call a “versatile demonstration interface,” on a standard collaborative robotic arm. Volunteers with manufacturing expertise used the interface to perform two manual tasks commonly carried out on factory floors.

The researchers say the new interface offers increased training flexibility, which could expand the types of users and “teachers” who interact with robots. It may also enable robots to learn a wider set of tasks. For instance, one person could remotely train a robot to handle hazardous materials, while another person farther down the production line could physically guide the robot through packaging a product, and at the end of the line someone else could use the attachment to draw a company logo as the robot watches and learns to do the same.

“Our aim is to develop highly intelligent and capable companions that can effectively collaborate with humans to accomplish intricate tasks,” says Mike Hagenow, a postdoc in MIT’s Department of Aeronautics and Astronautics. “We envision that adaptable demonstration tools might assist far beyond the manufacturing sector, in other areas where we aspire to see increased adoption of robots, such as domestic or caregiving environments.”

Hagenow will present a paper detailing the new interface at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in October. The paper’s MIT co-authors are Dimosthenis Kontogiorgos, a postdoc at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); Yanwei Wang PhD ’25, who recently earned his doctorate in electrical engineering and computer science; and Julie Shah, MIT professor and head of the Department of Aeronautics and Astronautics.

Collaborative Training

Shah’s team at MIT designs robots that can work alongside humans in settings such as workplaces, hospitals, and homes. A main focus of her research is developing systems that let people teach robots new tasks or skills “on the job,” so to speak. Such systems could, for instance, enable a factory worker to quickly and intuitively adjust how a robot carries out a task in real time, rather than pausing to reprogram the robot’s software from scratch, a skill the worker may not necessarily have.

The team’s latest work builds on a growing approach to robot learning called “learning from demonstration,” or LfD, in which robots are designed to be trained in more natural, intuitive ways. In surveying the LfD literature, Hagenow and Shah found that existing training methods generally fall into the three categories described above: teleoperation, kinesthetic training, and natural teaching.
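As a rough illustration of this taxonomy, one could tag each recorded demonstration in software with the style used to collect it. The sketch below is purely hypothetical; the names are not drawn from the team’s paper or code.

```python
from enum import Enum, auto

class DemoMode(Enum):
    """Hypothetical labels for the three LfD styles described above."""
    TELEOPERATION = auto()  # operator drives the robot remotely, e.g., with a joystick
    KINESTHETIC = auto()    # operator physically guides the robot through the motion
    NATURAL = auto()        # operator performs the task; the robot observes and imitates
```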

Any one training approach may suit a particular person or task better than the other two. Shah and Hagenow reasoned that a single tool combining all three methods could let a robot learn more tasks from more people.

“If we could merge these three distinct ways someone might wish to engage with a robot, it could yield benefits for varied tasks and different individuals,” says Hagenow.

Tasks Ahead

With this objective in mind, the team engineered a new versatile demonstration interface (VDI): a handheld attachment that mounts onto a standard collaborative robotic arm. The attachment is equipped with a camera and markers that track the tool’s position and movements over time, along with force sensors that measure how much pressure is applied during a given task.
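To make the sensing concrete, here is a minimal sketch of the kind of record such an attachment might stream: timestamped tool poses (from the camera and markers) paired with contact forces (from the force sensors). All names and fields here are illustrative assumptions, not the actual VDI data format.

```python
from dataclasses import dataclass, field

@dataclass
class VDISample:
    """One timestamped reading from a hypothetical demonstration tool."""
    t: float                                        # seconds since the demonstration began
    position: tuple[float, float, float]            # tool position in meters, tracked via camera and markers
    orientation: tuple[float, float, float, float]  # tool orientation as a unit quaternion (w, x, y, z)
    force: tuple[float, float, float]               # contact force in newtons, from the force sensors

@dataclass
class Demonstration:
    """A recorded demonstration: the teaching style used, plus its samples."""
    mode: str                                       # "teleoperation", "kinesthetic", or "natural"
    samples: list[VDISample] = field(default_factory=list)
```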

When the interface is attached to the robot, the entire machine can be controlled remotely; the camera records the robot’s movements, which the robot can then use as training data to learn the task on its own. Likewise, a person can physically guide the robot through a task while the interface is attached. The VDI can also be detached and held by a person to perform the task manually; in that case the camera records the VDI’s motions, which the robot can use to mimic the task once the tool is reattached.
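One appealing consequence of a single interface is that all three teaching styles can feed the same recording pipeline. The loop below is a simplified, hypothetical sketch of that idea: it polls a sensor-reading callback at a fixed rate and is agnostic to whether the motion comes from teleoperation, kinesthetic guidance, or the detached tool.

```python
import time

def record_demonstration(read_sample, duration_s=10.0, rate_hz=50.0):
    """Collect (timestamp, reading) pairs at a fixed rate.

    `read_sample` is any zero-argument callable returning the current tool
    reading (e.g., pose and force); the same loop serves all three teaching
    styles, since only the source of the motion differs.
    """
    samples = []
    period = 1.0 / rate_hz
    start = time.monotonic()
    while (elapsed := time.monotonic() - start) < duration_s:
        samples.append((elapsed, read_sample()))
        time.sleep(period)
    return samples

# Example with a stand-in sensor that always reports the same pose and force:
if __name__ == "__main__":
    demo = record_demonstration(lambda: ((0.0, 0.0, 0.5), (0.0, 0.0, -2.0)),
                                duration_s=0.1, rate_hz=50.0)
    print(f"recorded {len(demo)} samples")
```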

To gauge the attachment’s usability, the team brought the interface, along with a collaborative robotic arm, to a local innovation center where manufacturing experts learn about and experiment with technology that can improve factory-floor processes. The researchers set up an experiment in which they asked volunteers at the center to use the robot and all three of the interface’s training methods to complete two common manufacturing tasks: press-fitting and molding. In press-fitting, the user trained the robot to press and fit pegs into holes, similar to many fastening tasks. For molding, a volunteer trained the robot to push and roll a pliable, dough-like substance evenly around the surface of a center rod, similar to some thermomolding tasks.
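Force sensing is what makes a task like the molding demonstration teachable: the robot needs to reproduce how hard the person pressed, not just where the tool went. The toy sketch below rests entirely on assumptions of my own (a list of recorded downward forces and a simple proportional rule) and shows one way a demonstrated pressing force could be turned into a setpoint and tracked.

```python
def pressing_setpoint(demo_forces_n):
    """Average the downward forces recorded during a demonstration (newtons)."""
    return sum(demo_forces_n) / len(demo_forces_n)

def height_correction_m(measured_force_n, target_force_n, gain_m_per_n=1e-4):
    """Proportional rule: press deeper when below the target force, ease off when above.

    Returns a signed vertical adjustment in meters (negative = move down).
    """
    return -gain_m_per_n * (target_force_n - measured_force_n)

# Example: the demonstration pressed with roughly 8 N; the robot currently reads 5 N.
target = pressing_setpoint([7.5, 8.2, 8.0, 7.9])
print(height_correction_m(5.0, target))  # negative: push slightly deeper
```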

For each of the two tasks, volunteers were asked to use all three training techniques, first teleoperating the robot with a joystick, then kinesthetically guiding it, and finally detaching the attachment and using it to “naturally” perform the task while the robot recorded the attachment’s forces and movements.

The researchers observed that the volunteers generally favored the natural method over teleoperation and kinesthetic training. The users, all of whom were manufacturing experts, provided scenarios in which each method might prove advantageous over the others. Teleoperation, for instance, may be preferred for training a robot to handle hazardous or toxic materials. Kinesthetic training could assist workers in positioning a robot tasked with relocating heavy packages. Natural teaching could be advantageous in demonstrating tasks requiring delicate and precise maneuvers.

“We envision using our demonstration interface in adaptable manufacturing ecosystems where one robot could assist across a diverse range of tasks that benefit from specific types of demonstrations,” says Hagenow, who plans to refine the attachment’s design based on user feedback and will use the new design to evaluate robot learning. “We regard this study as showcasing how increased flexibility in collaborative robots can be attained through interfaces that broaden the ways end-users engage with robots during instruction.”

This research was partially funded by the MIT Postdoctoral Fellowship Program for Engineering Excellence and the Wallenberg Foundation Postdoctoral Research Fellowship.
