A quadruped robot executing a jump over an obstacle in response to verbal and gesture commands. Credit: Yoon et al., RILAB (Korea University), Computational Robotics Lab (ETH Zurich)
Over the next decades, robots are expected to make their way into a growing number of households, public spaces, and professional environments. Many of the most advanced and promising robots designed to date are so-called legged robots, which consist of a central body structure with limbs attached to it.
Thanks to their animal-inspired body configuration, legged robots can typically move reliably on various terrains, effectively climbing stairs, avoiding obstacles, and accessing areas that wheeled robots are unable to reach.
Despite their potential, most of these robots can only tackle tasks they were extensively trained on in simulated environments and struggle to acquire new skills during real-world interactions with humans.
Researchers at Korea University, ETH Zurich, and University of California Los Angeles (UCLA) have introduced a new dog training-inspired framework that could simplify robot training in the real world. This learning approach, introduced in a paper published on the arXiv preprint server, allows humans to guide robots using touch, gestures, and spoken commands, similarly to how they would communicate with dogs.
"This research was inspired by how dogs learn new behaviors through continuous interaction with humans," Taerim Yoon, first author of the paper, told Tech Xplore. "Dogs do not learn in isolation—they observe, follow, and adapt through physical guidance and social cues. This led us to ask a simple question: could robots be trained in a similar way?"
Data collection through luring, where a teaching rod is used to guide the robot to zigzag between tires. Credit: Yoon et al., RILAB (Korea University), Computational Robotics Lab (ETH Zurich)
Shaping a robot’s behavior via human interaction
The main objective of this work by Yoon and his colleagues was to devise a strategy that would allow human users to interact with legged robots much as they interact with dogs. The researchers started by observing professional dog trainers and drawing on the techniques they used to teach dogs new skills.
"We noticed that dog trainers often use treats or toys to lure dogs and shape their behavior," explained Yoon. "Over time, once the behavior is learned, the dog can perform these skills even without these rewards, responding directly to commands. Our approach follows a similar principle."
Instead of relying on treats or toys, the researchers used a teaching rod as a lure that the robot follows during training. The robot learns new skills and behaviors by interacting with a human user and following this physical guide.
"Once the robot has learned these behaviors, it no longer requires the teaching rod and can respond directly to gestures and verbal commands alone," said Yoon. "From a more technical perspective, we focused on data efficiency, since repeatedly collecting interaction data from humans can quickly become burdensome."
The framework developed by the researchers also includes a scene reconstruction module, which recreates scenes in which the robot interacted with humans in simulation. These scenes serve as training environments in which the robot can practice new behaviors independently after a few real-world interactions with humans.
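The three-stage pipeline described above can be sketched in code. This is a toy illustration only: the class, method names, and trajectory averaging below are hypothetical stand-ins for the authors' actual learning system, which uses physical luring, scene reconstruction, and simulated practice.

```python
# Toy sketch of the luring-then-command training idea: a human lures the
# robot with a teaching rod (phase 1), the robot rehearses in a
# reconstructed simulated scene (phase 2), and afterwards responds to
# commands alone, with no rod (phase 3). All names are hypothetical.

from dataclasses import dataclass, field


@dataclass
class LuredRobot:
    """Toy robot that learns command -> trajectory mappings from luring demos."""
    skills: dict = field(default_factory=dict)

    def record_luring_demo(self, command: str, rod_positions: list):
        # Phase 1: the trajectory the robot followed behind the teaching
        # rod is stored as a demonstration for this command.
        self.skills.setdefault(command, []).append(rod_positions)

    def practice_in_simulation(self, command: str):
        # Phase 2: the reconstructed scene lets the robot rehearse without
        # further human effort. Averaging demos here is a crude stand-in
        # for training a policy on the collected interaction data.
        demos = self.skills[command]
        length = min(len(d) for d in demos)
        averaged = [sum(d[i] for d in demos) / len(demos) for i in range(length)]
        self.skills[command] = [averaged]

    def respond(self, command: str):
        # Phase 3: a gesture or verbal command alone triggers the learned
        # behavior -- the teaching rod is no longer needed.
        if command not in self.skills:
            raise KeyError(f"Robot was never taught {command!r}")
        return self.skills[command][0]


robot = LuredRobot()
robot.record_luring_demo("zigzag", [0.0, 1.0, -1.0, 1.0])
robot.record_luring_demo("zigzag", [0.0, 0.8, -1.2, 1.2])
robot.practice_in_simulation("zigzag")
trajectory = robot.respond("zigzag")
```

The key property the real framework shares with this sketch is data efficiency: a handful of human-guided demonstrations seed the skill, and the bulk of the practice happens autonomously in simulation.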
The team applied their framework to a real four-legged robot and found that it achieved very promising results. The robot rapidly acquired new behaviors, such as approaching a user, jumping over obstacles, following someone, and zigzagging around obstacles, with a task success rate of 97.15%.
Towards smarter legged robots that acquire skills faster
This recent study introduces an innovative approach that could simplify and speed up the training of legged robots, while also making their interactions with humans more intuitive. The framework could soon be improved further, applied to other legged robots, and tested on a broader range of tasks.
"Robots are beginning to permeate everyday home environments," said Yoon. "Even if manufacturers provide a wide range of built-in skills, there will always be limitations when operating in complex human environments. Allowing users to teach new behaviors directly can be a powerful alternative.
"If robots can learn desired behaviors through natural interaction, rather than programming or expert intervention, non-expert users could adapt robots to their own needs much more easily."
The team’s new dog training-inspired approach could contribute to the deployment of robots in more everyday settings. While the researchers have so far focused on teaching robots to move in specific ways, they will now also try to apply their framework to tasks that entail manipulating objects.
"We would soon like to also tackle loco-manipulation tasks that combine movement and object interaction," added Yoon.
"In this regard, we plan to extend this interaction-based teaching framework to humanoid robots, enabling users to teach more complex, whole-body behaviors through physical interaction and then control them using gestures and verbal commands. Our long-term goal is to build robots that can continuously learn new skills through natural human interaction and coexist seamlessly with people in everyday life."
Written for you by our author Ingrid Fadelli, edited by Sadie Harley, and fact-checked and reviewed by Robert Egan.
More information: Taerim Yoon et al, Teaching Robots Like Dogs: Learning Agile Navigation from Luring, Gesture, and Speech, arXiv (2026). DOI: 10.48550/arxiv.2601.08422
Journal information: arXiv
© 2026 Science X Network
Citation: Training four-legged robots as if they were dogs (2026, January 31) retrieved 31 January 2026 from https://techxplore.com/news/2026-01-legged-robots-dogs.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.