Watch This Robot Run, Vault, and Climb Like a Parkour Athlete

by Editor

A research team from Amazon Frontier AI & Robotics and UC Berkeley has presented Perceptive Humanoid Parkour, a framework designed to help humanoid robots run, vault, climb, and adapt to obstacles using onboard perception. Tested on a Unitree G1 robot, the system combines human parkour data, motion matching, and reinforcement learning to produce long-horizon movement sequences in real-world settings.

The work sits at the intersection of robotics, computer vision, and motion learning. Its focus is not just stable walking, which many humanoid systems can already manage, but a broader set of dynamic maneuvers that require timing, whole-body coordination, and quick decisions in cluttered environments.

According to the paper posted on arXiv by Zhen Wu, Xiaoyu Huang, and colleagues, the goal is to give humanoids the ability to chain multiple skills together while reacting to what they see. That is what makes the framework notable: it is built to decide, on the fly, whether the robot should step over, climb onto, vault across, or roll off an obstacle.

A Framework Built from Human Parkour Data

The researchers began by collecting videos of people performing parkour movements, then splitting those motions into smaller reusable segments. Those segments were later recombined into longer sequences, so the robot would not only imitate isolated actions but move through a course in a more continuous way.

According to the paper, “Our first approach leverages motion matching, formulated as nearest-neighbor search in a feature space, to compose retargeted atomic human skills into long-horizon kinematic trajectories.” The authors add that “This framework enables the flexible composition and smooth transition of complex skill chains while preserving the elegance and fluidity of dynamic human motions.”
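The nearest-neighbor idea in that quote can be illustrated with a minimal sketch. The clip names, feature choices, and numbers below are invented for illustration; the paper's actual feature space and retargeted motion library are not specified here.

```python
import math

# Hypothetical feature vectors per atomic clip: (approach speed, obstacle height).
# In the paper these would be features of retargeted human motion segments;
# the values here are made up for the sketch.
clip_library = {
    "step_over": [1.0, 0.3],
    "vault":     [3.0, 0.9],
    "climb":     [0.8, 1.25],
    "roll_off":  [1.5, -1.0],
}

def match_next_clip(query, library):
    """Nearest-neighbor search in feature space: return the clip whose
    feature vector has the smallest Euclidean distance to the query
    built from the current state and the commanded target."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(library, key=lambda name: dist(library[name], query))

# Query: moving at ~2.8 m/s toward a ~0.9 m obstacle.
print(match_next_clip([2.8, 0.9], clip_library))  # -> vault
```

Chaining such lookups, one per segment, is what lets isolated clips become a long-horizon kinematic trajectory rather than a single imitated move.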

That approach matters because the study is explicitly trying to solve a problem the authors describe early on. In the paper, they write, “While recent advances in humanoid locomotion have achieved stable walking on varied terrains, capturing the agility and adaptivity of highly dynamic human motions remains an open challenge.” A bit further on, they narrow the point: “In particular, agile parkour in complex environments demands not only low-level robustness, but also human-like motion expressiveness, long-horizon skill composition, and perception-driven decision-making.”

Training One Controller to Execute Several Skills

After generating those motion sequences, the team trained robot controllers on them through reinforcement learning. First, the controllers learned individual behaviors. After that, the researchers distilled them into a single controller that could use visual input to coordinate different actions depending on the obstacle in front of it.

As reported in the study, the resulting system works from onboard depth sensing and a discrete 2D velocity command. The authors write, “Crucially, the combination of perception and skill composition enables autonomous, context-aware decision-making: using only onboard depth sensing and a discrete 2D velocity command, the robot selects and executes whether to step over, climb onto, vault or roll off obstacles of varying geometries and heights.”
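The decision the quote describes can be sketched as a rule over perceived obstacle geometry. This is only a stand-in: the paper's policy is a learned visuomotor controller, not hand-written rules, and every threshold and skill name below is an assumption made for the example.

```python
def select_skill(obstacle_height_m, commanded_speed_mps):
    """Illustrative stand-in for the learned policy: map a perceived
    obstacle height and the commanded speed to a parkour skill.
    Thresholds are invented; only the 1.25 m climb limit comes from
    the paper (about 96% of the robot's height)."""
    if obstacle_height_m < 0.2:
        return "step_over"
    if obstacle_height_m < 0.7 and commanded_speed_mps >= 2.0:
        return "vault"      # fast approach over a mid-height obstacle
    if obstacle_height_m <= 1.25:
        return "climb"      # within the reported climbing limit
    return "stop"           # obstacle taller than anything demonstrated

print(select_skill(0.6, 3.0))  # -> vault
```

A rolling skill for dropping off obstacles is omitted here for brevity; in the real system the choice is made continuously from depth images, which is what allows closed-loop adaptation when obstacles move.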

Humanoid robot performing parkour: cat vault, 1.25 m drop landing, and 48-second obstacle course with real-time adaptation ©arxiv

The paper describes the policy as a multi-skill visuomotor system capable of handling “complex contact-rich maneuvers,” including vaulting at about 3 m/s, climbing onto a 1.25-meter wall, and performing a 60-second continuous traversal of a complex parkour course. That list gives a clear sense of what the researchers were training for: not one flashy move, but a sequence of them.

Real-World Tests on the Unitree G1

For validation, the researchers deployed the framework on a Unitree G1 humanoid robot. In the real-world experiments described in the paper, the robot carried out a set of dynamic parkour behaviors, including climbing high obstacles and moving across multi-obstacle courses with closed-loop adaptation.

According to the arXiv paper, “We validate our framework with extensive real-world experiments on a Unitree G1 humanoid robot, demonstrating highly dynamic parkour skills such as climbing tall obstacles up to 1.25 m (96% robot height), as well as long-horizon multi-obstacle traversal with closed-loop adaptation to real-time obstacle perturbations.”

The work was introduced under the name Perceptive Humanoid Parkour, or PHP, and researcher Guanya Shi wrote in a post quoted in the source material that it lets “a humanoid perceive the world and decide movements on the fly — running, vaulting, climbing, adapting online.” The team also stated that the framework would be fully open-sourced soon.

Originally written by: Slamani Aghilas

Source: Indian Defence Review

Published on: 8 March 2026

Link to original article: Watch This Robot Run, Vault, and Climb Like a Parkour Athlete
