Reinforcement learning-based cascade motion policy design for robust 3D bipedal locomotion

Abstract

This paper presents a novel reinforcement learning (RL) framework to design cascade feedback control policies for 3D bipedal locomotion. Existing RL algorithms are often trained in an end-to-end manner or rely on prior knowledge of reference joint or task space trajectories. Unlike these studies, we propose a policy structure that decouples the bipedal locomotion problem into two modules that incorporate physical insights from the nature of the walking dynamics and the well-established Hybrid Zero Dynamics approach for 3D bipedal walking. As a result, the overall RL framework has several key advantages, including a lightweight network structure, sample efficiency, and less dependence on prior knowledge. The proposed solution learns stable and robust walking gaits from scratch and allows the controller to realize omnidirectional walking with accurate tracking of the desired velocity and heading angle. The learned policies also perform robustly against various adversarial forces applied to the torso and while walking blindly over a series of challenging and unstructured terrains. These results demonstrate that the proposed cascade feedback control policy is well suited for the navigation of 3D bipedal robots in indoor and outdoor environments.
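The abstract does not spell out the internals of the two modules, but the sketch below illustrates one way such a cascade structure could be organized: a lightweight high-level policy maps a reduced-order state and the velocity/heading command to gait parameters, and a low-level joint-space controller tracks the resulting HZD-style outputs at a higher rate. All class names, dimensions, and the PD tracking law here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from math import comb


class HighLevelGaitPolicy:
    """Small MLP mapping the reduced-order robot state and the
    velocity/heading command to gait parameters (here, hypothetical
    Bezier coefficients for HZD-style output trajectories)."""

    def __init__(self, state_dim=10, cmd_dim=3, n_params=50, hidden=32, seed=0):
        rng = np.random.default_rng(seed)
        in_dim = state_dim + cmd_dim
        # Two small layers -- a "lightweight network structure" as the
        # abstract describes; the sizes are illustrative only.
        self.W1 = rng.normal(0.0, 0.1, (hidden, in_dim))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (n_params, hidden))
        self.b2 = np.zeros(n_params)

    def __call__(self, reduced_state, command):
        x = np.concatenate([reduced_state, command])
        h = np.tanh(self.W1 @ x + self.b1)
        return self.W2 @ h + self.b2  # gait parameters for the current step


def bezier(coeffs, s):
    """Evaluate per-joint Bezier polynomials at gait phase s in [0, 1]."""
    n = coeffs.shape[-1] - 1
    k = np.arange(n + 1)
    binom = np.array([comb(n, i) for i in k])
    basis = binom * s**k * (1.0 - s) ** (n - k)
    return coeffs @ basis


class LowLevelTrackingController:
    """Joint-space PD controller that tracks the desired outputs encoded
    by the gait parameters (an assumed realization of the low-level
    feedback module, running at a higher rate than the policy)."""

    def __init__(self, kp=400.0, kd=10.0):
        self.kp, self.kd = kp, kd

    def torque(self, q, dq, gait_params, phase, n_joints=10):
        coeffs = gait_params.reshape(n_joints, -1)  # Bezier coefficients per joint
        q_des = bezier(coeffs, phase)
        return self.kp * (q_des - q) - self.kd * dq


# Illustrative use: the high-level policy updates once per step, while the
# tracking controller is evaluated along the gait phase.
policy = HighLevelGaitPolicy()
controller = LowLevelTrackingController()
params = policy(np.zeros(10), np.array([0.5, 0.0, 0.0]))  # [vx, vy, heading] command
tau = controller.torque(np.zeros(10), np.zeros(10), params, phase=0.3)
```

Decoupling the problem this way keeps the learned network small, since the high-rate stabilization burden is carried by the model-based low-level loop rather than by the policy itself.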

Publication
IEEE Access
Guillermo A Castillo
Ph.D. Candidate

I am a Ph.D. candidate in Electrical and Computer Engineering at The Ohio State University.

Ayonga Hereid
Assistant Professor of Mechanical Engineering

My research aims to develop computational and theoretical tools to mitigate the high dimensionality and nonlinearity present in robot control and motion planning problems.