Hybrid Zero Dynamics Inspired Feedback Control Policy Design for 3D Bipedal Locomotion Using Reinforcement Learning

Abstract

This paper presents a novel model-free reinforcement learning (RL) framework to design feedback control policies for 3D bipedal walking. Existing RL algorithms are often trained in an end-to-end manner or rely on prior knowledge of some reference joint trajectories. In contrast, we propose a novel policy structure that appropriately incorporates physical insights gained from the hybrid nature of the walking dynamics and the well-established hybrid zero dynamics approach for 3D bipedal walking. As a result, the overall RL framework has several key advantages, including a lightweight network structure, short training time, and less dependence on prior knowledge. We demonstrate the effectiveness of the proposed method on Cassie, a challenging 3D bipedal robot. The proposed solution produces stable walking limit cycles that can track various walking speeds in different directions. Surprisingly, without being specifically trained with disturbances to achieve robustness, it also performs robustly against various adversarial forces applied to the torso in both the forward and backward directions.
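To make the policy structure concrete, the sketch below illustrates the general idea behind hybrid-zero-dynamics-inspired control: instead of emitting joint torques directly, a high-level policy outputs the coefficients of parameterized reference trajectories (commonly Bézier polynomials in the HZD literature), which a low-level joint controller tracks at high rate. This is only a minimal illustration; the function names, PD gains, and coefficient values are made up for the example and are not taken from the paper.

```python
import math

def bezier(alpha, s):
    """Evaluate a Bezier polynomial with coefficients alpha at phase s in [0, 1].

    In HZD-style controllers, s is a monotonic phase variable that advances
    over one walking step, and alpha parameterizes the desired joint motion.
    """
    m = len(alpha) - 1
    return sum(a * math.comb(m, k) * (1 - s) ** (m - k) * s ** k
               for k, a in enumerate(alpha))

def pd_torque(q_des, q, dq, kp=30.0, kd=1.0):
    """Joint-level PD law tracking the desired position (desired velocity ~ 0).

    The gains here are arbitrary placeholders, not tuned values.
    """
    return kp * (q_des - q) - kd * dq

# Hypothetical usage for a single joint: a learned policy (not shown) would
# output alpha once per walking step; the low-level loop then evaluates the
# reference and computes torques at the control frequency.
alpha = [0.0, 0.2, 0.5, 0.4]   # made-up Bezier coefficients for one joint
s = 0.5                        # phase variable at mid-step
q_des = bezier(alpha, s)       # desired joint position at this phase
tau = pd_torque(q_des, q=0.3, dq=0.1)
```

The appeal of this structure is that the learned component only has to produce a low-dimensional set of trajectory parameters per step, while trajectory smoothness and tracking are handled by the fixed low-level controller, which keeps the network lightweight and shortens training.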

Publication
2020 IEEE International Conference on Robotics and Automation (ICRA)
Guillermo A Castillo
Ph.D. Candidate

I am a Ph.D. candidate in Electrical and Computer Engineering at The Ohio State University.

Ayonga Hereid
Assistant Professor of Mechanical Engineering

My research aims to develop computational and theoretical tools to mitigate the high dimensionality and nonlinearity present in robot control and motion planning problems.