Robotic Embodiment of Human-Like Motor Skills via Reinforcement Learning

Research output: Contribution to journal › Article › peer-review


Abstract

Current methods require robots to be reprogrammed for every new task, consuming substantial engineering effort. This work focuses on integrating real and simulated environments for our proposed 'Internet of Skills,' which enables robots to learn advanced skills from a small set of expert demonstrations. By building on recent work in Learning from Demonstrations (LfD) and Reinforcement Learning (RL), we train robot control policies that not only complete a given task effectively but also outperform the expert demonstrations used to train them. In this work, we create simulated environments to train RL algorithms for the tasks of inverse kinematics and obstacle avoidance. We compare several state-of-the-art RL algorithms and provide a detailed analysis of the chosen state space and parameters. Lastly, we utilize a Vicon motion tracking system and train the robot agent to follow trajectories given by a human operator. Our results show that reinforcement learning algorithms such as proximal policy optimization can develop control policies capable of complex control tasks that integrate with the real world, an important first step towards a system that autonomously learns new skills from human demonstrations.
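The abstract describes training an RL policy for inverse kinematics with obstacle avoidance. As a minimal illustration of how such a task can be posed as a reward-shaping problem, the sketch below defines forward kinematics for a hypothetical 2-link planar arm and a shaped reward that tracks a target while penalizing obstacle proximity. The link lengths, penalty weight, and geometry are illustrative assumptions, not taken from the paper, which uses a full robot arm and PPO-class algorithms.

```python
import numpy as np

# Hypothetical link lengths for a 2-link planar arm (illustrative, not from the paper).
L1, L2 = 1.0, 0.8

def forward_kinematics(q):
    """End-effector (x, y) position for joint angles q = (q1, q2)."""
    q1, q2 = q
    x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
    y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
    return np.array([x, y])

def reward(q, target, obstacle_center, obstacle_radius):
    """Shaped reward: negative distance to target, minus a penalty
    that grows linearly as the end effector enters the obstacle."""
    ee = forward_kinematics(q)
    r = -np.linalg.norm(ee - target)            # dense tracking term
    d_obs = np.linalg.norm(ee - obstacle_center)
    if d_obs < obstacle_radius:                 # inside the obstacle region
        r -= 10.0 * (obstacle_radius - d_obs)   # assumed penalty weight
    return r
```

In an RL setting, a policy would output joint-angle commands and receive this reward each step; a dense reward of this shape is a common choice because it gives the learner a gradient toward the target even far from completion.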

Original language: English (US)
Pages (from-to): 3711-3717
Number of pages: 7
Journal: IEEE Robotics and Automation Letters
Volume: 7
Issue number: 2
DOIs
State: Published - Apr 1 2022
Externally published: Yes

Bibliographical note

Publisher Copyright:
© 2016 IEEE.

Keywords

  • Collision avoidance
  • Model learning for control
  • Reinforcement learning
  • Telerobotics and teleoperation
  • Transfer learning
