Abstract
This paper introduces a challenging object grasping task and proposes a self-supervised learning approach. The goal of the task is to grasp an object that cannot be picked up with a single parallel gripper alone, but only by harnessing environment fixtures (e.g., walls, furniture, heavy objects). This Slide-to-Wall grasping task assumes no prior knowledge beyond a partial observation of the target object. Hence, the robot should learn an effective policy from a scene observation that may include the target object, environmental fixtures, and other distractor objects. We formulate the problem as visual affordance learning, for which a Target-Oriented Deep Q-Network (TO-DQN) is proposed to efficiently learn visual affordance maps (i.e., Q-maps) that guide robot actions. Since training requires the robot to explore and collide with the fixtures, TO-DQN is first trained safely on a simulated robot manipulator and then applied to a real robot. We empirically show that TO-DQN learns to solve the task in different environment settings in simulation and outperforms both a standard Deep Q-Network (DQN) and a variant of it in terms of training efficiency and robustness. Test results in both simulation and real-robot experiments show that the policy trained by TO-DQN achieves performance comparable to that of humans.
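The abstract frames the approach as learning pixel-wise visual affordance maps (Q-maps) from a scene observation conditioned on the target object. The network details are not given here, so the following is a minimal PyTorch sketch of one plausible target-conditioned Q-map architecture; `QMapNet`, the layer sizes, and the input encoding are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class QMapNet(nn.Module):
    """Illustrative fully convolutional network: maps a scene observation
    plus a target-object mask to a pixel-wise Q-map, one Q value per
    candidate action location. (Hypothetical architecture, not the paper's.)"""

    def __init__(self, in_channels: int = 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, obs: torch.Tensor, target_mask: torch.Tensor) -> torch.Tensor:
        # Concatenate the target mask as an extra input channel so the
        # predicted Q values are conditioned on the target object.
        x = torch.cat([obs, target_mask], dim=1)
        return self.decoder(self.encoder(x))

# Greedy action selection: the pixel with the highest Q value.
net = QMapNet()
obs = torch.randn(1, 3, 64, 64)          # e.g., an RGB or heightmap observation
mask = torch.zeros(1, 1, 64, 64)         # binary mask marking the target object
mask[0, 0, 20:30, 20:30] = 1.0
q_map = net(obs, mask)                   # shape: (1, 1, 64, 64)
action = q_map.flatten(1).argmax(dim=1)  # flat index of the best pixel action
```

Under this reading, the "target-oriented" conditioning is simply the mask channel: the same scene yields different Q-maps for different target objects, which is one common way to make a DQN-style affordance predictor target-aware.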
Original language | English (US) |
---|---|
Title of host publication | 2021 IEEE International Conference on Robotics and Automation, ICRA 2021 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 6422-6428 |
Number of pages | 7 |
ISBN (Electronic) | 9781728190778 |
DOIs | |
State | Published - 2021 |
Event | 2021 IEEE International Conference on Robotics and Automation, ICRA 2021 - Xi'an, China. Duration: May 30, 2021 → Jun 5, 2021 |
Publication series
Name | Proceedings - IEEE International Conference on Robotics and Automation |
---|---|
Volume | 2021-May |
ISSN (Print) | 1050-4729 |
Conference
Conference | 2021 IEEE International Conference on Robotics and Automation, ICRA 2021 |
---|---|
Country/Territory | China |
City | Xi'an |
Period | 5/30/21 → 6/5/21 |
Bibliographical note
Funding Information: This work was supported in part by the MnDRIVE Initiative on Robotics, Sensors, and Advanced Manufacturing. The authors are with the University of Minnesota, Minneapolis, MN 55455, USA ({liang656, lou00015, yang5276, cchoi}@umn.edu).
Publisher Copyright:
© 2021 IEEE
Keywords
- Deep learning in grasping
- Grasping
- Manipulation
- Perception for grasping and manipulation