Abstract
In this paper, we present a real-time method for simultaneous object-level scene understanding and grasp prediction. Specifically, given a single RGBD image of a scene, our method localizes all the objects in the scene and, for each object, generates the following: full 3D shape, scale, pose with respect to the camera frame, and a dense set of feasible grasps. The main advantage of our method is its computation speed, as it avoids sequential perception and grasp planning. With detailed quantitative analysis of reconstruction quality and grasp accuracy, we show that our method delivers competitive performance compared to state-of-the-art methods while providing fast inference at 30 frames per second.
Original language | English (US) |
---|---|
Title of host publication | 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023 |
Publisher | Institute of Electrical and Electronics Engineers Inc. |
Pages | 3184-3191 |
Number of pages | 8 |
ISBN (Electronic) | 9781665491907 |
DOIs | |
State | Published - 2023 |
Externally published | Yes |
Event | 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023 - Detroit, United States |
Duration | Oct 1 2023 → Oct 5 2023 |
Publication series
Name | IEEE International Conference on Intelligent Robots and Systems |
---|---|
ISSN (Print) | 2153-0858 |
ISSN (Electronic) | 2153-0866 |
Conference
Conference | 2023 IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS 2023 |
---|---|
Country/Territory | United States |
City | Detroit |
Period | 10/1/23 → 10/5/23 |
Bibliographical note
Publisher Copyright: © 2023 IEEE.