Robot-to-robot relative pose estimation using humans as markers

Md Jahidul Islam, Jiawei Mo, Junaed Sattar

Research output: Contribution to journal › Article › peer-review

Abstract

In this paper, we propose a method to determine the 3D relative pose of pairs of communicating robots by using human pose-based key-points as correspondences. We adopt a ‘leader-follower’ framework, in which the leader robot first detects and triangulates the key-points using OpenPose, a state-of-the-art human pose detector. Afterward, the follower robots match the corresponding 2D projections on their respective calibrated cameras and estimate their relative poses by solving the perspective-n-point (PnP) problem. We also design an efficient person re-identification technique for associating the mutually visible humans in the scene. Additionally, we present an iterative optimization algorithm that refines the associated key-points based on their local structural properties in the image space. We demonstrate that these refinement processes are essential for establishing accurate key-point correspondences across viewpoints. Furthermore, we evaluate the performance of the proposed relative pose estimation system through several experiments conducted in terrestrial and underwater environments. Finally, we discuss the relevant operational challenges of this approach and analyze its feasibility for multi-robot cooperative systems in human-dominated social settings and in feature-deprived environments such as underwater.
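The core geometric step described above (recovering a follower's pose from the leader's triangulated 3D key-points and the follower's own 2D detections) reduces to a standard PnP problem. The following is a minimal Python/OpenCV sketch of that step only, not the authors' implementation: it assumes the key-point correspondences have already been associated and refined, and all names, intrinsics, and data below are illustrative placeholders.

import numpy as np
import cv2

def estimate_relative_pose(points_3d, points_2d, K, dist_coeffs=None):
    """Recover (R, t): the leader's reference frame expressed in the
    follower's camera frame, from associated 3D-2D key-point pairs."""
    if dist_coeffs is None:
        dist_coeffs = np.zeros(5)  # assume an undistorted, calibrated camera
    # RANSAC tolerates a few mis-associated key-points (outliers)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(points_3d, dtype=np.float64),
        np.asarray(points_2d, dtype=np.float64),
        K, dist_coeffs, flags=cv2.SOLVEPNP_ITERATIVE)
    if not ok:
        raise RuntimeError("PnP failed: too few reliable correspondences")
    R, _ = cv2.Rodrigues(rvec)  # rotation vector -> 3x3 rotation matrix
    return R, tvec

if __name__ == "__main__":
    # Synthetic demo: project known 3D key-points with a known pose,
    # then recover that pose from the resulting 3D-2D correspondences.
    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])                         # illustrative intrinsics
    rng = np.random.default_rng(0)
    pts3d = rng.uniform(-1.0, 1.0, (8, 3)) + [0.0, 0.0, 4.0]  # key-points ~4 m away
    rvec_true = np.array([0.1, -0.2, 0.05])
    tvec_true = np.array([0.3, 0.0, 0.5])
    pts2d, _ = cv2.projectPoints(pts3d, rvec_true, tvec_true, K, np.zeros(5))
    R, t = estimate_relative_pose(pts3d, pts2d.reshape(-1, 2), K)
    print("recovered translation:", t.ravel())              # ~ [0.3, 0.0, 0.5]

The RANSAC variant is used here only to illustrate robustness to occasional association errors; the paper's person re-identification and key-point refinement steps serve that purpose in the actual system.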

Original language: English (US)
Pages (from-to): 579-593
Number of pages: 15
Journal: Autonomous Robots
Volume: 45
Issue number: 4
DOIs
State: Published - May 2021

Bibliographical note

Funding Information:
We would like to thank Hyun Soo Park (Assistant Professor, University of Minnesota) for his valuable insights which immensely enriched this paper. We gratefully acknowledge the support of the MnDrive initiative and thank NVIDIA Corporation for donating two Titan-class GPUs for this research. In addition, we are grateful to the Bellairs Research Institute of Barbados for providing us with the facilities for field experiments; we also acknowledge our colleagues at the IRVLab and the participants of the 2019 Marine Robotics Sea Trials for their assistance in collecting data and conducting the experiments.

Publisher Copyright:
© 2021, The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature.

Keywords

  • Marine robotics
  • Underwater human–robot cooperation
  • Underwater visual perception
