Patient-specific pose estimation in clinical environments

Kenny Chen, Paolo Gabriel, Abdulwahab Alasfour, Chenghao Gong, Werner K. Doyle, Orrin Devinsky, Daniel Friedman, Patricia Dugan, Lucia Melloni, Thomas Thesen, David Gonda, Shifteh Sattar, Sonya Wang, Vikash Gilja

Research output: Contribution to journal › Article › peer-review

Abstract

Reliable posture labels in hospital environments can augment research studies on the neural correlates of natural behaviors, as well as clinical applications that monitor patient activity. However, many existing pose estimation frameworks are not calibrated for these unpredictable settings. In this paper, we propose a semi-automated approach for improving upper-body pose estimation in noisy clinical environments, in which we adapt and build around an existing joint tracking framework to improve its robustness to environmental uncertainties. The proposed framework uses subject-specific convolutional neural network models trained on a subset of a patient's RGB video recording chosen to maximize the feature variance of each joint. Furthermore, by compensating for scene lighting changes and by refining the predicted joint trajectories through a Kalman filter with fitted noise parameters, the extended system yields more consistent and accurate posture annotations than two state-of-the-art generalized pose tracking algorithms for three hospital patients recorded in two research clinics.
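The trajectory-refinement step described above can be illustrated with a small sketch. The snippet below is not the authors' implementation; it is a minimal constant-velocity Kalman filter applied to a single joint's per-frame (x, y) estimates, with hypothetical noise scales (`process_var`, `measurement_var`) standing in for the fitted noise parameters mentioned in the abstract.

```python
# Illustrative sketch only: smooth one joint's 2-D trajectory with a
# constant-velocity Kalman filter. Noise scales are assumed values, not
# the fitted parameters from the paper.
import numpy as np

def smooth_joint_trajectory(positions, process_var=1e-2, measurement_var=1.0, dt=1.0):
    """Kalman-filter a (T, 2) array of per-frame (x, y) joint estimates."""
    # State: [x, y, vx, vy]; measurement: [x, y].
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1,  0],
                  [0, 0, 0,  1]], dtype=float)   # constant-velocity transition
    H = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0]], dtype=float)    # observe position only
    Q = process_var * np.eye(4)                  # process noise (assumed isotropic)
    R = measurement_var * np.eye(2)              # measurement noise

    x = np.array([positions[0, 0], positions[0, 1], 0.0, 0.0])  # initial state
    P = np.eye(4)                                                # initial covariance
    smoothed = np.empty_like(positions, dtype=float)

    for t, z in enumerate(positions):
        # Predict.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the per-frame CNN estimate z = (x, y).
        innovation = z - H @ x
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
        x = x + K @ innovation
        P = (np.eye(4) - K @ H) @ P
        smoothed[t] = x[:2]
    return smoothed

# Usage (hypothetical input): noisy_xy = per-frame wrist estimates, shape (T, 2)
# wrist_smooth = smooth_joint_trajectory(np.asarray(noisy_xy))
```

In practice, the process and measurement covariances would be fitted to the data rather than fixed by hand, which is the role the abstract attributes to the fitted noise parameters.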

Original language: English (US)
Article number: 2101111
Journal: IEEE Journal of Translational Engineering in Health and Medicine
Volume: 6
DOIs
State: Published - 2018

Bibliographical note

Publisher Copyright:
© 2018 Institute of Electrical and Electronics Engineers Inc. All rights reserved.

Keywords

  • Clinical environments
  • Convolutional neural networks
  • Kalman filter
  • Patient monitoring
  • Pose estimation
