Abstract
Newborn patients in the neonatal intensive care unit (NICU) require continuous monitoring of vital signs. Non-contact patient monitoring is preferred in this setting due to the fragile condition of neonatal patients. Depth-based approaches for estimating the respiratory rate (RR) can operate effectively in conditions where an RGB-based method would typically fail, such as low lighting or when a patient is covered with blankets. Many previously developed depth-based RR estimation techniques require careful camera placement with known geometry relative to the patient, or manual definition of a region of interest (ROI).
Here, we present a framework for depth-based RR estimation where the camera position is arbitrary and the ROI is determined automatically and directly from the depth data. Camera placement is addressed through perspective transformation of the scene, which is accomplished by selecting a small number of registration points known to lie in the same plane. The chest ROI is determined automatically by examining the morphology of progressive depth slices in the corrected depth data. We demonstrate the effectiveness of this RR estimation pipeline using actual neonatal patient depth data collected from an RGB-D sensor. RR estimation accuracy is measured relative to gold-standard RR captured from the bedside patient monitor. Perspective transformation is shown to be critical to the effectiveness of the automated ROI segmentation algorithm. Furthermore, the automated ROI segmentation algorithm is shown to improve both time- and frequency-domain RR estimation accuracy.
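As a rough, self-contained sketch of the pre-processing and estimation ideas described above (not the paper's actual implementation: the function names, the plane-fit-plus-Rodrigues-rotation approach, and the respiratory frequency band are all illustrative assumptions), perspective correction can be approximated by fitting a plane to the selected registration points and rotating the depth point cloud so that plane faces the camera axis, after which RR can be read from the dominant spectral peak of the mean ROI depth over time. The depth-slice morphological ROI segmentation step is omitted here for brevity.

import numpy as np

def perspective_correct(points, registration_points):
    """Rotate a depth point cloud so that the plane fitted to a few
    registration points (e.g. points on the bed surface) becomes
    perpendicular to the camera's z-axis.

    points: (N, 3) array of x, y, z coordinates from the depth sensor.
    registration_points: (M, 3) array, M >= 3, assumed to lie in one plane.
    """
    # Fit a plane to the registration points via SVD of centred coordinates.
    centroid = registration_points.mean(axis=0)
    _, _, vt = np.linalg.svd(registration_points - centroid)
    normal = vt[-1]                      # plane normal = least-variance direction
    if normal[2] < 0:                    # orient the normal towards the camera
        normal = -normal

    # Rotation mapping the plane normal onto the z-axis (Rodrigues' formula).
    target = np.array([0.0, 0.0, 1.0])
    v = np.cross(normal, target)
    s, c = np.linalg.norm(v), np.dot(normal, target)
    if s < 1e-9:                         # plane already faces the camera
        return points - centroid
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])
    rot = np.eye(3) + vx + vx @ vx * ((1 - c) / s**2)
    return (points - centroid) @ rot.T

def estimate_rr_fft(depth_signal, fps, band=(0.4, 1.5)):
    """Frequency-domain RR estimate from the mean ROI depth over time.

    depth_signal: 1-D array, one mean-depth sample per frame.
    fps: frame rate of the depth camera in Hz.
    band: assumed plausible respiratory band in Hz (0.4-1.5 Hz is roughly
          24-90 breaths/min; purely an illustrative choice for neonates).
    """
    signal = depth_signal - np.mean(depth_signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak_freq = freqs[mask][np.argmax(spectrum[mask])]
    return peak_freq * 60.0              # breaths per minute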
When combined, these pre-processing stages are shown to substantially improve the depth-based RR estimation pipeline, with the percentage of acceptable estimates (where the mean absolute error is less than 5 breaths per minute) increasing from 3.60% to 13.47% in the frequency domain and from 6.12% to 8.97% in the time domain. Further development will focus on RR estimation from the perspective-corrected depth data and segmented ROI.
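For reference, the acceptable-estimate percentage above can be computed along the following lines. Only the 5 breaths-per-minute threshold comes from the text; the per-segment structure and function name are assumptions made for illustration.

import numpy as np

def percent_acceptable(estimates_per_segment, references_per_segment, threshold_bpm=5.0):
    """Percentage of segments whose mean absolute error (MAE) between the
    estimated RR and the bedside-monitor reference RR is below threshold_bpm.

    estimates_per_segment, references_per_segment: lists of 1-D arrays of RR
    values in breaths/min, one pair of arrays per recording segment.
    """
    acceptable = [
        np.mean(np.abs(np.asarray(est) - np.asarray(ref))) < threshold_bpm
        for est, ref in zip(estimates_per_segment, references_per_segment)
    ]
    return 100.0 * np.mean(acceptable)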
Download
You can view a preprint version of the paper as a PDF here. The DOI is 10.1109/MeMeA54994.2022.9856449. The Git repository containing the code and scripts used for analysis can be found here.
Citation
@INPROCEEDINGS{HajjAliMEMEA22,
  author={Hajj-Ali, Zein and Greenwood, Kim and Harrold, JoAnn and Green, James R.},
  booktitle={2022 IEEE International Symposium on Medical Measurements and Applications (MeMeA)},
  title={Towards Depth-based Respiratory Rate Estimation with Arbitrary Camera Placement},
  year={2022},
  volume={},
  number={},
  pages={1-6},
  doi={10.1109/MeMeA54994.2022.9856449}
}
Copyright Information
© 2022 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.