Research at St Andrews

Out of sight: a toolkit for tracking occluded human joint positions

Research output: Contribution to journal › Article › peer-review

Standard

Out of sight: a toolkit for tracking occluded human joint positions. / Wu, Chi-Jui; Quigley, Aaron John; Harris-Birtill, David Cameron Christopher.

In: Personal and Ubiquitous Computing, Vol. 21, No. 1, 02.2017, p. 125-135.

Harvard

Wu, C-J, Quigley, AJ & Harris-Birtill, DCC 2017, 'Out of sight: a toolkit for tracking occluded human joint positions', Personal and Ubiquitous Computing, vol. 21, no. 1, pp. 125-135. https://doi.org/10.1007/s00779-016-0997-6

APA

Wu, C-J., Quigley, A. J., & Harris-Birtill, D. C. C. (2017). Out of sight: a toolkit for tracking occluded human joint positions. Personal and Ubiquitous Computing, 21(1), 125-135. https://doi.org/10.1007/s00779-016-0997-6

Vancouver

Wu C-J, Quigley AJ, Harris-Birtill DCC. Out of sight: a toolkit for tracking occluded human joint positions. Personal and Ubiquitous Computing. 2017 Feb;21(1):125-135. https://doi.org/10.1007/s00779-016-0997-6

Author

Wu, Chi-Jui; Quigley, Aaron John; Harris-Birtill, David Cameron Christopher. / Out of sight: a toolkit for tracking occluded human joint positions. In: Personal and Ubiquitous Computing. 2017; Vol. 21, No. 1. pp. 125-135.

BibTeX

@article{2d9d2fe70c3c44f789f113ffd06cd106,
title = "Out of sight: a toolkit for tracking occluded human joint positions",
abstract = "Real-time identification and tracking of the joint positions of people can be achieved with off-the-shelf sensing technologies such as the Microsoft Kinect, or other camera-based systems with computer vision. However, tracking is constrained by the system{\textquoteright}s field of view of people. When a person is occluded from the camera view, their position can no longer be followed. Out of Sight addresses the occlusion problem in depth-sensing tracking systems. Our new tracking infrastructure provides human skeleton joint positions during occlusion, by combining the field of view of multiple Kinects using geometric calibration and affine transformation. We verified the technique{\textquoteright}s accuracy through a system evaluation consisting of 20 participants in stationary position and in motion, with two Kinects positioned parallel, 45°, and 90° apart. Results show that our skeleton matching is accurate to within 16.1 cm (s.d. = 5.8 cm), which is within a person{\textquoteright}s personal space. In a realistic scenario study, groups of two people quickly occlude each other, and occlusion is resolved for 85% of the participants. A RESTful API was developed to allow distributed access of occlusion-free skeleton joint positions. As a further contribution, we provide the system as open source.",
keywords = "Kinect, Occlusion, Toolkit",
author = "Chi-Jui Wu and Quigley, {Aaron John} and Harris-Birtill, {David Cameron Christopher}",
year = "2017",
month = feb,
doi = "10.1007/s00779-016-0997-6",
language = "English",
volume = "21",
pages = "125--135",
journal = "Personal and Ubiquitous Computing",
issn = "1617-4909",
publisher = "Springer",
number = "1",

}
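
The abstract above describes the core technique: the fields of view of multiple Kinects are combined through geometric calibration and an affine transformation, so that a person occluded in one camera's view can still be tracked via another. A minimal Python sketch of that idea follows; this is not the authors' open-source implementation, and the least-squares calibration step, array shapes, and synthetic data are illustrative assumptions.

# Minimal sketch (not the authors' implementation): estimate a 3-D affine
# transform mapping joints from a secondary Kinect's frame into the primary
# Kinect's frame, then apply it to joints seen only by the secondary sensor.
import numpy as np

def fit_affine(src, dst):
    """Least-squares 4x4 affine A such that dst ~= A @ src (homogeneous).

    src, dst: (N, 3) corresponding joint positions (metres) observed
    simultaneously by both sensors during calibration.
    """
    n = src.shape[0]
    src_h = np.hstack([src, np.ones((n, 1))])        # homogeneous coords, (N, 4)
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)  # (4, 3) affine parameters
    A = np.eye(4)
    A[:3, :] = M.T                                   # pack into a 4x4 matrix
    return A

def transform_joints(A, joints):
    """Map (N, 3) joint positions through the 4x4 affine transform A."""
    joints_h = np.hstack([joints, np.ones((joints.shape[0], 1))])
    return (A @ joints_h.T).T[:, :3]

# Calibration: the same unoccluded person's joints seen by both Kinects.
rng = np.random.default_rng(0)
secondary_view = rng.uniform(-1.0, 1.0, (20, 3))
true_A = np.eye(4)
true_A[:3, 3] = [0.5, 0.0, 2.0]                      # synthetic ground-truth offset
primary_view = transform_joints(true_A, secondary_view)
A = fit_affine(secondary_view, primary_view)

# During occlusion, joints tracked only by the secondary Kinect can now be
# reported in the primary Kinect's coordinate frame.
occluded = rng.uniform(-1.0, 1.0, (25, 3))
recovered = transform_joints(A, occluded)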

RIS (suitable for import to EndNote)

TY - JOUR

T1 - Out of sight

T2 - a toolkit for tracking occluded human joint positions

AU - Wu, Chi-Jui

AU - Quigley, Aaron John

AU - Harris-Birtill, David Cameron Christopher

PY - 2017/2

Y1 - 2017/2

N2 - Real-time identification and tracking of the joint positions of people can be achieved with off-the-shelf sensing technologies such as the Microsoft Kinect, or other camera-based systems with computer vision. However, tracking is constrained by the system’s field of view of people. When a person is occluded from the camera view, their position can no longer be followed. Out of Sight addresses the occlusion problem in depth-sensing tracking systems. Our new tracking infrastructure provides human skeleton joint positions during occlusion, by combining the field of view of multiple Kinects using geometric calibration and affine transformation. We verified the technique’s accuracy through a system evaluation consisting of 20 participants in stationary position and in motion, with two Kinects positioned parallel, 45°, and 90° apart. Results show that our skeleton matching is accurate to within 16.1 cm (s.d. = 5.8 cm), which is within a person’s personal space. In a realistic scenario study, groups of two people quickly occlude each other, and occlusion is resolved for 85% of the participants. A RESTful API was developed to allow distributed access of occlusion-free skeleton joint positions. As a further contribution, we provide the system as open source.

AB - Real-time identification and tracking of the joint positions of people can be achieved with off-the-shelf sensing technologies such as the Microsoft Kinect, or other camera-based systems with computer vision. However, tracking is constrained by the system’s field of view of people. When a person is occluded from the camera view, their position can no longer be followed. Out of Sight addresses the occlusion problem in depth-sensing tracking systems. Our new tracking infrastructure provides human skeleton joint positions during occlusion, by combining the field of view of multiple Kinects using geometric calibration and affine transformation. We verified the technique’s accuracy through a system evaluation consisting of 20 participants in stationary position and in motion, with two Kinects positioned parallel, 45°, and 90° apart. Results show that our skeleton matching is accurate to within 16.1 cm (s.d. = 5.8 cm), which is within a person’s personal space. In a realistic scenario study, groups of two people quickly occlude each other, and occlusion is resolved for 85% of the participants. A RESTful API was developed to allow distributed access of occlusion-free skeleton joint positions. As a further contribution, we provide the system as open source.

KW - Kinect

KW - Occlusion

KW - Toolkit

U2 - 10.1007/s00779-016-0997-6

DO - 10.1007/s00779-016-0997-6

M3 - Article

VL - 21

SP - 125

EP - 135

JO - Personal and Ubiquitous Computing

JF - Personal and Ubiquitous Computing

SN - 1617-4909

IS - 1

ER -
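
The abstract also mentions a RESTful API for distributed access to occlusion-free skeleton joint positions. A hedged sketch of a polling client follows; the host, the /skeletons route, and the JSON fields are assumptions for illustration, not the toolkit's documented interface.

# Hypothetical client: the server URL, route, and JSON layout below are
# assumptions, not the toolkit's documented API.
import requests

BASE_URL = "http://tracking-server.local:8080"  # placeholder server address

def get_skeletons():
    """Fetch fused skeletons; each entry is assumed to carry a person id
    and a mapping of joint name -> [x, y, z] position in metres."""
    resp = requests.get(f"{BASE_URL}/skeletons", timeout=1.0)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for skeleton in get_skeletons():
        head = skeleton["joints"].get("Head")
        print(f"person {skeleton['id']}: head at {head}")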

Related by author

  1. Understanding computation time: a critical discussion of time as a computational performance metric

    Harris-Birtill, D. & Harris-Birtill, R., 3 Aug 2020, (Accepted/In press) Time in variance: the study of time. Parker, J., Harris, P. & Misztal, A. (eds.). Brill, Vol. 17. (The Study of Time).

    Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed) › peer-review

  2. Autofocus Net: Auto-focused 3D CNN for Brain Tumour Segmentation.

    Stefani, A., Rahmat, R. & Harris-Birtill, D. C. C., 8 Jul 2020, In Annual Conference on Medical Image Understanding and Analysis: Part of the Communications in Computer and Information Science book series (CCIS). Springer, Vol. 1248. p. 43-55 13 p.

    Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed) › peer-review

  3. Paying per-label attention for multi-label extraction from radiology reports

    Schrempf, P., Watson, H., Mikhael, S., Pajak, M., Falis, M., Lisowska, A., Muir, K. W., Harris-Birtill, D. & O'Neil, A. Q., 2020, Interpretable and Annotation-Efficient Learning for Medical Image Computing: Third International Workshop, iMIMIC 2020, Second International Workshop, MIL3iD 2020, and 5th International Workshop, LABELS 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4–8, 2020, Proceedings. Cardoso, J., Van Nguyen, H., Heller, N., Henriques Abreu, P., Isgum, I., Silva, W., Cruz, R., Pereira Amorim, J., Patel, V., Roysam, B., Zhou, K., Jiang, S., Le, N., Luu, K., Sznitman, R., Cheplygina, V., Mateus, D., Trucco, E. & Abbasi, S. (eds.). Cham: Springer, p. 277-289 13 p. (Lecture Notes in Computer Science (including subseries Image Processing, Computer Vision, Pattern Recognition, and Graphics); vol. 12446 LNCS).

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  4. Smart Homes for elderly to promote their health and wellbeing

    Pirzada, P., Wilde, A. G. & Harris-Birtill, D. C. C., 16 Sep 2019. 1 p.

    Research output: Contribution to conference › Poster › peer-review

  5. A comparison of level set models in image segmentation

    Rahmat, R. & Harris-Birtill, D., 6 Dec 2018, In: IET Image Processing. 12, 12, p. 2212-2221 11 p.

    Research output: Contribution to journal › Article › peer-review

Related by journal

  1. A taxonomy for and analysis of multi-person-display ecosystems

    Terrenghi, L., Quigley, A. & Dix, A., Nov 2009, In: Personal and Ubiquitous Computing. 13, 8, p. 583-598 16 p.

    Research output: Contribution to journal › Article › peer-review

  2. Special issue on interaction with coupled and public displays

    Quigley, A., Subramanian, S. & Izadi, S., Nov 2009, In: Personal and Ubiquitous Computing. 13, 8, p. 549-550 2 p.

    Research output: Contribution to journal › Editorial › peer-review

  3. MEMENTO: a digital-physical scrapbook for memory sharing

    West, D., Quigley, A. & Kay, J., 2007, In: Personal and Ubiquitous Computing. 11, 4, p. 313-328 16 p.

    Research output: Contribution to journal › Article › peer-review

  4. Epidemic Messaging Middleware for Ad hoc networks

    Musolesi, M., Mascolo, C. & Hailes, S., 2006, In: Personal and Ubiquitous Computing. 10, 1, p. 28-36 9 p.

    Research output: Contribution to journal › Article › peer-review
