Research at St Andrews

RadarCat: Radar Categorization for input & interaction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Open Access permissions: Open

RadarCat: Radar Categorization for input & interaction. / Yeo, Hui Shyong; Flamich, Gergely; Schrempf, Patrick; Harris-Birtill, David Cameron Christopher; Quigley, Aaron John.

Proceedings of the 29th Annual Symposium on User Interface Software and Technology. ACM, 2016. p. 833-841.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Harvard

Yeo, HS, Flamich, G, Schrempf, P, Harris-Birtill, DCC & Quigley, AJ 2016, RadarCat: Radar Categorization for input & interaction. in Proceedings of the 29th Annual Symposium on User Interface Software and Technology. ACM, pp. 833-841, 29th ACM User Interface Software and Technology Symposium, Tokyo, Japan, 16-19 October. DOI: 10.1145/2984511.2984515

APA

Yeo, H. S., Flamich, G., Schrempf, P., Harris-Birtill, D. C. C., & Quigley, A. J. (2016). RadarCat: Radar Categorization for input & interaction. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (pp. 833-841). ACM. DOI: 10.1145/2984511.2984515

Vancouver

Yeo HS, Flamich G, Schrempf P, Harris-Birtill DCC, Quigley AJ. RadarCat: Radar Categorization for input & interaction. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology. ACM. 2016. p. 833-841. DOI: 10.1145/2984511.2984515

Author

Yeo, Hui Shyong; Flamich, Gergely; Schrempf, Patrick; Harris-Birtill, David Cameron Christopher; Quigley, Aaron John / RadarCat: Radar Categorization for input & interaction.

Proceedings of the 29th Annual Symposium on User Interface Software and Technology. ACM, 2016. p. 833-841.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

BibTeX

@inproceedings{c267eae228484520ac0adb13fd9068a3,
title = "RadarCat: Radar Categorization for input & interaction",
keywords = "Context-aware interaction, Machine learning, Material classification, Object recognition, Ubiquitous computing",
author = "Yeo, {Hui Shyong} and Gergely Flamich and Patrick Schrempf and Harris-Birtill, {David Cameron Christopher} and Quigley, {Aaron John}",
note = "The research described here was supported by the University of St Andrews and the Scottish Informatics and Computer Science Alliance (SICSA).",
year = "2016",
month = "10",
doi = "10.1145/2984511.2984515",
isbn = "9781450341899",
pages = "833--841",
booktitle = "Proceedings of the 29th Annual Symposium on User Interface Software and Technology",
publisher = "ACM",
address = "United States",
}

RIS (suitable for import to EndNote)

TY - CHAP

T1 - RadarCat 

T2 - Radar Categorization for input & interaction

AU - Yeo, Hui Shyong

AU - Flamich, Gergely

AU - Schrempf, Patrick

AU - Harris-Birtill, David Cameron Christopher

AU - Quigley, Aaron John

N1 - The research described here was supported by the University of St Andrews and the Scottish Informatics and Computer Science Alliance (SICSA).

PY - 2016/10/16

Y1 - 2016/10/16

N2 - In RadarCat we present a small, versatile radar-based system for material and object classification which enables new forms of everyday proximate interaction with digital devices. We demonstrate that we can train the system on different types of materials and objects, which it can then recognize in real time. Based on established research designs, we report on the results of three studies: first with 26 materials (including complex composite objects), next with 16 transparent materials (with different thicknesses and varying dyes), and finally with 10 body parts from 6 participants. Both leave-one-out and 10-fold cross-validation demonstrate that our approach of classifying radar signals with a random forest classifier is robust and accurate. We further demonstrate four working examples built on RadarCat, including a physical object dictionary, a painting and photo editing application, body shortcuts and automatic refill. We conclude with a discussion of our results and limitations, and outline future directions.

AB - In RadarCat we present a small, versatile radar-based system for material and object classification which enables new forms of everyday proximate interaction with digital devices. We demonstrate that we can train the system on different types of materials and objects, which it can then recognize in real time. Based on established research designs, we report on the results of three studies: first with 26 materials (including complex composite objects), next with 16 transparent materials (with different thicknesses and varying dyes), and finally with 10 body parts from 6 participants. Both leave-one-out and 10-fold cross-validation demonstrate that our approach of classifying radar signals with a random forest classifier is robust and accurate. We further demonstrate four working examples built on RadarCat, including a physical object dictionary, a painting and photo editing application, body shortcuts and automatic refill. We conclude with a discussion of our results and limitations, and outline future directions.

KW - Context-aware interaction

KW - Machine learning

KW - Material classification

KW - Object recognition

KW - Ubiquitous computing

U2 - 10.1145/2984511.2984515

DO - 10.1145/2984511.2984515

M3 - Conference contribution

SN - 9781450341899

SP - 833

EP - 841

BT - Proceedings of the 29th Annual Symposium on User Interface Software and Technology

PB - ACM

ER -
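
The abstract above outlines the recognition pipeline: features extracted from the radar signal are fed to a random forest classifier, and accuracy is reported under both leave-one-out and 10-fold cross-validation. This record does not include the authors' code, so the Python sketch below only illustrates that evaluation setup; the use of scikit-learn, the synthetic stand-in features, and the array dimensions are assumptions for illustration, not details taken from the paper.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

# Hypothetical dataset: one feature vector per radar snapshot of an object
# resting on the sensor, one material/object label per snapshot.
# Shapes are assumptions; on random data the scores sit near chance level.
n_samples, n_features, n_classes = 260, 64, 26
X = rng.normal(size=(n_samples, n_features))      # stand-in radar features
y = rng.integers(0, n_classes, size=n_samples)    # stand-in labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# 10-fold cross-validation, one of the two schemes named in the abstract.
kfold_scores = cross_val_score(
    clf, X, y, cv=KFold(n_splits=10, shuffle=True, random_state=0))
print(f"10-fold accuracy:       {kfold_scores.mean():.3f}")

# Leave-one-out cross-validation, the other scheme named in the abstract.
loo_scores = cross_val_score(clf, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {loo_scores.mean():.3f}")

With real RadarCat data, X would hold the per-snapshot radar features and y the material or body-part labels from the three studies; the cross-validation calls themselves would be unchanged.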

Related by author

  1. Out of sight: a toolkit for tracking occluded human joint positions

Wu, C-J., Quigley, A. J. & Harris-Birtill, D. C. C. Feb 2017 In: Personal and Ubiquitous Computing. 21, 1, p. 125-135. 11 p.

Research output: Contribution to journal › Article

  2. Workshop on object recognition for input and mobile interaction

    Yeo, H. S., Laput, G., Gillian, N. & Quigley, A. J. 16 Mar 2017 Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services Adjunct. ACM

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  3. Investigating tilt-based gesture keyboard entry for single-handed text entry on large devices

Yeo, H. S., Phang, X-S., Castellucci, S. J., Kristensson, P. O. & Quigley, A. J. 12 Jan 2017 Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems. ACM

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  4. Sidetap & slingshot gestures on unmodified smartwatches

    Yeo, H. S., Lee, J., Bianchi, A. & Quigley, A. J. 16 Oct 2016 Adjunct Proceedings of the 29th Annual Symposium on User Interface Software and Technology. New York: ACM, p. 189-190

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  5. WatchMI: pressure touch, twist and pan gesture input on unmodified smartwatches

Yeo, H. S., Lee, J., Bianchi, A. & Quigley, A. J. 7 Sep 2016 Proceedings of the 18th International Conference on Human-Computer Interaction with Mobile Devices and Services: MobileHCI '16. ACM, p. 394-399

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

ID: 245678240