Research at St Andrews

RadarCat : Radar Categorization for input & interaction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

In RadarCat we present a small, versatile radar-based system for material and object classification, which enables new forms of everyday proximate interaction with digital devices. We demonstrate that the system can be trained to classify different types of materials and objects, which it can then recognize in real time. Based on established research designs, we report the results of three studies: first with 26 materials (including complex composite objects), next with 16 transparent materials (of differing thickness and varying dyes), and finally with 10 body parts from 6 participants. Both leave-one-out and 10-fold cross-validation demonstrate that our approach of classifying radar signals with a random forest classifier is robust and accurate. We further demonstrate four working examples built on RadarCat: a physical object dictionary, a painting and photo-editing application, body shortcuts, and automatic refill. We conclude with a discussion of our results and limitations, and outline future directions.
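The evaluation protocol mentioned in the abstract can be sketched in a few lines. RadarCat trains a random forest on radar feature vectors; the sketch below substitutes synthetic feature vectors and a simple nearest-neighbour classifier — both purely illustrative stand-ins, not the authors' pipeline — to show how leave-one-out cross-validation scores a material classifier.

```python
import random

# Hypothetical stand-in for radar feature vectors: each "material" is a
# cluster of noisy measurements around a distinct centroid.
random.seed(0)

def make_samples(centroid, n=10, noise=0.05):
    return [[c + random.gauss(0, noise) for c in centroid] for _ in range(n)]

MATERIALS = {
    "wood":  [0.2, 0.8, 0.1],
    "steel": [0.9, 0.1, 0.7],
    "glass": [0.5, 0.5, 0.9],
}
data = [(x, label) for label, c in MATERIALS.items() for x in make_samples(c)]

def predict(train, x):
    # 1-nearest-neighbour by squared Euclidean distance (stand-in for
    # the random forest used in the paper).
    return min(train, key=lambda t: sum((a - b) ** 2 for a, b in zip(t[0], x)))[1]

def leave_one_out_accuracy(data):
    # Hold out each sample in turn, train on the rest, count correct labels.
    hits = sum(predict(data[:i] + data[i + 1:], x) == label
               for i, (x, label) in enumerate(data))
    return hits / len(data)

acc = leave_one_out_accuracy(data)
print(f"leave-one-out accuracy: {acc:.2f}")
```

With well-separated clusters the held-out sample's nearest neighbour almost always comes from its own material, so accuracy is near-perfect; 10-fold cross-validation follows the same pattern with folds of ten samples instead of single held-out points.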

Details

Original language: English
Title of host publication: Proceedings of the 29th Annual Symposium on User Interface Software and Technology
Publisher: ACM
Pages: 833-841
ISBN (Print): 9781450341899
DOIs
Publication status: Published - 16 Oct 2016
Event: 29th ACM User Interface Software and Technology Symposium - Hitotsubashi Hall, National Center of Sciences Building, Tokyo, Japan
Duration: 16 Oct 2016 - 19 Oct 2016
Conference number: 29
http://uist.acm.org/uist2016/

Conference

Conference: 29th ACM User Interface Software and Technology Symposium
Abbreviated title: UIST
Country: Japan
City: Tokyo
Period: 16/10/16 - 19/10/16
Internet address: http://uist.acm.org/uist2016/

Research areas

  • Context-aware interaction, Machine learning, Material classification, Object recognition, Ubiquitous computing


Related by author

  1. ‘Multiplicity embarrasses the eye’: The digital mapping of literary Edinburgh

    Loxley, J., Alex, B., Anderson, M., Hinrichs, U., Grover, C., Harris-Birtill, D., Thomson, T., Quigley, A. & Oberlander, J., 1 Jan 2018, The Routledge Companion to Spatial History. Taylor and Francis, p. 604-628 25 p.

    Research output: Chapter in Book/Report/Conference proceeding › Chapter

  2. SpeCam: sensing surface color and material with the front-facing camera of mobile device

    Yeo, H. S., Lee, J., Bianchi, A., Harris-Birtill, D. & Quigley, A. J., 4 Sep 2017, Proceedings of the 19th International Conference on Human-Computer Interaction with Mobile Devices and Services. New York, NY: ACM, 9 p. 25

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  3. Out of sight: a toolkit for tracking occluded human joint positions

    Wu, C-J., Quigley, A. J. & Harris-Birtill, D. C. C., Feb 2017, In : Personal and Ubiquitous Computing. 21, 1, p. 125-135 11 p.

    Research output: Contribution to journal › Article

  4. WRIST: Watch-Ring Interaction and Sensing Technique for wrist gestures and macro-micro pointing

    Yeo, H. S., Lee, J., Kim, H., Gupta, A., Bianchi, A., Vogel, D., Koike, H., Woo, W. & Quigley, A. J., 1 Oct 2019, Proceedings of the 21st International Conference on Human-Computer Interaction with Mobile Devices and Services (MobileHCI '19). New York: ACM, 15 p. 19

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  5. Smart Homes for elderly to promote their health and wellbeing

    Pirzada, P., Wilde, A. G. & Harris-Birtill, D. C. C., 16 Sep 2019. 1 p.

    Research output: Contribution to conference › Poster

