
Research at St Andrews

RadarCat: Radar Categorization for input & interaction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Standard

RadarCat: Radar Categorization for input & interaction. / Yeo, Hui Shyong; Flamich, Gergely; Schrempf, Patrick Maurice; Harris-Birtill, David Cameron Christopher; Quigley, Aaron John.

Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, 2016. p. 833-841.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Harvard

Yeo, HS, Flamich, G, Schrempf, PM, Harris-Birtill, DCC & Quigley, AJ 2016, RadarCat: Radar Categorization for input & interaction. in Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, pp. 833-841, 29th ACM User Interface Software and Technology Symposium, Tokyo, Japan, 16/10/16. https://doi.org/10.1145/2984511.2984515

APA

Yeo, H. S., Flamich, G., Schrempf, P. M., Harris-Birtill, D. C. C., & Quigley, A. J. (2016). RadarCat: Radar Categorization for input & interaction. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16) (pp. 833-841). ACM. https://doi.org/10.1145/2984511.2984515

Vancouver

Yeo HS, Flamich G, Schrempf PM, Harris-Birtill DCC, Quigley AJ. RadarCat: Radar Categorization for input & interaction. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM. 2016. p. 833-841. https://doi.org/10.1145/2984511.2984515

Author

Yeo, Hui Shyong; Flamich, Gergely; Schrempf, Patrick Maurice; Harris-Birtill, David Cameron Christopher; Quigley, Aaron John. / RadarCat: Radar Categorization for input & interaction. Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, 2016. pp. 833-841

BibTeX

@inproceedings{c267eae228484520ac0adb13fd9068a3,
title = "RadarCat: Radar Categorization for input & interaction",
abstract = "In RadarCat we present a small, versatile radar-based system for material and object classification which enables new forms of everyday proximate interaction with digital devices. We demonstrate that we can train and classify different types of materials and objects which we can then recognize in real time. Based on established research designs, we report on the results of three studies, first with 26 materials (including complex composite objects), next with 16 transparent materials (with different thickness and varying dyes) and finally 10 body parts from 6 participants. Both leave one-out and 10-fold cross-validation demonstrate that our approach of classification of radar signals using random forest classifier is robust and accurate. We further demonstrate four working examples including a physical object dictionary, painting and photo editing application, body shortcuts and automatic refill based on RadarCat. We conclude with a discussion of our results, limitations and outline future directions.",
keywords = "Context-aware interaction, Machine learning, Material classification, Object recognition, Ubiquitous computing",
author = "Yeo, {Hui Shyong} and Gergely Flamich and Schrempf, {Patrick Maurice} and Harris-Birtill, {David Cameron Christopher} and Quigley, {Aaron John}",
note = "The research described here was supported by the University of St Andrews and the Scottish Informatics and Computer Science Alliance (SICSA). 29th ACM User Interface Software and Technology Symposium, UIST; Conference date: 16-10-2016 Through 19-10-2016",
year = "2016",
month = oct,
day = "16",
doi = "10.1145/2984511.2984515",
language = "English",
isbn = "9781450341899",
pages = "833--841",
booktitle = "Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16)",
publisher = "ACM",
address = "United States",
url = "http://uist.acm.org/uist2016/",
}

RIS (suitable for import to EndNote)

TY - GEN

T1 - RadarCat: Radar Categorization for input & interaction

T2 - 29th ACM User Interface Software and Technology Symposium

AU - Yeo, Hui Shyong

AU - Flamich, Gergely

AU - Schrempf, Patrick Maurice

AU - Harris-Birtill, David Cameron Christopher

AU - Quigley, Aaron John

N1 - Conference code: 29

PY - 2016/10/16

Y1 - 2016/10/16

N2 - In RadarCat we present a small, versatile radar-based system for material and object classification which enables new forms of everyday proximate interaction with digital devices. We demonstrate that we can train and classify different types of materials and objects which we can then recognize in real time. Based on established research designs, we report on the results of three studies, first with 26 materials (including complex composite objects), next with 16 transparent materials (with different thickness and varying dyes) and finally 10 body parts from 6 participants. Both leave one-out and 10-fold cross-validation demonstrate that our approach of classification of radar signals using random forest classifier is robust and accurate. We further demonstrate four working examples including a physical object dictionary, painting and photo editing application, body shortcuts and automatic refill based on RadarCat. We conclude with a discussion of our results, limitations and outline future directions.

AB - In RadarCat we present a small, versatile radar-based system for material and object classification which enables new forms of everyday proximate interaction with digital devices. We demonstrate that we can train and classify different types of materials and objects which we can then recognize in real time. Based on established research designs, we report on the results of three studies, first with 26 materials (including complex composite objects), next with 16 transparent materials (with different thickness and varying dyes) and finally 10 body parts from 6 participants. Both leave one-out and 10-fold cross-validation demonstrate that our approach of classification of radar signals using random forest classifier is robust and accurate. We further demonstrate four working examples including a physical object dictionary, painting and photo editing application, body shortcuts and automatic refill based on RadarCat. We conclude with a discussion of our results, limitations and outline future directions.

KW - Context-aware interaction

KW - Machine learning

KW - Material classification

KW - Object recognition

KW - Ubiquitous computing

U2 - 10.1145/2984511.2984515

DO - 10.1145/2984511.2984515

M3 - Conference contribution

SN - 9781450341899

SP - 833

EP - 841

BT - Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16)

PB - ACM

Y2 - 16 October 2016 through 19 October 2016

ER -
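The abstract above reports that radar signals were classified with a random forest and validated with both leave-one-out and 10-fold cross-validation. As a minimal illustrative sketch of that evaluation protocol (not the authors' actual pipeline — the feature vectors, class counts, and scikit-learn classifier settings below are assumptions), the two validation schemes can be run side by side like this:

```python
# Sketch of random-forest classification with leave-one-out and 10-fold
# cross-validation, as described in the RadarCat abstract. The synthetic
# "radar features" and all hyperparameters here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
# Stand-in for per-capture radar feature vectors: 60 captures, 8 features,
# 20 captures for each of 3 hypothetical material classes.
X = rng.normal(size=(60, 8))
y = np.repeat([0, 1, 2], 20)

clf = RandomForestClassifier(n_estimators=100, random_state=0)

# 10-fold cross-validation: mean accuracy over 10 held-out folds.
acc_10fold = cross_val_score(clf, X, y, cv=10).mean()
# Leave-one-out: each capture is held out once and predicted by the rest.
acc_loo = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()

print(f"10-fold accuracy: {acc_10fold:.3f}, leave-one-out accuracy: {acc_loo:.3f}")
```

With random features as here, both scores hover near chance (about 1/3); on real, discriminative radar features the two schemes give comparable estimates, with leave-one-out being the more expensive of the two (one model fit per sample).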

Related by author

  1. Templated text synthesis for expert-guided multi-label extraction from radiology reports

    Schrempf, P., Watson, H., Park, E., Pajak, M., MacKinnon, H., Muir, K. W., Harris-Birtill, D. & O’Neil, A. Q., 24 Mar 2021, In: Machine Learning and Knowledge Extraction. 3, 2, p. 299-317 19 p.

    Research output: Contribution to journal › Article › peer-review

  2. Paying per-label attention for multi-label extraction from radiology reports

    Schrempf, P., Watson, H., Mikhael, S., Pajak, M., Falis, M., Lisowska, A., Muir, K. W., Harris-Birtill, D. & O'Neil, A. Q., 2020, Interpretable and Annotation-Efficient Learning for Medical Image Computing: Third International Workshop, iMIMIC 2020, Second International Workshop, MIL3iD 2020, and 5th International Workshop, LABELS 2020, Held in Conjunction with MICCAI 2020, Lima, Peru, October 4–8, 2020, Proceedings. Cardoso, J., Van Nguyen, H., Heller, N., Henriques Abreu, P., Isgum, I., Silva, W., Cruz, R., Pereira Amorim, J., Patel, V., Roysam, B., Zhou, K., Jiang, S., Le, N., Luu, K., Sznitman, R., Cheplygina, V., Mateus, D., Trucco, E. & Abbasi, S. (eds.). Cham: Springer, p. 277-289 13 p. (Lecture Notes in Computer Science (including subseries Image Processing, Computer Vision, Pattern Recognition, and Graphics); vol. 12446 LNCS).

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  3. Generative deep learning in digital pathology workflows

    Morrison, D., Harris-Birtill, D. & Caie, P. D., 8 Apr 2021, In: The American Journal of Pathology.

    Research output: Contribution to journal › Review article › peer-review

  4. Understanding computation time: a critical discussion of time as a computational performance metric

    Harris-Birtill, D. & Harris-Birtill, R., 3 Aug 2020, (Accepted/In press) Time in variance: the study of time. Parker, J., Harris, P. & Misztal, A. (eds.). Brill, Vol. 17. (The Study of Time).

    Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed) › peer-review

  5. Autofocus Net: Auto-focused 3D CNN for Brain Tumour Segmentation.

    Stefani, A., Rahmat, R. & Harris-Birtill, D. C. C., 8 Jul 2020, In Annual Conference on Medical Image Understanding and Analysis: Part of the Communications in Computer and Information Science book series (CCIS). Springer, Vol. 1248. p. 43-55 13 p.

    Research output: Chapter in Book/Report/Conference proceeding › Chapter (peer-reviewed) › peer-review

ID: 245678240
