
Research at St Andrews

Biologically inspired vision for human-robot interaction

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Standard

Biologically inspired vision for human-robot interaction. / Saleiro, Mario; Farrajota, Miguel; Terzić, Kasim; Krishna, Sai; Rodrigues, João M.F.; du Buf, J. M. Hans.

Universal Access in Human-Computer Interaction. Access to Interaction: 9th International Conference, UAHCI 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, August 2-7, 2015, Proceedings, Part II. ed. / Margherita Antona; Constantine Stephanidis. Cham : Springer, 2015. p. 505-517 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9176).


Harvard

Saleiro, M, Farrajota, M, Terzić, K, Krishna, S, Rodrigues, JMF & du Buf, JMH 2015, Biologically inspired vision for human-robot interaction. in M Antona & C Stephanidis (eds), Universal Access in Human-Computer Interaction. Access to Interaction: 9th International Conference, UAHCI 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, August 2-7, 2015, Proceedings, Part II. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 9176, Springer, Cham, pp. 505-517, 9th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2015 Held as Part of 17th International Conference on Human-Computer Interaction, HCI International 2015, Los Angeles, United States, 2/08/15. https://doi.org/10.1007/978-3-319-20681-3_48

APA

Saleiro, M., Farrajota, M., Terzić, K., Krishna, S., Rodrigues, J. M. F., & du Buf, J. M. H. (2015). Biologically inspired vision for human-robot interaction. In M. Antona, & C. Stephanidis (Eds.), Universal Access in Human-Computer Interaction. Access to Interaction: 9th International Conference, UAHCI 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, August 2-7, 2015, Proceedings, Part II (pp. 505-517). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 9176). Springer. https://doi.org/10.1007/978-3-319-20681-3_48

Vancouver

Saleiro M, Farrajota M, Terzić K, Krishna S, Rodrigues JMF, du Buf JMH. Biologically inspired vision for human-robot interaction. In Antona M, Stephanidis C, editors, Universal Access in Human-Computer Interaction. Access to Interaction: 9th International Conference, UAHCI 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, August 2-7, 2015, Proceedings, Part II. Cham: Springer. 2015. p. 505-517. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). https://doi.org/10.1007/978-3-319-20681-3_48

Author

Saleiro, Mario ; Farrajota, Miguel ; Terzić, Kasim ; Krishna, Sai ; Rodrigues, João M.F. ; du Buf, J. M. Hans. / Biologically inspired vision for human-robot interaction. Universal Access in Human-Computer Interaction. Access to Interaction: 9th International Conference, UAHCI 2015, Held as Part of HCI International 2015, Los Angeles, CA, USA, August 2-7, 2015, Proceedings, Part II. editor / Margherita Antona ; Constantine Stephanidis. Cham : Springer, 2015. pp. 505-517 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).

BibTeX

@inproceedings{c25dfc2be48042699cd5c73e23912193,
title = "Biologically inspired vision for human-robot interaction",
abstract = "Human-robot interaction is an interdisciplinary research area that is becoming more and more relevant as robots start to enter our homes, workplaces, schools, etc. In order to navigate safely among us, robots must be able to understand human behavior, to communicate, and to interpret instructions from humans, either by recognizing their speech or by understanding their body movements and gestures. We present a biologically inspired vision system for human-robot interaction which integrates several components: visual saliency, stereo vision, face and hand detection and gesture recognition. Visual saliency is computed using color, motion and disparity. Both the stereo vision and gesture recognition components are based on keypoints coded by means of cortical V1 simple, complex and end-stopped cells. Hand and face detection is achieved by using a linear SVM classifier. The system was tested on a child-sized robot.",
keywords = "Biological framework, Hand gestures, Human-robot interaction",
author = "Mario Saleiro and Miguel Farrajota and Kasim Terzi{\'c} and Sai Krishna and Rodrigues, {Jo{\~a}o M.F.} and {du Buf}, {J. M. Hans}",
year = "2015",
doi = "10.1007/978-3-319-20681-3_48",
language = "English",
isbn = "9783319206806",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer",
pages = "505--517",
editor = "Margherita Antona and Constantine Stephanidis",
booktitle = "Universal Access in Human-Computer Interaction. Access to Interaction",
address = "Cham",
note = "9th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2015 Held as Part of 17th International Conference on Human-Computer Interaction, HCI International 2015, UAHCI ; Conference date: 02-08-2015 Through 07-08-2015",
url = "http://2015.hci.international/uahci",

}
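
The abstract above says that visual saliency is computed from colour, motion and disparity. Purely as an illustration of how such feature channels can be fused into a single saliency map, here is a minimal NumPy sketch; the linear fusion, the min-max normalisation and the equal channel weights are assumptions for the example, not details taken from the paper (whose saliency model is built on cortical keypoints).

import numpy as np

def normalize(channel):
    """Scale a feature map to [0, 1]; return zeros for a flat map."""
    lo, hi = channel.min(), channel.max()
    return (channel - lo) / (hi - lo) if hi > lo else np.zeros_like(channel)

def saliency_map(color_contrast, motion_energy, disparity, weights=(1.0, 1.0, 1.0)):
    """Fuse per-pixel colour-contrast, motion-energy and disparity maps
    (all HxW float arrays) into one saliency map by weighted averaging.
    Illustrative sketch only; not the paper's actual saliency model."""
    channels = [normalize(c) for c in (color_contrast, motion_energy, disparity)]
    w = np.asarray(weights, dtype=float)
    fused = sum(wi * ci for wi, ci in zip(w, channels)) / w.sum()
    return normalize(fused)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    h, w = 120, 160
    s = saliency_map(rng.random((h, w)), rng.random((h, w)), rng.random((h, w)))
    print(s.shape, float(s.min()), float(s.max()))

In practice the channel weights would be tuned to the robot's task; equal weights are used here only to keep the example short.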

RIS (suitable for import to EndNote)

TY - GEN

T1 - Biologically inspired vision for human-robot interaction

AU - Saleiro, Mario

AU - Farrajota, Miguel

AU - Terzić, Kasim

AU - Krishna, Sai

AU - Rodrigues, João M.F.

AU - du Buf, J. M. Hans

N1 - Conference code: 9

PY - 2015

Y1 - 2015

N2 - Human-robot interaction is an interdisciplinary research area that is becoming more and more relevant as robots start to enter our homes, workplaces, schools, etc. In order to navigate safely among us, robots must be able to understand human behavior, to communicate, and to interpret instructions from humans, either by recognizing their speech or by understanding their body movements and gestures. We present a biologically inspired vision system for human-robot interaction which integrates several components: visual saliency, stereo vision, face and hand detection and gesture recognition. Visual saliency is computed using color, motion and disparity. Both the stereo vision and gesture recognition components are based on keypoints coded by means of cortical V1 simple, complex and end-stopped cells. Hand and face detection is achieved by using a linear SVM classifier. The system was tested on a child-sized robot.

AB - Human-robot interaction is an interdisciplinary research area that is becoming more and more relevant as robots start to enter our homes, workplaces, schools, etc. In order to navigate safely among us, robots must be able to understand human behavior, to communicate, and to interpret instructions from humans, either by recognizing their speech or by understanding their body movements and gestures. We present a biologically inspired vision system for human-robot interaction which integrates several components: visual saliency, stereo vision, face and hand detection and gesture recognition. Visual saliency is computed using color, motion and disparity. Both the stereo vision and gesture recognition components are based on keypoints coded by means of cortical V1 simple, complex and end-stopped cells. Hand and face detection is achieved by using a linear SVM classifier. The system was tested on a child-sized robot.

KW - Biological framework

KW - Hand gestures

KW - Human-robot interaction

U2 - 10.1007/978-3-319-20681-3_48

DO - 10.1007/978-3-319-20681-3_48

M3 - Conference contribution

AN - SCOPUS:84945911426

SN - 9783319206806

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

SP - 505

EP - 517

BT - Universal Access in Human-Computer Interaction. Access to Interaction

A2 - Antona, Margherita

A2 - Stephanidis, Constantine

PB - Springer

CY - Cham

T2 - 9th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2015 Held as Part of 17th International Conference on Human-Computer Interaction, HCI International 2015

Y2 - 2 August 2015 through 7 August 2015

ER -
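
The abstract also notes that hand and face detection is performed with a linear SVM classifier over keypoint-based features. As an illustration of that final classification step only, the sketch below uses scikit-learn's LinearSVC on random placeholder descriptors; the cortical V1 keypoint descriptors, the class set and the hyper-parameters are assumptions made for the example, not reproduced from the paper.

import numpy as np
from sklearn.svm import LinearSVC

# Hypothetical setup: each candidate region is summarised by a fixed-length
# descriptor vector; labels are 0 = background, 1 = hand, 2 = face.
rng = np.random.default_rng(42)
n_train, n_dims = 300, 128
X_train = rng.normal(size=(n_train, n_dims))   # placeholder descriptors
y_train = rng.integers(0, 3, size=n_train)     # placeholder labels

# Linear SVM, as named in the abstract; hyper-parameters are assumptions.
clf = LinearSVC(C=1.0, max_iter=5000)
clf.fit(X_train, y_train)

# Classify new candidate regions (one-vs-rest multiclass handled internally).
X_new = rng.normal(size=(5, n_dims))
print(clf.predict(X_new))   # array of class ids in {0, 1, 2}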

Related by author

  1. Few-shot linguistic grounding of visual attributes and relations using gaussian kernels

    Koudouna, D. & Terzić, K., 8 Feb 2021, Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - (Volume 5). Farinella, G. M., Radeva, P., Braz, J. & Bouatouch, K. (eds.). SCITEPRESS - Science and Technology Publications, Vol. 5 VISAPP. p. 146-156

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  2. Visualization as Intermediate Representations (VLAIR) for human activity recognition

    Jiang, A., Nacenta, M., Terzić, K. & Ye, J., 18 May 2020, PervasiveHealth '20: Proceedings of the 14th EAI International Conference on Pervasive Computing Technologies for Healthcare. Munson, S. A. & Schueller, S. M. (eds.). ACM, p. 201-210 10 p.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  3. Supervisor recommendation tool for Computer Science projects

    Zemaityte, G. & Terzić, K., 9 Jan 2019, Proceedings of the 3rd Conference on Computing Education Practice (CEP '19). New York: ACM, 4 p. 1

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  4. BINK: Biological Binary Keypoint Descriptor

    Saleiro, M., Terzić, K., Rodrigues, J. M. F. & du Buf, J. M. H., Dec 2017, In: BioSystems. 162, p. 147-156

    Research output: Contribution to journal › Article › peer-review

  5. Texture features for object salience

    Terzić, K., Krishna, S. & du Buf, J. M. H., Nov 2017, In: Image and Vision Computing. 67, p. 43-51

    Research output: Contribution to journal › Article › peer-review

ID: 255500590
