
Research at St Andrews

Interpretable feature maps for robot attention

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Standard

Interpretable feature maps for robot attention. / Terzić, Kasim; du Buf, J. M. H.

Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods: 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I. ed. / Margherita Antona; Constantine Stephanidis. Cham: Springer, 2017. p. 456-467 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10277).


Harvard

Terzić, K & du Buf, JMH 2017, Interpretable feature maps for robot attention. in M Antona & C Stephanidis (eds), Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods: 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I. Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), vol. 10277, Springer, Cham, pp. 456-467, 11th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2017, held as part of the 19th International Conference on Human-Computer Interaction, HCI 2017, Vancouver, Canada, 9/07/17. https://doi.org/10.1007/978-3-319-58706-6_37

APA

Terzić, K., & du Buf, J. M. H. (2017). Interpretable feature maps for robot attention. In M. Antona, & C. Stephanidis (Eds.), Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods: 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I (pp. 456-467). (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); Vol. 10277). Springer. https://doi.org/10.1007/978-3-319-58706-6_37

Vancouver

Terzić K, du Buf JMH. Interpretable feature maps for robot attention. In Antona M, Stephanidis C, editors, Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods: 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I. Cham: Springer. 2017. p. 456-467. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)). https://doi.org/10.1007/978-3-319-58706-6_37

Author

Terzić, Kasim ; du Buf, J. M. H. / Interpretable feature maps for robot attention. Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods: 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I. editor / Margherita Antona ; Constantine Stephanidis. Cham: Springer, 2017. pp. 456-467 (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)).

BibTeX

@inproceedings{725a4c9a2ad1491780a49232d5ff1ccc,
title = "Interpretable feature maps for robot attention",
abstract = "Attention is crucial for autonomous agents interacting with complex environments. In a real scenario, our expectations drive attention, as we look for crucial objects to complete our understanding of the scene. But most visual attention models to date are designed to drive attention in a bottom-up fashion, without context, and the features they use are not always suitable for driving top-down attention. In this paper, we present an attentional mechanism based on semantically meaningful, interpretable features. We show how to generate a low-level semantic representation of the scene in real time, which can be used to search for objects based on specific features such as colour, shape, orientation, speed, and texture.",
author = "Kasim Terzi{\'c} and {du Buf}, {J. M.H.}",
year = "2017",
doi = "10.1007/978-3-319-58706-6_37",
language = "English",
isbn = "9783319587059",
series = "Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)",
publisher = "Springer",
pages = "456--467",
editor = "Margherita Antona and Constantine Stephanidis",
booktitle = "Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods",
address = "Netherlands",
note = "11th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2017, held as part of the 19th International Conference on Human-Computer Interaction, HCI 2017, UAHCI ; Conference date: 09-07-2017 Through 14-07-2017",
url = "http://2017.hci.international/index.php",

}
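
The abstract sketches a pipeline: compute semantically meaningful feature maps (colour, shape, orientation, speed, texture) in real time, then weight the maps that match the sought object to drive top-down search. The minimal Python sketch below illustrates that idea; the map definitions, weights, and function names are illustrative assumptions, not the paper's implementation.

import numpy as np

# Minimal sketch of top-down attention over interpretable feature maps.
# The maps, weights, and names below are illustrative assumptions,
# not the implementation described in the paper.

def colour_map(image, target_rgb):
    """Per-pixel similarity to a target colour (1 = exact match)."""
    diff = image.astype(float) - np.asarray(target_rgb, dtype=float)
    dist = np.linalg.norm(diff, axis=-1)
    return 1.0 - dist / (dist.max() + 1e-9)

def orientation_map(gray, theta):
    """Crude oriented-edge response from image gradients."""
    gy, gx = np.gradient(gray.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.arctan2(gy, gx)
    # Strongest where the local gradient direction matches theta.
    alignment = 0.5 + 0.5 * np.cos(2.0 * (angle - theta))
    return magnitude / (magnitude.max() + 1e-9) * alignment

def top_down_saliency(image, weights, target_rgb, theta):
    """Weight each interpretable map by its task relevance and sum."""
    gray = image.mean(axis=-1)
    maps = {
        "colour": colour_map(image, target_rgb),
        "orientation": orientation_map(gray, theta),
    }
    saliency = sum(w * maps[name] for name, w in weights.items())
    return saliency / (saliency.max() + 1e-9)

# Example: look for a red object with strong vertical structure.
# theta = 0 selects horizontal gradients, i.e. roughly vertical edges.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3))
sal = top_down_saliency(img, {"colour": 0.7, "orientation": 0.3},
                        target_rgb=(255, 0, 0), theta=0.0)
row, col = np.unravel_index(np.argmax(sal), sal.shape)
print(f"most salient location: ({row}, {col})")

Because each map carries a clear semantic label, the same weights that drive the search can also explain why a location was attended, which is the interpretability argument made in the abstract.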

RIS (suitable for import to EndNote)

TY - GEN

T1 - Interpretable feature maps for robot attention

AU - Terzić, Kasim

AU - du Buf, J. M. H.

N1 - Conference code: 11

PY - 2017

Y1 - 2017

N2 - Attention is crucial for autonomous agents interacting with complex environments. In a real scenario, our expectations drive attention, as we look for crucial objects to complete our understanding of the scene. But most visual attention models to date are designed to drive attention in a bottom-up fashion, without context, and the features they use are not always suitable for driving top-down attention. In this paper, we present an attentional mechanism based on semantically meaningful, interpretable features. We show how to generate a low-level semantic representation of the scene in real time, which can be used to search for objects based on specific features such as colour, shape, orientation, speed, and texture.

AB - Attention is crucial for autonomous agents interacting with complex environments. In a real scenario, our expectations drive attention, as we look for crucial objects to complete our understanding of the scene. But most visual attention models to date are designed to drive attention in a bottom-up fashion, without context, and the features they use are not always suitable for driving top-down attention. In this paper, we present an attentional mechanism based on semantically meaningful, interpretable features. We show how to generate a low-level semantic representation of the scene in real time, which can be used to search for objects based on specific features such as colour, shape, orientation, speed, and texture.

U2 - 10.1007/978-3-319-58706-6_37

DO - 10.1007/978-3-319-58706-6_37

M3 - Conference contribution

AN - SCOPUS:85025168961

SN - 9783319587059

T3 - Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

VL - 10277

SP - 456

EP - 467

BT - Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods

A2 - Antona, Margherita

A2 - Stephanidis, Constantine

PB - Springer

CY - Cham

T2 - 11th International Conference on Universal Access in Human-Computer Interaction, UAHCI 2017, held as part of the 19th International Conference on Human-Computer Interaction, HCI 2017

Y2 - 9 July 2017 through 14 July 2017

ER -

Related by author

  1. Few-shot linguistic grounding of visual attributes and relations using Gaussian kernels

    Koudouna, D. & Terzić, K., 8 Feb 2021, Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - (Volume 5). Farinella, G. M., Radeva, P., Braz, J. & Bouatouch, K. (eds.). SCITEPRESS - Science and Technology Publications, Vol. 5: VISAPP, p. 146-156

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  2. Visualization as Intermediate Representations (VLAIR) for human activity recognition

    Jiang, A., Nacenta, M., Terzić, K. & Ye, J., 18 May 2020, PervasiveHealth '20: Proceedings of the 14th EAI International Conference on Pervasive Computing Technologies for Healthcare. Munson, S. A. & Schueller, S. M. (eds.). ACM, p. 201-210 10 p.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  3. Supervisor recommendation tool for Computer Science projects

    Zemaityte, G. & Terzić, K., 9 Jan 2019, Proceedings of the 3rd Conference on Computing Education Practice (CEP '19). New York: ACM, 4 p. 1

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  4. BINK: Biological Binary Keypoint Descriptor

    Saleiro, M., Terzić, K., Rodrigues, J. M. F. & du Buf, J. M. H., Dec 2017, In: BioSystems. 162, p. 147-156

    Research output: Contribution to journal › Article › peer-review

  5. Texture features for object salience

    Terzić, K., Krishna, S. & du Buf, J. M. H., Nov 2017, In: Image and Vision Computing. 67, p. 43-51

    Research output: Contribution to journal › Article › peer-review

