Research at St Andrews

Investigating multisensory integration in emotion recognition through bio-inspired computational models

Research output: Contribution to journal › Article › peer-review

Author(s)

Esma Mansouri Benssassi, Juan Ye

Abstract

Emotion understanding is a core aspect of human communication. Our social behaviours are closely linked to expressing our own emotions and to understanding others' emotional and mental states through social signals. Most existing work proceeds by extracting meaningful features from each modality and applying fusion techniques at either the feature level or the decision level. However, these techniques cannot capture the constant cross-talk and feedback between different modalities. Such cross-talk is particularly important in continuous emotion recognition, where one modality can predict, enhance and complement another. This paper proposes three multisensory integration models based on different pathways of multisensory integration in the brain: integration by convergence, early cross-modal enhancement, and integration through neural synchrony. The proposed models are designed and implemented using third-generation neural networks, Spiking Neural Networks (SNNs). The models are evaluated on widely adopted, third-party datasets and compared with state-of-the-art multimodal fusion techniques, such as early, late and deep-learning fusion. Evaluation results show that the three proposed models achieve results comparable to state-of-the-art supervised learning techniques. More importantly, this paper demonstrates plausible ways to model the constant cross-talk between modalities during the training phase, which also brings advantages in generalisation and robustness to noise.
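
(Illustrative note, not from the paper.) Of the three pathways named in the abstract, integration by convergence is the simplest to picture: spike trains from two modalities feed a shared spiking neuron, whose firing reflects their combined evidence. The sketch below, in Python, assumes a discrete-time leaky integrate-and-fire (LIF) neuron with made-up firing rates and synaptic weights; the paper's actual models are substantially richer.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two hypothetical input modalities (e.g. audio and visual features
    # encoded as spike trains), simulated for T time steps.
    T = 200
    audio_spikes = rng.random(T) < 0.10   # ~10% chance of a spike per step
    video_spikes = rng.random(T) < 0.08   # ~8% chance of a spike per step

    # Leaky integrate-and-fire neuron that converges both modalities.
    # All constants are illustrative, not taken from the paper.
    tau, v_thresh, v_reset = 20.0, 1.0, 0.0
    w_audio, w_video = 0.35, 0.35

    v = 0.0
    fired_at = []
    for t in range(T):
        # Leak, then integrate weighted spikes from both modalities.
        v += -v / tau + w_audio * audio_spikes[t] + w_video * video_spikes[t]
        if v >= v_thresh:
            fired_at.append(t)   # the multisensory neuron emits a spike
            v = v_reset

    print(f"Convergence neuron fired {len(fired_at)} times in {T} steps")

Because the membrane potential pools sub-threshold input from both streams, coincident audio and video spikes can trigger output spikes that neither modality would produce alone, which is the intuition behind convergence-based integration.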

Details

Original language: English
Number of pages: 13
Journal: IEEE Transactions on Affective Computing
Volume: Early Access
Early online date: 19 Aug 2021
DOIs
Publication status: E-pub ahead of print - 19 Aug 2021

Research areas

  • Spiking neural network
  • Multisensory integration
  • Emotion recognition
  • Neural synchrony
  • Graph neural network


Related by author

  1. ContrasGAN: unsupervised domain adaptation in Human Activity Recognition via adversarial and contrastive learning

    Rosales Sanabria, A., Zambonelli, F., Dobson, S. A. & Ye, J., 6 Nov 2021, (E-pub ahead of print) In: Pervasive and Mobile Computing. In Press, p. 1-34, 34 p., 101477.

    Research output: Contribution to journal › Article › peer-review

  2. Collaborative activity recognition with heterogeneous activity sets and privacy preferences

    Civitarese, G., Ye, J., Zampatti, M. & Bettini, C., 4 Nov 2021, (E-pub ahead of print) In: Journal of Ambient Intelligence and Smart Environments. Pre-press, p. 1-20, 20 p.

    Research output: Contribution to journal › Article › peer-review

  3. Continual learning in sensor-based human activity recognition: an empirical benchmark analysis

    Jha, S., Schiemer, M., Zambonelli, F. & Ye, J., 16 Apr 2021, (E-pub ahead of print) In: Information Sciences. In Press, p. 1-35, 35 p.

    Research output: Contribution to journal › Article › peer-review

  4. Continual activity recognition with generative adversarial networks

    Ye, J., Nakwijit, P., Schiemer, M., Jha, S. & Zambonelli, F., 27 Mar 2021, In: ACM Transactions on Internet of Things. 2, 2, p. 1-25, 25 p., 9.

    Research output: Contribution to journal › Article › peer-review

  5. Shared learning activity labels across heterogeneous datasets

    Ye, J., 9 Mar 2021, In: Journal of Ambient Intelligence and Smart Environments. Pre-press, p. 1-18

    Research output: Contribution to journal › Article › peer-review
