Research at St Andrews

Texture features for object salience

Research output: Contribution to journal › Article › peer-review

Standard

Texture features for object salience. / Terzić, Kasim; Krishna, Sai; du Buf, J. M. H.

In: Image and Vision Computing, Vol. 67, 11.2017, p. 43-51.

Research output: Contribution to journal › Article › peer-review

Harvard

Terzić, K, Krishna, S & du Buf, JMH 2017, 'Texture features for object salience', Image and Vision Computing, vol. 67, pp. 43-51. https://doi.org/10.1016/j.imavis.2017.09.007

APA

Terzić, K., Krishna, S., & du Buf, J. M. H. (2017). Texture features for object salience. Image and Vision Computing, 67, 43-51. https://doi.org/10.1016/j.imavis.2017.09.007

Vancouver

Terzić K, Krishna S, du Buf JMH. Texture features for object salience. Image and Vision Computing. 2017 Nov;67:43-51. https://doi.org/10.1016/j.imavis.2017.09.007

Author

Terzić, Kasim; Krishna, Sai; du Buf, J. M. H. / Texture features for object salience. In: Image and Vision Computing. 2017; Vol. 67, pp. 43-51.

BibTeX

@article{b8be4adfc92848f1825d8a057802ead7,
title = "Texture features for object salience",
abstract = "Although texture is important for many vision-related tasks, it is not used in most salience models. As a consequence, there are images where all existing salience algorithms fail. We introduce a novel set of texture features built on top of a fast model of complex cells in striate cortex, i.e., visual area V1. The texture at each position is characterised by the two-dimensional local power spectrum obtained from Gabor filters which are tuned to many scales and orientations. We then apply a parametric model and describe the local spectrum by the combination of two one-dimensional Gaussian approximations: the scale and orientation distributions. The scale distribution indicates whether the texture has a dominant frequency and what frequency it is. Likewise, the orientation distribution attests the degree of anisotropy. We evaluate the features in combination with the state-of-the-art VOCUS2 salience algorithm. We found that using our novel texture features in addition to colour improves AUC by 3.8% on the PASCAL-S dataset when compared to the colour-only baseline, and by 62% on a novel texture-based dataset.",
keywords = "Texture, Colour, Salience, Attention, Benchmark",
author = "Kasim Terzi{\'c} and Sai Krishna and {du Buf}, {J. M. H.}",
note = "This work was supported by the EU under the FP-7 grant ICT-2009.2.1-270247 NeuralDynamics and by the FCT under the grants LarSYS UID/EEA/50009/2013 and SparseCoding EXPL/EEI-SII/1982/2013.",
year = "2017",
month = nov,
doi = "10.1016/j.imavis.2017.09.007",
language = "English",
volume = "67",
pages = "43--51",
journal = "Image and Vision Computing",
issn = "0262-8856",
publisher = "Elsevier",

}

RIS (suitable for import to EndNote)

TY  - JOUR
T1  - Texture features for object salience
AU  - Terzić, Kasim
AU  - Krishna, Sai
AU  - du Buf, J. M. H.
N1  - This work was supported by the EU under the FP-7 grant ICT-2009.2.1-270247 NeuralDynamics and by the FCT under the grants LarSYS UID/EEA/50009/2013 and SparseCoding EXPL/EEI-SII/1982/2013.
PY  - 2017/11
Y1  - 2017/11
N2  - Although texture is important for many vision-related tasks, it is not used in most salience models. As a consequence, there are images where all existing salience algorithms fail. We introduce a novel set of texture features built on top of a fast model of complex cells in striate cortex, i.e., visual area V1. The texture at each position is characterised by the two-dimensional local power spectrum obtained from Gabor filters which are tuned to many scales and orientations. We then apply a parametric model and describe the local spectrum by the combination of two one-dimensional Gaussian approximations: the scale and orientation distributions. The scale distribution indicates whether the texture has a dominant frequency and what frequency it is. Likewise, the orientation distribution attests the degree of anisotropy. We evaluate the features in combination with the state-of-the-art VOCUS2 salience algorithm. We found that using our novel texture features in addition to colour improves AUC by 3.8% on the PASCAL-S dataset when compared to the colour-only baseline, and by 62% on a novel texture-based dataset.
AB  - Although texture is important for many vision-related tasks, it is not used in most salience models. As a consequence, there are images where all existing salience algorithms fail. We introduce a novel set of texture features built on top of a fast model of complex cells in striate cortex, i.e., visual area V1. The texture at each position is characterised by the two-dimensional local power spectrum obtained from Gabor filters which are tuned to many scales and orientations. We then apply a parametric model and describe the local spectrum by the combination of two one-dimensional Gaussian approximations: the scale and orientation distributions. The scale distribution indicates whether the texture has a dominant frequency and what frequency it is. Likewise, the orientation distribution attests the degree of anisotropy. We evaluate the features in combination with the state-of-the-art VOCUS2 salience algorithm. We found that using our novel texture features in addition to colour improves AUC by 3.8% on the PASCAL-S dataset when compared to the colour-only baseline, and by 62% on a novel texture-based dataset.
KW  - Texture
KW  - Colour
KW  - Salience
KW  - Attention
KW  - Benchmark
U2  - 10.1016/j.imavis.2017.09.007
DO  - 10.1016/j.imavis.2017.09.007
M3  - Article
VL  - 67
SP  - 43
EP  - 51
JO  - Image and Vision Computing
JF  - Image and Vision Computing
SN  - 0262-8856
ER  -
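
The abstract above outlines the pipeline: Gabor filters tuned to many scales and orientations yield a local power spectrum at each position, which is then summarised by two one-dimensional Gaussian approximations, its scale and orientation marginals. The sketch below illustrates that idea in NumPy/SciPy. It is not the authors' implementation; the filter-bank parameters (frequencies, bandwidths) and the moment-based Gaussian fits are illustrative assumptions.

import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(freq, theta, sigma, size=31):
    """Complex Gabor kernel at spatial frequency `freq` (cycles/pixel)
    and orientation `theta` (radians)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_rot = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    carrier = np.exp(2j * np.pi * freq * x_rot)
    return envelope * carrier

def texture_features(image, n_scales=5, n_orients=8):
    """Per-pixel mean/std of the scale and orientation marginals of the
    local Gabor power spectrum (moment-based 1-D Gaussian fits)."""
    h, w = image.shape
    power = np.empty((n_scales, n_orients, h, w))
    for s in range(n_scales):
        freq = 0.25 / 2**s           # octave-spaced frequencies (assumed values)
        sigma = 0.56 / freq          # envelope width tied to frequency (assumed)
        for o in range(n_orients):
            theta = o * np.pi / n_orients
            kern = gabor_kernel(freq, theta, sigma)
            resp = fftconvolve(image, kern, mode="same")
            power[s, o] = np.abs(resp)**2   # complex-cell-like energy response

    scale_dist = power.sum(axis=1)    # marginal over orientations: (n_scales, h, w)
    orient_dist = power.sum(axis=0)   # marginal over scales: (n_orients, h, w)

    def gaussian_moments(dist, bins):
        """Fit a 1-D Gaussian to each pixel's marginal via its first two moments."""
        p = dist / (dist.sum(axis=0) + 1e-12)
        mean = (bins[:, None, None] * p).sum(axis=0)
        std = np.sqrt(((bins[:, None, None] - mean)**2 * p).sum(axis=0))
        return mean, std

    # Scale: mean ~ dominant frequency band, std ~ how peaked the spectrum is.
    s_mean, s_std = gaussian_moments(scale_dist, np.arange(n_scales, dtype=float))
    # Orientation: std ~ degree of anisotropy. (Plain moments ignore the circular
    # wrap-around of orientation; a faithful implementation should account for it.)
    o_mean, o_std = gaussian_moments(orient_dist, np.arange(n_orients, dtype=float))
    return np.stack([s_mean, s_std, o_mean, o_std])  # shape (4, h, w)

In the paper these per-pixel texture statistics are used as additional feature channels alongside colour in the VOCUS2 salience model; the sketch stops at the feature maps themselves.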

Related by author

  1. Few-shot linguistic grounding of visual attributes and relations using Gaussian kernels

    Koudouna, D. & Terzić, K., 8 Feb 2021, Proceedings of the 16th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - (Volume 5). Farinella, G. M., Radeva, P., Braz, J. & Bouatouch, K. (eds.). SCITEPRESS - Science and Technology Publications, Vol. 5 VISAPP. p. 146-156

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  2. Visualization as Intermediate Representations (VLAIR) for human activity recognition

    Jiang, A., Nacenta, M., Terzić, K. & Ye, J., 18 May 2020, PervasiveHealth '20: Proceedings of the 14th EAI International Conference on Pervasive Computing Technologies for Healthcare. Munson, S. A. & Schueller, S. M. (eds.). ACM, p. 201-210 10 p.

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  3. Supervisor recommendation tool for Computer Science projects

    Zemaityte, G. & Terzic, K., 9 Jan 2019, Proceedings of the 3rd Conference on Computing Education Practice (CEP '19) . New York: ACM, 4 p. 1

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

  4. BINK: Biological Binary Keypoint Descriptor

    Saleiro, M., Terzić, K., Rodrigues, J. M. F. & du Buf, J. M. H., Dec 2017, In: BioSystems. 162, p. 147-156

    Research output: Contribution to journal › Article › peer-review

  5. Interpretable feature maps for robot attention

    Terzić, K. & du Buf, J. M. H., 2017, Universal Access in Human–Computer Interaction. Design and Development Approaches and Methods: 11th International Conference, UAHCI 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9–14, 2017, Proceedings, Part I. Antona, M. & Stephanidis, C. (eds.). Cham: Springer, p. 456-467 12 p. (Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics); vol. 10277).

    Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Related by journal

  1. Digitization of non-regular shapes in arbitrary dimensions

    Stelldinger, P. & Terzic, K., 1 Oct 2008, In: Image and Vision Computing. 26, 10, p. 1338-1346 9 p.

    Research output: Contribution to journal › Article › peer-review

  2. An information-theoretic approach to face recognition from face motion manifolds

    Arandelovic, O. & Cipolla, R., 1 Jun 2006, In: Image and Vision Computing. 24, 6, p. 639-647 9 p.

    Research output: Contribution to journal › Article › peer-review

ID: 251156304
