
Research at St Andrews

Autofocus Net: Auto-focused 3D CNN for Brain Tumour Segmentation.

Research output: Chapter in Book/Report/Conference proceeding, Chapter (peer-reviewed)


Andreas Stefani, Roushanak Rahmat, David Cameron Christopher Harris-Birtill



Many approaches based on convolutional neural networks (CNNs) can only process 2D images, while most brain imaging data consists of 3D volumes. Recent network architectures that have demonstrated promising results are able to process 3D images. In this work, we propose an adapted CNN-based approach that processes 3D contextual information in brain MRI scans for the challenging task of brain tumour segmentation. Our CNN is trained end-to-end on multi-modal MRI volumes and can predict segmentations for the binary case, which segments the whole tumour, and the multi-class case, which segments the whole tumour (WT), tumour core (TC) and enhancing tumour (ET). Our network includes multiple layers of dilated convolutions and autofocus convolutions with residual connections to improve segmentation performance. An autofocus layer consists of multiple parallel convolutions, each with a different dilation rate. We replaced standard convolutional layers with autofocus layers to adaptively change the size of the effective receptive field and generate more powerful features. Experiments with our autofocus settings on the BraTS 2018 glioma dataset show that the proposed method achieved average Dice scores of 83.92 for WT in the binary case, and 66.88, 55.16 and 64.13 for WT, TC and ET, respectively, in the multi-class case. We introduce the first publicly and freely available NiftyNet-based implementation of the autofocus convolutional layer for semantic image segmentation.
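The autofocus mechanism the abstract describes (several parallel dilated convolutions whose outputs are blended by spatially varying attention weights) can be sketched as follows. This is an illustrative 1D, single-channel NumPy simplification, not the authors' NiftyNet implementation; the function names, the shared-kernel assumption across branches, and the choice of dilation rates are ours.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    # 'Same'-padded 1D dilated convolution for a single channel.
    # A dilation rate d samples the input every d positions, enlarging
    # the receptive field without adding parameters.
    k = len(w)
    pad = dilation * (k - 1) // 2
    xp = np.pad(x, pad)
    out = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        for j in range(k):
            out[i] += w[j] * xp[i + j * dilation]
    return out

def softmax(z, axis=0):
    # Numerically stable softmax over the branch axis.
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def autofocus_layer(x, w, attn_kernels, dilations=(1, 2, 4)):
    # Parallel dilated convolutions share one kernel w; a small attention
    # branch produces per-position weights over the dilation rates, so the
    # effective receptive field adapts spatially to the input.
    branches = np.stack([dilated_conv1d(x, w, d) for d in dilations])      # (K, N)
    scores = np.stack([dilated_conv1d(x, aw, 1) for aw in attn_kernels])   # (K, N)
    alpha = softmax(scores, axis=0)                                        # (K, N)
    return (alpha * branches).sum(axis=0)                                  # (N,)
```

With zero attention kernels the softmax is uniform and the layer reduces to the mean of the dilated branches; trained attention kernels instead let each position emphasise the dilation rate (i.e. the receptive-field size) best suited to the local structure.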


Original language: English
Title of host publication: Annual Conference on Medical Image Understanding and Analysis
Subtitle of host publication: Part of the Communications in Computer and Information Science book series (CCIS)
Number of pages: 13
ISBN (Electronic): 978-3-030-52791-4
Publication status: Published - 8 Jul 2020


Related by author

  1. Generative deep learning in digital pathology workflows

    Morrison, D., Harris-Birtill, D. & Caie, P. D., 1 Oct 2021, In: The American Journal of Pathology. 191, 10, p. 1717-1723 6 p.

Research output: Contribution to journal, Review article (peer-reviewed)

  2. Ethics and acceptance of smart homes for older adults

    Pirzada, P., Wilde, A., Doherty, G. H. & Harris-Birtill, D., 9 Jul 2021, (E-pub ahead of print) In: Informatics for Health and Social Care. Latest Articles, 28 p.

Research output: Contribution to journal, Review article (peer-reviewed)

  3. Radiomics-led monitoring of non-small cell lung cancer patients during radiotherapy

    Rahmat, R., Harris-Birtill, D., Finn, D., Feng, Y., Montgomery, D., Nailon, W. H. & McLaughlin, S., 6 Jul 2021, (E-pub ahead of print) Medical image understanding and analysis: 25th annual conference, MIUA 2021. Papież, B. W., Yaqub, M., Jiao, J., Namburete, A. I. L. & Noble, J. A. (eds.). Cham: Springer, p. 532–546 15 p. (Lecture notes in computer science; vol. 12722).

Research output: Chapter in Book/Report/Conference proceeding, Conference contribution

  4. Templated text synthesis for expert-guided multi-label extraction from radiology reports

    Schrempf, P., Watson, H., Park, E., Pajak, M., MacKinnon, H., Muir, K. W., Harris-Birtill, D. & O’Neil, A. Q., 24 Mar 2021, In: Machine Learning and Knowledge Extraction. 3, 2, p. 299-317 19 p.

Research output: Contribution to journal, Article (peer-reviewed)

  5. Understanding computation time: a critical discussion of time as a computational performance metric

    Harris-Birtill, D. & Harris-Birtill, R., 3 Aug 2020, (Accepted/In press) Time in variance: the study of time. Parker, J., Harris, P. & Misztal, A. (eds.). Brill, Vol. 17. (The Study of Time).

Research output: Chapter in Book/Report/Conference proceeding, Chapter (peer-reviewed)

ID: 271504917