Abstract
In 3D echocardiography (3D echo), the image orientation varies depending on the position and direction of the transducer during examination. As a result, when reviewing images, the user must first identify anatomical landmarks to understand the image orientation, a potentially challenging and time-consuming task. We automated this initial step by training a deep residual neural network (ResNet) to predict the rotation required to re-orient an image to the standard apical four-chamber view. Three data pre-processing strategies were explored: 2D, 2.5D and 3D. Three loss function strategies were investigated: classification of discrete integer angles, regression with mean absolute angle error loss, and regression with geodesic loss. We then integrated the model into a virtual reality application and aligned the re-oriented 3D echo images with a standard anatomical heart model. The deep learning strategy with the highest accuracy, 2.5D classification of discrete integer angles, achieved a mean absolute angle error of 9.0° on the test set. This work demonstrates the potential of artificial intelligence to support visualisation and interaction in virtual reality.
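The three loss strategies named in the abstract could be sketched as follows. This is a minimal illustration, not the authors' implementation: the single-angle formulation, the class count `N_CLASSES = 360`, and the function names are assumptions for the sake of the example.

```python
# Hypothetical sketch of the three loss strategies described in the abstract,
# assuming the network predicts a single in-plane rotation angle in degrees.
import torch
import torch.nn.functional as F

N_CLASSES = 360  # one class per discrete integer degree (assumption)

def classification_loss(logits, target_deg):
    """Strategy 1: classify discrete integer angles with cross-entropy."""
    target_cls = torch.round(target_deg).long() % N_CLASSES
    return F.cross_entropy(logits, target_cls)

def mae_angle_loss(pred_deg, target_deg):
    """Strategy 2: direct regression with mean absolute angle error."""
    return (pred_deg - target_deg).abs().mean()

def geodesic_angle_loss(pred_deg, target_deg):
    """Strategy 3: regression with a geodesic (wrap-around) angle distance,
    so that, e.g., 359 degrees and 1 degree are only 2 degrees apart."""
    diff = torch.deg2rad(pred_deg - target_deg)
    wrapped = torch.atan2(torch.sin(diff), torch.cos(diff))
    return torch.rad2deg(wrapped).abs().mean()

# Example usage with random predictions for a batch of 4 images
logits = torch.randn(4, N_CLASSES)
pred = torch.rand(4) * 360.0
target = torch.rand(4) * 360.0
print(classification_loss(logits, target),
      mae_angle_loss(pred, target),
      geodesic_angle_loss(pred, target))
```

The geodesic variant differs from plain mean absolute error only in how it wraps the angular difference; for full 3D rotations the analogous quantity would be the geodesic distance on SO(3), which is not shown here.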
Original language | English |
---|---|
Title of host publication | Medical Image Understanding and Analysis - 25th Annual Conference, MIUA 2021, Proceedings |
Editors | Bartłomiej W. Papież, Mohammad Yaqub, Jianbo Jiao, Ana I. Namburete, J. Alison Noble |
Publisher | Springer Science and Business Media Deutschland GmbH |
Pages | 177-188 |
Number of pages | 12 |
ISBN (Print) | 9783030804312 |
DOIs | |
Publication status | Published - 2021 |
Externally published | Yes |
Event | 25th Annual Conference on Medical Image Understanding and Analysis, MIUA 2021 - Virtual, Online |
Duration | 12 Jul 2021 → 14 Jul 2021 |
Publication series
Name | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
---|---|
Volume | 12722 LNCS |
ISSN (Print) | 0302-9743 |
ISSN (Electronic) | 1611-3349 |
Conference
Conference | 25th Annual Conference on Medical Image Understanding and Analysis, MIUA 2021 |
---|---|
City | Virtual, Online |
Period | 12/07/21 → 14/07/21 |
Bibliographical note
Funding Information: L. Munroe: This work is independent research funded by the National Institute for Health Research (NIHR i4i, 3D Heart Project, II-LA-0716-20001, https://www.3dheart.co.uk/). This work was also supported by the Wellcome/EPSRC Centre for Medical Engineering (WT203148/Z/16/Z). Lindsay Munroe and Suryava Bhattacharya would like to acknowledge funding from the EPSRC Centre for Doctoral Training in Smart Medical Imaging (EP/S022104/1). The authors also acknowledge financial support from the Department of Health via the National Institute for Health Research (NIHR) comprehensive Biomedical Research Centre award to Guy's and St Thomas' NHS Foundation Trust in partnership with King's College London and King's College Hospital NHS Foundation Trust.
Publisher Copyright:
© 2021, Springer Nature Switzerland AG.
Keywords
- 3D echocardiography
- Deep learning
- Virtual reality