Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning - PubMed
Guorong Wu et al. IEEE Trans Biomed Eng. 2016 Jul.
Erratum in
- Correction to "Scalable High-Performance Image Registration Framework by Unsupervised Deep Feature Representations Learning".
Wu G, Kim M, Wang Q, Munsell BC, Shen D. IEEE Trans Biomed Eng. 2017 Jan;64(1):250. doi: 10.1109/TBME.2016.2633139. PMID: 28029618. No abstract available.
Abstract
Feature selection is a critical step in deformable image registration. In particular, selecting the most discriminative features that accurately and concisely describe complex morphological patterns in image patches improves correspondence detection, which in turn improves image registration accuracy. Furthermore, since more and more imaging modalities are being invented to better identify morphological changes in medical imaging data, a deformable image registration method that scales well to new image modalities or new applications with little to no human intervention would have a significant impact on the medical image analysis community. To address these concerns, a learning-based image registration framework is proposed that uses deep learning to discover compact and highly discriminative features from the observed imaging data. Specifically, the proposed feature selection method uses a convolutional stacked autoencoder to identify intrinsic deep feature representations in image patches. Because the autoencoder is trained in an unsupervised manner, no ground-truth labels are required. This makes the proposed feature selection method more flexible for new imaging modalities, since feature representations can be learned directly from the observed imaging data in a very short amount of time. Using the LONI and ADNI imaging datasets, image registration performance was compared against two existing state-of-the-art deformable image registration methods that use handcrafted features. To demonstrate the scalability of the proposed framework, registration experiments were also conducted on 7.0-T brain MR images. In all experiments, the proposed framework consistently produced more accurate registration results than the state of the art.
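The core idea in the abstract — learning compact patch descriptors by training an autoencoder to reconstruct image patches, then using the hidden codes as features — can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes numpy, a single tied-weight sigmoid layer trained by plain gradient descent on a toy patch set, and the class/variable names (`TiedAutoencoder`, `protos`) are invented for illustration. A stacked autoencoder repeats this construction, feeding each layer's codes to the next layer.

```python
import numpy as np

rng = np.random.default_rng(0)

class TiedAutoencoder:
    """One autoencoder layer with tied weights: h = sigma(x W^T + b), x_hat = sigma(h W + c)."""

    def __init__(self, n_in, n_hidden, lr=0.5):
        self.W = rng.normal(0.0, 0.1, (n_hidden, n_in))
        self.b = np.zeros(n_hidden)
        self.c = np.zeros(n_in)
        self.lr = lr

    @staticmethod
    def _sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def encode(self, X):                       # (m, n_in) patches -> (m, n_hidden) codes
        return self._sigmoid(X @ self.W.T + self.b)

    def reconstruct(self, H):                  # codes -> reconstructed patches
        return self._sigmoid(H @ self.W + self.c)

    def train_step(self, X):
        m = X.shape[0]
        H = self.encode(X)
        R = self.reconstruct(H)
        dR = (R - X) * R * (1.0 - R)           # grad through the output sigmoid
        dH = (dR @ self.W.T) * H * (1.0 - H)   # grad through the hidden sigmoid
        gW = (H.T @ dR + dH.T @ X) / m         # tied W gets both encode/decode terms
        self.W -= self.lr * gW
        self.b -= self.lr * dH.mean(axis=0)
        self.c -= self.lr * dR.mean(axis=0)
        return float(np.mean((R - X) ** 2))    # reconstruction error to monitor

# Toy "patches": noisy copies of two prototype intensity profiles in [0, 1].
protos = np.stack([np.linspace(0.1, 0.9, 16), np.linspace(0.9, 0.1, 16)])
X = np.clip(protos[rng.integers(0, 2, 256)] + rng.normal(0, 0.05, (256, 16)), 0, 1)

ae = TiedAutoencoder(n_in=16, n_hidden=4)
errors = [ae.train_step(X) for _ in range(300)]
features = ae.encode(X)                        # low-dimensional patch descriptors
print(errors[0] > errors[-1], features.shape)  # expect: True (256, 4)
```

The `features` array plays the role of the learned descriptors that replace handcrafted features during correspondence detection; in the paper these codes come from a deeper convolutional stack rather than a single layer.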
Figures
Fig. 1
The reconstructed image patches by a single auto-encoder (b) and a stacked auto-encoder (e). Bright and dark colors indicate large and small reconstruction errors, respectively.
Fig. 2
The hierarchical architecture of Stacked Auto-Encoder (SAE).
Fig. 3
The 3×3 max-pooling procedure in the convolutional network.
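The pooling step in Fig. 3 takes the maximum over each 3×3 neighborhood. A minimal sketch, assuming numpy and non-overlapping (stride-3) windows; the helper name `max_pool` is illustrative, not from the paper:

```python
import numpy as np

def max_pool(x, k=3):
    """Non-overlapping k x k max pooling; the sides of x must be multiples of k."""
    h, w = x.shape
    # Reshape into (h//k, k, w//k, k) blocks, then take the max inside each block.
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

x = np.arange(36).reshape(6, 6)
print(max_pool(x))   # [[14 17]
                     #  [32 35]]
```

Pooling shrinks each feature map by a factor of k per side while keeping the strongest response in each neighborhood, which makes the learned patch descriptors more compact and more tolerant to small spatial shifts.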
Fig. 4
The importance map and the sampled image patches (denoted by the red dots) for deep learning. The color bar indicates the varying importance values for individual voxels.
Fig. 5
The similarity maps for identifying the correspondence of the red-crossed point in the template (a) w.r.t. the subject (b), using handcrafted features (d–e) and the features learned by unsupervised deep learning (f). The registered subject image is shown in (c). It is clear that inaccurate registration results could undermine supervised feature representation learning, which relies heavily on correspondences across all training images.
Fig. 6
The Dice ratios of 56 ROIs on LONI dataset by 6 registration methods.
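The Dice ratio reported in Figs. 6 and 9 measures the overlap between a registered label map and the template labels as 2|A ∩ B| / (|A| + |B|). A minimal sketch of the per-ROI computation, assuming numpy; the function name `dice_ratio` and the toy masks are illustrative:

```python
import numpy as np

def dice_ratio(a, b):
    """Dice overlap of two binary masks: 2|A ∩ B| / (|A| + |B|)."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy ROI masks: a 2x2 square vs. a 2x3 rectangle sharing 4 voxels.
a = np.zeros((4, 4), bool); a[1:3, 1:3] = True   # |A| = 4
b = np.zeros((4, 4), bool); b[1:3, 1:4] = True   # |B| = 6, |A ∩ B| = 4
print(dice_ratio(a, b))                           # 2*4 / (4+6) = 0.8
```

For a multi-label parcellation such as the 56-ROI LONI maps, this score is computed once per label (mask `labels == roi_id` in each image) and the per-ROI values are what the bar plots compare across registration methods.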
Fig. 7
Large structural difference around the hippocampus between 1.5-tesla (a) and 7.0-tesla (b) MR images. The 1.5-tesla image is enlarged to match the resolution of the 7.0-tesla image for ease of visual comparison.
Fig. 8
Typical registration results on 7.0-tesla MR brain images by Demons, HAMMER, and H+DP, respectively. The three rows show three different slices from the template, the subject, and the registered subjects.
Fig. 9
The Dice ratios of 56 ROIs in the LONI dataset by HAMMER (blue), H+DP-LONI (red), and H+DP-ADNI (green). Note that H+DP-LONI denotes HAMMER registration integrated with feature representations learned directly from the LONI dataset, while H+DP-ADNI denotes HAMMER registration applied to the LONI dataset using feature representations learned from the ADNI dataset.
Similar articles
- Deformable registration of magnetic resonance images using unsupervised deep learning in neuro-/radiation oncology. Osman AFI, Al-Mugren KS, Tamam NM, Shahine B. Radiat Oncol. 2024 May 21;19(1):61. doi: 10.1186/s13014-024-02452-3. PMID: 38773620. Free PMC article.
- Unsupervised deep feature learning for deformable registration of MR brain images. Wu G, Kim M, Wang Q, Gao Y, Liao S, Shen D. Med Image Comput Comput Assist Interv. 2013;16(Pt 2):649-56. doi: 10.1007/978-3-642-40763-5_80. PMID: 24579196. Free PMC article.
- FDRN: A fast deformable registration network for medical images. Sun K, Simon S. Med Phys. 2021 Oct;48(10):6453-6463. doi: 10.1002/mp.15011. Epub 2021 Jul 6. PMID: 34053089.
- Deep Learning in Medical Image Analysis. Shen D, Wu G, Suk HI. Annu Rev Biomed Eng. 2017 Jun 21;19:221-248. doi: 10.1146/annurev-bioeng-071516-044442. Epub 2017 Mar 9. PMID: 28301734. Free PMC article. Review.
- Deep learning in medical image registration: a review. Fu Y, Lei Y, Wang T, Curran WJ, Liu T, Yang X. Phys Med Biol. 2020 Oct 22;65(20):20TR01. doi: 10.1088/1361-6560/ab843e. PMID: 32217829. Free PMC article. Review.
Cited by
- Review of robotic systems for thoracoabdominal puncture interventional surgery. Wang C, Guo L, Zhu J, Zhu L, Li C, Zhu H, Song A, Lu L, Teng GJ, Navab N, Jiang Z. APL Bioeng. 2024 Apr 1;8(2):021501. doi: 10.1063/5.0180494. eCollection 2024 Jun. PMID: 38572313. Free PMC article. Review.
- Learning MRI contrast-agnostic registration. Hoffmann M, Billot B, Iglesias JE, Fischl B, Dalca AV. Proc IEEE Int Symp Biomed Imaging. 2021 Apr;2023:899-903. doi: 10.1109/isbi48211.2021.9434113. Epub 2021 May 25. PMID: 38213549. Free PMC article.
- 4D-CT deformable image registration using unsupervised recursive cascaded full-resolution residual networks. Xu L, Jiang P, Tsui T, Liu J, Zhang X, Yu L, Niu T. Bioeng Transl Med. 2023 Aug 22;8(6):e10587. doi: 10.1002/btm2.10587. eCollection 2023 Nov. PMID: 38023695. Free PMC article.
- A robust and interpretable deep learning framework for multi-modal registration via keypoints. Wang AQ, Yu EM, Dalca AV, Sabuncu MR. Med Image Anal. 2023 Dec;90:102962. doi: 10.1016/j.media.2023.102962. Epub 2023 Sep 13. PMID: 37769550.
- A two-step deep learning method for 3DCT-2DUS kidney registration during breathing. Chi Y, Xu Y, Liu H, Wu X, Liu Z, Mao J, Xu G, Huang W. Sci Rep. 2023 Aug 8;13(1):12846. doi: 10.1038/s41598-023-40133-5. PMID: 37553480. Free PMC article.
References
- Crum WR, Hartkens T, Hill DLG. Non-rigid image registration: theory and practice. British Journal of Radiology. 2004;77:S140–S153. - PubMed
- Friston KJ, Ashburner J, Frith CD, Poline JB, Heather JD, Frackowiak RSJ. Spatial registration and normalization of images. Human Brain Mapping. 1995;3:165–189.
- Shen D, Davatzikos C. HAMMER: Hierarchical attribute matching mechanism for elastic registration. IEEE Transactions on Medical Imaging. 2002;21:1421–1439. - PubMed
- Zitová B, Flusser J. Image registration methods: a survey. Image and Vision Computing. 2003;21:977–1000.