Feature Neighbourhood Mutual Information for Multi-modal Image Registration

Incorporating neighbourhood feature derivatives with Mutual Information to improve accuracy of multi-modal image registration

2008

In this paper we present an improved method for performing image registration of different modalities. Russakoff [1] proposed Regional Mutual Information (RMI), which allows neighbourhood information to be considered in the Mutual Information (MI) algorithm. We extend this method by taking local multi-scale feature derivatives in a gauge coordinate frame to represent the structural information of the images [2]. By incorporating these feature images into RMI, we combine both structural and neighbourhood information, providing the high level of registration accuracy that is essential for application in the medical domain. Our images to be registered are retinal fundus photographs and SLO (Scanning Laser Ophthalmoscopy) images. The combination of these two modalities has received little attention in image registration, yet could provide much useful information to an ophthalmic clinician. One application is the detection of glaucoma in its early stages, where further deterioration can be prevented before irreversible damage occurs. Results indicate that our method offers a marked improvement over Regional MI, with 25 of our 26 test images being registered to a high standard.

Feature Neighbourhood Mutual Information for multi-modal image registration: An application to eye fundus imaging

Pattern Recognition

Multi-modal image registration is becoming an increasingly powerful tool for medical diagnosis and treatment. The combination of different image modalities facilitates much greater understanding of the underlying condition, resulting in improved patient care. Mutual Information is a popular image similarity measure for performing multi-modal image registration. However, it is recognised that there are limitations with the technique that can compromise the accuracy of the registration, such as its failure to account for the spatial information within the images. In this paper, we present a two-stage non-rigid registration process using a novel similarity measure, Feature Neighbourhood Mutual Information. The similarity measure efficiently incorporates both spatial and structural image properties that are not traditionally considered by MI. By incorporating such features, we find that this method is capable of achieving much greater registration accuracy when compared to existing methods, whilst also achieving efficient computational runtime. To demonstrate our method, we use a challenging medical image dataset consisting of paired retinal fundus photographs and confocal scanning laser ophthalmoscope images. Accurate registration of these image pairs facilitates improved clinical diagnosis, and can be used for the early detection and prevention of glaucoma disease.

A COMBINED FEATURE ENSEMBLE BASED MUTUAL INFORMATION SCHEME FOR ROBUST INTER-MODAL, INTER-PROTOCOL IMAGE REGISTRATION

2007 4th IEEE International Symposium on Biomedical Imaging: From Nano to Macro, 2007

In this paper we present a new robust method for medical image registration called combined feature ensemble mutual information (COFEMI). While mutual information (MI) has become arguably the most popular similarity metric for image registration, intensity-based MI schemes have been found wanting in inter-modal and inter-protocol image registration, especially when (1) significant image differences across modalities (e.g. pathological and radiological studies) exist, and (2) when imaging artifacts have significantly altered the characteristics of one or both of the images to be registered. Intensity-based MI registration methods operate by maximization of MI between two images A and B. The COFEMI scheme extracts over 450 feature representations of image B that provide additional information about A not conveyed by image B alone and are more robust to the artifacts affecting original intensity image B. COFEMI registration operates by maximization of combined mutual information (CMI) of the image A with the feature ensemble associated with B. The combination of information from several feature images provides a more robust similarity metric compared to the use of a single feature image or the original intensity image alone. We also present a computer-assisted scheme for determining an optimal set of maximally informative features for use with our CMI formulation. We quantitatively and qualitatively demonstrate the improvement in registration accuracy by using our COFEMI scheme over the traditional intensity-based MI scheme in registering (1) prostate whole mount histological sections with corresponding magnetic resonance imaging (MRI) slices; and (2) phantom brain T1 and T2 MRI studies, which were adversely affected by imaging artifacts.

Enhanced Multimodality Image Registration Based On Mutual Information

Registration of different modalities can be achieved by the maximization of suitable statistical similarity measures within a given class of geometric transformations. The resulting registration functions are less sensitive to low sampling resolution and do not contain the incorrect global maxima that are sometimes found in mutual information. This paper proposes a novel and straightforward multimodal image registration method based on mutual information, in which two matching criteria are used. It has been extensively shown that metrics based on the evaluation of mutual information are well suited for overcoming the difficulties of multi-modality registration.

A robust solution to multi-modal image registration by combining mutual information with multi-scale derivatives

Medical Image Computing and Computer-Assisted Intervention (MICCAI), 2009

In this paper we present a novel method for performing image registration of different modalities. Mutual Information (MI) is an established method for performing such registration. However, it is recognised that standard MI is not without some problems, in particular it does not utilise spatial information within the images. Various modifications have been proposed to resolve this, however these only offer slight improvement to the accuracy of registration. We present Feature Neighbourhood Mutual Information (FNMI) that combines both image structure and spatial neighbourhood information, which is efficiently incorporated into Mutual Information by approximating the joint distribution with a covariance matrix (cf. Russakoff's Regional Mutual Information). Results show that our approach offers a very high level of accuracy that improves greatly on previous methods. In comparison to Regional MI, our method also improves runtime for more demanding registration problems where a high...
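The covariance-matrix approximation mentioned in this abstract can be illustrated with a short sketch. Under a Gaussian assumption (as in Russakoff's Regional MI), the entropies that make up MI reduce to log-determinants of covariance matrices of stacked neighbourhood vectors. The helper names below (`gaussian_entropy`, `regional_mi`) are hypothetical, and this is a minimal illustration of the idea under that assumption, not the authors' FNMI implementation.

```python
import numpy as np

def gaussian_entropy(cov: np.ndarray) -> float:
    """Entropy (nats) of a d-dimensional Gaussian with covariance `cov`:
    0.5 * log((2*pi*e)**d * det(cov))."""
    d = cov.shape[0]
    sign, logdet = np.linalg.slogdet(cov)  # numerically stable log-determinant
    return 0.5 * (d * np.log(2 * np.pi * np.e) + logdet)

def regional_mi(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """MI estimate from stacked per-pixel neighbourhood vectors.

    points_a, points_b: (n_samples, d) arrays of neighbourhood feature
    vectors drawn from the two images after the current transform.
    """
    joint = np.hstack([points_a, points_b])        # (n, 2d) joint samples
    cov_joint = np.cov(joint, rowvar=False)
    cov_a = np.atleast_2d(np.cov(points_a, rowvar=False))
    cov_b = np.atleast_2d(np.cov(points_b, rowvar=False))
    # MI = H(A) + H(B) - H(A, B) under the Gaussian approximation
    return (gaussian_entropy(cov_a)
            + gaussian_entropy(cov_b)
            - gaussian_entropy(cov_joint))
```

Because only covariances are estimated, this scales with the neighbourhood dimension rather than with the exponential cost of a high-dimensional joint histogram, which is the efficiency argument behind RMI-style measures.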

Enhanced mutual information based medical image registration

IET Image Processing, 2016

Similarity measure plays a significant role in intensity-based image registration. Nowadays, mutual information (MI) has been used as an efficient similarity measure for multimodal image registration. MI reflects the quantitative aspects of the information as it considers the probabilities of the voxels. However, different voxels have distinct effectiveness towards satisfying the elementary target, which may be independent of their probability of occurrence. Therefore, both intensity distributions and effectiveness are essential to characterise a voxel. In this study, a novel similarity measure has been proposed by integrating the effectiveness of each voxel along with the intensity distributions for computing the enhanced MI using the joint histogram of the two images. Penalised spline interpolation is incorporated into the joint histogram of the similarity measure, where each grid point is penalised with a weighted factor to avoid local extrema and to achieve better registration accuracy than existing methods with efficient computational runtime. To demonstrate the proposed method, the authors have used a challenging medical image dataset consisting of pre- and post-operative brain magnetic resonance imaging. The registration accuracy for the dataset improves the clinical diagnosis and the detection of tumour growth in the post-operative image.
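The joint-histogram MI estimate that this entry builds on can be sketched as follows. `histogram_mi` is a hypothetical helper showing the standard estimator (bin paired intensities, normalise to a joint probability table, and sum p(a,b)·log(p(a,b)/(p(a)p(b)))); it does not include the penalised-spline interpolation proposed in the paper.

```python
import numpy as np

def histogram_mi(img_a: np.ndarray, img_b: np.ndarray, bins: int = 32) -> float:
    """Mutual information (nats) of two same-shape images from their
    joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()                # joint probability table
    p_a = p_ab.sum(axis=1, keepdims=True)     # marginal of image A
    p_b = p_ab.sum(axis=0, keepdims=True)     # marginal of image B
    nz = p_ab > 0                             # skip empty bins, avoid log(0)
    return float(np.sum(p_ab[nz] * np.log(p_ab[nz] / (p_a @ p_b)[nz])))
```

The bin count trades off statistical reliability against sensitivity; the interpolation scheme applied to this histogram (e.g. the penalised splines above, or partial-volume interpolation) is what determines the smoothness of the resulting registration function.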

Two Phase Non-Rigid Multi-Modal Image Registration Using Weber Local Descriptor-Based Similarity Metrics and Normalized Mutual Information

Sensors, 2013

Non-rigid multi-modal image registration plays an important role in medical image processing and analysis. Existing image registration methods based on similarity metrics such as mutual information (MI) and sum of squared differences (SSD) cannot achieve either high registration accuracy or high registration efficiency. To address this problem, we propose a novel two phase non-rigid multi-modal image registration method by combining Weber local descriptor (WLD) based similarity metrics with the normalized mutual information (NMI) using the diffeomorphic free-form deformation (FFD) model. The first phase aims at recovering the large deformation component using the WLD based non-local SSD (wldNSSD) or weighted structural similarity (wldWSSIM). Based on the output of the former phase, the second phase is focused on getting accurate transformation parameters related to the small deformation using the NMI. Extensive experiments on T1, T2 and PD weighted MR images demonstrate that the proposed wldNSSD-NMI or wldWSSIM-NMI method outperforms the registration methods based on the NMI, the conditional mutual information (CMI), the SSD on entropy images (ESSD) and the ESSD-NMI in terms of registration accuracy and computation efficiency.

Medical image registration using mutual information

Proceedings of the IEEE, 2003

While solutions for intrapatient affine registration based on this concept are already commercially available, current research in the field focuses on interpatient nonrigid matching.

Automated multimodal image registration by maximization of mutual information: from theory, implementation and validation to a useful tool in routine clinical practice

1998

Recently we proposed mutual information of corresponding voxel intensities as a new criterion for multimodal medical image registration. The mutual information registration criterion allows fully automated, highly robust affine registration of multimodal images in a variety of applications without need for segmentation, pre-processing or user interaction, which makes it well suited for routine use in clinical practice. The method has been extensively evaluated for registration of CT, MR and PET images of the brain and its subvoxel accuracy has been validated by comparison with external marker based registration. The method has been applied successfully to a variety of clinical applications, including registration of CT and PET images of the thorax for lung cancer staging and of CT and MR images of the pelvis for radiotherapy planning.

Multimodality Image Registration by Maximization of Mutual Information

IEEE Transactions on Medical Imaging, 1997

A new approach to the problem of multimodality medical image registration is proposed, using a basic concept from information theory, Mutual Information or relative entropy, as a new matching criterion. The method presented in this paper applies Mutual Information to measure the statistical dependence or information redundancy between the image intensities of corresponding voxels in both images, which is assumed to be maximal if the images are geometrically aligned. Maximization of Mutual Information is a very general and powerful criterion, because no assumptions are made regarding the nature of this dependence and no limiting constraints are imposed on the image content of the modalities involved. The accuracy of the mutual information criterion is validated for rigid body registration of CT, MR and PET images by comparison with the stereotactic registration solution, while robustness is evaluated with respect to implementation issues, such as interpolation and optimization, and image content, including partial overlap and image degradation. Our results demonstrate that subvoxel accuracy with respect to the stereotactic reference solution can be achieved completely automatically and without any prior segmentation, feature extraction or other pre-processing steps, which makes this method very well suited for clinical applications.
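The matching criterion described above, alignment as maximization of MI, can be illustrated with a toy sketch that brute-forces integer translations. Real systems optimise over rigid, affine or non-rigid parameters with multi-resolution search rather than exhaustive enumeration; the names `mi` and `best_shift` are illustrative only. Note the moving image below is an intensity-inverted copy, mimicking the multi-modal case in which the correct alignment cannot be found by direct intensity comparison but is still recovered by MI.

```python
import numpy as np

def mi(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Histogram-based MI (nats) of two equally shaped intensity arrays."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p = joint / joint.sum()
    pa = p.sum(axis=1, keepdims=True)
    pb = p.sum(axis=0, keepdims=True)
    nz = p > 0
    return float(np.sum(p[nz] * np.log(p[nz] / (pa @ pb)[nz])))

def best_shift(fixed: np.ndarray, moving: np.ndarray, max_shift: int = 4):
    """Exhaustive search for the integer translation maximizing MI.

    A toy illustration of registration as similarity maximization; wrap-around
    via np.roll stands in for a proper resampler.
    """
    best, best_mi = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            m = mi(fixed, shifted)
            if m > best_mi:
                best, best_mi = (dy, dx), m
    return best
```

Because MI only asks that the joint intensity distribution be sharply peaked at alignment, the inverted-contrast moving image is handled exactly like a same-modality one, which is the "no assumptions on the nature of the dependence" property the abstract emphasises.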