FUSION OF MULTIMODALITY MEDICAL IMAGES BASED ON MULTISCALE DECOMPOSITION USING CONTOURLET TRANSFORM
In this paper, we propose a novel multimodality Medical Image Fusion (MIF) method based on an improved Contourlet Transform (CNT) for spatially registered, multi-sensor, multiresolution medical images. A major drawback of the original contourlet construction is that its basis images are not localized in the frequency domain; in this paper we propose a new contourlet construction as a solution. The source medical images are first decomposed by the improved CNT. Instead of the Laplacian pyramid used in the standard contourlet transform, we employ a new multiscale decomposition defined in the frequency domain, so that the resulting basis images are sharply localized in the frequency domain and exhibit smoothness along their main ridges in the spatial domain. The low-frequency subbands (LFSs) are fused using a novel combined activity-level measurement, and the high-frequency subbands (HFSs) are fused according to the local average energy of the neighborhood of coefficients. The inverse contourlet transform (ICNT) is then applied to the fused coefficients to obtain the fused image. The performance of the proposed scheme is evaluated and compared using quantitative measures such as mutual information, spatial frequency, and entropy. The purpose of this paper is to replace the pyramid decomposition with a frequency-domain multiscale decomposition, making the fused image smoother and improving both the efficiency of the fusion method and the quality of the result.

Several imaging modalities, such as CT and MRI, provide information about visceral anatomy. Multimodality medical image fusion combines complementary image information from various modalities into one image, so as to provide far more comprehensive information and improve the reliability of clinical diagnosis and therapy [4-6]. Image fusion operates at three levels: pixel level, feature level, and symbol level. Image fusion at the pixel level, the lowest processing level, refers to the merging of the measured physical parameters, and its range of application is very wide.
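The overall pipeline (decompose each source image into subbands, fuse the LFSs and HFSs with separate rules, then invert the transform) can be sketched as follows. This is a minimal illustration, not the paper's actual method: a single-level ideal frequency-domain split stands in for the improved contourlet decomposition, plain averaging stands in for the combined activity-level rule on the LFSs, and the HFS rule selects, at each position, the coefficient whose 3×3 neighborhood has the larger local average energy. The function names and the `cutoff` parameter are illustrative assumptions.

```python
import numpy as np

def freq_split(img, cutoff=0.25):
    """Split an image into low/high subbands with an ideal frequency-domain
    mask (a stand-in for the paper's multiscale decomposition)."""
    F = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    r = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    low_mask = (r <= cutoff).astype(float)
    low = np.real(np.fft.ifft2(np.fft.ifftshift(F * low_mask)))
    high = img - low  # residual carries the high-frequency detail
    return low, high

def local_energy(band, k=3):
    """Local average energy: mean of squared coefficients in a k x k window."""
    pad = k // 2
    p = np.pad(band ** 2, pad, mode='reflect')
    out = np.zeros_like(band)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + band.shape[0], dx:dx + band.shape[1]]
    return out / (k * k)

def fuse(img_a, img_b):
    """Fuse two registered images of the same size."""
    la, ha = freq_split(img_a)
    lb, hb = freq_split(img_b)
    low_f = 0.5 * (la + lb)            # simple stand-in for the LFS activity-level rule
    pick_a = local_energy(ha) >= local_energy(hb)
    high_f = np.where(pick_a, ha, hb)  # HFS rule: keep the higher local-energy coefficient
    return low_f + high_f              # recombine subbands (the "inverse transform" here)
```

Because the high band is defined as the residual, fusing an image with itself returns the image unchanged, which is a useful sanity check for any such pipeline.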
Pixel-level fusion is divided into two parts: signal-level fusion and image-point fusion. Signal-level fusion synthesizes a group of signals offered by sensors; the purpose is to obtain high-quality signals whose format is consistent with the originals. In image-point fusion, the image points of every image are directly synthesized. Pixel-level fusion is performed in the image pre-processing phase; the purpose is to obtain a clearer image that contains more information. Pixel-level fusion is a low-level fusion. Before fusing images, the original images must be registered. Because the imaging mechanisms differ and images are acquired at different times, from different viewing angles, and under different circumstances, the gray values and features of different images are inconsistent, so the original images must be registered first. Image registration is the process of matching two or more images of the same scene acquired at different times, by different sensors, or from different viewing angles.

Feature-level fusion is done in the course of image feature extraction. It is the medium-level fusion and prepares for decision-level fusion. In feature-level fusion, features of every image are extracted; typical features are edges, shapes, profiles, angles, textures, similarly lit areas, and areas with similar depth of focus. Decision-level fusion is the highest-level fusion: all decisions and control are made according to its results.

Medical image fusion is a research focus in academic circles. With the development of modern medical imaging technology, more and more medical images are used in clinical practice [7], [8]. Nowadays, with rapid advances in high technology and modern instrumentation, medical imaging has become a vital component of a large number of applications, including diagnosis, research, and treatment.
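Since image-point fusion directly combines the image points of registered sources, it can be as simple as a per-pixel rule. The two baseline rules below (per-pixel maximum and per-pixel average) are generic illustrations of this idea, not rules proposed in this paper:

```python
import numpy as np

def pixel_max_fusion(img_a, img_b):
    """Per-pixel maximum: keep the stronger response at each point.
    Assumes the images are already registered and the same size."""
    assert img_a.shape == img_b.shape, "images must be registered / same size"
    return np.maximum(img_a, img_b)

def pixel_avg_fusion(img_a, img_b):
    """Per-pixel average: suppresses noise but can blur complementary detail."""
    assert img_a.shape == img_b.shape, "images must be registered / same size"
    return 0.5 * (img_a + img_b)
```

The contrast between these two rules motivates the transform-domain approach of this paper: simple pixel rules trade detail preservation against noise, whereas fusing subband coefficients lets each frequency band use the rule best suited to it.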
In order to provide more accurate clinical information for physicians in medical diagnosis and evaluation, multimodality medical images are needed, such as X-ray, computed tomography (CT), magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), and positron emission tomography (PET) images. These multimodality medical images usually provide complementary and occasionally conflicting information. For example, the CT image can show dense structures like bones and implants with little distortion, but it cannot detect physiological changes, while the MRI image provides better soft-tissue detail.

IJREAS Volume 2, Issue 7 (July 2012)