An automatic method for estimating noise-induced signal variance in magnitude-reconstructed magnetic resonance images

Signal intensity in magnetic resonance images (MRIs) is affected by random noise. Assessing noise-induced signal variance is important for controlling image quality. Knowledge of signal variance is required for correctly computing the chi-square value, a measure of goodness of fit, when fitting signal data to estimate quantitative parameters such as T1 and T2 relaxation times or diffusion tensor elements. Signal variance can be estimated from measurements of the noise variance in an object- and ghost-free region of the image background. However, identifying a large homogeneous region automatically is problematic. In this paper, a novel, fully automated approach for estimating the noise-induced signal variance in magnitude-reconstructed MRIs is proposed. This approach is based on histogram analysis of the image signal intensity, specifically by extracting the peak of the underlying Rayleigh distribution that characterizes the distribution of the background noise. The peak is extracted using a nonparametric univariate density estimator, the Parzen window density estimator; the corresponding peak position is shown here to yield the expected noise-induced signal variance in the object. The proposed method does not depend on prior foreground segmentation, and only one image with a small amount of background is required when the signal-to-noise ratio (SNR) is greater than three. This method is applicable to magnitude-reconstructed MRIs in general, though diffusion tensor (DT)-MRI is used here to demonstrate the approach.
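The sketch below illustrates the general idea under simplifying assumptions, not the authors' exact procedure: a Gaussian-kernel Parzen window (kernel density) estimate is applied to the low-intensity portion of the magnitude image histogram, and the position of the resulting peak is taken as the mode of the background Rayleigh distribution, which equals the noise standard deviation sigma. The function name, the percentile cutoff used to isolate the background-dominated intensities, and the grid resolution are all illustrative assumptions.

```python
# Hypothetical sketch of the histogram-peak idea; not the paper's implementation.
import numpy as np
from scipy.stats import gaussian_kde


def estimate_noise_variance(magnitude_image, background_percentile=50, grid_points=2048):
    """Estimate noise-induced signal variance from a magnitude MR image."""
    intensities = np.asarray(magnitude_image, dtype=float).ravel()

    # Restrict to low intensities, where the background Rayleigh peak dominates
    # (an illustrative simplification chosen for this sketch).
    cutoff = np.percentile(intensities, background_percentile)
    low = intensities[intensities <= cutoff]

    # Parzen window (Gaussian kernel) density estimate of the intensity histogram.
    kde = gaussian_kde(low)
    grid = np.linspace(0.0, cutoff, grid_points)
    density = kde(grid)

    # The peak position approximates the Rayleigh mode, i.e. the noise
    # standard deviation sigma; sigma**2 is the noise-induced signal variance.
    sigma = grid[np.argmax(density)]
    return sigma ** 2


# Synthetic check: Rayleigh background plus a bright object region (SNR >> 3).
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sigma_true = 5.0
    background = rng.rayleigh(scale=sigma_true, size=50_000)
    obj = rng.normal(loc=100.0, scale=sigma_true, size=20_000)
    image = np.concatenate([background, obj])
    print("true variance:", sigma_true ** 2,
          "estimated:", estimate_noise_variance(image))
```

The percentile-based restriction stands in for the paper's requirement that the image contain only a small amount of background; because the object signal lies far above the Rayleigh peak when SNR exceeds three, the low-intensity peak found by the density estimate is dominated by background noise.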