Ljiljana Platisa | Ghent University

Inproceedings by Ljiljana Platisa

Research paper thumbnail of Channelized Hotelling observers for detection tasks in multi-slice images

We investigate numerical observers for signal detection in volumetric imaging data sets viewed in stack browsing mode. Three types of multi-slice CHO (msCHO) as well as a single-slice CHO (ssCHO) are considered. We study the influence of signal size on model observer performance for detection of exactly known signals in three-dimensional (3D) non-Gaussian lumpy backgrounds (LB). The size of the signal is varied separately in the "frontal" or "coronal" plane (xy-plane) and in the "sagittal" plane (z-direction). METHODS: The three msCHO designs, type a (msCHOa), type b (msCHOb) and type c (msCHOc), differ in how they treat the channelized slice data to infer the classification decision for the image. In the first case, applied to msCHOa and msCHOb, the observers first build a test statistic for each slice and then use these to estimate the final test statistic for the image. The first step is modeled by a regular 2D-CHO applied to each slice in the stack, and the second by a 1D-HO applied to the array of slice test statistics. For msCHOa, a separate 2D-CHO template is built for each slice position, while msCHOb applies the same template to multiple adjacent slices. In the second case, applied to msCHOc, the observer builds its final statistic for the multi-slice image directly from the channelized slice data, with no intermediate "scoring" at the slice level; the model applies the 1D-HO directly to the concatenated channelized slice data. Finally, the ssCHO corresponds to the conventional 2D-CHO applied to the central slice of the signal. CONCLUSION: In general, the results of our study suggest higher performance for the msCHO designs than for the ssCHO, confirming the expected benefit of using the data of multiple image slices rather than a single slice only. Owing to its least restrictive assumptions among the three msCHO models, type a may be applied in the greatest range of detection tasks. When applicable, type b achieves the highest performance, especially if the signal is spread across fewer slices. Not surprisingly, the results indicate a significant dependency of detection performance on the signal characteristics, especially the signal size in the xy-plane and the signal spread across the z-direction, but also on the relation between the signal and the background structure. As part of our future research, we plan a study with psychovisual experiments to compare the models with human observer performance.
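
The 2D-CHO building block described above can be sketched in a few lines of numpy. The sketch below is illustrative only: it uses difference-of-Gaussians channels and flattened training images as stand-ins, not the channel set or data of the paper, and it omits the multi-slice combination step.

```python
import numpy as np

def dog_channels(size, n_channels=4, sigma0=2.0, ratio=1.67):
    """Illustrative difference-of-Gaussians channels (one possible channel choice)."""
    y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
    r2 = x ** 2 + y ** 2
    chans = []
    for j in range(n_channels):
        s1, s2 = sigma0 * ratio ** j, sigma0 * ratio ** (j + 1)
        g = np.exp(-r2 / (2 * s2 ** 2)) - np.exp(-r2 / (2 * s1 ** 2))
        chans.append(g.ravel() / np.linalg.norm(g))
    return np.stack(chans, axis=1)                 # shape: (n_pixels, n_channels)

def cho_train(imgs_absent, imgs_present, U):
    """Hotelling template in channel space, estimated from flattened training images."""
    v0, v1 = imgs_absent @ U, imgs_present @ U     # channel outputs per class
    S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))
    return np.linalg.solve(S, v1.mean(0) - v0.mean(0))

def cho_score(img, U, w):
    """Test statistic (decision variable) for one flattened test image."""
    return float((img @ U) @ w)
```

For the multi-slice observers, this per-slice statistic would either be fed to a second, 1D Hotelling stage across slices (types a and b), or the per-slice channel outputs would first be concatenated and a single Hotelling template built on the concatenation (type c).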

Research paper thumbnail of Effects of common image manipulations on diagnostic performance in digital pathology: human study

A very recent work, Ref. [1], studied the effects of image manipulation and image degradation on the perceived attributes of image quality (IQ) of digital pathology slides. However, before any conclusions and recommendations can be formulated regarding specific image manipulations (and IQ attributes), it is necessary to investigate their effects on the diagnostic performance of clinicians when interpreting these images. In this study, 6 expert pathologists interpreted digital images of H&E stained animal pathology samples in a free-response (FROC) experiment. Participants marked locations suspicious for viral inclusions (inclusion bodies) and rated them using a continuous scale from 0 (low confidence) to 100% (high confidence). The images were the same as in Ref. [1]: crops of digital pathology slides of 3 different animal tissue samples, all 1200×750 pixels in size. Each participant viewed a total of 72 images: 12 non-manipulated (reference) images (4 of each tissue type) and 60 manipulated images (5 for each reference image). The extent of the artificial manipulations was adjusted relative to the reference images using the HDR-VDP metric [2] in the luminance domain: added Gaussian blur (σb = 3), decreased gamma (-5%), added white Gaussian noise (σn = 10), decreased color saturation (-5%), and JPEG compression (libjpeg, quality 50). The images were displayed on a 3 MP medical color LCD in a controlled viewing environment. Preliminary analysis assessing the change in the number of positive markings between the reference and manipulated images indicates that blurring and changes in gamma, followed by changes in color saturation, could have an effect on diagnostic performance. This largely coincides with the findings of Ref. [1], where IQ ratings appeared to be most affected by changes in color and gamma parameters. Importantly, diagnostic performance appears to be content dependent: it differs across tissue types. Further data analysis (including JAFROC) is ongoing and will be reported in the conference talk.

Research paper thumbnail of Effect of video latency on performance and subjective experience in laparoscopic surgery

Research paper thumbnail of The roles and limitations of model observer studies

Research paper thumbnail of Subjective quality and depth assessment in stereoscopic viewing of volume-rendered medical images

No study to date has explored the relationship between perceived image quality (IQ) and perceived depth (DP) in stereoscopic medical images. However, this relationship is crucial for designing objective quality metrics suitable for stereoscopic medical images. This study examined it using volume-rendered stereoscopic medical images for both dual- and single-view distortions. The reference image was modified to simulate common alterations occurring during the image acquisition stage or at the display side: added white Gaussian noise, Gaussian filtering, and changes in luminance, brightness and contrast. We followed a double stimulus five-point quality scale methodology to conduct subjective tests with eight non-expert human observers. The results suggested that DP was very robust to luminance, contrast and brightness alterations and insensitive to noise distortions up to a standard deviation of σ = 20 and crosstalk rates of 7%. In contrast, IQ seemed sensitive to all distortions. Finally, for both DP and IQ, the Friedman test indicated that the quality scores for dual-view distortions were significantly worse than the scores for single-view distortions for multiple blur levels and crosstalk impairments. No differences were found for most levels of brightness, contrast and noise distortions. Thus, DP and IQ did not react equivalently to identical impairments, and both depended on whether dual- or single-view distortions were applied.

Research paper thumbnail of A full reference video quality measure based on motion differences and saliency maps evaluation

While subjective assessment is recognized as the most reliable means of quantifying video quality, objective assessment has proven to be a desirable alternative. Existing video quality indices achieve reasonable prediction of human quality scores and predict quality degradation due to spatial distortions well, but are less accurate for degradation due to temporal distortions. In this paper, we propose a perception-based quality index whose novelty is the direct use of motion information both to extract temporal distortions and to model human visual attention. Temporal distortions are computed from optical flow and common vector metrics. Results of psychovisual experiments are used to model human visual attention. Results show that the proposed index is competitive with current state-of-the-art quality indices. Additionally, the proposed index is much faster than other indices that also include a temporal distortion measure.
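
A minimal sketch of the motion-difference idea, assuming grayscale uint8 frames and using OpenCV's Farneback optical flow; the paper's exact flow estimator, vector metrics, and saliency weighting are not reproduced here.

```python
import cv2
import numpy as np

def temporal_distortion(ref_frames, dist_frames):
    """Mean magnitude of the difference between reference and distorted
    optical-flow fields, averaged over frame pairs (one possible vector metric)."""
    args = dict(pyr_scale=0.5, levels=3, winsize=15, iterations=3,
                poly_n=5, poly_sigma=1.2, flags=0)
    scores = []
    for k in range(1, len(ref_frames)):
        fr = cv2.calcOpticalFlowFarneback(ref_frames[k - 1], ref_frames[k], None, **args)
        fd = cv2.calcOpticalFlowFarneback(dist_frames[k - 1], dist_frames[k], None, **args)
        scores.append(np.linalg.norm(fr - fd, axis=2).mean())
    return float(np.mean(scores))
```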

Research paper thumbnail of Automatic brain atlas in magnetic resonance image for focal cortical dysplasia patients

Brain tissues including the cerebellum, thalamus, brain stem and striatum have increased gray matter thickness, an important feature of focal cortical dysplasia (FCD) lesions in magnetic resonance images. However, these tissues are not related to FCD lesions. To reduce the false positive regions in FCD detection, we propose to automatically generate a brain atlas for every FCD patient. As a result, the brain tissues are successfully located without overlapping the FCD regions.

Research paper thumbnail of Estimating blur at the brain gray-white matter boundary for FCD detection in MRI

Focal cortical dysplasia (FCD) is a frequent cause of epilepsy and can be detected using brain magnetic resonance imaging (MRI). One important MRI feature of FCD lesions is the blurring of the gray-white matter boundary (GWB), previously modelled by the gradient strength. However, in the absence of additional FCD descriptors, current gradient-based methods may yield false positives. Moreover, they do not explicitly quantify the level of blur, which prevents their direct use in automated FCD detection. To improve the detection of FCD lesions displaying blur, we develop a novel algorithm called iterating local searches on neighborhood (ILSN). The novelty is that it measures the width of the blurry region rather than the gradient strength. The performance of our method is compared with the gradient magnitude method using precision and recall measures. The experimental results, obtained on MRI data of 8 real FCD patients, indicate that our method identifies FCD blurring more accurately than the gradient method.

Research paper thumbnail of Virtual clinical trial: a real prospect? Components of the virtual chain: the observer

Research paper thumbnail of Visual quality assessment of H.264/AVC compressed laparoscopic video

The digital revolution has reached hospital operating rooms, giving rise to new opportunities such as tele-surgery and tele-collaboration. Applications such as minimally invasive and robotic surgery generate large video streams that demand gigabytes of storage and transmission capacity. While lossy data compression can offer large size reductions, high compression levels may significantly reduce image quality. In this study we assess the quality of compressed laparoscopic video using a subjective evaluation study and three objective measures. Test sequences were full high-definition video captures of four laparoscopic surgery procedures acquired on two camera types. Raw sequences were processed with H.264/AVC IPPP-CBR at four compression levels (19.5, 5.5, 2.8, and 1.8 Mbps). Sixteen non-experts and nine laparoscopic surgeons evaluated the subjective quality, and the surgeons additionally rated suitability for surgery, using the Single Stimulus Continuous Quality Evaluation methodology. The VQM, HDR-VDP-2, and PSNR objective measures were evaluated. The results suggest that laparoscopic video may be lossy compressed approximately 30 to 100 times (19.5 to 5.5 Mbps) without sacrificing perceived image quality, potentially enabling real-time streaming of surgical procedures even over wireless networks. Surgeons were sensitive to content but had large variances in quality scores, whereas non-experts judged all scenes similarly and over-estimated the quality of some sequences. There was a high correlation between surgeons' scores for quality and "suitability for surgery". The objective measures had moderate to high correlation with subjective scores, especially when analyzed separately by camera type. Future studies should evaluate surgeons' task performance to determine the clinical implications of conducting surgery with lossy compressed video.

Research paper thumbnail of Image quality assessment demo: industry, university and cross-domain collaboration in practice

Research paper thumbnail of Selecting stimuli parameters for video quality studies based on perceptual similarity distances

This work presents a methodology to optimize the selection of multiple parameter levels of an image acquisition, degradation, or post-processing process applied to stimuli intended for use in a subjective image or video quality assessment (QA) study. It is known that processing parameters (e.g., compression bit-rate) or technical quality measures (e.g., peak signal-to-noise ratio, PSNR) are often non-linearly related to human quality judgment, and the model of either relationship may not be known in advance. Using these approaches to select parameter levels may lead to an inaccurate estimate of the relationship between the parameter and subjective quality judgments, i.e., the system's quality model. To overcome this, we propose a method for modeling the relationship between parameter levels and perceived quality distances using a paired comparison parameter selection procedure in which subjects judge the perceived similarity in quality. Our goal is to enable the selection of evenly sampled parameter levels within the considered quality range for use in a subjective QA study. This approach is tested on two applications: (1) selection of compression levels for a laparoscopic surgery video QA study, and (2) selection of dose levels for an interventional X-ray QA study. Subjective scores, obtained from follow-up single stimulus QA experiments conducted with expert subjects who evaluated the selected bit-rates and dose levels, were roughly equidistant in the perceptual quality space, as intended. These results suggest that a similarity judgment task can help select parameter values corresponding to desired subjective quality levels.
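
As an illustration of the selection goal, the snippet below picks parameter values at equal steps of perceptual distance by inverting an already-estimated parameter-to-distance curve. The curve itself would come from scaling the paired-comparison judgments, which is not shown, and all numbers are made up.

```python
import numpy as np

def pick_equidistant_levels(params, perceptual_dist, n_levels):
    """Choose n_levels parameter values roughly equally spaced in perceptual distance."""
    params = np.asarray(params, dtype=float)
    d = np.asarray(perceptual_dist, dtype=float)
    order = np.argsort(d)                          # np.interp needs increasing x values
    d, params = d[order], params[order]
    targets = np.linspace(d[0], d[-1], n_levels)   # equal steps in perceptual space
    return np.interp(targets, d, params)           # invert the distance curve

# Made-up example: candidate bit-rates (Mbps) and their estimated perceptual distances.
print(pick_equidistant_levels([1.8, 2.8, 5.5, 19.5], [3.0, 2.1, 0.9, 0.0], n_levels=4))
```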

Research paper thumbnail of Computing contrast ratio in medical images using local content information

Rationale: Image quality assessment in medical applications is often based on quantifying the visibility between a structure of interest such as a vessel, termed foreground (F), and its surrounding anatomical background (B), i.e., the contrast ratio. A high-quality image is one that makes diagnostically relevant details distinguishable from the background. Therefore, the computation of the contrast ratio is an important task in automatic medical image quality assessment.

Methods: We estimate the contrast ratio by using Weber's law in local image patches. A small image patch can contain a flat area, a textured area or an edge. Regions with edges are characterized by bimodal histograms representing B and F, and the local contrast ratio can be estimated as the ratio between the mean intensity values of the two modes of the histogram. B and F are identified by computing the mid-value between the modes using the ISODATA algorithm. This process is performed over the entire image with a sliding window, resulting in a contrast ratio per pixel.

Results: We have tested our measure on two general-purpose databases (TID2013 [1] and CSIQ [2]) to demonstrate that the proposed measure agrees with human preferences of quality. Since our measure is specifically designed for measuring contrast, only images exhibiting contrast changes are used. The difference between the maxima of the contrast ratios of the reference and processed images is used as a quality predictor. Human quality scores and our proposed measure are compared with the Pearson correlation coefficient. Our experimental results show that our method is able to accurately predict changes of perceived quality due to contrast decrements (Pearson correlations higher than 90%). Additionally, this method can detect changes in contrast level in interventional X-ray images acquired with varying dose [3]. For instance, the resulting contrast maps demonstrate reduced contrast ratios for vessel edges in X-ray images acquired at lower dose settings, i.e., lower distinguishability from the background, compared to higher dose acquisitions.

Conclusions: We propose a measure that computes the contrast ratio by using Weber's law in local image patches. While the proposed contrast ratio is computationally simple, this approximation of local content has proven useful in measuring quality differences due to contrast decrements in images. In particular, changes in structures of interest due to a low contrast ratio can be detected using the contrast map, making our method potentially useful in X-ray imaging dose control.

References: [1] Ponomarenko N. et al., "A New Color Image Database TID2013: Innovations and Results," Proceedings of ACIVS, 402-413 (2013). [2] Larson E. and Chandler D., "Most apparent distortion: full-reference image quality assessment and the role of strategy," Journal of Electronic Imaging, 19(1), 2010. [3] Kumcu A. et al., "Interventional x-ray image quality measure based on a psychovisual detectability model," MIPS XVI, Ghent, Belgium, 2015.
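
A bare-bones numpy sketch of the per-patch computation described in the Methods paragraph: an ISODATA (iterative intermeans) threshold separates the two histogram modes, and the contrast ratio is taken as the ratio of their mean intensities. The sliding-window loop, the bimodality check, and the exact conventions of the paper (e.g. which mode is treated as foreground) are assumptions here.

```python
import numpy as np

def isodata_threshold(patch, tol=0.5, max_iter=100):
    """Iterative intermeans (ISODATA) threshold of a grayscale patch."""
    t = patch.mean()
    for _ in range(max_iter):
        lo, hi = patch[patch <= t], patch[patch > t]
        if lo.size == 0 or hi.size == 0:
            break
        t_new = 0.5 * (lo.mean() + hi.mean())
        if abs(t_new - t) < tol:
            break
        t = t_new
    return t

def local_contrast_ratio(patch):
    """Ratio of mean intensities of the two modes of an edge patch (Weber-style)."""
    patch = patch.astype(float)
    t = isodata_threshold(patch)
    background = patch[patch <= t].mean()   # darker mode assumed to be background
    foreground = patch[patch > t].mean()    # brighter mode assumed to be foreground
    return foreground / background
```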

Research paper thumbnail of Content-aware video quality assessment: predicting human perception of quality using peak signal to noise ratio and spatial/temporal activity

Since the end-user of video-based systems is often a human observer, prediction of human perception of quality (HPoQ) is an important task for increasing user satisfaction. Despite the large variety of objective video quality measures, one problem is their lack of generalizability. This is mainly due to the strong dependency between HPoQ and video content. Although this problem is well known, few existing methods directly account for the influence of video content on HPoQ. This paper proposes a new method to predict HPoQ by using simple distortion measures and introducing video content features into their computation. Our methodology is based on analyzing the level of spatio-temporal activity and combining HPoQ content-related parameters with simple distortion measures. Our results show that even very simple distortion measures such as PSNR, combined with simple spatio-temporal activity measures, lead to good results. Results on four different public video quality databases show that the proposed methodology, while faster and simpler, is competitive with current state-of-the-art methods, i.e., correlations between objective and subjective assessment are higher than 80% and it is only two times slower than PSNR.
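
The two ingredients named in the abstract can be sketched as follows; the spatial/temporal activity measures are written here in the spirit of the ITU-T P.910 SI/TI indices, and the way the paper combines them with PSNR into a content-aware prediction is not reproduced.

```python
import numpy as np
from scipy.ndimage import sobel

def psnr(ref, dist, peak=255.0):
    """Peak signal-to-noise ratio between two frames (or whole videos)."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(dist, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def spatial_temporal_activity(frames):
    """Spatial activity: max over time of the std of the Sobel gradient magnitude.
    Temporal activity: max over time of the std of successive frame differences."""
    frames = np.asarray(frames, dtype=float)      # shape: (n_frames, height, width)
    si = max(np.hypot(sobel(f, 0), sobel(f, 1)).std() for f in frames)
    ti = max(np.abs(frames[k] - frames[k - 1]).std() for k in range(1, len(frames)))
    return si, ti
```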

Research paper thumbnail of Effects of static and dynamic image noise and background luminance on letter contrast threshold

We performed a pilot psychovisual experiment to determine the contrast threshold and slope of the psychometric function for a target embedded in two levels of static and dynamic external image noise. Sloan letters were presented in a local background surrounded by a global background, both varied over four luminance levels: 58.62, 155.97, 253.50, and 347.47 candela per square meter. Uncorrelated Gaussian noise with normalized standard deviation 0.019 and 0.087 was added to the stimuli. A noise-free stimulus was also tested. No systematic effect of global background luminance was found. The contrast threshold was approximately 1% in the noise-free stimulus and increased monotonically with RMS noise contrast, following a power-law relationship. Thresholds were higher in static noise. The model will be incorporated in a no-reference, task-based medical quality metric for X-ray sequences.
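
The reported power-law relation between contrast threshold and noise contrast, c_t ≈ a·N^b, can be fitted by linear regression in log-log coordinates; the data below are made up for illustration and are not the measurements of the study.

```python
import numpy as np

# Made-up data: RMS noise contrast vs. measured letter contrast threshold.
noise_rms = np.array([0.019, 0.050, 0.087])
threshold = np.array([0.014, 0.022, 0.033])

# Fit threshold = a * noise_rms**b as a straight line in log-log space.
b, log_a = np.polyfit(np.log(noise_rms), np.log(threshold), 1)
a = np.exp(log_a)
print(f"threshold ~= {a:.3f} * N^{b:.2f}")
```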

Research paper thumbnail of Subjective contrast sensitivity function assessment in stereoscopic viewing of Gabor patches

While 3D displays are entering hospitals, no study to date has explored the impact of binocular disparity and 3D inclination on the contrast sensitivity function (CSF) of humans. However, knowledge of the CSF is crucial to properly calibrate medical, especially diagnostic, displays. This study examined the impact of two parameters on the CSF: (1) the depth plane position (0 mm or 171 mm behind the display plane, respectively DP:0 or DP:171), and (2) the 3D inclination (0° or 45° around the horizontal axis of the considered DP), each of these for seven spatial frequencies ranging from 0.4 to 10 cycles per degree (cpd). The stimuli were computer-generated stereoscopic images of a vertically oriented 2D Gabor patch with a given frequency. They were displayed on a 24" full HD stereoscopic display using a patterned retarder. Nine human observers assessed the CSF in a 3-down 1-up staircase experiment. Medians of the measured contrast sensitivities and results of Friedman tests suggest that the 2D CSF as modeled by Barten still holds when a 3D display is used as a 2D visualization system (DP:0). However, the 3D CSF measured at DP:171 was found to differ from the 2D CSF at frequencies below 1 cpd and above 10 cpd.
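
The 3-down 1-up rule used in the experiment converges near the 79%-correct point of the psychometric function. A simplified simulation with a synthetic observer (fixed multiplicative step size, illustrative parameters, not the experimental settings of the study) looks like this.

```python
import numpy as np

def staircase_3down_1up(p_correct, start=0.2, step=0.2, n_trials=150, seed=0):
    """Simulate a 3-down 1-up staircase on stimulus contrast and return the
    mean of the last reversal points as the threshold estimate."""
    rng = np.random.default_rng(seed)
    c, run, direction, reversals = start, 0, 0, []
    for _ in range(n_trials):
        if rng.random() < p_correct(c):        # simulated correct response
            run += 1
            if run == 3:                        # three correct -> make it harder
                run = 0
                if direction == +1:
                    reversals.append(c)
                direction, c = -1, max(c * (1 - step), 1e-4)
        else:                                   # one wrong -> make it easier
            run = 0
            if direction == -1:
                reversals.append(c)
            direction, c = +1, min(c * (1 + step), 1.0)
    return float(np.mean(reversals[-6:]))

# Synthetic 2AFC-like observer with a "true" contrast threshold around 0.02.
estimate = staircase_3down_1up(lambda c: 1.0 - 0.5 * np.exp(-(c / 0.02) ** 2))
print(f"estimated contrast threshold ~ {estimate:.3f}")
```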

Research paper thumbnail of Interventional X-ray quality measure based on a psychovisual detectability model

Research paper thumbnail of Computer aided FCD lesion detection based on T1 MRI data

Focal cortical dysplasia (FCD) is a frequent cause of epilepsy and can be detected using brain magnetic resonance imaging (MRI). FCD lesions in MRI images are characterized by blurring of the gray matter/white matter (GM/WM) junction, cortical thickening, and a hyper-intense signal within the lesional region compared with other cortical regions. However, detecting FCD lesions by visual inspection can be a very difficult task for radiologists because the lesions are very subtle. To assist physicians in detecting FCD lesions more efficiently and to reduce the false positive regions resulting from existing methods, we propose an algorithm for automated FCD detection based on T1 MRI data.

Research paper thumbnail of Impact of 3D visualization conditions on the contrast sensitivity function

Calibration algorithms for medical displays require knowledge of the contrast sensitivity function (CSF). Although 3D image formats are increasingly being employed in the medical market, few studies have been devoted to the assessment of the 3D CSF. To explore the impact of 3D visualization conditions on the CSF, seventeen observers participated in a 3-down 1-up staircase study. Computed medians and results of the Friedman test suggested that the 3D CSF might differ from the 2D CSF. Consequently, new calibration algorithms may need to be implemented; further investigation is ongoing.

Research paper thumbnail of Does binocular disparity impact the contrast sensitivity function?

Research paper thumbnail of Digital image processing of the Ghent altarpiece: supporting the painting's study and conservation treatment

In this article, we show progress in certain image processing techniques that can support the physical restoration of the painting, its art-historical analysis, or both. We show how analysis of the crack patterns could indicate possible areas of overpaint, which may be of great value for the physical restoration campaign after further validation. Next, we explore how digital image inpainting can serve as a simulation for the restoration of paint losses. Finally, we explore how the statistical analysis of relatively simple and frequently recurring objects (such as pearls in this masterpiece) may characterize the consistency of the painter's style and thereby aid both the art-historical interpretation and the physical restoration campaign.

Research paper thumbnail of Contrast sensitivity function in stereoscopic viewing of Gabor patches on a medical polarized 3D stereoscopic display

Research paper thumbnail of Content-aware objective video quality assessment

Since the end-user of video-based systems is often a human observer, prediction of user-perceived video quality (PVQ) is an important task for increasing user satisfaction. Despite the large variety of objective video quality measures (VQMs), their lack of generalizability remains a problem. This is mainly due to the strong dependency between PVQ and video content. Although this problem is well known, few existing VQMs directly account for the influence of video content on PVQ. Recently, we proposed a method to predict PVQ by introducing relevant video content features into the computation of video distortion measures. The method is based on analyzing the levels of spatiotemporal activity in the video and using these as parameters of anthropomorphic video distortion models. We focus on the experimental evaluation of the proposed methodology based on a total of five public databases, four different objective VQMs, and 105 content-related indexes. Additionally, relying on the proposed method, we introduce an approach for selecting the levels of video distortions for the purpose of subjective quality assessment studies. Our results suggest that, when adequately combined with content-related indexes, even very simple distortion measures (e.g., peak signal-to-noise ratio) are able to achieve high performance, i.e., high correlation between the VQM and the PVQ. In particular, we have found that by incorporating video content features it is possible to increase the performance of a VQM by up to 20% relative to its non-content-aware baseline.

Research paper thumbnail of Effect of video lag on laparoscopic surgery: correlation between performance and usability at low latencies

Background: Few telesurgery studies assess the impact of latency on user experience, low latencies are often not studied despite evidence of negative effects, and some studies recruit inexperienced subjects instead of surgeons without evidence that latency affects both groups similarly. Methods: Fifteen trainees and fourteen laparoscopic surgeons conducted two tasks on a laparoscopy home trainer at six latencies below 200 milliseconds (ms). Completion time and usability (perceived awareness of latency, inefficiency, disturbance, adaptability, and impact on patient safety) were measured. Results: A weak correlation between completion time and usability was found. There was a significant deterioration in performance and user experience at 105 ms of added latency. Surgeons were more negatively affected. Conclusion: Objective measures insufficiently describe the impact of latency; therefore, standard measures of user experience should be incorporated in studies. Even low latencies may be detrimental to laparoscopic surgery. Results from non-experts cannot predict the impact of latency on experienced surgeons.

Research paper thumbnail of Channelized Hotelling observers for the assessment of volumetric imaging data sets

Research paper thumbnail of Crack detection and inpainting for virtual restoration of paintings: the case of the Ghent Altarpiece

Research paper thumbnail of Spatiogram features to characterize pearls and beads and other small ball-shaped objects in paintings

Objective characterization of jewels in paintings, especially pearls, has been a long-standing challenge for art historians. The way an artist painted pearls reflects his ability to observe nature and his acquaintance with contemporary optical theory. Moreover, the painterly execution may also be considered an individual characteristic, useful in distinguishing hands. In this contribution, we propose a set of image analysis techniques to analyze and measure spatial characteristics of digital images of pearls and beads, all relying on the so-called spatiogram image representation. Our experimental results demonstrate a good correlation between the new metrics and the visually observed image features, and also capture the degree of realism of the visual appearance in the painting. In that sense, these results set the basis for creating a practical tool for art-historical analysis and attribution, and provide strong motivation for further investigations in this direction.
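
For readers unfamiliar with the representation, a second-order spatiogram augments each histogram bin with the spatial mean and covariance of the pixels that fall in it. The sketch below computes one for a grayscale patch (8-bit values assumed); the specific pearl and bead features derived from it in the paper are not reproduced.

```python
import numpy as np

def spatiogram(gray, n_bins=16):
    """Second-order spatiogram: per intensity bin, the pixel count, the mean
    (x, y) position, and the 2x2 spatial covariance of those pixels."""
    h, w = gray.shape
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    bins = np.clip(gray.ravel().astype(int) * n_bins // 256, 0, n_bins - 1)
    counts = np.zeros(n_bins)
    means = np.zeros((n_bins, 2))
    covs = np.zeros((n_bins, 2, 2))
    for b in range(n_bins):
        pts = coords[bins == b]
        counts[b] = len(pts)
        if len(pts) > 1:
            means[b] = pts.mean(axis=0)
            covs[b] = np.cov(pts, rowvar=False)
    return counts, means, covs
```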

Research paper thumbnail of Image quality assessment: utility, beauty, assessment

Research paper thumbnail of Image quality assessment: utility, beauty, appearance

Karel Deblaere, M.D., for his expertise, his enthusiasm and his dedication in providing feedback on the design and results of our human observer studies of signal detectability, as well as for his own participation in the experiments. Also, it was a pleasure collaborating with Nemanja Lukić and Prof. Miodrag Temerinac on the real-time implementation of the algorithms for blur estimation proposed in this thesis. Finally, I would like to thank Prof. Ingrid Daubechies for introducing our group to the intriguing research area of digital artwork analysis and for her inspiration of this work. As a result, several of my IPI colleagues and I are working as part of a magnificent research team involving Prof. Marc de Mey, Prof. Maximiliaan Martens, Dr. Annick Born, Dr. Emile Gezels, Prof. Ann Dooms, and Bruno Cornelis; I credit them all for the pleasant and motivating collaboration. In addition, I thank Saint Bavo cathedral, Lukas-Art in Flanders and the Dierickfonds for their permission to use for my research the digital images of the artwork by Van Eyck, which are based on negatives from the late Alfons Dierick's photographs (c-04, h-16, 40-15) made available for research purposes to Ghent University. Once more, my sincere thanks to all those I have worked with. My sincere thanks extend to all my co-authors for their intellectual insights and wonderful collaboration. Furthermore, I am grateful to those with whom I interacted during scientific conferences and other meetings. I truly appreciate all your various contributions and insights; they have been invaluable for this dissertation.

Research paper thumbnail of A full reference video quality measure based on motion differences and saliency maps evaluation

2014 International Conference on Computer Vision Theory and Applications (VISAPP), 2014

While subjective assessment is recognized as the most reliable means of quantifying video quality, objective assessment has proven to be a desirable alternative. Existing video quality indices achieve reasonable prediction of human quality scores and predict quality degradation due to spatial distortions well, but are less accurate for degradation due to temporal distortions. In this paper, we propose a perception-based quality index whose novelty is the direct use of motion information both to extract temporal distortions and to model human visual attention. Temporal distortions are computed from optical flow and common vector metrics. Results of psychovisual experiments are used to model human visual attention. Results show that the proposed index is competitive with current state-of-the-art quality indices. Additionally, the proposed index is much faster than other indices that also include a temporal distortion measure.

Research paper thumbnail of Content-aware objective video quality assessment

Journal of Electronic Imaging, 2016

Since the end-user of video-based systems is often a human observer, prediction of user-perceived... more Since the end-user of video-based systems is often a human observer, prediction of user-perceived video quality (PVQ) is an important task for increasing user satisfaction. Despite the large variety of objective video quality measures (VQMs), their lack of generalizability remains a problem. This is mainly due to the strong dependency between PVQ and video content. Although this problem is well known, few existing VQMs directly account for the influence of video content on PVQ. Recently, we proposed a method to predict PVQ by introducing relevant video content features in the computation of video distortion measures. The method is based on analyzing the level of spatiotemporal activity in the video and using the resulting measures as parameters of anthropomorphic video distortion models. Here we focus on the experimental evaluation of the proposed methodology based on a total of five public databases, four different objective VQMs, and 105 content-related indexes. Additionally, relying on the proposed method, we introduce an approach for selecting the levels of video distortions for the purpose of subjective quality assessment studies. Our results suggest that, when adequately combined with content-related indexes, even very simple distortion measures (e.g., peak signal-to-noise ratio) are able to achieve high performance, i.e., high correlation between the VQM and the PVQ. In particular, we have found that by incorporating video content features, it is possible to increase the performance of the VQM by up to 20% relative to its non-content-aware baseline.
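A hedged sketch of the general recipe, combining a simple distortion measure with content features: here ITU-T P.910-style spatial and temporal information (SI/TI) stand in for the paper's content-related indexes, and a least-squares linear fit stands in for the actual parameterization of the distortion models. The function names psnr, si_ti and fit_content_aware_model are hypothetical.

```python
# Sketch: PSNR plus simple content features, fused by a linear model fitted
# against subjective scores. Not the authors' exact model.
import numpy as np
from scipy import ndimage

def psnr(ref, dist, peak=255.0):
    mse = np.mean((ref.astype(float) - dist.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def si_ti(frames):
    """Spatial information (Sobel-gradient spread) and temporal information (frame-difference spread)."""
    def sobel_mag(f):
        gx = ndimage.sobel(f.astype(float), axis=1)
        gy = ndimage.sobel(f.astype(float), axis=0)
        return np.hypot(gx, gy)
    si = max(sobel_mag(f).std() for f in frames)
    ti = max((frames[t].astype(float) - frames[t - 1]).std() for t in range(1, len(frames)))
    return si, ti

def fit_content_aware_model(feature_rows, mos):
    """feature_rows: [[psnr, si, ti], ...] per test video; mos: mean opinion scores."""
    X = np.hstack([np.asarray(feature_rows, float),
                   np.ones((len(feature_rows), 1))])   # intercept column
    coeffs, *_ = np.linalg.lstsq(X, np.asarray(mos, float), rcond=None)
    return coeffs                                       # feature weights + intercept
```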

Research paper thumbnail of Interventional X-ray quality measure based on a psychovisual detectability model

Rationale Classical estimates of diagnostic performance – model observers – typically test subtle... more Rationale Classical estimates of diagnostic performance – model observers – typically test subtle signals at threshold contrast perception. This approach may not be suitable for real-time quality assessment of medical imaging systems in which observers operate at suprathreshold contrast levels, such as interventional X-ray. Automatic dose control mechanisms for these systems adjust patient dose based on pre-determined patient thickness/dose curves and measurement of average gray levels in the acquisition [1], and may overestimate the dose needed to conduct the clinical task on a given patient or region. We present a real-time task-based quality measure that aims to estimate the minimum dose needed to obtain suprathreshold contrasts of target objects (vessels). This measure may be incorporated in a feedback loop for dose reduction while ensuring sufficient image quality for the clinical task. Methods The quality measure was built from two components: (1) a detectability function whic...

Research paper thumbnail of LamGodsBook pizurica with Figs

Research paper thumbnail of Effects of static and dynamic image noise and background luminance on letter contrast threshold

2015 Seventh International Workshop on Quality of Multimedia Experience (QoMEX), 2015

We performed a pilot psychovisual experiment to determine the contrast threshold and slope of the... more We performed a pilot psychovisual experiment to determine the contrast threshold and slope of the psychometric function for a target embedded in two levels of static and dynamic external image noise. Sloan letters were presented in a local background surrounded by a global background, both varied over four luminance levels: 58.62, 155.97, 253.50, and 347.47 cd/m². Uncorrelated Gaussian noise with normalized standard deviations of 0.019 and 0.087 was added to the stimuli. A noise-free stimulus was also tested. No systematic effect of global background luminance was found. The contrast threshold was approximately 1% for the noise-free stimulus and increased monotonically with rms noise contrast, following a power-law relationship. Thresholds were higher in static noise. The model will be incorporated into a no-reference, task-based medical quality metric for x-ray sequences.
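The reported power-law relation between contrast threshold and rms noise contrast can be fitted as sketched below. This is not the authors' analysis; the model form c_t(σ) = c0 + k·σ^γ and the numeric arrays are illustrative placeholders, not data from the study.

```python
# Sketch: fit a power-law threshold-vs-noise model to (noise contrast, threshold) pairs.
import numpy as np
from scipy.optimize import curve_fit

def power_law(sigma, c0, k, gamma):
    # c0: noise-free threshold; k, gamma: power-law scaling and exponent.
    return c0 + k * np.power(sigma, gamma)

noise_rms  = np.array([0.0, 0.019, 0.087])     # placeholder noise levels (not study data)
thresholds = np.array([0.010, 0.014, 0.035])   # placeholder thresholds (not study data)

params, _ = curve_fit(power_law, noise_rms, thresholds,
                      p0=[0.01, 0.3, 1.0],
                      bounds=([0, 0, 0.1], [1, 10, 5]))
c0, k, gamma = params
```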

Research paper thumbnail of Selecting stimuli parameters for video quality assessment studies based on perceptual similarity distances

Image Processing: Algorithms and Systems XIII, 2015

Research paper thumbnail of No-reference blur estimation based on the average cone ratio in the wavelet domain

Multimedia on Mobile Devices 2011; and Multimedia Content Access: Algorithms and Systems V, 2011

We propose a wavelet-based metric of blurriness in digital images named CogACR (Center of grav... more We propose a wavelet-based metric of blurriness in digital images named CogACR (Center of gravity of the Average Cone Ratio). The metric is highly robust to noise and able to distinguish between a wide range of blurriness levels. To automate the CogACR estimation of blur in a no-reference scenario, we introduce a novel method for image classification based on edge content similarity. Our results indicate high accuracy of the CogACR metric for a range of natural scene images distorted with out-of-focus blur. Within the considered range of blur radii of 0 to 10 pixels, varied in steps of 0.25 pixels, the proposed metric estimates the blur radius with an absolute error of at most 1 pixel in 80 to 90% of the images.

Research paper thumbnail of Objectively measuring signal detectability, contrast, blur and noise in medical images using channelized joint observers

Medical Imaging 2013: Image Perception, Observer Performance, and Technology Assessment, 2013

To improve imaging systems and image processing techniques, objective image quality assessment is... more To improve imaging systems and image processing techniques, objective image quality assessment is essential. Model observers, which adopt a task-based quality assessment strategy by estimating signal detectability measures, have proven quite successful to this end. At the same time, costly and time-consuming human observer experiments can be avoided. However, optimizing images in terms of signal detectability alone still allows a lot of freedom in terms of the imaging parameters. More specifically, fixing the signal detectability defines a manifold in the imaging parameter space on which different "possible" solutions reside. In this article, we present measures that can be used to distinguish these possible solutions from each other in terms of image quality factors such as signal blur, noise and signal contrast. Our approach is based on an extended channelized joint observer (CJO) that simultaneously estimates the signal amplitude, scale and detectability. As an application, we use this technique to design k-space trajectories for MRI acquisition. Our technique allows us to compare different spiral trajectories in terms of blur, noise and contrast, even when the signal detectability is estimated to be equal.

Research paper thumbnail of Image blur estimation based on the average cone of ratio in the wavelet domain

Wavelet Applications in Industrial Processing VI, 2009

In this paper, we propose a new algorithm for objective blur estimation using wavelet decompositi... more In this paper, we propose a new algorithm for objective blur estimation using wavelet decomposition. The central idea of our method is to estimate blur as a function of the center of gravity of the average cone ratio (ACR) histogram. The key properties of the ACR are twofold: it is powerful in estimating local edge regularity, and it is nearly insensitive to noise. We use these properties to estimate the blurriness of the image, irrespective of the level of noise. In particular, the center of gravity of the ACR histogram serves as a blur metric. The method is applicable both when a reference image is available and when there is none. The results demonstrate consistent performance of the proposed metric for a wide class of natural images and over a wide range of out-of-focus blur. Moreover, the proposed method shows a remarkable insensitivity to noise compared to other wavelet-domain methods.
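The exact definition of the ACR and of its "cone" of wavelet coefficients follows the paper and is not reproduced here. The heavily hedged sketch below only illustrates the surrounding pipeline: relate detail-coefficient magnitudes across two adjacent scales, histogram the ratios, and report the histogram's center of gravity as a blur indicator. The wavelet, cross-scale alignment, bin count and range are illustrative assumptions.

```python
# Very rough pipeline sketch (not the paper's ACR definition): cross-scale
# ratio of detail-coefficient magnitudes -> histogram -> center of gravity.
import numpy as np
import pywt

def cog_of_scale_ratio_histogram(image, wavelet="db2", eps=1e-6, n_bins=50):
    cA, (h2, v2, d2), (h1, v1, d1) = pywt.wavedec2(image.astype(float), wavelet, level=2)
    fine   = np.abs(h1) + np.abs(v1) + np.abs(d1)      # finest-scale detail energy
    coarse = np.abs(h2) + np.abs(v2) + np.abs(d2)      # next coarser scale
    # Co-locate scales: each coarse coefficient roughly covers a 2x2 block of fine ones.
    coarse_up = np.kron(coarse, np.ones((2, 2)))[:fine.shape[0], :fine.shape[1]]
    ratios = fine / (coarse_up + eps)
    hist, edges = np.histogram(ratios, bins=n_bins, range=(0.0, 2.0), density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    return float(np.sum(centers * hist) / np.sum(hist))  # center of gravity of the histogram
```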

Research paper thumbnail of Optimizing image quality using test signals: Trading off blur, noise and contrast

2012 Fourth International Workshop on Quality of Multimedia Experience, 2012

Objective image quality assessment (QA) is crucial in order to improve imaging systems and image ... more Objective image quality assessment (QA) is crucial in order to improve imaging systems and image processing techniques. In medical imaging, model observers that estimate signal detectability have become widespread and promising as a means to avoid costly human observer experiments. However, signal detectability alone does not give the complete picture: one may also be interested in optimizing several independent quality factors (e.g. contrast, spatial resolution, noise). In recent work, we proposed the channelized joint observer (CJO) to jointly detect and estimate random parametric signals in images, a so-called signal-known-statistically (SKS) detection task. In this paper, we show how the estimation capabilities of the CJO can be exploited to estimate several image quality factors in degraded images, through signal insertion. By fixing the signal detectability, we illustrate how to exploit the trade-offs that exist between the different quality factors. Our method is primarily intended to aid medical image reconstruction techniques and medical display design, although the technique can also be useful in a much wider context.

Research paper thumbnail of <title>Real-time wavelet based blur estimation on cell BE platform</title>

Wavelet Applications in Industrial Processing VII, 2010

We propose a real-time system for blur estimation using wavelet decomposition. The system is base... more We propose a real-time system for blur estimation using wavelet decomposition. The system is based on an emerging multi-core microprocessor architecture (Cell Broadband Engine, Cell BE), known to outperform any available general-purpose or DSP processor in the domain of real-time advanced video processing. We start from a recent wavelet-domain blur estimation algorithm which uses histograms of a local regularity measure called the average cone ratio (ACR). This approach has shown very good potential for assessing the level of blur in an image, yet several important aspects remain to be addressed before the method becomes practically usable. Some of these aspects are explored in our work. Furthermore, we develop an efficient real-time implementation of the novel metric and integrate it into a system that captures live video. The proposed system estimates the blur extent and renders the results to the remote user in real time.

Research paper thumbnail of Channelized Hotelling observers for signal detection in stack-mode reading of volumetric images on medical displays with slow response time

2011 IEEE Nuclear Science Symposium Conference Record, 2011

Volumetric medical images are commonly read in stack-browsing mode. However, previous studies sug... more Volumetric medical images are commonly read in stack-browsing mode. However, previous studies suggest that the slow temporal response of medical liquid crystal displays may degrade diagnostic accuracy (lesion detectability) at browsing rates as low as 10 frames per second (fps). Recently, a multi-slice channelized Hotelling observer (msCHO) model was proposed to estimate detection performance in 3D images. This implementation of the msCHO restricted the analysis to the luminance of a display pixel at the end of the frame time (end-of-frame luminance) while ignoring the luminance transition within the frame time (intra-frame luminance). Such an approach fails to differentiate between, for example, the commonly found case of two displays with different temporal profiles of luminance as long as their end-of-frame luminance levels are the same. In order to overcome this limitation of the msCHO, we propose a new upsampled msCHO (umsCHO) which acts on images obtained using both the intra-frame and the end-of-frame luminance information. The two models are compared on a set of synthesized 3D images for a range of browsing rates (16.67, 25 and 50 fps). Our results demonstrate that, depending on the details of the luminance transition profiles, neglecting the intra-frame luminance information may lead to over- or underestimation of lesion detectability. Therefore, we argue that the umsCHO model is more appropriate than the msCHO for estimating detection performance in stack-browsing mode.
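The umsCHO itself is not reproduced here. The sketch below only illustrates the pre-processing step the abstract motivates: temporally upsampling a displayed slice sequence with a display response model, so that intra-frame luminance becomes available to a subsequent observer model. The first-order exponential response, its time constant, and the function name are assumptions for illustration, not the display model used in the paper.

```python
# Sketch: generate intra-frame luminance samples by relaxing the displayed
# luminance toward each frame's target with a simple first-order response.
import numpy as np

def upsample_with_lcd_response(target_lum, sub_steps=4, tau_frames=0.6):
    """target_lum: (n_frames, H, W) target luminance per displayed frame (assumed model)."""
    alpha = 1.0 - np.exp(-(1.0 / sub_steps) / tau_frames)   # per-sub-step update factor
    current = target_lum[0].astype(float)
    out = []
    for frame in target_lum:
        for _ in range(sub_steps):
            current = current + alpha * (frame - current)    # relax toward the target luminance
            out.append(current.copy())
    return np.stack(out)                                     # (n_frames * sub_steps, H, W)
```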

Research paper thumbnail of Reader behavior in a detection task using single- and multislice image datasets

SPIE Proceedings, 2012

We assess human reader behavior such as reading times and browsing trends in a signal detection e... more We assess human reader behavior such as reading times and browsing trends in a signal detection experiment with synthetic single-slice (ss) and multi-slice (ms) image datasets of varying task complexity, defined in this study as the ratio of the background lump size to the signal width. Three dataset types were generated by inserting one 3D Gaussian target of fixed size into the center of 3D volumes of correlated Gaussian noise with three different kernel sizes. Corresponding signal intensities were determined separately for the three background types using the staircase method, targeting an AUC of 0.7 for the ss datasets. Non-expert human readers were presented with ss datasets (the central slice of the volume) and ms datasets (slice-by-slice viewing in stack-browsing mode). Readers were aware of the target's approximate location within the slice or volume. Readers could scroll freely through the ms datasets at arbitrary speed and direction with no time limit. Experiments were conducted in a controlled viewing environment on a 5MP digital mammography display. AUCs were 0.68-0.73 for ss and 0.82-0.98 for ms datasets. Reading time (ms, ss), the number of repetitions through the stack (ms), and the average number of slices per repetition (ms) were assessed. Browsing speeds were in the range of 1-7 slices per second. Results show that readers spent the shortest time and made the fewest repetitions on TP cases, with FP and FN cases requiring the most attention. The reported trends concur with earlier chest x-ray and mammography studies which report that readers fixate longer on regions subsequently rated incorrectly.
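For readers unfamiliar with adaptive procedures, a generic 1-up/2-down staircase (which converges near 70.7% correct) is sketched below. The study's exact staircase rules and its AUC-0.7 target are not reproduced; run_trial is a hypothetical callback that presents a stimulus at the given intensity and returns True for a correct response.

```python
# Generic 1-up/2-down adaptive staircase sketch (illustrative only).
def staircase(run_trial, start=1.0, step=0.1, n_trials=60, floor=0.0):
    intensity, correct_streak = start, 0
    history = []
    for _ in range(n_trials):
        correct = run_trial(intensity)
        history.append((intensity, correct))
        if correct:
            correct_streak += 1
            if correct_streak == 2:                 # two correct in a row -> make task harder
                intensity = max(floor, intensity - step)
                correct_streak = 0
        else:                                       # one incorrect -> make task easier
            intensity += step
            correct_streak = 0
    return history
```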

Research paper thumbnail of Visual quality assessment of H.264/AVC compressed laparoscopic video

SPIE Proceedings, 2014

The digital revolution has reached hospital operating rooms, giving rise to new opportunities suc... more The digital revolution has reached hospital operating rooms, giving rise to new opportunities such as tele-surgery and tele-collaboration. Applications such as minimally invasive and robotic surgery generate large video streams that demand gigabytes of storage and transmission capacity. While lossy data compression can offer large size reductions, high compression levels may significantly reduce image quality. In this study we assess the quality of compressed laparoscopic video using a subjective evaluation study and three objective measures. Test sequences were full High-Definition video captures of four laparoscopic surgery procedures acquired with two camera types. Raw sequences were processed with H.264/AVC IPPP-CBR at four compression levels (19.5, 5.5, 2.8, and 1.8 Mbps). Sixteen non-experts and nine laparoscopic surgeons evaluated the subjective quality and the suitability for surgery (surgeons only) using the Single Stimulus Continuous Quality Evaluation methodology. The VQM, HDR-VDP-2, and PSNR objective measures were evaluated. The results suggest that laparoscopic video may be lossy compressed approximately 30 to 100 times (19.5 to 5.5 Mbps) without sacrificing perceived image quality, potentially enabling real-time streaming of surgical procedures even over wireless networks. Surgeons were sensitive to content but had large variances in quality scores, whereas non-experts judged all scenes similarly and overestimated the quality of some sequences. There was a high correlation between surgeons' scores for quality and "suitability for surgery". The objective measures had moderate to high correlation with subjective scores, especially when analyzed separately by camera type. Future studies should evaluate surgeons' task performance to determine the clinical implications of conducting surgery with lossy compressed video.

Research paper thumbnail of Virtual Restoration of the Ghent Altarpiece Using Crack Detection and Inpainting

Advanced Concepts for Intelligent Vision Systems, 2011

In this paper, we present a new method for virtual restoration of digitized paintings, with the s... more In this paper, we present a new method for virtual restoration of digitized paintings, with the special focus on the Ghent Altarpiece (1432), one of Belgium's greatest masterpieces. The goal of the work is to remove cracks from the digitized painting, thereby approximating how the painting looked before ageing for nearly 600 years and aiding art-historical and palaeographical analysis. For crack detection, we employ a multiscale morphological approach, which can cope with greatly varying crack thickness as well as with varying crack intensities (from dark to light). Due to the content of the painting (with extremely many fine details) and the complex type of cracks (including inconsistent whitish clouds around them), the available inpainting methods do not provide satisfactory results on many parts of the painting. We show that patch-based methods outperform pixel-based ones, but still leave much room for improvement in this application. We propose a new method for candidate patch selection, which can be combined with different patch-based inpainting methods to improve their performance in crack removal. The results demonstrate improved performance, with fewer artefacts and better preserved fine details.
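As an illustration of the multiscale morphological idea for the dark cracks only, the sketch below combines black top-hat responses over several structuring-element sizes and thresholds the result. Element shapes, sizes and the threshold are illustrative; the paper's handling of bright cracks, whitish halos and post-processing is omitted.

```python
# Sketch: multiscale black top-hat to highlight thin dark structures (cracks)
# of varying thickness, followed by a simple threshold.
import cv2
import numpy as np

def detect_dark_cracks(gray, sizes=(3, 5, 9), thresh=20):
    """gray: 2D uint8 image. Returns a binary crack map (illustrative parameters)."""
    response = np.zeros_like(gray, dtype=float)
    for s in sizes:
        se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (s, s))
        blackhat = cv2.morphologyEx(gray, cv2.MORPH_BLACKHAT, se)
        response = np.maximum(response, blackhat.astype(float))  # keep strongest scale response
    return (response > thresh).astype(np.uint8)
```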

Research paper thumbnail of Crack detection and inpainting for virtual restoration of paintings: The case of the Ghent Altarpiece

Signal Processing, 2013

Digital image processing is proving to be of great help in the analysis and documentation of our ... more Digital image processing is proving to be of great help in the analysis and documentation of our vast cultural heritage. In this paper, we present a new method for the virtual restoration of digitized paintings, with special attention to the Ghent Altarpiece (1432), a large polyptych panel painting of which very few digital reproductions exist. We achieve our objective by detecting and digitally removing cracks. The detection of cracks is particularly difficult because of the varying content features in different parts of the polyptych. Three new detection methods are proposed and combined in order to detect cracks of different sizes as well as varying brightness. Semi-supervised clustering based post-processing is used to remove objects falsely labelled as cracks. For the subsequent inpainting stage, a patch-based technique is applied to handle the noisy nature of the images and to increase the performance of crack removal. We demonstrate the usefulness of our method by means of a case study where the goal is to improve the readability of text depicted in a book, present in one of the panels, in order to assist palaeographers in deciphering it.

Research paper thumbnail of Channelized Hotelling observers for the assessment of volumetric imaging data sets

Journal of the Optical Society of America A, 2011

Research paper thumbnail of Craquelure inpainting in art work

Ruzic, Tijana, Bruno Cornelis, Ljiljana Platisa, Aleksandra Pizurica, Ann Dooms, Maximiliaan Martens, Marc De Mey, and Ingrid Daubechies. 2010. "Craquelure Inpainting in Art Work." In Vision and Material: Interaction Between Art and Science in Jan van Eyck's Time, Abstracts.

Research paper thumbnail of Teaching a computer about shapes in paintings

Ghent University Academic Bibliography.