Anton Bardera | University of Girona

Papers by Anton Bardera

El Centre Excursionista Empordanès: 40 Years Working for the Territory, Landscape, and Natural Environment of the Empordà

Annals de l'Institut d'Estudis Empordanesos, 2007

Video Key Frame Selection

Informational Aesthetics Measures

Springer eBooks, 2014

In 1928, George D. Birkhoff formalized the aesthetic measure of an object as the quotient between order and complexity (see also the "Related Work" sidebar) [1]. From Birkhoff's work, Max Bense [2], together with Abraham Moles [3], developed informational aesthetics (or information-theoretic aesthetics, from the original German term), which defines the concepts of order and complexity from Shannon's notion of information [4]. As Birkhoff stated, formalizing these concepts, which depend on the context, author, observer, and so on, is difficult. Scha and Bod claimed that in spite of these measures' simplicity, "if we integrate them with other ideas from perceptual psychology and computational linguistics, they may in fact constitute a starting point for the development of more adequate formal models" [5]. The creative process generally produces order from disorder. Bense proposed a general schema that characterizes artistic production by the transition from the repertoire to the final product. He assigned a complexity to the repertoire, or palette, and an order to the distribution of its elements on the artistic product. This article, an extended and revised version of earlier work [6], presents a set of measures that conceptualizes Birkhoff's aesthetic measure from an informational viewpoint. These measures describe complementary aspects of the aesthetic experience and are normalized for comparison. We show the measures' behavior using three sets of paintings representing different styles that cover a representative feature range: from randomness to order. Our experiments show that both global and compositional measures extend Birkhoff's measure and help us understand and quantify the creative process.
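For readers unfamiliar with Birkhoff's proposal, the measure has a compact form; the informational reading sketched below is illustrative and does not reproduce the article's exact definitions:

```latex
% Birkhoff's aesthetic measure: order over complexity
M = \frac{O}{C}
% One illustrative informational reading (not the paper's exact
% formulation): complexity as the palette entropy H(X), and order as
% the redundancy of the composition relative to the maximum entropy:
O = H_{\max} - H(X), \qquad C = H(X)
```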

Tutorial on information theory in visualization

A semi-automatic and an automatic segmentation algorithm to remove the internal organs from live pig CT images

Computers and Electronics in Agriculture, Aug 1, 2017

Removal of internal organs such as lungs, liver, and kidneys is a key step required to compute the lean meat percentage from Computed Tomography (CT) scans of live animals. In this paper, we propose two segmentation techniques to remove these organs, focusing on pigs. The first method is semi-automatic: it starts with the first CT slice and a manually defined mask of the internal organs, and then applies a four-step iterative process that computes the mask of each subsequent CT slice using the information of the previous one. To find the best boundary, it uses a dynamic-programming-based approach. At each iteration, the user can check the correctness of the newly computed mask. The second method is fully automatic and segments each slice individually using distance maps and morphological operators, such as dilation. It is composed of three main steps, which detect the pig's torso, pre-classify the voxels into different tissues, and segment the internal organs using the information of this classification. Although it has some parameters, no user interaction is required to obtain the results. The proposed approaches have been tested on CT data sets from 9 pigs and compared with a manual segmentation. To evaluate the results, the precision, recall, and F-score measures have been used. Our tests show that both methods perform very well according to their average F-scores. We also analyse how the accuracy of the semi-automatic approach increases when more user interaction is applied, and, for the automatic approach, we evaluate how the results depend on the algorithm's parameters. If robustness suffices and high accuracy is not required, the automatic algorithm can segment a whole pig in less than 50 s; if the user wants to control the level of accuracy, the semi-automatic algorithm is preferred. Both methods reduce the time needed to segment the internal organs of a pig from hours (manual segmentation) to minutes or seconds.
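To make the automatic method's pre-classification step concrete, here is a minimal Python sketch of threshold-based tissue labeling followed by morphological dilation; the Hounsfield-unit ranges and the dilation amount are hypothetical placeholders, not the values used in the paper:

```python
import numpy as np
from scipy import ndimage

# Hypothetical HU thresholds -- the paper's actual values are not given here.
FAT_RANGE = (-120, -10)
MUSCLE_RANGE = (10, 100)
BONE_MIN = 200

def preclassify_slice(ct_slice: np.ndarray) -> dict:
    """Pre-classify the voxels of one CT slice into rough tissue masks,
    then dilate the bone mask, mimicking the morphological step of the
    automatic method (a sketch, not the published algorithm)."""
    fat = (ct_slice >= FAT_RANGE[0]) & (ct_slice <= FAT_RANGE[1])
    muscle = (ct_slice >= MUSCLE_RANGE[0]) & (ct_slice <= MUSCLE_RANGE[1])
    bone = ct_slice >= BONE_MIN
    # Dilation closes small gaps so the rib cage forms a connected barrier
    # between the internal organs and the outer tissue layers.
    bone_dilated = ndimage.binary_dilation(bone, iterations=3)
    return {"fat": fat, "muscle": muscle, "bone": bone_dilated}
```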

Image Segmentation

Information Theory Tools for Image Processing

Springer eBooks, 2014

This series will present lectures on research and development in computer graphics and geometric modeling for an audience of professional developers, researchers, and advanced students. Topics of interest include Animation, Visualization, Special Effects, Game Design, Image Techniques, Computational Geometry, Modeling, Rendering, and others of interest to the graphics system developer or researcher.

Basic Concepts of Information Theory

Normalized similarity measures for medical image registration

Proceedings of SPIE, May 12, 2004

Two new similarity measures for rigid image registration, based on the normalization of Jensen's difference applied to Rényi and Tsallis-Havrda-Charvát entropies, are introduced. One measure is normalized by the first term of Jensen's difference, which in our proposal coincides with the marginal entropy, and the other by the joint entropy. These measures can be seen as an extension of two measures successfully applied in medical image registration: mutual information and normalized mutual information. Experiments with various registration modalities show that the new similarity measures are more robust than normalized mutual information for some modalities and for a certain range of the entropy parameter. Moreover, some improvement in accuracy can be obtained over a different range of this parameter.
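For reference, the quantities involved can be written out; the normalization choices below follow the abstract's description schematically and are not the paper's exact formulation:

```latex
% Rényi entropy of a distribution p, with parameter \alpha > 0, \alpha \neq 1:
R_\alpha(p) = \frac{1}{1-\alpha} \log \sum_i p_i^{\alpha}
% Jensen's difference over distributions p_1, \dots, p_n with weights w_i:
J_\alpha(p_1, \dots, p_n) = R_\alpha\!\Big(\sum_i w_i\, p_i\Big) - \sum_i w_i\, R_\alpha(p_i)
% The two proposed measures normalize J_\alpha either by its first term
% (the marginal entropy in the registration setting) or by the joint
% entropy of the image pair.
```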

Reliability of the ABC/2 Method in Determining Acute Infarct Volume

Journal of Neuroimaging, Mar 29, 2011

BACKGROUND AND PURPOSE: Infarct volume is used as a surrogate outcome measure in clinical trials of therapies for acute ischemic stroke. ABC/2 is a fast volumetric method, but its accuracy remains to be determined. We aimed to study the accuracy and reproducibility of ABC/2 in determining acute infarct volume with diffusion-weighted imaging. METHODS: We studied 86 consecutive patients with acute ischemic stroke. Three blinded observers determined volume with the ABC/2 method, and the results were compared with those of the manual planimetric method. RESULTS: The ABC/2 technique overestimated infarct volume by a median false increase (ABC/2 volume minus planimetric volume) of 7.33 cm³ (1.29, 22.17), representing a median ratio of 162.56% relative to the gold-standard value (ABC/2 volume over planimetric volume; 121.70%, 248.52%). In each method, the interrater reliability was excellent: the intraclass correlations were .992 and .985 for the ABC/2 technique and planimetric method, respectively. CONCLUSIONS: ABC/2 is a volumetric method with clinical value, but it consistently overestimates the real infarct volume.
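For context, the ABC/2 estimate evaluated here is the standard ellipsoid approximation:

```latex
% A: largest lesion diameter; B: largest diameter perpendicular to A on
% the same slice; C: craniocaudal extent (number of slices showing the
% lesion multiplied by the slice thickness).
V_{\mathrm{ABC}/2} \approx \frac{A \cdot B \cdot C}{2}
```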

Semi-automated method for brain hematoma and edema quantification using computed tomography

Computerized Medical Imaging and Graphics, Jun 1, 2009

In this paper, a semi-automated method for brain hematoma and edema segmentation and volume measurement using computed tomography imaging is presented. The method combines a region-growing approach to segment the hematoma with a level-set segmentation technique to segment the edema. The main novelty of this method is the strategy applied to define the propagation function required by the level-set approach. To evaluate the method, 18 patients with brain hematoma and edema of different sizes, shapes, and locations were selected. The obtained results demonstrate that the proposed approach provides objective and reproducible segmentations that are similar to the results obtained manually. Moreover, the processing time is 4 minutes, compared with the 10 minutes required for manual segmentation.
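As an illustration of the region-growing half of the pipeline, here is a generic 2D sketch in Python; the intensity-tolerance criterion and 4-connectivity are simplifying assumptions, not the paper's actual growing rule or its level-set propagation function:

```python
from collections import deque
import numpy as np

def region_grow(img: np.ndarray, seed: tuple, tol: float) -> np.ndarray:
    """Grow a region from `seed`, adding 4-connected neighbours whose
    intensity stays within `tol` of the seed intensity (a generic sketch,
    not the published criterion)."""
    mask = np.zeros(img.shape, dtype=bool)
    seed_val = float(img[seed])
    queue = deque([seed])
    mask[seed] = True
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]
                    and not mask[ny, nx]
                    and abs(float(img[ny, nx]) - seed_val) <= tol):
                mask[ny, nx] = True
                queue.append((ny, nx))
    return mask
```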

A Deep-Learning Based Solution to Automatically Control Closure and Seal of Pizza Packages

IEEE Access, 2021

Closure and seal inspection is one of the key steps in the quality control of pizza packages. It is generally carried out by human operators who cannot inspect all the packages due to cadence restrictions. To overcome this limitation, a computer vision system that automatically performs 100% inline seal and closure inspection is proposed. In this paper, after evaluating pizza package features, the manual quality control procedure, and the packaging machines of a real industrial scenario, a detailed description of the hardware and software components of the proposed system, as well as the main design decisions, is presented. Regarding the hardware, line-scan technology and hyperspectral imaging have been considered to ensure that all relevant information can be acquired independently of the pizza brand, topping, and film features. Regarding the software, the system applies a three-phase strategy: first, it applies a set of basic rejection controls; second, it identifies the sealing region; and third, it prepares the data for prediction through the classification of the pizzas using a deep learning network. This network is one of the key software elements and was selected after comparing a commercial off-the-shelf architecture (pretrained-dl-classifier-resnet50 from MVTec Halcon) with a custom-developed one (ResNet18), both designed to automate the accept/reject classification of pizza packages. To train the networks, a classification of pizza package defects, focusing on sealing and closure, and an image-based method able to detect them automatically have been proposed. The system has been tested in the laboratory and under real industrial conditions, comparing it with the manual scenario and considering three pizza brands with two toppings per brand. The evaluation shows that ResNet18 achieves the best results, with mean, maximum, and minimum precision values of 99.87%, 99.95%, and 99.74%, respectively. Moreover, our system achieves twice the throughput rate of the manual scenario, with the guarantee that all pizzas are evaluated, which is not possible in the manual scenario due to operator fatigue. The proposed solution can be easily adapted to similar contexts, even considering packages with other shapes.
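A minimal sketch of the custom accept/reject classifier, assuming PyTorch and torchvision; the paper's preprocessing, data, and training schedule are not reproduced here, and the hyperparameters shown are placeholders:

```python
import torch
import torch.nn as nn
from torchvision import models

# Fine-tune an ImageNet-pretrained ResNet18 for two classes
# (accept / reject); the learning rate below is a placeholder.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised step on a batch of package images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```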

Some Order Preserving Inequalities for Cross Entropy and Kullback–Leibler Divergence

Entropy, Dec 12, 2018

Cross entropy and Kullback-Leibler (K-L) divergence are fundamental quantities of information theory, and they are widely used in many fields. Since cross entropy is the negated logarithm of likelihood, minimizing cross entropy is equivalent to maximizing likelihood, and thus, cross entropy is applied for optimization in machine learning. K-L divergence also stands independently as a commonly used metric for measuring the difference between two distributions. In this paper, we introduce new inequalities regarding cross entropy and K-L divergence by using the fact that cross entropy is the negated logarithm of the weighted geometric mean. We first apply the well-known rearrangement inequality, followed by a recent theorem on weighted Kolmogorov means, and, finally, we introduce a new theorem that directly applies to inequalities between K-L divergences. To illustrate our results, we show numerical examples of distributions.
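The identity the paper builds on is easy to state: cross entropy is the negated logarithm of the weighted geometric mean of the q_i with weights p_i, and K-L divergence is its gap to the entropy of p:

```latex
H(p, q) = -\sum_i p_i \log q_i = -\log \prod_i q_i^{\,p_i},
\qquad
D_{\mathrm{KL}}(p \,\|\, q) = H(p, q) - H(p)
```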

Magnetic resonance imaging biomarkers of ischemic stroke: Criteria for the validation of primary imaging biomarkers

Drug News & Perspectives, 2009

Ischemic stroke is associated with a high rate of disability and death. Establishing valid biomarkers could help accelerate the approval of promising new therapies for stroke. Whereas many serum biomarkers have been evaluated, possible imaging biomarkers of stroke lack validation. Magnetic resonance imaging (MRI) is a very sensitive technique to study acute stroke, and MRI parameters have been established to assess the outcome of acute stroke. This review reassesses the criteria for the validation of MRI biomarkers of acute ischemic stroke (MRI-BAS). Seven criteria were used to review the validity of the main MRI-BAS: vascular status, lesion volume, reversibility on diffusion-weighted imaging, perfusion alteration, penumbra studied with diffusion-perfusion mismatch, clinical-diffusion mismatch, diffusion-angiography mismatch, and hemorrhagic transformation. We analyzed the definitions of these biomarkers and the extent to which each fulfills the criteria for validation and found that few MRI-BAS have been fully validated. Further studies should help to improve the validation of current MRI-BAS and develop new biomarkers.

Image registration by compression

Information Sciences, Apr 1, 2010

Image registration consists of finding the transformation that brings one image into the best possible spatial correspondence with another image. In this paper, we present a new framework for image registration based on compression. The basic idea underlying our approach is the conjecture that two images are correctly registered when we can maximally compress one image given the information in the other. The contribution of this paper is twofold. First, we show that image registration can be formulated as a compression problem. Second, we demonstrate the good performance of the similarity metric, introduced by Li et al., in image registration. Two different approaches for the computation of this similarity metric are described: the Kolmogorov version, computed using standard real-world compressors, and the Shannon version, calculated from an estimation of the entropy rate of the images.
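A minimal sketch of the Kolmogorov (compressor-based) version, using the normalized compression distance with zlib as the real-world compressor; how the images are serialized to bytes and which compressor performs best are choices the paper explores and this sketch does not:

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance of Li et al., approximating
    Kolmogorov complexity with a real-world compressor (here zlib).
    Smaller values mean the images share more information, so a
    registration search would minimize this over transformations."""
    cx = len(zlib.compress(x))
    cy = len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)
```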

Information Theory Tools for Visualization

A K Peters/CRC Press eBooks, Sep 19, 2016

This book explores information theory (IT) tools, which have become state of the art for solving and better understanding many problems in visualization. The book covers all relevant literature to date and is the first book solely devoted to this subject, written by leading experts in the field.

Image Registration

Information theory in visualization

Eurographics, May 9, 2016

In this half-day tutorial, we review a variety of applications of information theory in visualization. The holistic nature of information-theoretic reasoning has enabled many such applications, ranging from light placement to view selection, from feature highlighting to transfer function design, from data fusion to visual multiplexing, and so on. Perhaps the most exciting application is the potential for information theory to underpin the discipline of visualization, for example, mathematically confirming the benefit of visualization in data intelligence.

Information Theory Basics

SHNN-CAD+: An Improvement on SHNN-CAD for Adaptive Online Trajectory Anomaly Detection

Sensors, Dec 27, 2018

To perform anomaly detection for trajectory data, we study the Sequential Hausdorff Nearest-Neighbor Conformal Anomaly Detector (SHNN-CAD) approach and propose an enhanced version called SHNN-CAD+. SHNN-CAD was introduced based on the theory of conformal prediction, dealing with the problem of online detection. Unlike most related approaches, which require several unintuitive parameters, SHNN-CAD has the advantage of being parameter-light, which enables the easy reproduction of experiments. We propose to adaptively determine the anomaly threshold during the online detection procedure instead of predefining it without any prior knowledge, which makes the algorithm more usable in practical applications. We also present a modified Hausdorff distance measure that takes the direction difference into account and reduces the computational complexity. In addition, the anomaly detection is made more flexible and accurate via a redo strategy. Extensive experiments on both real-world and synthetic data show that SHNN-CAD+ outperforms SHNN-CAD with regard to accuracy and running time.
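For orientation, here is the classical symmetric Hausdorff distance between two point trajectories in NumPy; the paper's modified measure additionally accounts for direction differences and reduces the computational cost, which this baseline sketch omits:

```python
import numpy as np

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Classical (symmetric) Hausdorff distance between two trajectories
    given as (n, 2) arrays of points: the larger of the two directed
    distances max_a min_b and max_b min_a."""
    # Pairwise Euclidean distances between all points of a and b.
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return max(d.min(axis=1).max(), d.min(axis=0).max())
```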
