Alejandro Moreo | Consiglio Nazionale delle Ricerche (CNR)

Talks by Alejandro Moreo

Silvia Corbara and Mirko Tavoni (Università di Pisa) – Alejandro Moreo and Fabrizio Sebastiani (ISTI-CNR), L'Epistola a Cangrande al vaglio della Authorship Verification

Silvia Corbara (Università di Pisa) - Alejandro Moreo (ISTI CNR) - Fabrizio Sebastiani (ISTI CNR) - Mirko Tavoni (Università di Pisa), L'Epistola a Cangrande al vaglio della Authorship Verification

In Nuove inchieste sull'Epistola a Cangrande. Study seminar, Pisa, 18 December 2018, Palazzo Matteucci.

Papers by Alejandro Moreo

The Quantification Landscape

The Springer International Series on Information Retrieval, 2023

Applications of Quantification

The Springer International Series on Information Retrieval, 2023

Ordinal Quantification Through Regularization

Lecture Notes in Computer Science, 2023

Methods for Learning to Quantify

The Springer International Series on Information Retrieval, 2023

NoR-VDPNet++: Real-Time No-Reference Image Quality Metrics

IEEE Access

Efficiency and efficacy are desirable properties for any evaluation metric having to do with Standard Dynamic Range (SDR) imaging or with High Dynamic Range (HDR) imaging. However, it is a daunting task to satisfy both properties simultaneously. On the one side, existing evaluation metrics like HDR-VDP 2.2 can accurately mimic the Human Visual System (HVS), but this typically comes at a very high computational cost. On the other side, computationally cheaper alternatives (e.g., PSNR, MSE, etc.) fail to capture many crucial aspects of the HVS. In this work, we present NoR-VDPNet++, a deep learning architecture for converting accurate full-reference metrics into no-reference metrics, thus reducing the computational burden. We show that NoR-VDPNet++ can be successfully employed in different application scenarios. Index terms: Deep learning, HDR imaging, objective metrics, no-reference.
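
The core idea lends itself to a compact illustration. Below is a minimal sketch, assuming PyTorch, of how an expensive full-reference metric can be distilled into a cheap no-reference CNN: the slow metric (e.g., HDR-VDP 2.2) is run once, offline, to produce target scores, and a small network learns to regress those scores from the distorted image alone. The architecture and all names are illustrative assumptions; this is not the NoR-VDPNet++ code.

```python
# Minimal distillation sketch (illustrative; NOT the NoR-VDPNet++ code).
# `loader` is assumed to yield (distorted_images, scores), where `scores`
# were precomputed offline with the expensive full-reference metric.
import torch
from torch import nn

class NoReferenceSurrogate(nn.Module):
    """Tiny CNN that regresses a single quality score from an image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 1),
        )

    def forward(self, x):                  # x: (batch, 3, H, W)
        return self.net(x).squeeze(-1)     # (batch,) predicted scores

def distill(model, loader, epochs=10, lr=1e-3):
    """Regress the precomputed metric scores with a plain MSE loss."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, scores in loader:
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(images), scores)
            loss.backward()
            opt.step()
    return model
```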

Multi-Label Quantification

arXiv (Cornell University), Nov 15, 2022

Quantification, variously called supervised prevalence estimation or learning to quantify, is the supervised learning task of generating predictors of the relative frequencies (a.k.a. prevalence values) of the classes of interest in unlabelled data samples. While many quantification methods have been proposed in the past for binary problems and, to a lesser extent, single-label multiclass problems, the multi-label setting (i.e., the scenario in which the classes of interest are not mutually exclusive) remains by and large unexplored. A straightforward solution to the multi-label quantification problem could simply consist of recasting the problem as a set of independent binary quantification problems. Such a solution is simple but naïve, since the independence assumption upon which it rests is, in most cases, not satisfied. In these cases, knowing the relative frequency of one class could be of help in determining the prevalence of other related classes. We propose the first truly multi-label quantification methods, i.e., methods for inferring estimators of class prevalence values that strive to leverage the stochastic dependencies among the classes of interest in order to predict their relative frequencies more accurately. We show empirical evidence that natively multi-label solutions outperform the naïve approaches by a large margin. The code to reproduce all our experiments is available online.
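
To make the naïve baseline concrete, here is a minimal sketch, assuming scikit-learn and a binary indicator matrix of labels, of multi-label quantification recast as independent binary classify-and-count problems. It ignores the dependencies among classes, which is precisely the limitation the paper addresses; the names are illustrative, not the authors' code.

```python
# The naive baseline described above: one independent binary "classify and
# count" per class, ignoring class dependencies.
import numpy as np
from sklearn.linear_model import LogisticRegression

def naive_multilabel_quantify(X_train, Y_train, X_unlabelled):
    """Y_train: (n_docs, n_classes) 0/1 indicator matrix."""
    n_classes = Y_train.shape[1]
    prevalences = np.zeros(n_classes)
    for j in range(n_classes):
        clf = LogisticRegression(max_iter=1000).fit(X_train, Y_train[:, j])
        # fraction of unlabelled docs predicted positive for class j
        prevalences[j] = clf.predict(X_unlabelled).mean()
    return prevalences
```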

Revisiting Distributional Correspondence Indexing: A Python Reimplementation and New Experiments

arXiv (Cornell University), Oct 19, 2018

This paper introduces PyDCI, a new implementation of Distributional Correspondence Indexing (DCI) written in Python. DCI is a transfer learning method for cross-domain and cross-lingual text classification for which we had provided an implementation (here called JaDCI) built on top of JaTeCS, a Java framework for text classification. PyDCI is a stand-alone version of DCI that exploits scikit-learn and the SciPy stack. We here report on new experiments that we have carried out in order to test PyDCI, and in which we use as baselines new high-performing methods that have appeared after DCI was originally proposed. These experiments show that, thanks to a few subtle ways in which we have improved DCI, PyDCI outperforms both JaDCI and the above-mentioned high-performing methods, and delivers the best known results on the two popular benchmarks on which we had tested DCI, i.e., MultiDomainSentiment (a.k.a. MDS, for cross-domain adaptation) and Webis-CLS-10 (for cross-lingual adaptation). PyDCI, together with the code needed to replicate our experiments, is available at https://github.com/AlexMoreo/pydci .
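
The gist of DCI can be sketched in a few lines. The fragment below, assuming NumPy and a precomputed term-document matrix per domain, uses cosine similarity as the distributional correspondence function (one of several options in the DCI literature): each term is profiled by its correspondence with a set of shared pivot terms, computed separately in each domain, so that documents from both domains land in a comparable pivot-based space. This is a simplified illustration of the technique, not the PyDCI API.

```python
# Simplified DCI sketch with cosine as the correspondence function.
import numpy as np

def term_profiles(X, pivot_idx):
    """X: (n_docs, n_terms) term-document count matrix for ONE domain.
    Returns an (n_terms, n_pivots) matrix of cosine correspondences."""
    T = X.T.astype(float)                            # term occurrence vectors
    T = T / (np.linalg.norm(T, axis=1, keepdims=True) + 1e-12)
    return T @ T[pivot_idx].T                        # cosine(term, pivot)

def project_documents(X, profiles):
    """Document vector = average of its terms' pivot profiles."""
    lengths = X.sum(axis=1, keepdims=True) + 1e-12
    return (X @ profiles) / lengths
```

In use, one would compute `term_profiles` separately on the source and target matrices (with the same pivot list), train a classifier on the projected source documents, and apply it to the projected target documents.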

Syllabic Quantity: Rhythmic Clues for Latin Authorship

A Detailed Overview of LeQua@CLEF 2022: Learning to Quantify

Zenodo (CERN European Organization for Nuclear Research), Sep 5, 2022

LeQua 2022 is a new lab for the evaluation of methods for "learning to quantify" in textual datasets, i.e., for training predictors of the relative frequencies of the classes of interest Y = {y1, ..., yn} in sets of unlabelled textual documents. While these predictions could be easily achieved by first classifying all documents via a text classifier and then counting the numbers of documents assigned to the classes, a growing body of literature has shown this approach to be suboptimal, and has proposed better methods. The goal of this lab is to provide a setting for the comparative evaluation of methods for learning to quantify, both in the binary setting and in the single-label multiclass setting; this is the first time that an evaluation exercise solely dedicated to quantification has been organized. For both the binary setting and the single-label multiclass setting, data were provided to participants both in ready-made vector form and in raw document form. In this overview article we describe the structure of the lab, we report the results obtained by the participants in the four proposed tasks and subtasks, and we comment on the lessons that can be learned from these results.
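
For reference, the "classify and count" approach that the abstract describes as suboptimal takes only a few lines. A minimal sketch, assuming scikit-learn and integer class labels 0..n-1 (illustrative names, not the lab's code):

```python
# "Classify and count": the simple but suboptimal baseline mentioned above.
import numpy as np
from sklearn.linear_model import LogisticRegression

def classify_and_count(X_train, y_train, X_sample, n_classes):
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    predicted = clf.predict(X_sample)
    counts = np.bincount(predicted, minlength=n_classes)
    return counts / len(predicted)  # estimated prevalence per class
```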

AIMH Research Activities 2021

HAL (Le Centre pour la Communication Scientifique Directe), 2020

The Artificial Intelligence for Media and Humanities laboratory (AIMH) has the mission to investigate and advance the state of the art in the Artificial Intelligence field, specifically addressing applications to digital media and digital humanities, and also taking into account issues related to scalability. This report summarizes the 2021 activities of the research group.

A Concise Overview of LeQua@CLEF 2022: Learning to Quantify

Lecture Notes in Computer Science, 2022

LeQua 2022 is a new lab for the evaluation of methods for "learning to quantify" in textual datasets, i.e., for training predictors of the relative frequencies of the classes of interest Y = {y1, ..., yn} in sets of unlabelled textual documents. While these predictions could be easily achieved by first classifying all documents via a text classifier and then counting the numbers of documents assigned to the classes, a growing body of literature has shown this approach to be suboptimal, and has proposed better methods. The goal of this lab is to provide a setting for the comparative evaluation of methods for learning to quantify, both in the binary setting and in the single-label multiclass setting; this is the first time that an evaluation exercise solely dedicated to quantification has been organized. For both the binary setting and the single-label multiclass setting, data were provided to participants both in ready-made vector form and in raw document form. In this overview article we describe the structure of the lab, we report the results obtained by the participants in the four proposed tasks and subtasks, and we comment on the lessons that can be learned from these results.

QuaPy

QuaPy is an open-source framework for Quantification (a.k.a. Supervised Prevalence Estimation) written in Python. QuaPy is built around the concept of the data sample, and provides implementations of the most important concepts in the quantification literature, such as the main quantification baselines, many advanced quantification methods, quantification-oriented model selection, and many evaluation measures and protocols used for evaluating quantification methods. QuaPy also integrates commonly used datasets and offers visualization tools that facilitate the analysis and interpretation of results.
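
As a rough idea of how such a framework is typically used, here is a hypothetical end-to-end snippet. The module, function, and dataset names below are assumptions based on the description above and on common QuaPy conventions; the actual API in the repository may differ.

```python
# Hypothetical usage sketch; names may differ from the actual QuaPy API.
import quapy as qp
from sklearn.linear_model import LogisticRegression

dataset = qp.datasets.fetch_reviews('hp', tfidf=True)     # a built-in dataset

model = qp.method.aggregative.ACC(LogisticRegression())   # adjusted classify & count
model.fit(dataset.training)

estim_prevalence = model.quantify(dataset.test.instances)
true_prevalence = dataset.test.prevalence()
print(qp.error.mae(true_prevalence, estim_prevalence))    # mean absolute error
```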

LeQua@CLEF2022: Learning to Quantify

Lecture Notes in Computer Science, 2022

LeQua 2022 is a new lab for the evaluation of methods for "learning to quantify" in textual datasets, i.e., for training predictors of the relative frequencies of the classes of interest in sets of unlabelled textual documents. While these predictions could be easily achieved by first classifying all documents via a text classifier and then counting the numbers of documents assigned to the classes, a growing body of literature has shown this approach to be suboptimal, and has proposed better methods. The goal of this lab is to provide a setting for the comparative evaluation of methods for learning to quantify, both in the binary setting and in the single-label multiclass setting. For each such setting we provide data either in ready-made vector form or in raw document form.

CLEF 2022: Learning to Quantify

LeQua 2022 is a new lab for the evaluation of methods for "learning to quantify" in textual datasets, i.e., for training predictors of the relative frequencies of the classes of interest in sets of unlabelled textual documents. While these predictions could be easily achieved by first classifying all documents via a text classifier and then counting the numbers of documents assigned to the classes, a growing body of literature has shown this approach to be suboptimal, and has proposed better methods. The goal of this lab is to provide a setting for the comparative evaluation of methods for learning to quantify, both in the binary setting and in the single-label multiclass setting. For each such setting we provide data either in ready-made vector form or in raw document form. In a number of applications involving classification, the final goal is not determining which class (or classes) individual unlabelled items belong to, but estimating the prevalence (or "relative frequency") of the classes of interest in the unlabelled data.

Sentiment Quantification Datasets

These files contain the tokenized reviews used for quantification experiments on text. IMDB is derived from the IMDB dataset of Maas et al., 2011 (https://ai.stanford.edu/~amaas/data/sentiment/). The version of the IMDB content in this dataset has undergone minimal processing with respect to the original dataset; it is nevertheless provided to ensure reproducibility of experiments. The HP and Kindle datasets are Amazon reviews collected by the authors; the reviews are about the books in the Harry Potter series and about the Kindle e-book reader, respectively.

Funnelling: A New Ensemble Method for Heterogeneous Transfer Learning and its Application to Cross-Lingual Text Classification

Cross-lingual Text Classification (CLC) consists of automatically classifying, according to a common set C of classes, documents each written in one of a set of languages L, and doing so more accurately than when naively classifying each document via its corresponding language-specific classifier. In order to obtain an increase in the classification accuracy for a given language, the system thus needs to also leverage the training examples written in the other languages. We tackle multilabel CLC via funnelling, a new ensemble learning method that we propose here. Funnelling consists of generating a two-tier classification system where all documents, irrespective of language, are classified by the same (2nd-tier) classifier. For this classifier, all documents are represented in a common, language-independent feature space consisting of the posterior probabilities generated by 1st-tier, language-dependent classifiers. This allows the classification of all test documents, of any language.
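
The two-tier structure can be sketched compactly. Below is a simplified single-label version, assuming scikit-learn, in which calibrated per-language classifiers emit posterior probabilities that become the shared feature space of a single 2nd-tier classifier. All names are illustrative, not the authors' code; the paper additionally trains the 2nd tier on cross-validated posteriors and handles the multilabel case, both of which are skipped here.

```python
# Simplified funnelling sketch. Assumes every language uses the same set of
# class labels, so that posterior-probability columns align across languages.
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.svm import LinearSVC

def train_funnelling(docs_by_lang, labels_by_lang):
    """docs_by_lang: {lang: (n_docs, n_feats) matrix}; feature spaces may
    differ across languages. labels_by_lang: {lang: (n_docs,) labels}."""
    tier1, Z, y = {}, [], []
    for lang, X in docs_by_lang.items():
        clf = CalibratedClassifierCV(LinearSVC())   # calibrated posteriors
        clf.fit(X, labels_by_lang[lang])
        tier1[lang] = clf
        Z.append(clf.predict_proba(X))   # language-independent representation
        y.append(labels_by_lang[lang])
    tier2 = LinearSVC().fit(np.vstack(Z), np.concatenate(y))
    return tier1, tier2

def predict_funnelling(tier1, tier2, lang, X):
    return tier2.predict(tier1[lang].predict_proba(X))
```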

MedieValla: an authorship verification tool written in Python for medieval Latin

Code to reproduce the experiments reported in the paper: Corbara, S., Moreo, A., Sebastiani, F., & Tavoni, M. "The Epistle to Cangrande Through the Lens of Computational Authorship Verification." In International Conference on Image Analysis and Processing, pp. 148-158. Springer, Cham, 2019.

NoR-VDPNet++: Efficient Training and Architecture for Deep No-Reference Image Quality Metrics

ACM SIGGRAPH 2021 Talks, 2021

Efficiency and efficacy are two desirable properties of the utmost importance for any evaluation metric having to do with Standard Dynamic Range (SDR) imaging or High Dynamic Range (HDR) imaging. However, these properties are hard to achieve simultaneously. On the one side, metrics like HDR-VDP2.2 are known to mimic the human visual system (HVS) very accurately, but their high computational cost prevents their widespread use in large evaluation campaigns. On the other side, computationally cheaper alternatives like PSNR or MSE fail to capture many of the crucial aspects of the HVS. In this work, we try to get the best of the two worlds: we present NoR-VDPNet++, an improved version of a previous deep learning-based metric for distilling HDR-VDP2.2 into a convolutional neural network (CNN).

Lost in Transduction: Transductive Transfer Learning in Text Classification

ACM Transactions on Knowledge Discovery from Data, 2022

Obtaining high-quality labelled data for training a classifier in a new application domain is often costly. Transfer Learning (a.k.a. "Inductive Transfer") tries to alleviate these costs by transferring, to the "target" domain of interest, knowledge available from a different "source" domain. In transfer learning the lack of labelled information from the target domain is compensated by the availability at training time of a set of unlabelled examples from the target distribution. Transductive Transfer Learning denotes the transfer learning setting in which the only set of target documents that we are interested in classifying is known and available at training time. Although this definition is indeed in line with Vapnik's original definition of "transduction", current terminology in the field is confused. In this article, we discuss how the term "transduction" has been misused in the transfer learning literature, and propose a clarification consistent with the original characterization.

Efficient Evaluation of Image Quality via Deep-Learning Approximation of Perceptual Metrics

IEEE Transactions on Image Processing, 2019

Image metrics based on the Human Visual System (HVS) play a remarkable role in the evaluation of complex image processing algorithms. However, mimicking the HVS is known to be complex and computationally expensive (both in terms of time and memory), and its usage is thus limited to a few applications and to small input data. All of this makes such metrics not fully attractive in real-world scenarios. To address these issues, we propose Deep Image Quality Metric (DIQM), a deep-learning approach to learn the global image quality feature (mean-opinion-score). DIQM can emulate existing visual metrics efficiently, reducing the computational costs by more than an order of magnitude with respect to existing implementations.
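
The efficiency claim is easy to picture: once a surrogate like the one sketched earlier on this page has been trained, scoring a whole batch of images reduces to a single forward pass. A minimal sketch, again assuming PyTorch and an illustrative model:

```python
# Batch scoring with a trained surrogate (illustrative, assumes PyTorch).
import torch

@torch.no_grad()
def score_batch(model, images):
    """images: (batch, 3, H, W) tensor; returns (batch,) predicted scores."""
    model.eval()
    return model(images)
```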