Gain-Some-Lose-Some: Reliable Quantification Under General Dataset Shift
2021 IEEE International Conference on Data Mining (ICDM), 2021
Abstract
When applying supervised learning to estimate the class distribution of an unlabelled sample (so-called quantification), dataset shift is an expected yet challenging problem. Existing quantification methods make strong assumptions about the nature of dataset shift that often will not hold in practice. We propose a novel Gain-Some-Lose-Some (GSLS) model that accounts for more general conditions of dataset shift. We present a method for fitting the GSLS model without any labelled instances from the target sample, and experimentally demonstrate that GSLS can produce reliable quantification prediction intervals under broader conditions of shift than existing quantification methods.
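To make the quantification task concrete, the following is a minimal sketch of the classic classify-and-count baseline (not the paper's GSLS method): a trained classifier labels each instance in the unlabelled target sample, and the class prevalences are estimated from the prediction frequencies. The predictions shown are hypothetical.

```python
# Classify-and-count baseline for quantification: estimate the class
# distribution of an unlabelled sample from a classifier's hard predictions.
# This is an illustrative baseline, not the GSLS model from the paper.
from collections import Counter

def classify_and_count(predictions):
    """Estimate class prevalences from hard classifier predictions."""
    counts = Counter(predictions)
    total = len(predictions)
    return {cls: n / total for cls, n in sorted(counts.items())}

# Hypothetical classifier outputs on an unlabelled target sample.
preds = ["pos", "neg", "pos", "pos", "neg", "pos", "neg", "pos"]
print(classify_and_count(preds))  # {'neg': 0.375, 'pos': 0.625}
```

Under dataset shift the classifier's error rates change between source and target, so this naive estimate becomes biased; that is the failure mode that motivates shift-aware quantification methods such as GSLS.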