Interactive regret minimization

Efficient Computation of Regret-ratio Minimizing Set

Proceedings of the 2017 ACM International Conference on Management of Data

Finding the maxima of a database under a user preference, especially when the ranking function is a linear combination of the attributes, has been the subject of recent research. A key observation is that the convex hull is the subset of tuples from which the maximum of any linear function can be found. However, in real-world applications the convex hull can be a significant portion of the database, which greatly limits its usefulness as a compact representative. Thus, computing a subset limited to r tuples that minimizes the regret ratio (a measure of the user's dissatisfaction with the result from the limited set versus the one from the entire database) is of interest. In this paper, we make several fundamental theoretical as well as practical advances in developing such a compact set. For two-dimensional databases, we develop an optimal linearithmic-time algorithm by leveraging the ordering of skyline tuples. In higher dimensions, the problem is known to be NP-complete. As one of the main results of this paper, we develop an approximation algorithm that runs in linearithmic time and guarantees a regret ratio within any arbitrarily small, user-controllable distance from the optimal regret ratio. A comprehensive set of experiments on both synthetic and publicly available real datasets confirms the efficiency, output quality, and scalability of our proposed algorithms.
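The regret ratio described in this abstract can be made concrete with a small sketch. The following is an illustrative Monte-Carlo estimate, not code from the paper: the function name and the uniform sampling of non-negative weight vectors are our own assumptions (the exact quantity maximizes over all linear functions, not a finite sample).

```python
import numpy as np

def max_regret_ratio(P, Q, n_samples=10_000, seed=0):
    """Estimate the maximum regret ratio of subset Q w.r.t. database P.

    For each sampled non-negative weight vector w, the regret ratio is
    (top-1 score in P minus top-1 score in Q) / (top-1 score in P);
    we return the largest ratio seen across the samples.
    """
    rng = np.random.default_rng(seed)
    d = P.shape[1]
    W = rng.random((n_samples, d))       # sampled preference vectors
    best_P = (P @ W.T).max(axis=0)       # top-1 score in the full database
    best_Q = (Q @ W.T).max(axis=0)       # top-1 score in the subset
    return float(((best_P - best_Q) / best_P).max())
```

If Q = P (or Q contains every convex-hull maximum), the estimate is 0; otherwise it lies strictly between 0 and 1 for data with positive attribute values.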

Varieties of regret in online prediction

We present a general framework for analyzing regret in the online prediction problem, developed from sets of linear transformations of strategies. We establish relationships among the varieties of regret and present a class of regret-matching algorithms. Finally, we consider algorithms that exhibit the asymptotic no-regret property. Our main results are an analysis of observed regret in expectation and two regret-matching algorithms that exhibit no-observed-internal-regret in expectation.

Efficient Algorithms for k-Regret Minimizing Sets

ArXiv, 2017

A regret minimizing set Q is a small-size representation of a much larger database P, such that user queries executed on Q return answers whose scores are not much worse than those on the full dataset. In particular, a k-regret minimizing set has the property that the regret ratio between the score of the top-1 item in Q and the score of the top-k item in P is minimized, where the score of an item is the inner product of the item's attributes with a user's weight (preference) vector. The problem is challenging because we want to find a single representative set Q whose regret ratio is small with respect to all possible user weight vectors. We show that k-regret minimization is NP-complete for all dimensions d >= 3. This settles an open problem from Chester et al. [VLDB 2014], and resolves the complexity status of the problem for all d: the problem is known to have a polynomial-time solution for d <= 2. In addition, we propose two new approximation schemes for regret minimiz...
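The k-regret ratio defined above can be sketched for a single weight vector. This is an illustrative helper (the function name is our own; the actual k-regret minimizing set minimizes the worst case of this quantity over all weight vectors):

```python
import numpy as np

def k_regret_ratio(P, Q, w, k):
    """k-regret ratio of subset Q for one weight vector w:
    compare the best score achievable in Q against the k-th best
    score in the full database P, clipped at zero."""
    scores_P = np.sort(P @ w)[::-1]      # all scores in P, descending
    top_k_P = scores_P[k - 1]            # k-th best score in P
    top_1_Q = (Q @ w).max()              # best score in Q
    return max(0.0, (top_k_P - top_1_Q) / top_k_P)
```

For example, if Q's best item matches the k-th best item of P under w, the ratio is 0; the k-regret minimizing set keeps this ratio small simultaneously for every w.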

Possibilistic preference elicitation by minimax regret

2021

Identifying the preferences of a given user through elicitation is a central part of multi-criteria decision aid (MCDA) and preference-learning tasks. Two classical ways to perform this elicitation are the robust and the Bayesian approaches. However, both have shortcomings: the robust approach provides strong guarantees but rests on very strong hypotheses and cannot integrate uncertain information, while the Bayesian approach can integrate uncertainties but sacrifices those guarantees and requires stronger model assumptions. In this paper, we propose and test a method based on possibility theory, which keeps the guarantees of the robust approach without needing its strong hypotheses. Among other things, we show that it can detect user errors as well as model misspecification.

Minimax regret estimation in linear models

2004 IEEE International Conference on Acoustics, Speech, and Signal Processing, 2004

We develop a new linear estimator for estimating an unknown vector x in a linear model, in the presence of bounded data uncertainties. The estimator is designed to minimize the worst-case regret across all bounded data vectors, namely the worst-case difference between the MSE attainable using a linear estimator that does not know the true parameters x, and the optimal MSE attained using a linear estimator that knows x. We demonstrate through several examples that the minimax regret estimator can significantly improve performance over the conventional least-squares estimator, as well as several other least-squares alternatives.

Computational Decision Support: Regret-based Models for Optimization and Preference Elicitation

Decision making is a fundamental human, organizational, and societal activity, involving several key (and sometimes implicit) steps: the formulation of a set of options or decisions; information gathering to help assess the outcomes of these decisions and their likelihood; some assessment of the relative utility or desirability of the possible outcomes; and an assessment of the tradeoffs involved to determine an appropriate course of action.

A unified optimization algorithm for solving "regret-minimizing representative" problems

Proceedings of the VLDB Endowment

Given a database with numeric attributes, it is often of interest to rank the tuples according to linear scoring functions. For a scoring function and a subset of tuples, the regret of the subset is defined as the (relative) difference in scores between the top-1 tuple of the subset and the top-1 tuple of the entire database. Finding the regret-ratio minimizing set (RRMS), i.e., the subset of a required size k that minimizes the maximum regret-ratio across all possible ranking functions, has been a well-studied problem in recent years. This problem is known to be NP-complete and there are several approximation algorithms for it. Other NP-complete variants have also been investigated, e.g., finding the set of size k that minimizes the average regret ratio over all linear functions. Prior works have designed customized algorithms for the different variants of the problem, and these are unlikely to generalize easily to other variants. In this paper we take a different path towards tackling these ...

The Interplay Between Stability and Regret in Online Learning

arXiv preprint arXiv:1211.6158, 2012

This paper considers the stability of online learning algorithms and its implications for learnability (bounded regret). We introduce a novel quantity called forward regret that intuitively measures how good an online learning algorithm is if it is allowed a one-step look-ahead into the future. We show that given stability, bounded forward regret is equivalent to bounded regret. We also show that the existence of an algorithm with bounded regret implies the existence of a stable algorithm with bounded regret and bounded forward regret. The equivalence results apply to general, possibly non-convex problems. To the best of our knowledge, our analysis provides the first general connection between stability and regret in the online setting that is not restricted to a particular class of algorithms. Our stability-regret connection provides a simple recipe for analyzing the regret incurred by any online learning algorithm. Using our framework, we analyze several existing online learning algorithms as well as the "approximate" versions of algorithms like RDA that solve an optimization problem at each iteration. Our proofs are simpler than the existing analyses for the respective algorithms, show a clear trade-off between stability and forward regret, and provide tighter regret bounds in some cases. Furthermore, using our recipe, we analyze "approximate" versions of several algorithms such as follow-the-regularized-leader (FTRL) that require solving an optimization problem at each step.

Preference Elicitation with Uncertainty: Extending Regret Based Methods with Belief Functions

Scalable Uncertainty Management, 2019

Preference elicitation is a key element of any multi-criteria decision analysis (MCDA) problem, and more generally of individual user preference learning. Existing efficient elicitation procedures in the literature mostly use either robust or Bayesian approaches. In this paper, we are interested in extending the former by allowing the user to express uncertainty in addition to her preferential information, and by modelling it through belief functions. We show that, in doing so, we preserve the strong guarantees of robust approaches while overcoming some of their drawbacks. In particular, our approach allows the user to contradict herself, therefore allowing us to detect inconsistencies or an ill-chosen model, something that is impossible with more classical robust methods.