Learning Multicriteria Utility Functions with Random Utility Models
Related papers
2012
The topic of "preferences" has recently attracted considerable attention in artificial intelligence in general and machine learning in particular, where preference learning has emerged as a new, interdisciplinary research field with close connections to related areas such as operations research, social choice, and decision theory. Roughly speaking, preference learning is about methods for learning preference models from explicit or implicit preference information, typically used for predicting the preferences of an individual or a group of individuals. Approaches relevant to this area range from learning special types of preference models, such as lexicographic orders, over "learning to rank" for information retrieval, to collaborative filtering techniques for recommender systems. The primary goal of this tutorial is to survey the field of preference learning in its current stage of development. The presentation focuses on a systematic overview of the different types of preference learning problems, methods and algorithms for tackling them, and metrics for evaluating the performance of preference models induced from data.
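The pairwise-preference setting the tutorial refers to can be made concrete with a toy sketch. The item names and the simple Borda-style counting rule below are illustrative assumptions, not taken from the tutorial itself:

```python
def rank_from_pairs(pairs):
    """Rank items by how often they win a pairwise comparison.

    pairs -- iterable of (winner, loser) preference statements.
    Returns items sorted from most to least preferred.
    """
    wins = {}
    for winner, loser in pairs:
        wins[winner] = wins.get(winner, 0) + 1
        wins.setdefault(loser, 0)          # losers still appear in the ranking
    return sorted(wins, key=lambda item: -wins[item])

# Three observed preferences over items a, b, c induce the ranking a > b > c.
ranking = rank_from_pairs([("a", "b"), ("a", "c"), ("b", "c")])
```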
Construction and Refinement of Preference Ordered Decision Classes
Advances in Intelligent Systems and Computing, 2019
Preference learning methods are commonly used in multicriteria analysis. Their working principle is similar to that of classical machine learning techniques. An issue common to both machine learning and preference learning is the difficulty of defining decision classes and assigning objects to these classes, especially for large datasets. This paper proposes two procedures that automate the construction of decision classes. It also proposes two simple refinement procedures, relying on the 80-20 principle, that map the output of the construction procedures into a manageable set of decision classes. The construction procedures rely on the most elementary preference relation, namely the dominance relation, which avoids the need for additional information or distance/(dis)similarity functions, as required by most existing clustering methods. Furthermore, the simplicity of the 80-20 principle on which the refinement procedures are based makes them well suited to large datasets. The proposed procedures are illustrated and validated using real-world datasets.
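The dominance relation the abstract builds on needs no extra parameters: one alternative dominates another if it is at least as good on every criterion and strictly better on at least one. A minimal sketch (the layer-peeling construction below is an illustrative simplification, not the paper's exact procedure):

```python
def dominates(x, y):
    """True if alternative x dominates y: at least as good on every
    criterion (higher is better) and strictly better on at least one."""
    return all(a >= b for a, b in zip(x, y)) and any(a > b for a, b in zip(x, y))

def dominance_classes(alternatives):
    """Peel off successive non-dominated layers as ordered decision
    classes, best class first."""
    remaining = list(alternatives)
    classes = []
    while remaining:
        front = [x for x in remaining
                 if not any(dominates(y, x) for y in remaining if y != x)]
        classes.append(front)
        remaining = [x for x in remaining if x not in front]
    return classes
```

Incomparable alternatives (neither dominates the other) land in the same class, which is exactly why no distance function is needed.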
Assessing non-additive utility for multicriteria decision aid
European Journal of Operational Research, 2004
In the framework of Multi-Attribute Utility Theory (MAUT), several methods have been proposed to build a Decision-Maker's (DM) utility function representing his/her preferences. Among such methods, the UTA method infers an additive utility function from a set of exemplary decisions using linear programming. However, the UTA method does not guarantee to find a utility function coherent with the available information. This drawback is due to the underlying utility model of UTA, viz. the additive one, which cannot accommodate additional information such as interaction among criteria. In this paper we present a methodology for building a non-additive utility function, in the framework of the so-called fuzzy integrals, which permits modeling preference structures with interaction between criteria. As in the UTA method, we aim at finding a utility function representing the DM's preferences, but unlike UTA, the functional form is a specific fuzzy integral (the Choquet integral). As a result, we obtain weights which can be interpreted as the "importance" of coalitions of criteria, exploiting the potential interaction between criteria, as already proposed by other authors. Within the same framework, we also obtain the marginal utility functions relative to each of the considered criteria, evaluated on a common scale as a consequence of the implemented methodology. Finally, we illustrate our approach with an example.
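The Choquet integral named in the abstract aggregates per-criterion utilities with a capacity (a weight on every coalition of criteria), which is what lets it express interaction that a weighted sum cannot. A minimal sketch, with criterion names and capacity values chosen for illustration:

```python
def choquet(values, capacity):
    """Choquet integral of per-criterion utilities (dict criterion ->
    utility >= 0) with respect to `capacity`, a dict mapping frozensets
    of criteria to weights (monotone, with the full set mapped to 1)."""
    order = sorted(values, key=values.get)    # criteria by ascending utility
    total, prev = 0.0, 0.0
    for i, crit in enumerate(order):
        coalition = frozenset(order[i:])      # criteria scoring >= values[crit]
        total += (values[crit] - prev) * capacity[coalition]
        prev = values[crit]
    return total

# Illustrative capacity with positive interaction between g1 and g2:
# together they are worth more than the sum of their individual weights.
cap = {frozenset({"g1"}): 0.3,
       frozenset({"g2"}): 0.3,
       frozenset({"g1", "g2"}): 1.0}
score = choquet({"g1": 0.5, "g2": 0.8}, cap)
```

When the capacity is additive (each coalition's weight equals the sum of its members' weights), the Choquet integral reduces to the ordinary weighted sum, recovering the UTA-style model as a special case.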
Learning Mallows models with pairwise preferences
2011
Learning preference distributions is a key problem in many areas (e.g., recommender systems, information retrieval, social choice). However, many existing methods require restrictive data models for evidence about user preferences. We relax these restrictions by considering as data arbitrary pairwise comparisons, the fundamental building blocks of ordinal rankings. We develop the first algorithms for learning Mallows models (and mixtures) with pairwise comparisons.
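The Mallows model mentioned here assigns each ranking a probability decaying exponentially in its Kendall tau distance from a central ranking. A brute-force sketch for small item sets (the dispersion value and items are illustrative; the paper's learning algorithms, not shown, avoid this exhaustive normalization):

```python
from itertools import permutations
from math import exp

def kendall_tau(r1, r2):
    """Number of discordant item pairs between two rankings."""
    pos = {item: i for i, item in enumerate(r2)}
    return sum(1 for i in range(len(r1)) for j in range(i + 1, len(r1))
               if pos[r1[i]] > pos[r1[j]])

def mallows_prob(ranking, center, theta, items):
    """P(ranking) under a Mallows model with central ranking `center`
    and dispersion `theta` > 0, normalized by brute force (small n only)."""
    z = sum(exp(-theta * kendall_tau(list(p), center))
            for p in permutations(items))
    return exp(-theta * kendall_tau(ranking, center)) / z
```

The central ranking itself is the most probable one, and probability falls off as rankings disagree with it on more pairs, which is why pairwise comparisons are such natural evidence for this model.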
European Journal of Operational …, 2010
In multiple criteria decision aiding, it is common to use methods that can automatically extract a decision or evaluation model from partial information provided by the decision maker about a preference structure. In general, more than one model is possible, leading to an indetermination that existing methods sometimes resolve arbitrarily. This paper aims at filling this theoretical gap: we present a novel method, based on the computation of the analytic center of a polyhedron, for selecting additive value functions that are compatible with holistic assessments of preferences. We demonstrate the most important characteristics of this technique with an experimental and comparative study of several existing methods belonging to the UTA family.
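The "compatible" value functions in this abstract are those reproducing every holistic statement the decision maker gives; these constraints carve out the polyhedron whose analytic center the method computes. A sketch of only the compatibility test, using the simplest additive instance (linear marginals and a weight vector, both illustrative; the analytic-center computation itself needs an optimization solver and is omitted):

```python
def value(weights, marginals, alternative):
    """Additive value u(a) = sum_i w_i * u_i(g_i(a))."""
    return sum(w * u(g) for w, u, g in zip(weights, marginals, alternative))

def compatible(weights, marginals, preferences):
    """True if this additive value function reproduces every holistic
    statement "a is preferred to b" supplied by the decision maker."""
    return all(value(weights, marginals, a) > value(weights, marginals, b)
               for a, b in preferences)

identity = [lambda g: g, lambda g: g]    # linear marginal utilities
stated = [((0.9, 0.2), (0.4, 0.6))]      # one holistic comparison: a preferred to b
```

Typically many weight vectors pass this test, which is precisely the indetermination the analytic-center selection is designed to resolve in a principled way.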
European Journal of Operational Research, 2009
We present a method called GRIP (Generalized Regression with Intensities of Preference) for ranking a set of actions evaluated on multiple criteria. GRIP builds a set of additive value functions compatible with preference information composed of a partial preorder and required intensities of preference on a subset of actions, called reference actions. It not only constructs the preference relation in the considered set of actions, but also gives information about intensities of preference for pairs of actions from this set for a given Decision Maker (DM). By distinguishing necessary and possible consequences of the preference information on the whole set of actions, GRIP answers questions of robustness analysis. The proposed methodology can be seen as an extension of the UTA method based on ordinal regression. GRIP can also be compared to the AHP method, which requires pairwise comparison of all actions and criteria and yields a priority ranking of actions. As for the preference information being used, GRIP can moreover be compared to the MACBETH method, which also takes into account a preference order of actions and intensities of preference for pairs of actions. The preference information used in GRIP does not, however, need to be complete: the DM is asked to compare only those pairs of reference actions on particular criteria for which his/her judgment is sufficiently certain. This is an important advantage compared to methods which instead require comparison of all possible pairs of evaluations on all the considered criteria. Moreover, GRIP works with a set of general additive value functions compatible with the preference information, while other methods use a single and less general value function, such as the weighted sum.
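The necessary/possible distinction central to GRIP's robustness analysis quantifies over the whole set of compatible value functions. A minimal sketch, with two illustrative weighted-sum models standing in for that set (GRIP itself reasons over all compatible general additive functions, not a finite sample):

```python
def necessarily_preferred(a, b, models):
    """Robust conclusion: a outranks b under EVERY compatible value function."""
    return all(v(a) >= v(b) for v in models)

def possibly_preferred(a, b, models):
    """Weak conclusion: a outranks b under AT LEAST ONE compatible function."""
    return any(v(a) >= v(b) for v in models)

# Two illustrative compatible additive (weighted-sum) value functions.
models = [lambda x: 0.8 * x[0] + 0.2 * x[1],
          lambda x: 0.2 * x[0] + 0.8 * x[1]]
```

An action pair that is possibly but not necessarily preferred signals that the DM's preference information is still too sparse to support a robust ranking of that pair.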
Supervised Learning as Preference Optimization: A General Framework and its Applications
Supervised learning is characterized by a broad spectrum of learning problems, often involving structured predictions, including classification and regression problems, ranking-based predictions (label and instance ranking), and ordinal regression in its various forms. All these different learning problems are typically addressed by specific algorithmic solutions. In this paper, we show that the general preference learning model (GPLM), which is based on a large-margin principled approach, gives a flexible way to codify cost functions for all the above problems as sets of linear preferences. Examples of how the proposed framework can be effectively used to address a variety of real-world applications are reported, showing the flexibility and effectiveness of the approach.
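The idea of codifying a supervised task as linear preferences can be illustrated with a perceptron-style learner over preference pairs; this is a minimal stand-in for the large-margin formulation, with the update rule and data chosen for illustration:

```python
def train_preference_perceptron(pairs, dim, epochs=100):
    """Learn a linear utility w such that w.a > w.b for each stated
    preference (a, b), via perceptron updates on violated preferences."""
    w = [0.0] * dim
    for _ in range(epochs):
        for a, b in pairs:
            margin = sum(wi * (ai - bi) for wi, ai, bi in zip(w, a, b))
            if margin <= 0:                    # preference violated
                w = [wi + (ai - bi) for wi, ai, bi in zip(w, a, b)]
    return w

# Classification cast as preferences: the representation of the true
# label should score above the representation of a wrong label.
w = train_preference_perceptron([((1.0, 0.0), (0.0, 1.0)),
                                 ((0.9, 0.1), (0.2, 0.8))], dim=2)
```

Regression, ranking, and ordinal-regression losses can be encoded the same way by choosing which vector pairs must be ordered, which is the flexibility the GPLM framing exploits.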