Aharon Ben-Tal - Academia.edu
Papers by Aharon Ben-Tal
Lectures on Modern Convex Optimization, 2001
Operations Research, 2015
Robust optimization is a common optimization framework under uncertainty: problem parameters are unknown, but they are known to belong to some given uncertainty set. In the robust optimization framework, a min-max problem is solved wherein a solution is evaluated according to its performance on the worst possible realization of the parameters. In many cases, a straightforward solution to a robust optimization problem of a certain type requires solving an optimization problem of a more complicated type, which might be NP-hard. For example, solving a robust conic quadratic program, such as those arising in a robust support vector machine (SVM) with an ellipsoidal uncertainty set, leads in general to a semidefinite program. In this paper, we develop a method for approximately solving a robust optimization problem using tools from online convex optimization, where at every stage a standard (nonrobust) optimization program is solved. Our algorithms find an approxima...
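As a minimal sketch of the oracle-based idea described above (a toy instance of my own construction, not the paper's algorithm): the adversary runs online gradient ascent over the uncertainty set, a standard oracle solves the nominal problem at each stage, and the averaged iterate approximates the robust solution. The toy problem is minimizing over the simplex the worst case of a^T x over a Euclidean ball, for which the exact worst case is available in closed form as a check.

    import numpy as np

    # Toy robust problem: min_{x in simplex} max_{a in Ball(a_bar, r)} a^T x.
    rng = np.random.default_rng(0)
    n, T, r = 5, 2000, 0.5
    a_bar = rng.uniform(1.0, 2.0, size=n)

    def proj_ball(a):
        # Euclidean projection onto the uncertainty set Ball(a_bar, r)
        d = a - a_bar
        nd = np.linalg.norm(d)
        return a if nd <= r else a_bar + r * d / nd

    def oracle(a):
        # Standard (nonrobust) subproblem: min_x a^T x over the simplex -> a vertex
        x = np.zeros(n)
        x[np.argmin(a)] = 1.0
        return x

    a, avg_x = a_bar.copy(), np.zeros(n)
    eta = r / np.sqrt(T)  # step size for the adversary's online gradient ascent
    for _ in range(T):
        x = oracle(a)
        avg_x += x / T
        a = proj_ball(a + eta * x)  # gradient of a^T x with respect to a is x

    # Exact worst case over the ball: a_bar^T x + r * ||x||_2
    print("approx robust value:", a_bar @ avg_x + r * np.linalg.norm(avg_x))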
In this paper we investigate a flexible approach to robust optimization based on risk preferences of the decision-maker, in which one specifies not only the values of the uncertain parameters for which feasibility should be ensured, but also the degree of feasibility. We show that traditional robust optimization models are a special case of this framework. Our focus is on linear optimization, and the key tool is the theory of convex risk measures developed by Föllmer and Schied (13). We consider four primary classes of risk measures and connect them with corresponding notions of robustness. We also prove that the corresponding risk measures imply a family of probability guarantees at various degrees of feasibility, as opposed to the single bound on feasibility commonly proved in robust optimization. Finally, we illustrate the performance of these risk measures on a real-world portfolio optimization application and show promising results that our methodology can, in some cases...
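For orientation (a standard example, not necessarily one of the paper's four classes): the conditional value-at-risk is a convex risk measure whose level sets yield exactly the kind of probability guarantee described above. Writing $Z$ for the constraint violation,

\[
\mathrm{CVaR}_{\epsilon}(Z) \;=\; \min_{t \in \mathbb{R}} \Big\{ t + \tfrac{1}{\epsilon}\, \mathbb{E}\big[(Z - t)_{+}\big] \Big\},
\qquad
\mathrm{CVaR}_{\epsilon}(Z) \le 0 \;\Longrightarrow\; \mathbb{P}(Z > 0) \le \epsilon,
\]

since $\mathrm{CVaR}_{\epsilon}$ dominates the $(1-\epsilon)$-quantile of $Z$. Varying $\epsilon$ traces out a family of guarantees at different degrees of feasibility.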
INFORMS Journal on Computing
Maximizing a convex function over convex constraints is an NP-hard problem in general. We prove that such a problem can be reformulated as an adjustable robust optimization (ARO) problem in which each adjustable variable corresponds to a unique constraint of the original problem. We use ARO techniques to obtain approximate solutions to the convex maximization problem. To demonstrate the complete approximation scheme, we distinguish the case of a single nonlinear constraint from that of multiple linear constraints. Concerning the first case, we give three examples in which one can analytically eliminate the adjustable variable and approximately solve the resulting static robust optimization problem efficiently. More specifically, we show that the norm-constrained log-sum-exp (geometric) maximization problem can be approximated by (convex) exponential cone optimization techniques. Concerning the second case of multiple linear constraints, the equivalent ARO problem can ...
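As a simple illustration of why the log-sum-exp (geometric) maximization problem admits efficient approximation (the paper's exponential-cone scheme is a refinement; this is only the crude baseline bound):

\[
\max_{1 \le i \le n} y_i \;\le\; \log \sum_{i=1}^{n} e^{y_i} \;\le\; \max_{1 \le i \le n} y_i + \log n,
\]

so maximizing $\max_i (a_i^\top x + b_i)$ over the norm ball, which decomposes into $n$ tractable linear maximizations, already approximates the log-sum-exp maximum up to an additive $\log n$.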
This paper studies the problem of constructing robust classifiers when the training data is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain datapoints are classified correctly with high probability. Unfortunately, such a CCP turns out to be intractable. The key novelty is in employing Bernstein bounding schemes to relax the CCP as a convex second-order cone program whose solution is guaranteed to satisfy the probabilistic constraint. Prior to this work, only Chebyshev-based relaxations were exploited in learning algorithms. Bernstein bounds employ richer partial information and hence can be far less conservative than Chebyshev bounds. Due to this efficient modeling of uncertainty, the resulting classifiers achieve higher classification margins and hence better generalization. Methodologies for classifying uncertain test datapoints and error measures for evaluating classifiers robust to uncertain data are discussed. ...
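A minimal sketch of the Chebyshev-style safe approximation that the paper improves upon, on synthetic data I made up, assuming only mean and covariance information; a Bernstein relaxation exploits richer partial information (e.g., bounded support) to shrink the safety coefficient kappa below the Cantelli value used here.

    import cvxpy as cp
    import numpy as np

    # Safe SOCP approximation of the chance constraint
    # P(a~^T x <= b) >= 1 - eps, given the mean mu and covariance Sigma of a~.
    rng = np.random.default_rng(1)
    n, eps = 4, 0.05
    mu = rng.normal(size=n)
    A = rng.normal(size=(n, n))  # Sigma = A A^T, so ||A^T x|| is the std. dev. of a~^T x
    b, c = 1.0, rng.normal(size=n)

    kappa = np.sqrt((1 - eps) / eps)  # Cantelli (one-sided Chebyshev) coefficient
    x = cp.Variable(n)
    prob = cp.Problem(cp.Maximize(c @ x),
                      [mu @ x + kappa * cp.norm(A.T @ x, 2) <= b,
                       cp.norm(x, "inf") <= 1])
    prob.solve()
    print("objective:", prob.value)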
Journal of Machine Learning Research, 2011
This paper presents novel algorithms and applications for a particular class of mixed-norm regularization based Multiple Kernel Learning (MKL) formulations. The formulations assume that the given k...
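Since the abstract is truncated, only the generic MKL ingredient can be illustrated here: learning a conic combination of base Gram matrices, with the paper's mixed-norm regularizer acting on the combination weights. The simplex constraint below is the common l1 special case, an assumption on my part rather than the paper's formulation.

    import numpy as np

    def combine_kernels(kernels, eta):
        # kernels: list of (m, m) PSD Gram matrices; eta: nonnegative, sums to 1
        assert np.all(eta >= 0) and np.isclose(eta.sum(), 1.0)
        return sum(e * K for e, K in zip(eta, kernels))

    rng = np.random.default_rng(2)
    X = rng.normal(size=(10, 3))
    K_lin = X @ X.T                                                  # linear kernel
    K_rbf = np.exp(-0.5 * np.sum((X[:, None] - X[None, :])**2, -1))  # RBF kernel
    K = combine_kernels([K_lin, K_rbf], np.array([0.3, 0.7]))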
IEEE Transactions on Information Theory, 2018
Optimization, 1988
Operations Research, 2018
In this paper we consider ambiguous stochastic constraints under partial information consisting of means and dispersion measures of the underlying random parameters. Whereas the past literature used the variance as the dispersion measure, here we use the mean absolute deviation from the mean (MAD). This makes it possible to use the 1972 result of Ben-Tal and Hochman (BH), which gives tight upper and lower bounds on the expectation of a convex function of a random variable. First, we use these results to treat ambiguous expected feasibility constraints and obtain exact reformulations for functions that are either convex or concave in the components of the random variable. This approach, however, requires the independence of the random variables and may lead to an exponential number of terms in the resulting robust counterparts. We then show how upper bounds can be constructed that alleviate the independence restriction and require only a linear number of terms, by exp...
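For context, the BH upper bound invoked here can be stated as follows (notation mine; the paper applies it to the components of the random parameter vector). For a random variable $X$ with support $[a,b]$, mean $\mu$, and mean absolute deviation $d$, and any convex $f$,

\[
\mathbb{E}[f(X)] \;\le\; p_a\, f(a) + p_\mu\, f(\mu) + p_b\, f(b),
\qquad
p_a = \frac{d}{2(\mu - a)}, \quad p_b = \frac{d}{2(b - \mu)}, \quad p_\mu = 1 - p_a - p_b,
\]

and the bound is tight: it is attained by the three-point distribution supported on $\{a, \mu, b\}$ with these probabilities.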
Mathematical Programming, 2019
Mathematische Operationsforschung und Statistik. Series Optimization, 1977
Computational Management Science, 2016