Melvyn Sim - Profile on Academia.edu
Papers by Melvyn Sim
Strategic Workforce Planning Under Uncertainty
Operations Research
A new study in the INFORMS journal Operations Research proposes a data-driven model for conducting strategic workforce planning in organizations. The model optimizes recruitment and promotions by balancing the risks of not meeting headcount, budget, and productivity constraints while keeping within a prescribed organizational structure. Analysis using the model indicates that workforce risks are heightened for organizations that are not in a state of growth or that face limits to organizational renewal (such as bureaucracies).
Operations Research eJournal, 2021
The COVID-19 pandemic has brought many countries to their knees, and the urgency to return to normalcy has never been greater. Epidemiological models, such as the SEIR compartmental model, are indispensable tools for, among other things, predicting how a pandemic may spread over time and how vaccinations and different public health interventions could affect the outcome. However, deterministic epidemiological models do not reflect the stochastic nature of the actual infected populations, for which the true distribution can never be determined precisely. When embedded in an optimization model, the impact of ambiguous risk can influence the desired outcomes of the mitigating strategy. To address these issues, we first propose a robust epidemiological model, which provides prediction intervals that are specified by the Aumann and Serrano (2008) riskiness index. With suitable approximations, the robust epidemiological optimization model that minimizes the riskiness index can be formulated as…
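For context, here is a minimal sketch of the deterministic SEIR dynamics the abstract builds on; the function name, integration scheme, and parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def seir(beta, sigma, gamma, s0, e0, i0, r0, days, dt=0.1):
    """Forward-Euler integration of the deterministic SEIR model.

    beta: transmission rate, sigma: incubation rate (1/latent period),
    gamma: recovery rate. States are population fractions summing to 1.
    """
    steps = int(days / dt)
    s, e, i, r = s0, e0, i0, r0
    traj = np.empty((steps, 4))
    for t in range(steps):
        new_exposed = beta * s * i      # S -> E
        new_infectious = sigma * e      # E -> I
        new_recovered = gamma * i       # I -> R
        s -= dt * new_exposed
        e += dt * (new_exposed - new_infectious)
        i += dt * (new_infectious - new_recovered)
        r += dt * new_recovered
        traj[t] = (s, e, i, r)
    return traj

# Illustrative run: R0 = beta/gamma = 2.5, 5-day latency, 10-day infectious period.
path = seir(beta=0.25, sigma=0.2, gamma=0.1, s0=0.99, e0=0.01, i0=0.0, r0=0.0, days=180)
print("Peak infectious fraction: %.3f" % path[:, 2].max())
```

The paper's contribution replaces point forecasts from such a model with prediction intervals sized by the riskiness index; the sketch shows only the deterministic baseline.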
We use $L(m_k, N, I_k)$ to approximate $Y(m_k, N, I_k)$ in Problem (2.2). Applying the definition of $L(m_k, N, I_k)$ in (4.1) for each $k \in [K]$, the problem above equivalently becomes: $\min_{x,\,\{y_{0,k},\, Y_k\}_{k=1}^{K}} \ldots$
Mitigating Delays and Unfairness in Appointment Systems
We consider an appointment system where heterogeneous participants are sequenced and scheduled for service. As service times are uncertain, the aim is to mitigate the unpleasantness experienced by participants when their waiting times or delays exceed acceptable thresholds, and to address fairness by balancing service levels among participants. In evaluating uncertain delays, we propose the Delay Unpleasantness Measure (DUM), which takes into account the frequency and intensity of delays above a threshold, and introduce the concept of lexicographic min-max fairness to design appointment systems from the perspective of the worst-off participants. We focus our study on outpatient clinics, balancing the doctor's overtime and patients' waiting times, where patients are distinguished by their service time characterizations. The model can be adapted to the robust setting when the underlying probability distribution is not fully available. To capture…
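The abstract names two ingredients of the DUM, frequency and intensity of delays above a threshold. A hedged empirical sketch of those two statistics (the paper's exact functional form is not reproduced here):

```python
import numpy as np

def delay_unpleasantness_ingredients(delays, threshold):
    """Empirical frequency and intensity of delays above a threshold.
    This illustrates the two quantities the DUM combines; it is not the
    paper's definition of the measure itself."""
    delays = np.asarray(delays, dtype=float)
    excess = np.maximum(delays - threshold, 0.0)
    frequency = np.mean(delays > threshold)  # how often the threshold is breached
    intensity = excess.mean()                # average magnitude of the breach
    return frequency, intensity

waits = np.random.default_rng(0).exponential(scale=20.0, size=1000)  # minutes
print(delay_unpleasantness_ingredients(waits, threshold=30.0))
```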
We present a unified and tractable framework for distributionally robust optimization that can encompass a variety of statistical information including, among other things, constraints on expectation, conditional expectation, and disjoint confidence sets with uncertain probabilities defined by φ-divergence. In particular, we also show that the Wasserstein-based ambiguity set has an equivalent formulation via our proposed ambiguity set, which enables us to tractably approximate a Wasserstein-based distributionally robust optimization problem with recourse. To address a distributionally robust optimization problem with recourse, we introduce the tractable adaptive recourse scheme (TARS), which is based on the classical linear decision rule and can also be applied in situations where the recourse decisions are discrete. We demonstrate the effectiveness of TARS in our computational study on a multi-item newsvendor problem.
Inspired by the principle of satisficing (Simon 1955), Long et al. (2021) propose an alternative framework for optimization under uncertainty, which we term a robust satisficing model. Instead of sizing the uncertainty set as in robust optimization, the robust satisficing model is specified by a target objective, with the aim of delivering the solution that is least impacted by uncertainty in achieving the target. At the heart of this framework, we minimize the level of constraint violation under all possible realizations within the support set. Our framework is based on a constraint function that evaluates to the optimal objective value of a standard conic optimization problem, which can be used to model a wide range of constraint functions that are convex in the decision variables but can be either convex or concave in the uncertain parameters. We derive an exact semidefinite optimization formulation when the constraint is biconvex quadratic with quadratic penalty and the support s…
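A schematic statement of the robust satisficing idea described above (the notation is ours, not the paper's): given a target τ, find the decision whose constraint violation grows most slowly as the realization deviates from the nominal scenario,

```latex
\begin{aligned}
\min_{x \in \mathcal{X},\; k \ge 0} \quad & k \\
\text{s.t.} \quad & f(x, z) \;\le\; \tau + k\,\Delta(z) \qquad \forall z \in \mathcal{Z},
\end{aligned}
```

where $\mathcal{Z}$ is the support set and $\Delta(z) \ge 0$ measures how far the realization $z$ is from the nominal one; the optimal $k$ quantifies how quickly the target $\tau$ can be breached as uncertainty materializes.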
We propose tractable replenishment policies for a multi-period, single-product inventory control problem under ambiguous demands, that is, when only limited information about the demand distributions, such as mean, support, and deviation measures, is available. We obtain the parameters of the tractable replenishment policies by solving a deterministic optimization problem in the form of a second-order cone program (SOCP). Our framework extends to correlated demands and is developed around a factor-based model, which has the ability to incorporate business factors as well as time-series forecast effects of trend, seasonality, and cyclic variations. Computational results show that with correlated demands, our model outperforms a state-independent base-stock policy derived from dynamic programming and an adaptive myopic policy.
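To make the policy class concrete, here is a simulation of an affine replenishment rule of the kind the abstract describes; the coefficient choices are illustrative assumptions, and the SOCP that would produce them is omitted:

```python
import numpy as np

def simulate_affine_policy(y0, Y, demands, initial_inventory=0.0):
    """Simulate an affine replenishment policy: the order in period t is
    q_t = y0[t] + sum_{s<t} Y[t, s] * d_s, i.e., a static part plus a
    linear correction on demand observed so far."""
    T = len(demands)
    inventory = initial_inventory
    levels = []
    for t in range(T):
        order = y0[t] + sum(Y[t, s] * demands[s] for s in range(t))
        inventory += order - demands[t]
        levels.append(inventory)
    return np.array(levels)

rng = np.random.default_rng(1)
d = rng.normal(100, 15, size=6)       # illustrative demand path
y0 = np.full(6, 100.0)                # static base orders
Y = np.eye(6, k=-1) * 0.5             # react to the previous period's demand
print(simulate_affine_policy(y0, Y, d))
```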
Stochastic optimization, especially with multistage models, is well known to be computationally excruciating. Moreover, such models require exact specifications of the probability distributions of the underlying uncertainties, which are often unavailable. In this paper, we propose tractable methods for addressing a general class of multistage stochastic optimization problems that assume only limited information about the distributions of the underlying uncertainties, such as known mean, support, and covariance. One basic idea of our methods is to approximate the recourse decisions via decision rules. We first examine linear decision rules in detail and show that, even for problems with complete recourse, linear decision rules can be inadequate and even lead to infeasible instances. Hence, we propose several new decision rules that improve upon linear decision rules while keeping the approximate models computationally tractable. Specifically, our approximate models are in the forms of the so…
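The linear decision rules discussed above restrict each recourse decision to an affine function of the uncertain factors; schematically (our notation),

```latex
y(\tilde{z}) \;=\; y^{0} \;+\; \sum_{j=1}^{N} y^{j}\,\tilde{z}_{j},
```

so optimizing over the coefficients $y^{0}, y^{1}, \dots, y^{N}$ replaces optimizing over arbitrary functions of $\tilde{z}$; the new rules the paper proposes enrich this affine class while retaining tractability.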
In this paper, we axiomatize a target-based model of choice that allows decision makers to be both risk averse and risk seeking, depending on the payoff's position relative to a prespecified target. The approach can be viewed as a hybrid model, capturing in spirit two celebrated ideas: first, the satisficing concept of Simon (1955); second, the switch between risk aversion and risk seeking popularized by the prospect theory of Kahneman and Tversky (1979). Our axioms are simple and intuitive; in order to be implemented in practice, our approach requires only the specification of an aspiration level. We show that this approach is dual to a known approach using risk measures, thereby allowing us to connect to existing theory. Though our approach is intended to be normative, we also show that it resolves the classical examples of Allais (1953) and Ellsberg (1961).
SSRN Electronic Journal
Satisficers, in contrast to maximizers, are content with attaining a reasonable target that they set for themselves. We develop a new prescriptive analytics tool called robust satisficing that uses data to help a satisficer achieve her target expected reward or consumption as well as possible under ambiguous risks and prediction uncertainty. It builds upon the robustness optimization framework recently proposed by Long et al. (2021), which we extend to incorporate aspects of predictive analytics. The extension is non-trivial. We adopt linear regression as the underlying predictive model and propose a new estimator uncertainty and residual ambiguity set to characterize the relations between the underlying regression coefficients, which are uncertain but non-stochastic, and the stochastic random variables representing residuals, whose distributions are ambiguous. The robust satisficing model is also useful in allocating resources for multiple satisficing agents to meet their expected reward targets. We present some useful robust satisficing models that can be solved efficiently, and provide tractable approximations to tackle adaptive linear optimization problems. The simulation studies for newsvendor problems elucidate the benefits of the robust satisficing framework in helping the firm attain the target expected profits, mitigate shortfalls, and limit target surplus, if desired. The robust satisficing model can also improve on solutions obtained by solving a baseline empirical optimization model using estimated parameters, and the improvement is more pronounced when data availability is limited. Paradoxically, maximizers can also benefit from the analytics of robust satisficing.
Robust CARA Optimization
SSRN Electronic Journal
Operations Research
We study a network fortification problem on a directed network that channels single-commodity resources to fulfill random demands delivered to a subset of the nodes. For a given realization of demands, a malicious interdictor would disrupt the network in a manner that maximizes the total demand shortfalls, subject to the interdictor's constraints. To mitigate the risk of such shortfalls, the network's operator can fortify it by providing additional network capacity and/or protecting the nominal capacity. Given the stochastic nature of the demand uncertainty, the goal is to fortify the network, within the operator's budget constraint, so as to minimize the expected disutility of the shortfalls in events of interdiction. We model this as a three-level, nonlinear stochastic optimization problem that can be solved via a robust stochastic approximation approach, under which each iteration involves solving a linear mixed-integer program. We provide favourable computational results that demonstrate how our fortification strategy effectively mitigates interdiction risks. We also extend the model to multi-commodity networks with multiple sources and multiple sinks.
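Schematically, the three-level structure described above can be written as follows (our notation, not the paper's: $w$ is the fortification plan, $a$ the interdiction, $f$ the flow, and $\rho$ the disutility of shortfalls):

```latex
\min_{w \in \mathcal{W}} \;
\mathbb{E}_{\tilde{d}}\!\left[\, \rho\!\left( \max_{a \in \mathcal{A}} \;
\min_{f \in \mathcal{F}(w, a)} \; \mathrm{shortfall}(f, \tilde{d}) \right) \right].
```

The operator moves first, the interdictor responds to the fortified network, and the flow is then routed to minimize shortfall under the surviving capacity.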
The Dao of Robustness
SSRN Electronic Journal
We present a general framework for data-driven optimization called robustness optimization that favors solutions for which a risk-aware objective function would best attain an acceptable target even when the actual probability distribution deviates from the empirical distribution. Unlike robust optimization approaches, the decision maker does not have to size the ambiguity set, but instead specifies an acceptable target, or loss of optimality compared to the empirical optimization model, as a trade-off for the model's ability to withstand greater uncertainty. We axiomatize the decision criterion associated with robustness optimization, termed the fragility measure, and present its representation theorem. Focusing on the Wasserstein distance measure with the ℓ1-norm, we present tractable robustness optimization models for risk-based linear optimization, combinatorial optimization, and linear optimization problems with recourse. Serendipitously, the insights from the approximation also provide a recipe for approximating solutions to hard stochastic optimization problems without relatively complete recourse. We illustrate, in a portfolio optimization problem and a network lot-sizing problem, how targets can be set in the robustness optimization model, which can be more intuitive and effective than specifying the hyper-parameter used in a robust optimization model. The numerical studies show that the solutions to the robustness optimization models are more effective at alleviating the Optimizer's Curse (Smith and Winkler 2006), improving out-of-sample performance on a variety of metrics.
Operations Research
We demonstrate how adjustable robust optimization (ARO) problems with fixed recourse can be cast as static robust optimization problems via Fourier-Motzkin elimination (FME). Through the lens of FME, we characterize the structures of the optimal decision rules for a broad class of ARO problems. A scheme that blends classical FME with a simple linear programming technique for efficiently removing redundant constraints is developed to reformulate ARO problems. This generic reformulation technique enhances the classical approximation scheme via decision rules and enables us to solve adjustable optimization problems to optimality. We show via numerical experiments that, for small-size ARO problems, our novel approach finds the optimal solution. For moderate- or large-size instances, we eliminate a subset of the adjustable variables, which improves the solutions obtained from decision rule approximations.
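A one-variable illustration of the FME step that underlies this reformulation: to eliminate an adjustable variable $y$ that appears only between lower and upper bounds (the functions $a$, $b$, $c$ are generic placeholders),

```latex
\exists\, y:\;\; a(x,z) \le y,\;\; b(x,z) \le y,\;\; y \le c(x,z)
\quad\Longleftrightarrow\quad
a(x,z) \le c(x,z) \;\text{ and }\; b(x,z) \le c(x,z),
```

so $y$ disappears at the cost of pairing every lower bound with every upper bound, which is why efficiently removing redundant constraints is essential to keep the reformulation manageable.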
Goal scoring, coherent loss and applications to machine learning
Mathematical Programming
Motivated by the binary classification problem in machine learning, we study in this paper a class of decision problems where the decision maker has a list of goals, from which he aims to attain the maximal possible number. In binary classification, this essentially means seeking a prediction rule that achieves the lowest probability of misclassification, and computationally it involves minimizing a (difficult) non-convex, 0–1 loss function. To address the intractability, previous methods consider minimizing the cumulative loss, the sum of convex surrogates of the 0–1 loss of each goal. We revisit this paradigm and instead develop an axiomatic framework by proposing a set of salient properties on functions for goal scoring, and then propose the coherent loss approach, a tractable upper bound of the loss over the entire set of goals. We show that the proposed approach yields a strictly tighter approximation to the total loss (i.e., the number of missed goals) than any convex cumulative loss approach while preserving the convexity of the underlying optimization problem. Moreover, this approach, applied to binary classification, also has a robustness interpretation that builds a connection to robust SVMs.
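To fix ideas, here is the cumulative-loss paradigm the abstract contrasts with, computed on illustrative classification margins; the coherent loss itself, a joint upper bound over all goals, is not reproduced here:

```python
import numpy as np

def zero_one_loss(margins):
    """Number of missed goals: a goal with margin y*f(x) > 0 counts as attained."""
    return int(np.sum(margins <= 0))

def cumulative_hinge(margins):
    """Cumulative loss: the sum of a convex surrogate (here, hinge) applied
    to each goal separately. The paper's coherent loss is a strictly
    tighter convex upper bound over the whole set of goals."""
    return float(np.sum(np.maximum(0.0, 1.0 - margins)))

m = np.array([2.1, 0.3, -0.4, 1.5, -1.2])    # margins y_i * f(x_i)
print(zero_one_loss(m), cumulative_hinge(m)) # 2 missed goals vs. a looser convex bound
```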
SSRN Electronic Journal
Many real-world optimization problems have input parameters estimated from data whose inherent imprecision can lead to fragile solutions that may impede desired objectives and/or render constraints infeasible. We propose a joint estimation and robustness optimization (JERO) framework to mitigate estimation uncertainty in optimization problems by seamlessly incorporating both the parameter estimation procedure and the optimization problem. Toward that end, we construct an uncertainty set that incorporates all of the data, where the size of the uncertainty set is based on how well the parameters would be estimated from that data when using a particular estimation procedure: regressions, the least absolute shrinkage and selection operator, and maximum likelihood estimation (among others). The JERO model maximizes the uncertainty set's size and so obtains solutions that, unlike those derived from models dedicated strictly to robust optimization, are immune to parameter perturbations that would violate constraints or lead to objective function values exceeding their desired levels. We describe several applications and provide explicit formulations of the JERO framework for a variety of estimation procedures. To solve JERO models with exponential cones, we develop a second-order conic approximation that limits errors beyond an operating range; with this approach, we can use state-of-the-art SOCP solvers to solve even large-scale convex optimization problems. Finally, we apply the JERO model to a case study, addressing a health insurance reimbursement problem with the aim of improving patient flow in the healthcare system while hedging against estimation errors.
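The "maximize the uncertainty set's size" logic, stated schematically (our notation, not the paper's; $\mathcal{U}(r)$ is a data-driven uncertainty set of size $r$ centered on the estimation procedure's output):

```latex
\begin{aligned}
\max_{x \in \mathcal{X},\; r \ge 0} \quad & r \\
\text{s.t.} \quad & g(x, \theta) \le b \qquad \forall\, \theta \in \mathcal{U}(r),
\end{aligned}
```

so the returned decision $x$ remains feasible for the largest possible set of parameter values consistent with the data.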
The Analytics of Bed Shortages: Coherent Metric, Prediction and Optimization
SSRN Electronic Journal
Bed shortages in hospitals usually have a negative impact on patient satisfaction and medical outcomes. In practice, healthcare managers often use the bed occupancy rate (BOR) as a metric to understand bed utilization, but it is insufficient for capturing the risk of bed shortages. We propose the bed shortage index (BSI) to capture more facets of bed shortage risk than traditional metrics such as the occupancy rate, the probability of shortages, and expected shortages. The BSI is based on the well-known Aumann and Serrano (2008) riskiness index and is calibrated to coincide with the BOR when daily arrivals to the hospital unit are Poisson distributed. Our metric can be tractably computed and does not require additional assumptions or approximations. As such, it can be used consistently across descriptive, predictive, and prescriptive analytical approaches. We also propose optimization models to plan bed capacity via this metric. These models can be efficiently solved on a large scale via a sequence of linear optimization problems. The first maximizes total elective throughput while keeping the metric under a specified threshold. The second determines the optimal scheduling policy by lexicographically minimizing the steady-state daily BSI for a given number of scheduled admissions. We validate these models using real data from a hospital and test them against data-driven simulations. We apply these models to study the real-world problem of long stayers and to predict the impact of transferring them to community hospitals in response to an aging population.
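The underlying Aumann and Serrano (2008) riskiness index of a gamble $g$ with positive mean and some chance of loss is the unique $R > 0$ solving $\mathbb{E}[e^{-g/R}] = 1$. A numerical sketch of that index follows; the BSI itself is a calibrated variant whose construction is not reproduced here, and the root bracket is an assumption suited to moderately scaled payoffs:

```python
import numpy as np
from scipy.optimize import brentq

def aumann_serrano_riskiness(gains, probs):
    """Aumann-Serrano (2008) riskiness index: the unique R > 0 with
    E[exp(-g/R)] = 1. We solve for a = 1/R, whose positive root is
    bracketed away from a = 0 (bracket assumes moderately scaled payoffs)."""
    gains = np.asarray(gains, dtype=float)
    probs = np.asarray(probs, dtype=float)
    f = lambda a: probs @ np.exp(-a * gains) - 1.0  # zero at the index's reciprocal
    return 1.0 / brentq(f, 1e-9, 1.0)

# Example gamble: win 105 or lose 100 with equal probability (mean +2.5).
print(aumann_serrano_riskiness([105.0, -100.0], [0.5, 0.5]))
```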
Brief paper: Constrained linear system with disturbance: Convergence under disturbance feedback
Automatica, Oct 1, 2008
This paper presents a new measure of skewness, skewness-aware deviation, that can be linked to tail risk measures such as Value-at-Risk. We show that this measure of skewness also arises naturally when one maximizes the certainty equivalent for an investor with a negative exponential utility function, thus bringing together the mean-risk and expected utility frameworks for an important class of investor preferences. We generalize the ideas of variance and covariance in the new skewness-aware asset pricing and allocation framework. We show via computational experiments that the proposed approach results in improved and intuitively appealing asset allocations when returns follow real-world or simulated skewed distributions. We also suggest a skewness-aware equivalent of the classical CAPM beta and study its consistency with the observed behavior of stocks traded on the NYSE between 1963 and 2006.
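The link to exponential utility rests on a standard identity (stated here in our notation, not the paper's): for risk aversion $\lambda$, the certainty equivalent of a random return $\tilde{R}$ admits a cumulant expansion in which skewness enters at third order,

```latex
\mathrm{CE}(\tilde{R}) \;=\; -\tfrac{1}{\lambda}\,\ln \mathbb{E}\!\left[e^{-\lambda \tilde{R}}\right]
\;=\; \mu \;-\; \tfrac{\lambda}{2}\,\sigma^{2} \;+\; \tfrac{\lambda^{2}}{6}\,\kappa_{3} \;-\; \cdots,
```

where $\mu$, $\sigma^{2}$, and $\kappa_{3}$ are the mean, variance, and third cumulant of $\tilde{R}$; a skewness-aware deviation measure retains the $\kappa_{3}$ term that mean-variance analysis drops.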
A prediction rule in binary classification that aims to achieve the lowest probability of misclassification involves minimizing a non-convex, 0-1 loss function, which is typically a computationally intractable optimization problem. To address the intractability, previous methods consider minimizing the cumulative loss, the sum of convex surrogates of the 0-1 loss of each sample. We revisit this paradigm and instead develop an axiomatic framework by proposing a set of salient properties on functions for binary classification, and then propose the coherent loss approach, a tractable upper bound of the empirical classification error over the entire sample set. We show that the proposed approach yields a strictly tighter approximation to the empirical classification error than any convex cumulative loss approach while preserving the convexity of the underlying optimization problem, and that this approach for binary classification also has a robustness interpretation that builds a connection to robust SVMs.