Melvyn Sim | National University of Singapore
Papers by Melvyn Sim
Operations Research
A new study in the INFORMS journal Operations Research proposes a data-driven model for conducting strategic workforce planning in organizations. The model optimizes recruitment and promotions by balancing the risks of not meeting headcount, budget, and productivity constraints, while keeping within a prescribed organizational structure. Analysis using the model indicates that increased workforce risks are faced by organizations that are not in a state of growth or that face limitations to organizational renewal (such as bureaucracies).
Operations Research eJournal, 2021
The COVID-19 pandemic has brought many countries to their knees, and the urgency to return to normalcy has never been greater. Epidemiological models, such as the SEIR compartmental model, are indispensable tools for, among other things, predicting how a pandemic may spread over time and how vaccinations and different public health interventions could affect the outcome. However, deterministic epidemiological models do not reflect the stochastic nature of the actual infected populations, for which the true distribution can never be determined precisely. When embedded in an optimization model, the impact of ambiguous risk can influence the desired outcomes of the mitigating strategy. To address these issues, we first propose a robust epidemiological model, which provides prediction intervals that are specified by the Aumann and Serrano (2008) riskiness index. With suitable approximations, the robust epidemiological optimization model that minimizes the riskiness index can be formulated a...
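The SEIR compartmental model mentioned above can be sketched in a few lines. This is the standard deterministic version with illustrative parameter values, not the paper's robust formulation:

```python
def seir_step(s, e, i, r, beta, sigma, gamma, dt=1.0):
    """One forward-Euler step of the deterministic SEIR model.
    beta: transmission rate, sigma: incubation rate, gamma: recovery rate."""
    new_exposed = beta * s * i * dt       # susceptible -> exposed
    new_infectious = sigma * e * dt       # exposed -> infectious
    new_recovered = gamma * i * dt        # infectious -> recovered
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_recovered,
            r + new_recovered)

def simulate_seir(days, beta=0.3, sigma=0.2, gamma=0.1, i0=1e-3):
    """Simulate population fractions (S, E, I, R) over the given horizon."""
    s, e, i, r = 1.0 - i0, 0.0, i0, 0.0
    trajectory = [(s, e, i, r)]
    for _ in range(days):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma)
        trajectory.append((s, e, i, r))
    return trajectory
```

A stochastic or robust variant would replace this single point trajectory with prediction intervals, which is what the paper does via the riskiness index.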
We consider an appointment system where heterogeneous participants are sequenced and scheduled for service. As service times are uncertain, the aim is to mitigate the unpleasantness experienced by the participants in the system when their waiting times or delays exceed acceptable thresholds, and to address fairness concerning the balancing of service levels among participants. In evaluating uncertain delays, we propose the Delay Unpleasantness Measure (DUM), which takes into account the frequency and intensity of delays above a threshold, and introduce the concept of lexicographic min-max fairness to design appointment systems from the perspective of the worst-off participants. We focus our study on the context of outpatient clinics, balancing doctors' overtime and patients' waiting times, in which patients are distinguished by their service time characterizations. The model can be adapted in the robust setting when the underlying probability distribution is not fully available. To capture...
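The two ingredients that a threshold-based delay measure combines, how often delays exceed the threshold and by how much, can be sketched as follows. The function name and the returned pair are illustrative assumptions, not the paper's exact DUM:

```python
def delay_score(delays, threshold):
    """Illustrative delay-unpleasantness-style summary: the frequency of
    delays above the threshold and their expected excess (intensity)."""
    excess = [max(d - threshold, 0.0) for d in delays]
    frequency = sum(1 for x in excess if x > 0) / len(delays)
    intensity = sum(excess) / len(delays)
    return frequency, intensity
```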
We present a unified and tractable framework for distributionally robust optimization that can encompass a variety of statistical information including, among other things, constraints on expectation, conditional expectation, and disjoint confidence sets with uncertain probabilities defined by φ-divergence. In particular, we also show that the Wasserstein-based ambiguity set has an equivalent formulation via our proposed ambiguity set, which enables us to tractably approximate a Wasserstein-based distributionally robust optimization problem with recourse. To address a distributionally robust optimization problem with recourse, we introduce the tractable adaptive recourse scheme (TARS), which is based on the classical linear decision rule and can also be applied in situations where the recourse decisions are discrete. We demonstrate the effectiveness of TARS in our computational study on a multi-item newsvendor problem.
Inspired by the principle of satisficing (Simon 1955), Long et al. (2021) propose an alternative framework for optimization under uncertainty, which we term a robust satisficing model. Instead of sizing the uncertainty set as in robust optimization, the robust satisficing model is specified by a target objective, with the aim of delivering the solution that is least impacted by uncertainty in achieving the target. At the heart of this framework, we minimize the level of constraint violation under all possible realizations within the support set. Our framework is based on a constraint function that evaluates to the optimal objective value of a standard conic optimization problem, which can be used to model a wide range of constraint functions that are convex in the decision variables but can be either convex or concave in the uncertain parameters. We derive an exact semidefinite optimization formulation when the constraint is biconvex quadratic with quadratic penalty and the support s...
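A scenario-based sketch of the robust satisficing criterion: given a target, find the smallest fragility level k such that loss(x, z) ≤ target + k·|z − ẑ| for every realization z. This is a toy discretized version; the paper works with conic constraint functions and continuous supports, and the `loss` function, scenarios, and distance below are assumptions for illustration:

```python
def satisficing_level(x, scenarios, nominal, loss, target):
    """Smallest k with loss(x, z) <= target + k * |z - nominal| for all z.
    Returns inf if the target is already violated at the nominal scenario."""
    k = 0.0
    for z in scenarios:
        violation = loss(x, z) - target
        if violation > 0:
            dist = abs(z - nominal)
            if dist == 0:
                return float('inf')
            k = max(k, violation / dist)
    return k
```

The robust satisficing solution is then the decision x minimizing this level, e.g. by grid search in a one-dimensional toy problem.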
We propose tractable replenishment policies for a multi-period, single-product inventory control problem under ambiguous demands, that is, where only limited information about the demand distributions, such as mean, support, and deviation measures, is available. We obtain the parameters of the tractable replenishment policies by solving a deterministic optimization problem in the form of a second-order cone optimization problem (SOCP). Our framework extends to correlated demands and is developed around a factor-based model, which has the ability to incorporate business factors as well as time-series forecast effects of trend, seasonality, and cyclic variations. Computational results show that with correlated demands, our model outperforms a state-independent base-stock policy derived from dynamic programming and an adaptive myopic policy.
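The base-stock benchmark used in the comparison can be sketched as follows; zero replenishment lead time and the cost parameters are illustrative assumptions:

```python
def base_stock_cost(demands, level, holding=1.0, backlog=4.0):
    """Average per-period cost of an order-up-to (base-stock) policy:
    each period, inventory is raised back to `level` before demand arrives."""
    total = 0.0
    for d in demands:
        inventory = level - d  # position after demand, zero lead time assumed
        total += holding * max(inventory, 0.0) + backlog * max(-inventory, 0.0)
    return total / len(demands)
```

Searching over `level` gives the best static base-stock policy for a demand sample; the paper's factor-based policies instead adapt the order quantity to observed demand history.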
Stochastic optimization, especially multistage models, is well known to be computationally excruciating. Moreover, such models require exact specifications of the probability distributions of the underlying uncertainties, which are often unavailable. In this paper, we propose tractable methods of addressing a general class of multistage stochastic optimization problems, which assume only limited information of the distributions of the underlying uncertainties, such as known mean, support and covariance. One basic idea of our methods is to approximate the recourse decisions via decision rules. We first examine linear decision rules in detail and show that even for problems with complete recourse, linear decision rules can be inadequate and even lead to infeasible instances. Hence, we propose several new decision rules that improve upon linear decision rules, while keeping the approximate models computationally tractable. Specifically, our approximate models are in the forms of the so...
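A minimal illustration of why linear decision rules can be conservative (a toy example, not from the paper): if the recourse must satisfy y(z) ≥ |z| on z ∈ [−1, 1], the fully adaptive rule pays E[|z|], but any feasible linear rule y(z) = y0 + y1·z must have y0 ≥ 1, since feasibility at the endpoints z = ±1 forces y0 + y1 ≥ 1 and y0 − y1 ≥ 1:

```python
def best_linear_rule_cost(samples):
    """Expected cost of the cheapest linear rule y(z) = y0 + y1*z feasible for
    y(z) >= |z| on [-1, 1]. Endpoint feasibility forces y0 >= 1 + |y1|, so for
    symmetric samples (mean zero) the optimum is y0 = 1, y1 = 0."""
    y0, y1 = 1.0, 0.0
    return sum(y0 + y1 * z for z in samples) / len(samples)

def adaptive_rule_cost(samples):
    """Expected cost of the fully adaptive recourse y(z) = |z|."""
    return sum(abs(z) for z in samples) / len(samples)
```

The gap between the two costs is the price of restricting recourse to a linear rule, which motivates the richer decision rules proposed in the paper.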
In this paper, we axiomatize a target-based model of choice that allows decision makers to be both risk averse and risk seeking, depending on the payoff's position relative to a prespecified target. The approach can be viewed as a hybrid model, capturing in spirit two celebrated ideas: first, the satisficing concept of Simon (1955); second, the switch between risk aversion and risk seeking popularized by the prospect theory of Kahneman and Tversky (1979). Our axioms are simple and intuitive; in order to be implemented in practice, our approach requires only the specification of an aspiration level. We show that this approach is dual to a known approach using risk measures, thereby allowing us to connect to existing theory. Though our approach is intended to be normative, we also show that it resolves the classical examples of Allais (1953) and Ellsberg (1961).
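The switch between risk attitudes around a target can be illustrated with a simple S-shaped value function. This is a prospect-theory-style sketch, not the paper's axiomatized model, and the square-root shape is an assumption:

```python
import math

def target_value(payoff, target):
    """Concave above the target (risk averse for gains), convex below it
    (risk seeking for losses)."""
    x = payoff - target
    return math.sqrt(x) if x >= 0 else -math.sqrt(-x)
```

Above the target, a sure payoff beats a fair gamble with the same mean; below the target, the preference flips.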
SSRN Electronic Journal
We present a general framework for data-driven optimization called robustness optimization that favors solutions for which a risk-aware objective function would best attain an acceptable target even when the actual probability distribution deviates from the empirical distribution. Unlike robust optimization approaches, the decision maker does not have to size the ambiguity set, but instead specifies an acceptable target, or loss of optimality compared to the empirical optimization model, as a trade-off for the model's ability to withstand greater uncertainty. We axiomatize the decision criterion associated with robustness optimization, termed the fragility measure, and present its representation theorem. Focusing on the Wasserstein distance measure with the l1-norm, we present tractable robustness optimization models for risk-based linear optimization, combinatorial optimization, and linear optimization problems with recourse. Serendipitously, the insights from the approximation also provide a recipe for approximating solutions for hard stochastic optimization problems without relatively complete recourse. We illustrate, in a portfolio optimization problem and a network lot-sizing problem, how we can set targets in the robustness optimization model, which can be more intuitive and effective than specifying the hyper-parameter used in a robust optimization model. The numerical studies show that the solutions to the robustness optimization models are more effective in alleviating the Optimizer's Curse (Smith and Winkler 2006) by improving the out-of-sample performance evaluated on a variety of metrics.
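On the real line with equal-size samples, the type-1 Wasserstein distance reduces to the mean absolute difference of sorted samples, which makes the deviation between an empirical and a shifted distribution easy to compute. This is a one-dimensional special case for intuition, not the paper's general model:

```python
def wasserstein_1d(xs, ys):
    """Type-1 (l1) Wasserstein distance between two equal-size empirical
    distributions on the real line: match sorted samples pairwise."""
    if len(xs) != len(ys):
        raise ValueError("expects equal-size samples")
    return sum(abs(a - b) for a, b in zip(sorted(xs), sorted(ys))) / len(xs)
```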
Mathematical Programming
Motivated by the binary classification problem in machine learning, we study in this paper a class of decision problems where the decision maker has a list of goals, from which he aims to attain the maximal possible number of goals. In binary classification, this essentially means seeking a prediction rule to achieve the lowest probability of misclassification, and computationally it involves minimizing a (difficult) non-convex, 0–1 loss function. To address the intractability, previous methods consider minimizing the cumulative loss, the sum of convex surrogates of the 0–1 loss of each goal. We revisit this paradigm and develop instead an axiomatic framework by proposing a set of salient properties on functions for goal scoring, and then propose the coherent loss approach, which is a tractable upper bound of the loss over the entire set of goals. We show that the proposed approach yields a strictly tighter approximation to the total loss (i.e., the number of missed goals) than any convex cumulative loss approach while preserving the convexity of the underlying optimization problem. Moreover, this approach, applied to binary classification, also has a robustness interpretation which builds a connection to robust SVMs.
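The surrogate idea the abstract contrasts with can be made concrete: the cumulative hinge loss is a convex upper bound on the number of missed goals (the 0–1 loss), computed per goal from classification margins:

```python
def zero_one_total(margins):
    """Number of missed goals: a non-positive margin counts as a miss."""
    return sum(1 for m in margins if m <= 0)

def hinge_total(margins):
    """Convex cumulative surrogate: sum of per-goal hinge losses max(0, 1 - m)."""
    return sum(max(0.0, 1.0 - m) for m in margins)
```

The coherent loss proposed in the paper tightens this further by bounding the loss over the whole set of goals jointly rather than summing per-goal surrogates.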
SSRN Electronic Journal
Bed shortages in hospitals usually have a negative impact on patient satisfaction and medical outcomes. In practice, healthcare managers often use the bed occupancy rate (BOR) as a metric to understand bed utilization, but it is insufficient for capturing the risk of bed shortages. We propose the bed shortage index (BSI) to capture more facets of bed shortage risk than traditional metrics such as the occupancy rate, the probability of shortages, and expected shortages. The BSI is based on the well-known Aumann and Serrano (2008) riskiness index, and it is calibrated to coincide with the BOR when the daily arrivals in the hospital unit are Poisson distributed. Our metric can be tractably computed and does not require additional assumptions or approximations. As such, it can be consistently used across descriptive, predictive, and prescriptive analytical approaches. We also propose optimization models to plan for bed capacity via this metric. These models can be efficiently solved on a large scale via a sequence of linear optimization problems. The first maximizes total elective throughput while keeping the metric under a specified threshold. The second determines the optimal scheduling policy by lexicographically minimizing the steady-state daily BSI for a given number of scheduled admissions. We validate these models using real data from a hospital and test them against data-driven simulations. We apply these models to study the real-world problem of long stayers, which arises from an aging population, and to predict the impact of transferring them to community hospitals.
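For a discrete gamble g with positive mean and some chance of loss, the Aumann and Serrano (2008) riskiness index is the unique R > 0 solving E[exp(−g/R)] = 1. A bisection sketch on λ = 1/R:

```python
import math

def riskiness_index(outcomes, probs):
    """Aumann-Serrano riskiness index of a discrete gamble: the unique R > 0
    with E[exp(-g / R)] = 1. Assumes E[g] > 0 and P(g < 0) > 0, so the
    moment function m(lam) = E[exp(-lam * g)] dips below 1 and then
    crosses it exactly once at lam* = 1 / R."""
    def m(lam):
        return sum(p * math.exp(-lam * g) for g, p in zip(outcomes, probs))
    lo, hi = 0.0, 1.0
    while m(hi) < 1.0:            # expand until the crossing is bracketed
        hi *= 2.0
    for _ in range(200):          # bisection on lam
        mid = 0.5 * (lo + hi)
        if m(mid) < 1.0:
            lo = mid
        else:
            hi = mid
    return 1.0 / (0.5 * (lo + hi))
```

A higher R means a riskier gamble; the BSI calibrates this index so that it coincides with the occupancy rate under Poisson arrivals.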