J. Chinneck - Academia.edu
Papers by J. Chinneck
International Series in Operations Research & Management Science
Computer-Aided Design (CAD) in Electrical and Computer Engineering abounds with modeling, simulation and optimization challenges that are familiar to operations researchers. Many of these problems are on a far larger scale and of much greater complexity than usual (millions of variables and constraints), so CAD researchers have of necessity developed techniques for approximation and decomposition in order to cope. The goal of this article is to bridge the gap between the two communities so that each can learn from the other. We briefly review some of the most common O.R.-related problems in CAD, and sketch some CAD techniques for handling problems of extreme scale. We also mention emerging CAD topics that are in particular need of assistance from operations researchers.
European Journal of Operational Research, 2016
Communication links connect pairs of wireless nodes in a wireless network. Links can interfere with each other due to their proximity and transmission power if they use the same frequency channel. Given that a frequency channel is the most important and scarce resource in a wireless network, we wish to minimize the total number of different frequency channels used. We can assign the same channel to multiple different links if the assignment is done in a way that avoids co-channel interference. Given a conflict graph which shows conflicts between pairs of links if they are assigned the same frequency channel, assigning channels to links can be cast as a minimum coloring problem. However, the coloring problem is complicated by the fact that acceptably small levels of interference between pairs of links using the same channel can accumulate to cause an unacceptable level of total interference at a given link. In this paper we develop fast and effective methods for frequency channel assignment in multi-hop wireless networks via new heuristics for solving this extended coloring problem. The heuristics are orders of magnitude faster than an exact solution method while consistently returning near-optimum results.
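The complication described above can be illustrated with a small sketch. This is not the paper's algorithm; it is a minimal greedy scheme with hypothetical data and parameter names that captures the key point: interference accumulates, so a channel can only be reused while every link already on it stays under a tolerance.

```python
# Greedy sketch of the extended coloring problem: links sharing a channel
# are acceptable only while the *accumulated* interference at each link
# stays below a tolerance. Data and names are illustrative, not the
# paper's exact heuristic.

def assign_channels(interference, tolerance):
    """interference[i][j]: interference that link j imposes on link i.
    Returns (per-link channel index, number of channels used)."""
    n = len(interference)
    channels = []           # channels[c] = set of links assigned channel c
    assignment = [None] * n
    for link in range(n):   # consider links in index order
        for c, members in enumerate(channels):
            # Interference the new link would receive from current members...
            ok = sum(interference[link][m] for m in members) <= tolerance
            # ...and the total each member would receive with the new link added.
            ok = ok and all(
                sum(interference[m][k] for k in members if k != m)
                + interference[m][link] <= tolerance
                for m in members
            )
            if ok:
                members.add(link)
                assignment[link] = c
                break
        else:
            channels.append({link})   # no channel fits: open a new one
            assignment[link] = len(channels) - 1
    return assignment, len(channels)

# Tiny example: links 0 and 1 interfere strongly; link 2 is benign.
I = [[0.0, 0.9, 0.1],
     [0.9, 0.0, 0.1],
     [0.1, 0.1, 0.0]]
assignment, used = assign_channels(I, tolerance=0.5)
print(assignment, used)
```

A plain coloring formulation would only look at pairwise conflicts; the per-member accumulation test is what makes this the extended problem.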
2005 International Conference on Wireless Networks, Communications and Mobile Computing
Wireless networks are currently evolving to provide access to interactive multimedia and video conferencing in addition to traditional services such as voice, email and web access. These new applications demand large amounts of network bandwidth to achieve the highest levels of quality. However, some customers may be content to pay less in exchange for a lower Quality of Service (QoS). Accordingly, charging and resource management policies that control the QoS provided to existing calls must evolve to explicitly consider the trade-off among service pricing, offered QoS, and the customer's QoS perception. This paper proposes a novel charging and resource management policy that allocates resources to customers based on their QoS satisfaction levels, thereby charging fairly while improving resource allocation efficiency.
Index Terms: Wireless, Multimedia, Charging, QoS, Resource Allocation, Call Admission Control
I. INTRODUCTION. A proper charging and resource management (RM) policy is crucial to the successful deployment of telecommunication networks. Such policies allocate network resources and recover costs fairly and competitively from the diverse population of customers. By tuning the charging and RM policy, a service provider can attract new customers, compete with other service providers, and introduce new services and promotions. Hence, it is widely believed that an appropriate charging policy should provide incentives for customers to behave in ways that improve overall utilization and performance of the network [1]. Due to the limited resources of wireless networks, a new class of adaptive QoS multimedia applications has been introduced for next generation wireless networks to replace traditional multimedia applications that require a guaranteed...
Manuscript received May 1, 2005.
ORSA Journal on Computing, 1991
With ongoing advances in hardware and software, the bottleneck in linear programming is no longer solving the model; it is the correct formulation of large models in the first place. During initial formulation (or modification), a very large model may prove infeasible, but it is often difficult to determine how to correct it. We present a formulation aid which analyzes infeasible LPs and identifies minimal sets of inconsistent constraints from among the perhaps very large set of constraints defining the problem. This information helps to focus the search for a diagnosis of the problem, speeding the repair of the model. We present a series of filtering routines and a final integrated algorithm which guarantees the identification of at least one minimal set of inconsistent constraints. This guarantee is a significant advantage over previous methods. The algorithms are simple, relatively efficient, and easily incorporated into standard LP solvers. Preliminary computational results are r...
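One classic filtering routine of the kind referred to above is the deletion filter, which is worth a small sketch. This is a minimal illustration, not the paper's exact algorithm: here the "constraints" are simple bounds on a single scalar, whereas a real implementation would call an LP solver for each feasibility test.

```python
# Sketch of a deletion filter for isolating a minimal set of inconsistent
# constraints. Each constraint is ('>=', a) or ('<=', b) on a scalar x;
# this toy feasibility test stands in for a full LP solve.

def feasible(constraints):
    """Feasible iff the implied interval [lo, hi] for x is nonempty."""
    lo, hi = float('-inf'), float('inf')
    for sense, val in constraints:
        if sense == '>=':
            lo = max(lo, val)
        else:
            hi = min(hi, val)
    return lo <= hi

def deletion_filter(constraints):
    """Drop each constraint in turn; discard it permanently only if the
    rest stay infeasible. What survives is irreducibly infeasible: every
    remaining constraint is essential to the inconsistency."""
    assert not feasible(constraints)
    kept = list(constraints)
    i = 0
    while i < len(kept):
        trial = kept[:i] + kept[i + 1:]
        if feasible(trial):
            i += 1          # constraint i is needed for infeasibility: keep it
        else:
            kept = trial    # still infeasible without it: drop it
    return kept

# x >= 5 and x <= 3 conflict; the other two constraints are innocent.
cons = [('>=', 5), ('<=', 3), ('>=', 1), ('<=', 10)]
iis = deletion_filter(cons)
print(iis)
```

The filter needs one feasibility test per constraint, which is why fast filtering routines matter when the constraint set is very large.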
Proceedings of the 5th ACM/SPEC international conference on Performance engineering, 2014
Current cloud management systems have limited awareness of the user application, and application managers have no awareness of the state of the cloud. For applications with strong real-time requirements, distributed across new multi-cloud environments, this lack of awareness hampers response-time assurance, efficient deployment and rapid adaptation to changing workloads. This paper considers what forms this awareness may take, how it can be exploited in managing the applications and the clouds, and how it can influence cloud architecture.
EURASIP Journal on Wireless Communications and Networking, 2013
We study the problem of achieving maximum network throughput with fairness among the flows at the nodes in a wireless mesh network, given their location and the number of their half-duplex radio interfaces. Our goal is to find the minimum number of non-overlapping frequency channels required to achieve interference-free communication. We use our existing "Select x for less than x" topology control algorithm (TCA) to build the connectivity graph (CG), which enhances spatial channel reuse to help minimize the number of channels required. We show that the TCA-based CG approach requires fewer channels than the classical approach of building the CG based on the maximum power. We use multi-path routing to achieve the maximum network throughput and show that it provides better network throughput than the classical minimum power-based shortest path routing. We also develop an effective heuristic method to determine the minimum number of channels required for interference-free channel assignment.
Ad Hoc Networks, 2015
We study the impact of three different interference models on channel assignment in multi-radio multi-channel wireless mesh networks, namely the protocol model, the signal-to-interference ratio (SIR) model and the SIR model with shadowing. The main purpose is to determine the minimum number of non-overlapping frequency channels required to achieve interference-free communication among the mesh nodes based on a realistic interference model. We propose novel, effective, and computationally simple methods for building the conflict graph based on the SIR model with shadowing, and for finding channel assignments from the resulting conflict graph. We find that channel assignment under the realistic interference model (the SIR model with shadowing) requires more frequency channels than the simpler interference models do to reach the same network throughputs under different node-degree constraints.
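Building a conflict graph from an SIR model can be sketched briefly. This is an illustrative simplification (a plain distance-based path-loss gain, no shadowing term, hypothetical thresholds), not the paper's method: two co-channel links conflict when the SIR at either receiver falls below a threshold.

```python
# Sketch: SIR-based conflict graph construction. Two links conflict if,
# sharing a channel, the signal-to-interference ratio at either receiver
# drops below a threshold. All values and names are illustrative.

import math

def path_gain(d, exponent=3.0):
    """Simple distance-based path-loss gain (no shadowing term here)."""
    return d ** -exponent

def conflict_graph(links, tx_power=1.0, sir_threshold=10.0):
    """links: list of (tx_pos, rx_pos) pairs in the plane.
    Returns the set of conflicting link pairs as frozensets {i, j}."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    conflicts = set()
    for i in range(len(links)):
        for j in range(i + 1, len(links)):
            # Check the SIR at each link's receiver against the other's tx.
            for a, b in ((i, j), (j, i)):
                tx_a, rx_a = links[a]
                tx_b, _ = links[b]
                signal = tx_power * path_gain(dist(tx_a, rx_a))
                interf = tx_power * path_gain(dist(tx_b, rx_a))
                if signal / interf < sir_threshold:
                    conflicts.add(frozenset((a, b)))
    return conflicts

# Two short parallel links close together conflict; a distant link does not.
links = [((0, 0), (1, 0)), ((0, 1), (1, 1)), ((0, 50), (1, 50))]
print(conflict_graph(links))
```

Adding a shadowing term, as the paper does, perturbs each gain randomly, so conflicts can appear between links that a pure distance model would declare safe.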
2014 IEEE Wireless Communications and Networking Conference (WCNC), 2014
In a classical multi-radio multi-channel Wireless Mesh Network (WMN) architecture, mesh nodes use omni-directional antennas. Due to the circular radiation pattern of such antennas, when a mesh node communicates with its neighbor on a certain frequency channel, other mesh nodes within its range must remain silent. Directional antennas have been proposed as a way to improve spatial reuse. Since these antennas are non-steerable, they are not suitable for a dynamic WMN. In this paper, we address the problem of co-channel interference in a dynamic WMN environment by using beamforming based on utilizing the multiple omni-directional antennas of a multi-radio mesh node in the form of a linear antenna array. Our novel Linear Array Beamforming-based Channel Assignment method reduces the number of frequency channels required (NCR) for interference-free communication among the mesh nodes. It significantly outperforms the classical omni-directional antenna pattern-based channel assignment approach in terms of NCR for all node-degrees.
Omega, 1995
The Forest Management Problem of determining when and how much to plant and harvest, to convert tree species, to spray pesticides, and to take other silvicultural actions is a large scale optimization problem that is often formulated and solved as a linear program. Unfortunately, the algebraic representation of the model is often unintelligible to the practicing forest managers who ought to be able to manipulate the model in order to set up and examine various 'what-if' scenarios. We propose a solution to this communication problem: the time-expanded processing network. The goal of the network diagram is to clearly convey the underlying structure, limitations, and assumptions to both mathematical programming experts and nontechnical forest managers. The linear programming formulation then follows directly from the diagram. The paper illustrates the model-building procedure by constructing processing network models for common forest behaviours including growth, fire, and pest infestation, and for common management actions including conversion harvests, pest spraying, and nondeclining yield constraints.
Naval Research Logistics, 1992
Nonviable network models have edges which are forced to zero flow simply by the pattern of interconnection of the nodes. The original nonviability diagnosis algorithm [4] is extended here to cover all classes of network models, including pure, generalized, pure processing, nonconserving processing, and generalized processing. The extended algorithm relies on the conversion of all network forms to a pure processing form. Efficiency improvements to the original algorithm are also presented.
Naval Research Logistics, 1990
A processing network contains at least one processing node; such a node is constrained by fixed ratios of flow in its terminals. Though fast solution algorithms are available, processing network models are difficult to formulate in the first place so that meaningful results are obtained. Until now, there have been no automated aids for use during the model-building phase. This article develops the theory of model viability and presents the first such aid: an algorithm for the identification and localization of a class of errors which can occur during model formulation.
1. Assemble a network by defining and interconnecting processing and regular nodes.
2. Inspect the network for viability errors. Detect and correct any such errors.
3. Further constrain the network by adding bounds on the edge flow rates (simple or generalized, upper or lower bounds).
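The fixed-ratio property of a processing node can be made concrete with a tiny sketch. This is an illustration of the concept only, with hypothetical names and numbers: one scalar activity level determines the flow on every terminal of the node.

```python
# Sketch of a processing node's fixed-ratio constraint: each terminal's
# flow is a fixed multiple of the node's single activity level, so one
# scalar drives all terminal flows. Names and ratios are illustrative.

def terminal_flows(activity, ratios):
    """ratios maps terminal name -> fixed flow proportion per unit activity."""
    return {terminal: activity * r for terminal, r in ratios.items()}

# A hypothetical mill node: 3 units of logs plus 1 unit of energy in,
# 2 units of lumber out, per unit of activity.
ratios = {'logs_in': 3.0, 'energy_in': 1.0, 'lumber_out': 2.0}
print(terminal_flows(10.0, ratios))
```

It is exactly this coupling that makes viability subtle: bounds or interconnections on any one terminal propagate through the ratios to every other terminal, and can silently force edges to zero flow.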
The Journal of the Operational Research Society, 1996
As linear programs have grown larger and more complex, infeasible models are appearing more frequently. Because of the scale and complexity of the models, automated assistance is very often needed in determining the cause of the infeasibility so that model repairs can be made. Fortunately, researchers have developed algorithms for analysing infeasible LPs in recent years, and these have lately found their way into commercial LP computer codes. This paper briefly reviews the underlying algorithms, surveys the computer codes, and compares their performance on a set of test problems.
INFORMS Journal on Computing, 1999
Algorithms and computer-based tools for analyzing infeasible linear and nonlinear programs have been developed in recent years, but few such tools exist for infeasible mixed-integer or integer linear programs. One approach that has proven especially useful for infeasible linear programs is the isolation of an Irreducible Infeasible Set of constraints (IIS), a subset of the constraints defining the overall linear program that is itself infeasible, but for which any proper subset is feasible. Isolating an IIS from the larger model speeds the diagnosis and repair of the model by focussing the analytic effort. This paper describes and tests algorithms for finding small infeasible sets in infeasible mixed-integer and integer linear programs; where possible these small sets are IISs.
INFORMS Journal on Computing, 1997
Infeasibility is often encountered during the process of initial model formulation or reformulation, and it can be extremely difficult to diagnose the cause, especially in large linear programs. While explanation of the error is the domain of humans or artificially intelligent assistants, algorithmic assistance is available to isolate the infeasibility to a subset of the constraints, which helps speed the diagnosis. The isolation should be infeasible, of course, and should not contain any constraints which do not contribute to the infeasibility. Algorithms for finding such irreducible inconsistent systems (IISs) of constraints have been proposed, implemented, and tested in recent years. Experience with IISs shows that a further property of the isolation is highly desirable for easing diagnosis: the isolation should contain as few model rows as possible. This article addresses the problem of finding IISs having few rows in infeasible linear programs. Theory is developed, then impleme...
INFORMS Journal on Computing, 2004
This paper develops a method for moving quickly and cheaply from an arbitrary initial point at an extreme distance from the feasible region to a point that is relatively near the feasible region of a nonlinearly constrained model. The method is a variant of a projection algorithm that is shown to be robust, even in the presence of nonconvex constraints and infeasibility. Empirical results are presented.
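The projection idea underlying such methods can be sketched in its simplest form. This is not the paper's algorithm, which handles nonlinear and nonconvex constraints; the sketch below cyclically projects onto linear halfspaces, the textbook setting where each projection has a closed form.

```python
# Sketch of cyclic orthogonal projection onto halfspaces a.x <= b: from a
# distant starting point, repeatedly project onto each violated constraint.
# A minimal illustration of the projection-method family, not the paper's
# variant for nonlinear constraints.

def project_halfspace(x, a, b):
    """Orthogonal projection of x onto {y : a.y <= b}."""
    viol = sum(ai * xi for ai, xi in zip(a, x)) - b
    if viol <= 0:
        return x                       # already satisfied: no move
    norm2 = sum(ai * ai for ai in a)
    return [xi - viol * ai / norm2 for ai, xi in zip(a, x)]

def cyclic_projection(x, constraints, sweeps=50):
    """Sweep through the constraints repeatedly, projecting onto each."""
    for _ in range(sweeps):
        for a, b in constraints:
            x = project_halfspace(x, a, b)
    return x

# Feasible region: the box 0 <= x1 <= 2, 0 <= x2 <= 2.
cons = [([1, 0], 2), ([0, 1], 2), ([-1, 0], 0), ([0, -1], 0)]
x = cyclic_projection([100.0, -50.0], cons)
print(x)
```

For intersecting convex sets this converges to a feasible point; with nonconvex constraints or an infeasible model, convergence is not guaranteed, which is the robustness challenge the paper addresses.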
INFORMS Journal on Computing, 2006
INFORMS Journal on Computing, 2009
This issue includes a Special Cluster of seven papers on the topic of High-Throughput Optimization. The concept for the cluster arose from a session that I organized on this topic for the INFORMS Annual Meeting in Pittsburgh in 2006. A call for papers for a Special Issue of the INFORMS Journal on Computing was issued in 2007, but before the due date for the papers arrived, I was appointed Editor-in-Chief of JOC. For this reason, the day-to-day handling of the papers devolved to a number of the JOC Area Editors: Karen Aardal, Bob Fourer, Harvey Greenberg, and John Hooker. It is these four who should be recognized as the Guest Editors for this Special Cluster. The motivation for the Special Cluster is the observation that parallel computing is no longer limited to very expensive high-end computers, making it the special preserve of well-funded agencies and companies. Most new personal computers have two or even four cores. Most computers are connected via the Internet to many other computers. The opportunity to take advantage of massive and inexpensive parallel computing is here. At the same time, there are numerous difficult and complex problems in optimization that could benefit from high-throughput computing. High-throughput optimization is the result. This is generally described as systems for solving optimization problems that require large amounts of computing resources over lengthy time periods. Bussieck, Ferris, and Meeraus open the Special Cluster with the paper "Grid-Enabled Optimization with GAMS," which describes how the GAMS modeling system has been extended to allow optimization to take place on a loosely coupled grid of heterogeneous computing resources.
Michel, See, and Van Hentenryck then describe a system for constraint programming that exploits parallelism transparently without changes to the sequential code in "Transparent Parallelization of Constraint Programming." When a model cannot be automatically decomposed for a parallel solution, then specialized algorithms are needed for particular applications. Xu, Ralphs,
Expert Systems with Applications, 2012
The process of placing a separating hyperplane for data classification is normally disconnected from the process of selecting the features to use. An approach for feature selection that is conceptually simple but computationally explosive is to simply apply the hyperplane placement process to all possible subsets of features, selecting the smallest set of features that provides reasonable classification accuracy. Two ways to speed this process are (i) use a faster filtering criterion instead of a complete hyperplane placement, and (ii) use a greedy forward or backward sequential selection method. This paper introduces a new filtering criterion that is very fast: maximizing the drop in the sum of infeasibilities in a linear-programming transformation of the problem. It also shows how the linear programming transformation can be applied to reduce the number of features after a separating hyperplane has already been placed while maintaining the separation that was originally induced by the hyperplane. Finally, a new and highly effective integrated method that simultaneously selects features while placing the separating hyperplane is introduced.
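The "sum of infeasibilities" criterion can be made concrete with a small sketch. This only evaluates the quantity for a given hyperplane on hypothetical data; the paper minimizes it over hyperplanes via an LP, which is not reproduced here.

```python
# Sketch of the sum-of-infeasibilities measure for a candidate hyperplane
# w.x = t: each point contributes the amount by which it falls on the
# wrong side of the required margin. Illustrative data; the paper
# minimizes this sum with a linear program rather than just evaluating it.

def sum_of_infeasibilities(points, labels, w, t, margin=1.0):
    """labels are +1/-1. Class +1 should satisfy w.x >= t + margin,
    class -1 should satisfy w.x <= t - margin; shortfalls are summed."""
    total = 0.0
    for x, y in zip(points, labels):
        score = sum(wi * xi for wi, xi in zip(w, x))
        if y > 0:
            total += max(0.0, t + margin - score)
        else:
            total += max(0.0, score - (t - margin))
    return total

points = [(2.0, 0.0), (3.0, 5.0), (-2.0, 1.0), (-3.0, -4.0)]
labels = [1, 1, -1, -1]
# A hyperplane using only the first feature separates this data...
print(sum_of_infeasibilities(points, labels, w=(1.0, 0.0), t=0.0))  # 0.0
# ...while one using only the second feature leaves a positive shortfall.
print(sum_of_infeasibilities(points, labels, w=(0.0, 1.0), t=0.0) > 0)
```

As a filtering criterion, the drop in this sum when a feature is allowed into w gives a cheap ranking of candidate features without re-solving the full hyperplane placement for every subset.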
International Series in Operations Research & Management Science
Computer-Aided Design (CAD) in Electrical and Computer Engineering abounds with modeling, simulat... more Computer-Aided Design (CAD) in Electrical and Computer Engineering abounds with modeling, simulation and optimization challenges that are familiar to operations researchers. Many of these problems are on a far larger scale and of much greater complexity than usual (millions of variables and constraints), so CAD researchers have of necessity developed techniques for approximation and decomposition in order to cope. The goal of this article is to bridge the gap between the two communities so that each can learn from the other. We briefly review some of the most common O.R.-related problems in CAD, and sketch some CAD techniques for handling problems of extreme scale. We also mention emerging CAD topics that are in particular need of assistance from operations researchers.
European Journal of Operational Research, 2016
Communication links connect pairs of wireless nodes in a wireless network. Links can interfere wi... more Communication links connect pairs of wireless nodes in a wireless network. Links can interfere with each other due to their proximity and transmission power if they use the same frequency channel. Given that a frequency channel is the most important and scarce resource in a wireless network, we wish to minimize the total number of different frequency channels used. We can assign the same channel to multiple different links if the assignment is done in a way that avoids co-channel interference. Given a conflict graph which shows conflicts between pairs of links if they are assigned the same frequency channel, assigning channels to links can be cast as a minimum coloring problem. However the coloring problem is complicated by the fact that acceptably small levels of interference between pairs of links using the same channel can accumulate to cause an unacceptable level of total interference at a given link. In this paper we develop fast and effective methods for frequency channel assignment in multi-hop wireless networks via new heuristics for solving this extended coloring problem. The heuristics are orders of magnitude faster than an exact solution method while consistently returning near-optimum results.
2005 International Conference on Wireless Networks, Communications and Mobile Computing
Wireless networks are currently evolving to provide access to interactive multimedia and video co... more Wireless networks are currently evolving to provide access to interactive multimedia and video conferencing in addition to traditional services such as voice, email and web access. These new applications demand large amounts of network bandwidth to achieve the highest levels of quality. However, some customers may content to pay less in exchange for a lower Quality of Service (QoS). Accordingly, charging and resource management policies that control the QoS provided to existing calls must evolve to explicitly consider the trade-off among the service pricing, offered QoS, and customer's QoS perception This paper proposes a novel charging and resource management policy that allocates resources to customers based on their QoS satisfaction levels, thereby charging fairly while improving resource allocation efficiency. Index Terms-Wireless, Multimedia, Charging, QoS, Resource Allocation, Call Admission Control I. INTRODUCTION A proper charging and resource management (RM) policy is crucial to the successful deployment of telecommunication networks. Such policies allocate network resources and recover costs fairly and competitively from the diverse population of customers. By tuning the charging and RM policy, a service provider can attract new customers, compete with other service providers, and introduce new services and promotions. Hence, it is widely believed that an appropriate charging policy should provide incentives for customers to behave in ways that improve overall utilization and performance of the network [I]. Due to the limited resources of wireless networks, a new class of adaptive QoS multimedia applications has been introduced for next generation wireless networks to replace traditional multimedia applications that require a guaranteed Manuscript received May 1, 2005.
ORSA Journal on Computing, 1991
With ongoing advances in hardware and software, the bottleneck in linear programming is no longer... more With ongoing advances in hardware and software, the bottleneck in linear programming is no longer a model solution, it is the correct formulation of large models in the first place. During initial formulation (or modification), a very large model may prove infeasible, but it is often difficult to determine how to correct it. We present a formulation aid which analyzes infeasible LPs and identifies minimal sets of inconsistent constraints from among the perhaps very large set of constraints defining the problem. This information helps to focus the search for a diagnosis of the problem, speeding the repair of the model. We present a series of filtering routines and a final integrated algorithm which guarantees the identification of at least one minimal set of inconsistent constraints. This guarantee is a significant advantage over previous methods. The algorithms are simple, relatively efficient, and easily incorporated into standard LP solvers. Preliminary computational results are r...
Proceedings of the 5th ACM/SPEC international conference on Performance engineering, 2014
Current cloud management systems have limited awareness of the user application, and application ... more Current cloud management systems have limited awareness of the user application, and application managers have no awareness of the state of the cloud. For applications with strong real-time requirements, distributed across new multi-cloud environments, this lack of awareness hampers response-time assurance, efficient deployment and rapid adaptation to changing workloads. This paper considers what forms this awareness may take, how it can be exploited in managing the applications and the clouds, and how it can influence cloud architecture.
EURASIP Journal on Wireless Communications and Networking, 2013
We study the problem of achieving maximum network throughput with fairness among the flows at the... more We study the problem of achieving maximum network throughput with fairness among the flows at the nodes in a wireless mesh network, given their location and the number of their half-duplex radio interfaces. Our goal is to find the minimum number of non-overlapping frequency channels required to achieve interference-free communication. We use our existing Select x for less than x topology control algorithm (TCA) to build the connectivity graph (CG), which enhances spatial channel reuse to help minimize the number of channels required. We show that the TCA-based CG approach requires fewer channels than the classical approach of building the CG based on the maximum power. We use multi-path routing to achieve the maximum network throughput and show that it provides better network throughput than the classical minimum power-based shortest path routing. We also develop an effective heuristic method to determine the minimum number of channels required for interference-free channel assignment.
Ad Hoc Networks, 2015
We study the impact of three different interference models on channel assignment in multi-radio m... more We study the impact of three different interference models on channel assignment in multi-radio multi-channel wireless mesh networks, namely the protocol model, the signal-to-interference ratio (SIR) model and the SIR model with shadowing. The main purpose is to determine the minimum number of non-overlapping frequency channels required to achieve interference-free communication among the mesh nodes based on a realistic interference model. We propose novel, effective, and computationally simple methods for building the conflict graph based on the SIR model with shadowing, and for finding channel assignments from the resulting conflict graph. We find that channel assignment using a realistic interference model (SIR model with shadowing) requires more frequency channels for network throughputs at different node-degree constraints as compared to using simpler interference models.
2014 IEEE Wireless Communications and Networking Conference (WCNC), 2014
In a classical multi-radio multi-channel Wireless Mesh Network (WMN) architecture, mesh nodes use... more In a classical multi-radio multi-channel Wireless Mesh Network (WMN) architecture, mesh nodes use omni-directional antennas. Due to the circular radiation pattern of such antennas, when a mesh node communicates with its neighbor on a certain frequency channel, other mesh nodes within its range must remain silent. Directional antennas have been proposed as a way to improve spatial reuse. Since these antennas are non-steerable, they are not suitable for a dynamic WMN. In this paper, we address the problem of co-channel interference in a dynamic WMN environment by using beamforming based on utilizing the multiple omni-directional antennas of a multi-radio mesh node in the form of a linear antenna array. Our novel Linear Array Beamforming-based Channel Assignment method reduces the number of frequency channels required (NCR) for interference-free communication among the mesh nodes. It significantly outperforms the classical omni-directional antenna pattern-based channel assignment approach in terms of NCR for all node-degrees.
Omega, 1995
The Forest Management Problem of determining when and how much to plant and harvest, to convert t... more The Forest Management Problem of determining when and how much to plant and harvest, to convert tree species, to spray pesticides, and to take other silvicultural actions is a large scale optimization problem that is often formulated and solved as a linear program. Unfortunately, the algebraic representation of the model is often unintelligible to the practicing forest managers who ought to be able to manipulate the model in order to set up and examine various 'what-if' scenarios. We propose a solution to this communication problem: the time-expanded processing network. The goal of the network diagram is to clearly convey the underlying structure, limitations, and assumptions to both mathematical programming experts and nontechnical forest managers. The linear programming formulation then follows directly from the diagram. The paper illustrates the model-building procedure by constructing processing network models for common forest behaviours including growth, fire, and pest infestation, and for common management actions including conversion harvests, pest spraying, and nondeclining yield constraints.
Naval Research Logistics, 1992
Nonviable network models have edges which are forced to zero flow simply by the pattern of interc... more Nonviable network models have edges which are forced to zero flow simply by the pattern of interconnection of the nodes. The original nonviability diagnosis algorithm [4] is extended here to cover all classes of network models, including pure, generalized, pure processing, nonconserving processing, and generalized processing. The extended algorithm relies on the conversion of all network forms to a pure processing form. Efficiency improvements to the original algorithm are also presented.
Naval Research Logistics, 1990
A processing network contains at least one processing node; such a node is constrained by fixed r... more A processing network contains at least one processing node; such a node is constrained by fixed ratios of flow in it's terminals. Though fast solution algorithms are available, processing network models are difficult to formulate in the first place so that meaningful results are obtained. Until now, there have been no automated aids for use during the model-building phase. This article develops the theory of model viability and presents the first such aid: an algorithm for the identification and localization of a class of errors which can occur during model formulation. 1. Assemble a network by defining and interconnecting processing and regular nodes. 2. Inspect the network for viability errors. Detect and correct any such errors. 3. Further constrain the network by adding bounds on the edge flow rates (simple or generalized, upper or lower bounds).
The Journal of the Operational Research Society, 1996
As linear programs have grown larger and more complex, infeasible models are appearing more frequently. Because of the scale and complexity of the models, automated assistance is very often needed in determining the cause of the infeasibility so that model repairs can be made. Fortunately, researchers have developed algorithms for analysing infeasible LPs in recent years, and these have lately found their way into commercial LP computer codes. This paper briefly reviews the underlying algorithms, surveys the computer codes, and compares their performance on a set of test problems.
INFORMS Journal on Computing, 1999
Algorithms and computer-based tools for analyzing infeasible linear and nonlinear programs have been developed in recent years, but few such tools exist for infeasible mixed-integer or integer linear programs. One approach that has proven especially useful for infeasible linear programs is the isolation of an Irreducible Infeasible Set of constraints (IIS), a subset of the constraints defining the overall linear program that is itself infeasible, but for which any proper subset is feasible. Isolating an IIS from the larger model speeds the diagnosis and repair of the model by focussing the analytic effort. This paper describes and tests algorithms for finding small infeasible sets in infeasible mixed-integer and integer linear programs; where possible these small sets are IISs.
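For the purely linear case, a standard way to isolate an IIS is a deletion filter: drop each constraint in turn, and keep the drop only if the remaining system is still infeasible. A minimal sketch with scipy on a hypothetical one-variable system (not the paper's MIP algorithms):

```python
import numpy as np
from scipy.optimize import linprog

def is_feasible(A, b):
    """Feasibility check for {x : A x <= b} via a zero-objective LP."""
    if len(A) == 0:
        return True
    res = linprog(np.zeros(A.shape[1]), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * A.shape[1], method="highs")
    return res.status == 0          # 0 means a feasible point was found

def deletion_filter(A, b):
    """Reduce an infeasible system A x <= b to an irreducible infeasible
    subset (IIS): every remaining row is needed for the infeasibility."""
    keep = list(range(len(A)))
    for row in range(len(A)):
        trial = [r for r in keep if r != row]
        if not is_feasible(A[trial], b[trial]):
            keep = trial            # still infeasible without this row
    return keep

# Hypothetical system: x <= 1 and x >= 3 conflict; x <= 10 is irrelevant.
A = np.array([[1.0], [-1.0], [1.0]])
b = np.array([1.0, -3.0, 10.0])
print(deletion_filter(A, b))        # [0, 1]
```

Each pass solves one LP per candidate row; the surviving rows form an IIS because removing any one of them restores feasibility.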
INFORMS Journal on Computing, 1997
Infeasibility is often encountered during the process of initial model formulation or reformulation, and it can be extremely difficult to diagnose the cause, especially in large linear programs. While explanation of the error is the domain of humans or artificially intelligent assistants, algorithmic assistance is available to isolate the infeasibility to a subset of the constraints, which helps speed the diagnosis. The isolation should be infeasible, of course, and should not contain any constraints which do not contribute to the infeasibility. Algorithms for finding such irreducible inconsistent systems (IISs) of constraints have been proposed, implemented, and tested in recent years. Experience with IISs shows that a further property of the isolation is highly desirable for easing diagnosis: the isolation should contain as few model rows as possible. This article addresses the problem of finding IISs having few rows in infeasible linear programs. Theory is developed, then impleme...
INFORMS Journal on Computing, 2004
This paper develops a method for moving quickly and cheaply from an arbitrary initial point at an extreme distance from the feasible region to a point that is relatively near the feasible region of a nonlinearly constrained model. The method is a variant of a projection algorithm that is shown to be robust, even in the presence of nonconvex constraints and infeasibility. Empirical results are presented.
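For intuition, a projection step of this flavour can be sketched for linear inequalities A x <= b: each violated row suggests an orthogonal step to its bounding hyperplane, and the moves are combined by simple averaging. This is an illustrative consensus-of-projections sketch with hypothetical data, not the paper's nonlinear algorithm.

```python
import numpy as np

def consensus_step(A, b, x, tol=1e-8):
    """One move toward {x : A x <= b}: average the orthogonal projection
    steps suggested by each violated row."""
    viol = A @ x - b                  # positive entries are violations
    active = viol > tol
    if not active.any():
        return x, True                # already (near-)feasible
    rows = A[active]
    # Step for row i: -(a_i . x - b_i) / ||a_i||^2 * a_i
    steps = -(viol[active] / (rows ** 2).sum(axis=1))[:, None] * rows
    return x + steps.mean(axis=0), False

A = np.array([[1.0, 0.0], [0.0, 1.0]])   # toy region: x <= 1, y <= 1
b = np.array([1.0, 1.0])
x = np.array([3.0, 3.0])                 # start far outside
for _ in range(60):
    x, done = consensus_step(A, b, x)
    if done:
        break
print(np.round(x, 6))
```

On this toy problem the violation halves each iteration, so the iterate approaches the corner (1, 1) geometrically; the appeal of such methods is that each step costs only a matrix-vector product, with no line searches or factorizations.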
INFORMS Journal on Computing, 2006
With 12,500 members from nearly 90 countries, INFORMS is the largest international association of operations research (O.R.) and analytics professionals and students. INFORMS provides unique networking and learning opportunities for individual professionals, and organizations of all types and sizes, to better understand and use O.R. and analytics tools and methods to transform strategic visions and achieve better outcomes. For more information on INFORMS, its publications, membership, or meetings visit http://www.informs.org
INFORMS Journal on Computing, 2009
This issue includes a Special Cluster of seven papers on the topic of High-Throughput Optimization. The concept for the cluster arose from a session that I organized on this topic for the INFORMS Annual Meeting in Pittsburgh in 2006. A call for papers for a Special Issue of the INFORMS Journal on Computing was issued in 2007, but before the due date for the papers arrived, I was appointed Editor-in-Chief of JOC. For this reason, the day-to-day handling of the papers devolved to a number of the JOC Area Editors: Karen Aardal, Bob Fourer, Harvey Greenberg, and John Hooker. It is these four who should be recognized as the Guest Editors for this Special Cluster. The motivation for the Special Cluster is the observation that parallel computing is no longer limited to very expensive high-end computers, making it the special preserve of well-funded agencies and companies. Most new personal computers have two or even four cores. Most computers are connected via the Internet to many other computers. The opportunity to take advantage of massive and inexpensive parallel computing is here. At the same time, there are numerous difficult and complex problems in optimization that could benefit from high-throughput computing. High-throughput optimization is the result. This is generally described as systems for solving optimization problems that require large amounts of computing resources over lengthy time periods. Bussieck, Ferris, and Meeraus open the Special Cluster with the paper "Grid-Enabled Optimization with GAMS," which describes how the GAMS modeling system has been extended to allow optimization to take place on a loosely coupled grid of heterogeneous computing resources.
Michel, See, and Van Hentenryck then describe a system for constraint programming that exploits parallelism transparently without changes to the sequential code in "Transparent Parallelization of Constraint Programming." When a model cannot be automatically decomposed for a parallel solution, then specialized algorithms are needed for particular applications. Xu, Ralphs,
Expert Systems with Applications, 2012
The process of placing a separating hyperplane for data classification is normally disconnected from the process of selecting the features to use. An approach for feature selection that is conceptually simple but computationally explosive is to simply apply the hyperplane placement process to all possible subsets of features, selecting the smallest set of features that provides reasonable classification accuracy. Two ways to speed this process are (i) use a faster filtering criterion instead of a complete hyperplane placement, and (ii) use a greedy forward or backward sequential selection method. This paper introduces a new filtering criterion that is very fast: maximizing the drop in the sum of infeasibilities in a linear-programming transformation of the problem. It also shows how the linear programming transformation can be applied to reduce the number of features after a separating hyperplane has already been placed while maintaining the separation that was originally induced by the hyperplane. Finally, a new and highly effective integrated method that simultaneously selects features while placing the separating hyperplane is introduced.
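The sum-of-infeasibilities measure can be sketched as a linear program (a toy version with hypothetical data, not the paper's full method): the variables are the hyperplane (w, b) plus one nonnegative slack per point, and the objective is the total slack needed to satisfy the separation conditions y_i (w.x_i + b) >= 1.

```python
import numpy as np
from scipy.optimize import linprog

def sum_infeasibilities(X, y, features):
    """Minimum total slack needed to satisfy y_i (w.x_i + b) >= 1 using
    only the given feature columns -- a linear program."""
    Xf = X[:, features]
    n, d = Xf.shape
    c = np.r_[np.zeros(d + 1), np.ones(n)]          # minimize sum of slacks
    # y_i (w.x_i + b) + e_i >= 1 rewritten in A_ub z <= b_ub form,
    # with variable order z = (w, b, e).
    A = np.hstack([-y[:, None] * Xf, -y[:, None], -np.eye(n)])
    bounds = [(None, None)] * (d + 1) + [(0, None)] * n
    return linprog(c, A_ub=A, b_ub=-np.ones(n), bounds=bounds,
                   method="highs").fun

# Toy data: feature 0 separates the classes, feature 1 is constant.
X = np.array([[0.0, 5.0], [1.0, 5.0], [4.0, 5.0], [5.0, 5.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
print(sum_infeasibilities(X, y, [0]))   # separable, so 0
print(sum_infeasibilities(X, y, [1]))   # useless feature: positive
```

A filtering criterion in this spirit would score each candidate feature by the drop in this objective when the feature is added, which requires only one LP solve per candidate rather than a full hyperplane placement and accuracy evaluation.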