Andrew J Parkes | The University of Nottingham
Papers by Andrew J Parkes
Portfolio optimization is one of the most important problems in finance. The traditional mean-variance model has its drawbacks since it fails to take market uncertainty into account. In this work, we investigate a two-stage stochastic portfolio optimization model with a comprehensive set of real-world trading constraints, in order to capture market uncertainty in terms of future asset prices. A hybrid approach that integrates a genetic algorithm (GA) with a linear programming (LP) solver is proposed to solve the model: the GA searches heuristically over asset selections, and the LP solver optimally solves the corresponding weight-allocation sub-problems. Scenarios are generated to capture the uncertain prices of assets for five benchmark market instances. The computational results indicate that the proposed hybrid algorithm can obtain very promising solutions. Possible future research directions are also discussed.
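The GA + LP decomposition described in this abstract is straightforward to sketch. Below is a minimal, hedged illustration in Python (not the authors' implementation): a GA evolves binary asset-selection vectors, and for each selection an LP allocates the weights. The expected returns, the cardinality K, and the weight bounds are illustrative assumptions, and the LP maximises a simple expected return rather than the full two-stage objective.

```python
# Minimal sketch of the GA + LP hybrid: GA selects assets, LP allocates weights.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_assets, K = 20, 5
mu = rng.normal(0.05, 0.02, n_assets)        # assumed expected returns

def lp_fitness(selection):
    """Optimal weight allocation for a fixed asset selection (LP sub-problem)."""
    idx = np.flatnonzero(selection)
    # maximise mu.w  s.t.  sum(w) = 1,  0.05 <= w_i <= 0.35 (illustrative bounds)
    res = linprog(c=-mu[idx],
                  A_eq=np.ones((1, len(idx))), b_eq=[1.0],
                  bounds=[(0.05, 0.35)] * len(idx))
    return -res.fun if res.success else -np.inf

def ga(pop_size=30, generations=50):
    pop = [rng.permutation(n_assets) < K for _ in range(pop_size)]  # K assets each
    for _ in range(generations):
        elite = sorted(pop, key=lp_fitness, reverse=True)[: pop_size // 2]
        children = []
        for parent in elite:
            child = parent.copy()
            out_ = rng.choice(np.flatnonzero(child))    # swap mutation keeps
            in_ = rng.choice(np.flatnonzero(~child))    # the selection size at K
            child[out_], child[in_] = False, True
            children.append(child)
        pop = elite + children
    best = max(pop, key=lp_fitness)
    return np.flatnonzero(best), lp_fitness(best)

print(ga())
```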
Annals of Operations Research, Sep 9, 2015
Commercial airports are under increasing pressure to comply with the Eurocontrol Collaborative Decision Making (CDM) initiative, to ensure that information is passed between stakeholders, to integrate automated decision support, and to make predictions. These systems can also aid effective operations beyond the airport by communicating scheduling decisions to other relevant parties, such as Eurocontrol, for passing on to downstream airports and enabling overall airspace improvements. One of the major CDM components is aimed at producing the target take-off times and target startup-approval times, i.e. scheduling when the aircraft should push back from the gates, start their engines, and take off. For medium-sized airports, a common choice for this is a "Pre-Departure Sequencer" (PDS). In this paper, we describe the design and requirements challenges which arose during our development of a PDS system for medium-sized international airports. Firstly, the scheduling problem is highly dynamic and event-driven. Secondly, it is important to end-users that the system be predictable and, as far as possible, transparent in its operation, with decisions that can be explained. Thirdly, users can override decisions, and this information has to be taken into account. Finally, it is important that the system is as fair as possible for all users. This work was supported in part by an EPSRC 'Research Development Fund' and also by EPSRC grant EP/F033613/1.
Nuclear Physics B, Jun 1, 1992
Yang-Mills (SDYM) and also that, in perturbation theory, it has a vanishing four-particle scattering amplitude. We discuss how the dynamics of three-particle scattering implies that on-shell states can only scatter if their momenta lie in the same self-dual plane, and then investigate classical SDYM with the aim of comparing exact solutions with the tree-level perturbation theory predictions. In particular, for the gauge group SL(2,C) with a plane-wave Hirota ansatz, SDYM reduces to a complicated set of algebraic relations due to de Vega. Here we solve these conditions, and the solutions are shown to correspond to collisions of plane-wave kinks. The main result is that for a class of kinks the resulting phase shifts are non-zero and the solution as a whole is not pure gauge, so the scattering seems nontrivial. However, the stress-energy and Lagrangian density are confined to string-like regions in the spacetime, and in particular are zero for the incoming/outgoing kinks, so the solution does not correspond to physical four-point scattering.
National Conference on Artificial Intelligence, Jul 27, 1997
Many problem ensembles exhibit a phase transition that is associated with a large peak in the average cost of solving the problem instances. However, this peak is not necessarily due to a lack of solutions: indeed, the average number of solutions is typically exponentially large. Here, we study this situation within the context of the satisfiability transition in Random 3-SAT. We find that a significant subclass of instances emerges as we cross the phase transition. These instances are characterized by having about 85-95% of their variables occurring in unary prime implicates (UPIs), with their remaining variables being subject to few constraints. In such instances the models are not randomly distributed but all lie in a single cluster that is exponentially large, yet still admits a simple description. Studying the effect of UPIs on the local search algorithm WSAT shows that these "single-cluster" instances are harder to solve, and we relate their appearance at the phase transition to the peak in search cost.
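The UPI notion is easy to make concrete. The toy code below (an illustration, not the paper's machinery) uses the fact that, for a satisfiable formula, a literal is a unary prime implicate exactly when every model agrees on it; brute force suffices for tiny instances.

```python
# Toy illustration of unary prime implicates (UPIs): a variable occurs in a
# UPI exactly when all models agree on its value, checkable by brute force.
from itertools import product

def models(clauses, n):
    """Yield all satisfying assignments of a CNF over variables 1..n.
    Clauses are tuples of non-zero ints; a negative int is a negated literal."""
    for bits in product([False, True], repeat=n):
        assign = {v: bits[v - 1] for v in range(1, n + 1)}
        if all(any(assign[abs(l)] == (l > 0) for l in c) for c in clauses):
            yield assign

def upi_variables(clauses, n):
    ms = list(models(clauses, n))
    if not ms:
        return set()                       # unsatisfiable: UPIs are degenerate
    return {v for v in range(1, n + 1)
            if len({m[v] for m in ms}) == 1}   # all models agree on v

# (x1) & (~x1 | x2) & (x3 | x4): x1 and x2 are forced, x3 and x4 are not.
print(upi_variables([(1,), (-1, 2), (3, 4)], n=4))     # -> {1, 2}
```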
Physics Letters B, Aug 1, 1983
A certain class of N = 2 supersymmetric Yang-Mills theories are known to have a perturbation expansion which is ultraviolet convergent at all orders. Using the spurion technique and N = 1 super-Feynman rules, it is shown that one can add certain mass and interaction terms which break some or all of the supersymmetries but do not affect the finiteness. These terms can give rise to a non-zero value for the supertrace of the squares of the masses.
ICGA Journal, Mar 1, 2008
Single-player games (often called puzzles) have received considerable attention from the scientific community. Consequently, interesting insights into some puzzles, and into the approaches for solving them, have emerged. However, many puzzles have been neglected, possibly because they are unknown to many people. In this article, we survey NP-Complete puzzles in the hope of motivating further research in this fascinating area, particularly for those puzzles which have received little scientific attention to date.
National Conference on Artificial Intelligence, Jul 28, 2002
We study the scaling properties of sequential and parallel versions of a local search algorithm, WalkSAT, in the easy regions of the easy-hard-easy phase transition (PT) in Random 3-SAT. In the underconstrained region, we study scaling of the sequential version of WalkSAT and find linear scaling at fixed clause/variable ratio. We also study the case in which a parameter inspired by "finite-size scaling" is held constant; the scaling then also appears to be a simple power law. Combining these results gives a simple prediction for the performance of WalkSAT over most of the easy region. The experimental results suggest that WalkSAT is acting as a threshold algorithm, but with a threshold below the satisfiability threshold. The performance of a parallel version of WalkSAT is studied in the over-constrained region. This is more difficult because it is an optimization rather than a decision problem. We use the solution quality (the number of unsatisfied clauses) obtained by the sequential algorithm to set a target for the parallel version. We find that qualities obtained by the sequential search in O(n) steps are achievable by the parallel version in O(log(n)) steps. Thus, the parallelization is efficient for these "easy MAXSAT" problems.
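For reference, the core of WalkSAT is only a few lines. The following is a generic sketch of the standard algorithm (data layout and parameters are illustrative, not the tuned implementation studied in the paper).

```python
# Generic WalkSAT sketch: repeatedly pick an unsatisfied clause, then flip
# either a random variable from it or the one breaking the fewest clauses.
import random

def walksat(clauses, n, max_flips=10_000, p=0.5, seed=0):
    rnd = random.Random(seed)
    assign = {v: rnd.random() < 0.5 for v in range(1, n + 1)}
    sat = lambda c: any(assign[abs(l)] == (l > 0) for l in c)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign                        # satisfying assignment found
        clause = rnd.choice(unsat)
        if rnd.random() < p:                     # noise step: random variable
            v = abs(rnd.choice(clause))
        else:                                    # greedy step: minimise breaks
            def breaks(v):
                assign[v] = not assign[v]
                b = sum(not sat(c) for c in clauses if v in map(abs, c))
                assign[v] = not assign[v]
                return b
            v = min((abs(l) for l in clause), key=breaks)
        assign[v] = not assign[v]
    return None                                  # cutoff reached: give up

print(walksat([(1, 2), (-1, 2), (1, -2), (-1, -2, 3)], n=3))
```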
arXiv (Cornell University), Mar 26, 2018
One way to speed up the algorithm configuration task is to use short runs instead of long runs as much as possible, but without discarding the configurations that eventually do well on the long runs. We consider the problem of selecting the top-performing configurations of the Conditional Markov Chain Search (CMCS), a general algorithm schema that includes, for example, VNS. We investigate how performance on short tests relates to performance on long tests, showing that significant differences arise between test domains. We propose a "performance envelope" method to exploit these links; it learns when runs should be terminated, and automatically adapts to the domain.
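A minimal sketch of short-run screening in this spirit follows; the concrete cut-off rule below is an assumption for illustration, not the paper's learned envelope.

```python
# Drop configurations whose short-run cost exceeds the best short-run cost by
# more than a tolerance estimated from the spread of scores.
import statistics

def screen(configs, short_run):
    """Keep configurations worth long runs. short_run(c) -> cost; lower is better."""
    scores = {c: short_run(c) for c in configs}
    best = min(scores.values())
    slack = statistics.quantiles(scores.values(), n=4)[0] - best   # spread-based
    return [c for c, s in scores.items() if s <= best + max(slack, 0.0)]

# Usage: survivors = screen(all_configs, short_run); run long tests on survivors.
```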
arXiv (Cornell University), Apr 4, 2017
An international portfolio allows simultaneous investment in both domestic and foreign markets. It hence has the potential for improved performance by exploiting a wider range of returns, and diversification benefits, than portfolios investing in just one market. However, to obtain the most efficient portfolios, the risks from currency fluctuations (along with the usual management of assets) need good management, such as appropriate hedging. In this paper, we present a two-stage stochastic international portfolio optimisation model to find an optimal allocation for the combination of both assets and currency hedging positions. Our optimisation model allows a "currency overlay", or a deviation of currency exposure from asset exposure, to provide flexibility in hedging against, or in speculating on, currency exposure. The transaction costs associated with both trading and hedging are also included. To model the realistic dependence structure of the multivariate return distributions, a new scenario generation method employing a regular-vine copula is developed. The use of vine copulas allows a better representation of the characteristics of returns, specifically their non-normality and asymmetric dependencies. It hence improves the representation of the uncertainty underlying decisions needed for international portfolio optimisation problems. Efficient portfolios optimised with scenarios generated from the new vine-copula method are compared with portfolios from a standard scenario generation method. Experimental results show that the proposed method, using realistic non-normal uncertainty, produces portfolios with a better risk-return reward than those from a standard scenario generation approach using normal distributions. The difference in risk-return compensation is largest when the portfolios are constrained to require higher returns. The paper shows that it can be important to model the non-normality in uncertainty, and not just assume normal distributions.
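A full regular-vine copula is beyond a short sketch, but the overall scenario-generation pattern can be illustrated with a Gaussian copula as a stand-in: model the marginals empirically, model the dependence separately, then map samples back through the empirical quantiles. Note that a Gaussian copula cannot capture the asymmetric tail dependence that motivates vines; this is an illustrative simplification, not the paper's method.

```python
# Gaussian-copula scenario generation as a simplified stand-in for the
# regular-vine approach: separate dependence modelling from marginals.
import numpy as np
from scipy.stats import norm

def gaussian_copula_scenarios(returns, n_scenarios, seed=0):
    """returns: (T, d) array of historical returns; rows are observations."""
    rng = np.random.default_rng(seed)
    T, d = returns.shape
    # 1) Dependence: map each margin to normal scores via ranks.
    ranks = returns.argsort(axis=0).argsort(axis=0)
    z = norm.ppf((ranks + 0.5) / T)
    corr = np.corrcoef(z, rowvar=False)
    # 2) Sampling: correlated normals mapped back through empirical quantiles.
    g = rng.multivariate_normal(np.zeros(d), corr, size=n_scenarios)
    u = norm.cdf(g)
    sorted_cols = np.sort(returns, axis=0)
    idx = np.clip((u * T).astype(int), 0, T - 1)
    return sorted_cols[idx, np.arange(d)]         # (n_scenarios, d) scenarios
```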
arXiv (Cornell University), May 6, 2016
We study the Bipartite Boolean Quadratic Programming Problem (BBQP), which is an extension of the well-known Boolean Quadratic Programming Problem (BQP). Applications of the BBQP include mining discrete patterns from binary data, approximating matrices by rank-one binary matrices, computing the cut-norm of a matrix, and solving optimisation problems such as maximum weight biclique, bipartite maximum weight cut, and maximum weight induced subgraph of a bipartite graph. For the BBQP, we first present several algorithmic components, specifically hill climbers and mutations, and then show how to combine them in a high-performance metaheuristic. Instead of hand-tuning a standard metaheuristic to test the efficiency of the hybrid of the components, we chose to use automated generation of a multi-component metaheuristic, to save human time and also to improve objectivity in the analysis and comparison of components. For this we designed a new metaheuristic schema which we call Conditional Markov Chain Search (CMCS). We show that CMCS is flexible enough to model several standard metaheuristics; this flexibility is controlled by multiple numeric parameters, and so is convenient for automated generation. We study the configurations revealed by our approach and show that the best of them outperforms the previous state-of-the-art BBQP algorithm by several orders of magnitude. In our experiments we use benchmark instances introduced in the preliminary version of this paper and described here, which have already become the de facto standard in the BBQP literature.
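The control loop of the CMCS schema is compact enough to sketch. The following illustration (the components and matrices here are assumptions, not a configuration from the paper) shows the defining idea: the choice of the next component depends only on the current component and on whether it just improved the solution.

```python
# Minimal CMCS control loop: two row-stochastic matrices route control between
# components depending on whether the last application improved the cost.
import random

def cmcs(components, m_succ, m_fail, solution, evaluate, budget, seed=0):
    """components: move operators mapping a solution to a new solution.
    m_succ[i] / m_fail[i]: distributions over the next component after
    component i improved / failed to improve the current cost."""
    rnd = random.Random(seed)
    best, best_cost = solution, evaluate(solution)
    cur_cost, i = best_cost, 0
    for _ in range(budget):
        solution = components[i](solution)
        new_cost = evaluate(solution)
        improved = new_cost < cur_cost
        cur_cost = new_cost
        if new_cost < best_cost:
            best, best_cost = solution, new_cost
        row = m_succ[i] if improved else m_fail[i]
        i = rnd.choices(range(len(components)), weights=row)[0]
    return best, best_cost
```

Choosing the two matrices appropriately recovers standard schemes; for example, with a hill climber and a mutation as the two components, routing hill-climber failure to the mutation and everything else back to the hill climber gives an iterated-local-search-like behaviour.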
arXiv (Cornell University), Jan 15, 2021
The main aim of decision support systems is to find solutions that satisfy user requirements. Often, this leads to predictability of those solutions, in the sense that, given the input data and the model, an adversary can predict to a great extent the solution produced by the decision support system. Such predictability can be undesirable, for example, in military or security timetabling, or in applications that require anonymity. In this paper, we discuss the notion of solution predictability and introduce potential mechanisms to intentionally avoid it.
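One conceivable mechanism, shown purely as an illustration (the paper's own proposed mechanisms may differ): sample uniformly among near-optimal solutions, trading a bounded quality loss for unpredictability of the output.

```python
# Randomise among near-optimal solutions so the output cannot be predicted
# from the input data and model alone.
import random

def unpredictable_choice(solutions, cost, tolerance=0.02, seed=None):
    best = min(cost(s) for s in solutions)        # assumes non-negative costs
    near_optimal = [s for s in solutions if cost(s) <= best * (1 + tolerance)]
    return random.Random(seed).choice(near_optimal)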
Local search is a powerful technique for many combinatorial optimisation problems. However, the effectiveness of local search methods often depends strongly on the details of the heuristics used within them. There are many potential heuristics, and so finding good ones is in itself a challenging search problem. A natural method to search for effective heuristics is to represent the heuristic as a small program and then apply evolutionary methods, such as genetic programming. However, the search within the space of heuristics is not well understood, and in particular little is known of the associated search landscapes. In this paper, we consider the domain of propositional satisfiability (SAT) and a generic class of local search methods called 'WalkSAT'. We give a language for generating the heuristics; using this, we generated over three million heuristics in a systematic manner and evaluated their associated fitness values. We then use this data set as the basis for an initial analysis of the landscape of the space of heuristics. We give evidence that the heuristic landscape exhibits clustering. We also consider local search on the space of heuristics and show that it can perform quite well and could complement genetic programming methods on that space.
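A sketch of heuristics-as-small-programs in the spirit of the generation language described above follows; the primitive set and the linear combinator are illustrative assumptions, not the paper's grammar.

```python
# Represent a WalkSAT-style variable-selection heuristic as a tiny program:
# a weighted combination of scoring primitives over a candidate variable.
import random

# Primitives score a candidate variable v within the chosen unsatisfied clause.
PRIMITIVES = {
    "break": lambda stats, v: stats["break"][v],   # clauses newly unsatisfied
    "make":  lambda stats, v: stats["make"][v],    # clauses newly satisfied
    "age":   lambda stats, v: stats["age"][v],     # flips since v last flipped
}

def random_heuristic(rnd, size=2):
    """Build one small 'program': a weighted combination of primitives."""
    terms = [(rnd.uniform(-1, 1), name)
             for name in rnd.sample(list(PRIMITIVES), k=size)]
    def score(stats, v):
        return sum(w * PRIMITIVES[name](stats, v) for w, name in terms)
    score.description = " + ".join(f"{w:+.2f}*{name}" for w, name in terms)
    return score

# Enumerating such programs systematically yields a large space of heuristics
# whose fitness landscape can then be analysed; each heuristic is used by
# flipping the best-scoring variable in the selected clause.
print(random_heuristic(random.Random(1)).description)
```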
Physics Letters B, Jul 1, 1992
We make a change of field variables in the J formulation of self-dual Yang-Mills theory. The field equations for the resulting algebra-valued field are derivable from a simple cubic action. The cubic interaction vertex is different from that considered previously from the N=2 string; however, perturbation theory with this action shows that the only non-vanishing connected scattering amplitude is for three external particles, just as for the string.
arXiv (Cornell University), Apr 19, 2016
The fixed parameter tractable (FPT) approach is a powerful tool in tackling computationally hard problems. In this paper, we link FPT results to classic artificial intelligence (AI) search techniques to show how they complement each other. Specifically, we consider the workflow satisfiability problem (WSP), which asks whether there exists an assignment of authorised users to the steps in a workflow specification, subject to certain constraints on the assignment. It was shown by Cohen et al. (JAIR 2014) that WSP restricted to the class of user-independent (UI) constraints, covering many practical cases, admits FPT algorithms, i.e. can be solved in time exponential only in the number of steps k and polynomial in the number of users n. Since usually k ≪ n in WSP, such FPT algorithms are of great practical interest. We present a new interpretation of the FPT nature of the WSP with UI constraints, giving a decomposition of the problem into two levels. Exploiting this two-level split, we develop a new FPT algorithm that is many orders of magnitude faster than the previous state-of-the-art WSP algorithm and also has only polynomial-space complexity. We also introduce new pseudo-Boolean (PB) and Constraint Satisfaction (CSP) formulations of the WSP with UI constraints which efficiently exploit this new decomposition of the problem, and raise the novel issue of how to use general-purpose solvers to tackle FPT problems in a fashion that meets FPT efficiency expectations. In our computational study, the phase transition (PT) properties of the WSP are investigated for the first time, under a model for the generation of random instances. We show how PT studies can be extended, in a novel fashion, to support empirical evaluation of the scaling of FPT algorithms.
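A compact illustration of the two-level split for WSP with UI constraints follows (toy code under stated assumptions, not the paper's algorithm): level one enumerates patterns, i.e. partitions of the k steps into blocks that share a user, and checks the UI constraints on the pattern alone; level two matches distinct authorised users to the blocks.

```python
# Two-level WSP sketch: enumerate patterns (Bell(k) of them), check the
# user-independent constraints per pattern, then match users to blocks.
def partitions(steps):
    """Yield all set partitions of `steps`."""
    steps = list(steps)
    if not steps:
        yield []
        return
    first, rest = steps[0], steps[1:]
    for sub in partitions(rest):
        for i in range(len(sub)):
            yield sub[:i] + [sub[i] + [first]] + sub[i + 1:]
        yield sub + [[first]]

def users_for(block, authorised):
    return sorted(set.intersection(*(authorised[s] for s in block)))

def match_users(blocks, authorised):
    """Try to give every block its own user (augmenting-path matching)."""
    owner = {}
    def augment(b, seen):
        for u in users_for(blocks[b], authorised):
            if u not in seen:
                seen.add(u)
                if u not in owner or augment(owner[u], seen):
                    owner[u] = b
                    return True
        return False
    return all(augment(b, set()) for b in range(len(blocks)))

def wsp(k, authorised, ui_ok):
    """ui_ok(pattern) checks the user-independent constraints on a pattern."""
    for pattern in partitions(range(k)):
        if ui_ok(pattern) and match_users(pattern, authorised):
            return pattern
    return None

# Toy instance: steps 0 and 1 must not share a user (a separation constraint).
authorised = {0: {"alice", "bob"}, 1: {"bob"}, 2: {"carol"}}
separation = lambda p: not any({0, 1} <= set(b) for b in p)
print(wsp(3, authorised, separation))
```

The point of the decomposition is visible in the structure: the UI constraints are checked against the pattern independently of the users, so the combinatorial search is exponential only in k.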
The first Cross-domain Heuristic Search Challenge (CHeSC 2011) seeks to bring together practitioners from operational research, computer science and artificial intelligence who are interested in developing more generally applicable search methodologies. The challenge is to design a search algorithm that works well, not only across different instances of the same problem, but also across different problem domains. This article overviews the main features of this challenge.
Local search algorithms, particularly GSAT and WSAT, have attracted considerable recent attention, primarily because they are the best known approaches to several hard classes of satisfiability problems. However, replicating reported results has been difficult because the setting of certain key parameters is something of an art, and because details of the algorithms, not discussed in the published papers, can have a large impact on performance. In this paper we present an efficient probabilistic method for finding the optimal setting for a critical local search parameter, Maxflips, and discuss important details of two differing versions of WSAT. We then apply the optimization method to study the performance of WSAT on satisfiable instances of Random 3SAT at the crossover point and present extensive experimental results over a wide range of problem sizes. We find that the results are well described by having the optimal value of Maxflips scale as a simple power of the number of variables, n, and the average run time scale sub-exponentially (basically as n^{log n}) over the range n = 25, …, 400. *This work has been supported by ARPA/Rome Labs under contracts F30602-93-C-0031 and F30602-95-1-0023 and by a doctoral fellowship of the DFG to the second author (Graduiertenkolleg Kognitionswissenschaft).
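The restart analysis underlying Maxflips tuning is standard and short; the sketch below shows the computation (an illustration, not necessarily the paper's exact estimator): from sampled run lengths of uncapped runs, estimate the expected total flips when restarting every M flips, then minimise over candidate cutoffs.

```python
# Expected cost of a restart strategy: geometrically many failed runs of M
# flips each, then one successful run; minimise this over candidate M.
def expected_cost_with_restarts(run_lengths, M):
    """run_lengths: flips-to-solution samples from runs with no cutoff."""
    successes = [t for t in run_lengths if t <= M]
    p = len(successes) / len(run_lengths)        # P(success within M flips)
    if p == 0:
        return float("inf")
    return (1 / p - 1) * M + sum(successes) / len(successes)

def best_maxflips(run_lengths, candidates):
    return min(candidates,
               key=lambda M: expected_cost_with_restarts(run_lengths, M))

# Usage: sample run lengths with a very generous cutoff, then e.g.
#   best_maxflips(samples, candidates=range(100, 10_001, 100))
```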
Physics Letters B, 1985
It is shown analytically that there are no one-loop supersymmetry anomalies in N = 2 and N = 4 supersymmetric Yang-Mills theories. This implies that the two-loop β functions in these theories are in accord with supersymmetry when the one-loop finite local counterterms required by supersymmetry are correctly taken into account.
Evolutionary Computation, Sep 1, 2011
Squeaky wheel optimization (SWO) is a relatively new metaheuristic that has been shown to be effective for many real-world problems. At each iteration, SWO does a complete construction of a solution starting from the empty assignment. Although the construction uses information from previous iterations, the complete rebuilding means that SWO is generally effective at diversification but can suffer from relatively weak intensification. Evolutionary SWO (ESWO) is a recent extension to SWO that is designed to improve the intensification by keeping the good components of solutions and only using SWO to reconstruct the other, poorer components of the solution. In such algorithms a standard challenge is to understand how the various parameters affect the search process. In order to support the future study of such issues, we propose a formal framework for the analysis of ESWO. The framework is based on Markov chains, and the main novelty arises because ESWO moves through the space of partial assignments. This makes it significantly different from the analyses used for local search methods (such as simulated annealing) which only move through complete assignments. Generally, the exact details of ESWO will depend on various heuristics, so we focus our approach on a case of ESWO that we call ESWO-II and that has probabilistic, as opposed to heuristic, selection and construction operators. For ESWO-II, we study a simple problem instance and explicitly compute the stationary probability distribution over the states of the search space. We find interesting properties of the distribution. In particular, we find that the probabilities of states generally, but not always, increase with their fitness. This non-monotonicity is quite different from the monotonicity expected in algorithms such as simulated annealing.
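The kind of computation the analysis rests on is standard Markov-chain machinery. A small generic sketch follows (the toy chain below is not the paper's ESWO-II chain): the stationary distribution is the left eigenvector of the transition matrix with eigenvalue 1, normalised to sum to one.

```python
# Stationary distribution of a finite Markov chain via the eigenvalue-1
# left eigenvector of the transition matrix.
import numpy as np

def stationary_distribution(P):
    """P: row-stochastic transition matrix over the search states."""
    w, v = np.linalg.eig(P.T)                    # left eigenvectors of P
    pi = np.real(v[:, np.argmin(np.abs(w - 1))])
    return pi / pi.sum()

P = np.array([[0.5, 0.5, 0.0],        # toy 3-state chain; in the paper the
              [0.1, 0.6, 0.3],        # states are partial assignments
              [0.0, 0.4, 0.6]])
print(stationary_distribution(P))
```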
A constraint satisfaction problem (CSP) is a combinatorial optimisation problem with many real-world applications. One of the key aspects to consider when solving a CSP is the order in which the variables are selected to be instantiated. In this study, we describe a genetic programming hyper-heuristic approach to automatically produce heuristics for CSPs. Human-designed 'standard' heuristics are used as components, from which the proposed approach constructs new variable ordering heuristics. We present empirical evidence that the heuristics produced by our approach are competitive, in terms of search cost, with the standard heuristics from which their components are obtained. The proposed approach is able to produce specialized heuristics for specific classes of instances that outperform the best standard heuristics on the same instances.
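A sketch of the kind of variable-ordering heuristic such an approach can produce follows; the combination below is a hand-written stand-in for an evolved expression, with dom and deg as the standard human-designed components.

```python
# Variable ordering from combined standard components: smaller scores are
# selected first, mimicking an evolved dom/deg-style trade-off.
def dom(var, domains, constraints):
    return len(domains[var])                     # remaining domain size

def deg(var, domains, constraints):
    return sum(var in scope for scope in constraints)    # constraint degree

def evolved_heuristic(var, domains, constraints):
    # GP might, for example, rediscover dom/deg-style trade-offs such as:
    return dom(var, domains, constraints) / (1 + deg(var, domains, constraints))

def next_variable(unassigned, domains, constraints):
    return min(unassigned,
               key=lambda v: evolved_heuristic(v, domains, constraints))

# Toy CSP: variable domains plus constraint scopes as sets of variables.
domains = {"x": {1, 2}, "y": {1, 2, 3}, "z": {1}}
constraints = [{"x", "y"}, {"y", "z"}, {"x", "z"}]
print(next_variable({"x", "y", "z"}, domains, constraints))   # -> 'z'
```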
Variable ordering has been a recurrent topic of study in the field of constraint satisfaction because of its impact on the cost of the search. Various variable ordering heuristics have been proposed to help guide the search under different situations. One important direction in the study of variable ordering is the use of distinct heuristics as the search progresses, to reduce the cost of the search. Even though the idea of combining heuristics goes back to the 1960s, only a few works have studied which heuristics to use and how they interact with each other. In this investigation, we analyse the interactions of four important variable ordering heuristics by combining them through hyper-heuristics that decide which heuristic to apply based on the depth of the nodes in the search tree. The paper does not include a specific model for generating such hyper-heuristics; instead, it presents an analysis of the changes in cost when different heuristics are applied during the search, using one simple hyper-heuristic representation. The results show that selectively applying distinct heuristics as the search progresses may lead to important reductions in the cost of the search with respect to the performance of the same heuristics used in isolation.
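A minimal sketch of a depth-indexed hyper-heuristic representation of this kind follows (the switch depths and component heuristics are illustrative assumptions): the depth of the current node decides which variable-ordering heuristic is applied.

```python
# Depth-switching hyper-heuristic: the node depth selects which heuristic
# scores the unassigned variables.
def depth_switching(heuristics, switch_depths):
    """heuristics: variable-scoring functions; switch_depths: sorted depths at
    which the next heuristic in the list takes over."""
    def choose(unassigned, domains, constraints, depth):
        idx = sum(depth >= d for d in switch_depths)      # current segment
        h = heuristics[min(idx, len(heuristics) - 1)]
        return min(unassigned, key=lambda v: h(v, domains, constraints))
    return choose

# E.g. score by a dom/deg combination near the root, plain domain size deeper:
#   choose = depth_switching([evolved_heuristic, dom], switch_depths=[10])
```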