Learning to Identify Global Bottlenecks in Constraint Satisfaction Search

Learning from Failure in Constraint Satisfaction Search

2006

Much work has been done on learning from failure in search to boost the solving of combinatorial problems, such as clause learning in Boolean satisfiability (SAT), nogood and explanation-based learning, and constraint weighting in constraint satisfaction problems (CSPs). Many of the top solvers in SAT use clause learning to good effect. A similar approach (nogood learning) has not had as large an impact in CSPs. Constraint weighting is a less fine-grained approach, where the information learnt gives an approximation as to which variables may be the sources of greatest contention. In this paper we present a method for learning from search using restarts, in order to identify these critical variables in a given constraint satisfaction problem prior to solving. Our method is based on the conflict-directed (weighted-degree) heuristic introduced by Boussemart et al. and is aimed at producing a better-informed version of the heuristic by gathering information through restarting and probing of the search space prior to solving, while minimising the overhead of these restarts/probes. We show that random probing of the search space can boost the heuristic's power by improving early decisions in search. We also provide an in-depth analysis of the effects of constraint weighting.
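The weighted-degree idea this abstract builds on can be sketched in a few lines: each constraint carries a weight (incremented during search or probing whenever the constraint causes a domain wipeout), and the next branching variable is the one minimising domain size divided by weighted degree. The sketch below is illustrative only; the names and the constraint representation are assumptions, not the paper's code.

```python
from collections import namedtuple

# Hypothetical representation: a constraint is reduced to its scope (the
# variables it connects). Weights are maintained externally by the search.
Constraint = namedtuple("Constraint", ["scope"])

def dom_wdeg(variables, domains, constraints, weights, assigned):
    """Pick the unassigned variable minimising |dom(x)| / wdeg(x), where
    wdeg(x) sums the weights of constraints on x that still involve at
    least one other unassigned variable."""
    def wdeg(x):
        total = sum(weights[c] for c in constraints
                    if x in c.scope
                    and any(y != x and y not in assigned for y in c.scope))
        return max(total, 1)  # avoid division by zero for unconstrained vars
    unassigned = [x for x in variables if x not in assigned]
    return min(unassigned, key=lambda x: len(domains[x]) / wdeg(x))
```

In the restart-based scheme the abstract describes, the weights accumulated during the probing phase would simply be carried over into the final run, so early branching decisions already reflect the observed contention.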

Learning while searching in constraint-satisfaction-problems

1986

The popular use of backtracking as a control strategy for theorem proving in PROLOG and in Truth Maintenance Systems (TMS) has led to increased interest in various schemes for enhancing the efficiency of backtrack search. Researchers have referred to these enhancement schemes by names such as "intelligent backtracking" (in PROLOG), "dependency-directed backtracking" (in TMS), and others. Those improvements center on the issue of "jumping back" to the source of the problem when dead-end situations arise. This paper examines another, much less explored issue which also arises at dead ends. Specifically, we concentrate on the idea of constraint recording, namely, analyzing and storing the reasons for the dead ends and using them to guide future decisions, so that the same conflicts will not arise again. We view constraint recording as a process of learning, and examine several possible learning schemes, studying the tradeoffs between the amount of learning and the improvement in search efficiency.
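The constraint-recording idea can be reduced to a minimal sketch, under the simplifying assumption that a nogood is stored as the exact partial assignment that led to a dead end (the paper's schemes are more refined, trading off how much is learned against storage cost):

```python
def record_nogood(nogoods, dead_end_assignment):
    """Store the partial assignment that led to a dead end."""
    nogoods.add(frozenset(dead_end_assignment.items()))

def violates_nogood(nogoods, assignment):
    """True if the current assignment contains a recorded conflict,
    so search can reject it without rediscovering the dead end."""
    items = set(assignment.items())
    return any(ng <= items for ng in nogoods)
```

The subset test is what makes the learning pay off: any future assignment that extends a recorded conflict is pruned immediately, regardless of how the remaining variables are set.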

In Search of the Best Constraint Satisfaction Search

National Conference on Artificial Intelligence, 1994

We present the results of an empirical study of several constraint satisfaction search algorithms and heuristics. Using a random problem generator that allows us to create instances with given characteristics, we show how the relative performance of various search methods varies with the number of variables, the tightness of the constraints, and the sparseness of the constraint graph. A version of backjumping using a dynamic variable ordering heuristic...

A probabilistic algorithm for k-SAT and constraint satisfaction problems

1999

We present a simple probabilistic algorithm for solving k-SAT and, more generally, constraint satisfaction problems (CSPs). The algorithm follows a simple local-search paradigm (cf. [9]): randomly guess an initial assignment and then, guided by those clauses (constraints) that are not satisfied, try to find a satisfying assignment by successively choosing a random literal from such a clause and flipping the corresponding bit. If no satisfying as...
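The procedure the abstract describes is concrete enough to sketch directly: guess a random assignment, and while some clause is unsatisfied, flip the variable of a random literal from a random unsatisfied clause; restart after a fixed flip budget. The restart count and the 3n flip budget below are illustrative choices (the 3n figure matches the usual analysis for 3-SAT), not parameters taken from the paper.

```python
import random

def probabilistic_ksat(clauses, n_vars, tries=200):
    """Random-walk local search for k-SAT.
    clauses: tuples of DIMACS-style literals (positive/negative ints).
    Returns a satisfying assignment dict, or None if every try fails."""
    def satisfied(clause, assign):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)
    for _ in range(tries):
        # randomly guess an initial assignment
        assign = {v: random.random() < 0.5 for v in range(1, n_vars + 1)}
        for _ in range(3 * n_vars):  # flip budget per try
            unsat = [c for c in clauses if not satisfied(c, assign)]
            if not unsat:
                return assign
            # pick a random literal from a random unsatisfied clause, flip it
            lit = random.choice(random.choice(unsat))
            assign[abs(lit)] = not assign[abs(lit)]
        if all(satisfied(c, assign) for c in clauses):
            return assign
    return None
```

Note that the walk is not greedy: flipping a random literal of an unsatisfied clause may increase the number of unsatisfied clauses, which is exactly what lets the analysis bound the success probability per restart.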

Exploiting the constrainedness in constraint satisfaction problems

Artificial Intelligence: Methodology, Systems, and …, 2004

Nowadays, many real problems in Artificial Intelligence can be modeled as constraint satisfaction problems (CSPs). A general rule in constraint satisfaction is to tackle the hardest part of a search problem first. In this paper, we introduce a parameter (τ) that measures the constrainedness of a search problem. This parameter represents the probability of the problem being feasible. A value of τ = 0 corresponds to an over-constrained problem, in which no states are expected to be solutions. A value of τ = 1 corresponds to an under-constrained problem, in which every state is a solution. This parameter can also be used in a heuristic to guide search. To compute this parameter, a sample of a finite population is carried out to estimate the tightness of each constraint. We take advantage of these tightness values to order the constraints from the tightest to the loosest. This heuristic may accelerate the search because inconsistencies can be found earlier and the number of constraint checks can be significantly reduced.
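The tightness-based ordering the abstract relies on is easy to sketch for binary constraints. Here tightness is computed exhaustively over small domains (the paper instead estimates it by sampling, precisely to avoid this enumeration); the triple representation of a constraint is an assumption for illustration.

```python
def tightness(check, dom_x, dom_y):
    """Tightness of a binary constraint: the fraction of value pairs
    it forbids. Computed exhaustively here; sample for large domains."""
    pairs = [(a, b) for a in dom_x for b in dom_y]
    forbidden = sum(1 for a, b in pairs if not check(a, b))
    return forbidden / len(pairs)

def order_by_tightness(constraints):
    """Order constraints tightest-first, so that inconsistencies surface
    early. Each entry is a (check, dom_x, dom_y) triple."""
    return sorted(constraints, key=lambda c: tightness(*c), reverse=True)
```

Checking the tightest constraints first front-loads the likely failures, which is the "hardest part first" rule the abstract opens with.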

Mapping the performance of heuristics for Constraint Satisfaction

2010

Hyper-heuristics are high-level search methodologies that operate over a set of heuristics which in turn operate directly on the problem domain. In one of the hyper-heuristic frameworks, the goal is automating the process of selecting a human-designed low-level heuristic at each step to construct a solution for a given problem. Constraint Satisfaction Problems (CSPs) are well-known NP-complete problems. In this study, the behaviours of two variable ordering heuristics, Max-Conflicts (MXC) and Saturation Degree (SD), with respect to various combinations of constraint density and tightness values are investigated in depth over a set of random CSP instances. The empirical results show that the performance of these two heuristics is somewhat complementary and varies for changing constraint density and tightness value pairs. The outcome is used to design three hyper-heuristics using MXC and SD as low-level heuristics to construct a solution for unseen CSP instances. It has been observed that these hyper-heuristics improve the performance of the individual low-level heuristics even further in terms of mean consistency checks for some CSP instances.
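Of the two low-level heuristics studied, Saturation Degree is the easier one to sketch: it ranks an unassigned variable by how many distinct values its already-assigned neighbours use. The tie-breaking rule below (smaller remaining domain) is an illustrative assumption, not necessarily the variant used in the study.

```python
def saturation_degree(variables, domains, neighbours, assigned):
    """Pick the unassigned variable whose assigned neighbours use the most
    distinct values; break ties by smallest remaining domain."""
    def sd(v):
        return len({assigned[u] for u in neighbours[v] if u in assigned})
    unassigned = [v for v in variables if v not in assigned]
    return max(unassigned, key=lambda v: (sd(v), -len(domains[v])))
```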

Estimating Problem Metrics for SAT Clause Weighting Local Search

2016

Considerable progress has recently been made in using clause weighting algorithms to solve SAT benchmark problems. While these algorithms have outperformed earlier stochastic techniques on many larger problems, this improvement has generally required extra, problem-specific parameters which have to be fine tuned to problem domains to obtain optimal run-time performance. In a previous paper, the use of parameters, specifically in relation to the DLM clause weighting algorithm, was examined to identify underlying features in clause weighting that could be used to eliminate or predict workable parameter settings. A simplified clause weighting algorithm (Maxage), based on DLM, was proposed that reduced the parameters to a single parameter. Also, in a previous paper, the structure of SAT problems was investigated and a measure developed which allowed the classification of SAT problems into random, loosely structured or compactly structured. This paper extends this work by...

Dynamic Constraint Satisfaction Problems: Relations among Search Strategies, Solution Sets and Algorithm Performance

Lecture Notes in Computer Science, 2011

Previously we presented a new approach to solving dynamic constraint satisfaction problems (DCSPs) based on features of the problem that remain stable after small- or moderate-sized changes. We also showed that even small changes in a CSP can have profound effects on the search performance of ordinary heuristics. The present work extends this analysis. We show that despite a reduction in search effort, variability after change is still pronounced, as reflected in low correlation coefficients. This is still true when effects on fail-firstness are separated from promise effects. Moreover, such variability does not depend on similarity in solution sets, reflected in the Hamming distance of the solution closest to the one found for the original problem, although this distance correlates well with the efficiency of the local changes algorithm. Thus, methods based on identifying sources of contention improve average performance without reducing variation in performance after perturbation of individual problems.
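The similarity measure the abstract uses, the Hamming distance to the closest solution of the changed problem, is simple to state; a minimal sketch, assuming solutions are represented as variable-to-value dicts over the same variables:

```python
def hamming(sol_a, sol_b):
    """Number of variables assigned differently in two solutions."""
    return sum(1 for v in sol_a if sol_a[v] != sol_b[v])

def nearest_solution_distance(solutions, reference):
    """Hamming distance from a reference solution to the closest
    solution of the perturbed problem."""
    return min(hamming(s, reference) for s in solutions)
```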

Combining Ordering Heuristics and Bundling Techniques for Solving Finite Constraint Satisfaction Problems

2001

We investigate techniques to enhance the performance of a backtrack search procedure with forward checking (FC-BT) for finding all solutions to a finite Constraint Satisfaction Problem (CSP). We consider ordering heuristics for variables and/or values, and bundling techniques based on the computation of interchangeability. While the former methods allow us to traverse the search space more effectively, the latter allow us to reduce its size. We design and compare strategies that combine static and dynamic versions of these two approaches. We show empirically the utility of dynamic variable ordering combined with dynamic bundling on both random problems and puzzles.
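The bundling side of this work rests on interchangeability: values of a variable that are supported by exactly the same values of every neighbouring variable can be explored as a single bundle. A minimal sketch of detecting such neighbourhood-interchangeable values, with an assumed `consistent(x, a, y, b)` predicate for the constraint between x=a and y=b:

```python
def neighbourhood_bundles(x, domains, neighbours, consistent):
    """Group values of x that are neighbourhood interchangeable, i.e.
    supported by exactly the same values of every neighbour of x."""
    signature = {}
    for a in domains[x]:
        # The signature of a value is, per neighbour, the set of
        # neighbouring values it is consistent with.
        key = tuple(frozenset(b for b in domains[y] if consistent(x, a, y, b))
                    for y in neighbours[x])
        signature.setdefault(key, []).append(a)
    return list(signature.values())
```

Branching once per bundle rather than once per value is what shrinks the search space: any solution found with one bundle member yields a solution for every other member of the bundle.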

Increasing Tree Search Efficiency for Constraint Satisfaction Problems

Artificial Intelligence, 1980

In this paper we explore the number of tree search operations required to solve binary constraint satisfaction problems. We show analytically and experimentally that two principles, first trying the places most likely to fail and remembering what has been done to avoid repeating the same mistake twice, improve the standard backtracking search. We show experimentally that a lookahead procedure called forward checking (to anticipate the future), which employs the most-likely-to-fail principle, performs better than standard backtracking and the discrete relaxation algorithms of Ullman, Waltz, Mackworth, and Haralick in all cases tested, and better than Gaschnig's backmarking on the larger problems.
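The two ingredients this abstract names, forward checking and the most-likely-to-fail (fail-first) variable ordering, combine into a short recursive solver. The sketch below assumes a generic binary `conflict(x, a, y, b)` predicate and is illustrative of the technique, not the paper's original implementation.

```python
def solve_fc(variables, domains, conflict):
    """Backtracking with forward checking and fail-first ordering:
    branch on the variable with the fewest remaining values, and prune
    the future variables' domains after each assignment."""
    def search(domains, assigned):
        if len(assigned) == len(variables):
            return dict(assigned)
        # fail first: most constrained (smallest-domain) variable next
        var = min((v for v in variables if v not in assigned),
                  key=lambda v: len(domains[v]))
        for val in domains[var]:
            pruned, wipeout = {}, False
            for v in variables:
                if v in assigned or v == var:
                    pruned[v] = domains[v]
                else:  # forward check: drop values clashing with var=val
                    pruned[v] = [b for b in domains[v]
                                 if not conflict(var, val, v, b)]
                    if not pruned[v]:
                        wipeout = True  # a future variable has no value left
                        break
            if wipeout:
                continue
            result = search(pruned, {**assigned, var: val})
            if result is not None:
                return result
        return None
    return search(domains, {})
```

A quick usage example: 3-colouring a triangle, where `conflict` forbids adjacent variables from sharing a colour, yields an assignment with three distinct colours.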