1. Learning to Search Henry Kautz University of Washington joint work with Dimitri Achlioptas, Carla Gomes, Eric Horvitz, Don Patterson, Yongshao Ruan, Bart Selman CORE – MSR, Cornell, UW
7. Big Picture Problem Instances Solver static features runtime Learning / Analysis Predictive Model dynamic features resource allocation / reformulation control / policy
8. Case Study 1: Beyond 4.25 Problem Instances Solver static features runtime Learning / Analysis Predictive Model
11. Phase Transition [plots: fraction of unsolvable cases, and computational complexity, vs. fraction of pre-assignment] Underconstrained area (almost all solvable), critically constrained area at the phase transition, overconstrained area (almost all unsolvable); markers at roughly 20%, 42%, and 50%.
12. Easy-Hard-Easy pattern in local search [plot: computational cost of Walksat vs. % holes, orders 30, 33, 36] Cost peaks between the underconstrained and "over"-constrained areas.
14. Deep structural features Hardness is also controlled by the structure of the constraints, not just the fraction of holes: rectangular and aligned hole patterns are tractable, while balanced patterns are very hard.
41. Case Study 3: Restart Policies Problem Instances Solver static features runtime Learning / Analysis Predictive Model dynamic features resource allocation / reformulation control / policy
Stochastic solver: sound, but not complete. Governed by two key parameters, "MaxFlips" and "Noise". Different variations of Walksat apply different heuristics in place of the red text; this is the original variation, sometimes called Walksat-SKC.
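The SKC variant described above can be captured in a few lines: pick an unsatisfied clause, flip a "freebie" variable (break count zero) if one exists, otherwise flip a random variable of the clause with probability Noise, else the variable that breaks the fewest clauses. The following is an illustrative Python sketch, not the authors' implementation; the encoding of clauses as lists of signed integers is an assumption.

```python
import random

def walksat(clauses, n_vars, max_flips=100_000, noise=0.5, rng=None):
    """Sketch of Walksat-SKC. Literal v means "variable v is true", -v false.
    Sound but incomplete: returning None does NOT prove unsatisfiability."""
    rng = rng or random.Random(0)
    assign = {v: rng.random() < 0.5 for v in range(1, n_vars + 1)}

    def sat(lit):
        return assign[abs(lit)] == (lit > 0)

    def breaks(var):
        # Clauses satisfied only by var's current literal would become unsatisfied.
        return sum(1 for c in clauses
                   if [abs(l) for l in c if sat(l)] == [var])

    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                      # model found
        clause = rng.choice(unsat)
        scores = {abs(l): breaks(abs(l)) for l in clause}
        if min(scores.values()) == 0:
            var = min(scores, key=scores.get)  # "freebie" move, no clause broken
        elif rng.random() < noise:
            var = abs(rng.choice(clause))      # random-walk move
        else:
            var = min(scores, key=scores.get)  # greedy: minimize break count
        assign[var] = not assign[var]
    return None                                # MaxFlips exhausted
```

The two slide parameters map directly onto `max_flips` and `noise`; tuning the latter is exactly the problem the invariant-ratio work below addresses.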
Percent chance of Walksat finding a solution in 100,000 flips
Contribution: "to turn the observation of the relationship of the invariant ratio into an effective procedure for estimating optimal noise level"
Incomplete nature of local search procedures: they can show the consistency of a set of constraints by finding a solution (model) that satisfies them, but they cannot prove inconsistency, i.e., they cannot prove that no satisfying solution exists.
Choice points are states in search procedures where the algorithm assigns a value to a variable and that assignment is not forced by propagation of previously set values, as occurs with unit propagation, backtracking, lookahead, or forward checking. These are points at which an assignment is chosen heuristically according to the policies implemented in the problem solver.
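The propagation mechanism named above can be illustrated with a minimal unit-propagation routine: any variable it cannot force is left for the solver's branching heuristic, i.e., becomes a choice point. This is an illustrative Python sketch under the assumed signed-integer clause encoding, not the solvers' actual data structures.

```python
def unit_propagate(clauses, assign):
    """Repeatedly find clauses whose literals are all false except one
    unassigned literal, and force that literal. Returns (assign, conflict)."""
    changed = True
    while changed:
        changed = False
        for c in clauses:
            vals, unassigned = [], []
            for lit in c:
                v = assign.get(abs(lit))
                if v is None:
                    unassigned.append(lit)
                else:
                    vals.append(v == (lit > 0))
            if any(vals):
                continue                       # clause already satisfied
            if not unassigned:
                return assign, True            # conflict: all literals false
            if len(unassigned) == 1:
                lit = unassigned[0]
                assign[abs(lit)] = (lit > 0)   # forced (unit) assignment
                changed = True
    return assign, False                       # remaining vars are choice points
```

For example, from the empty assignment the chain `[[1], [-1, 2], [-2, 3]]` forces all three variables with no choice point at all.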
Even better restart policies should be achievable by considering a range of different statistical properties of the search space.