
Compiled ver 1.0

Dr. C.V. Suresh Babu

5/12/2013

Revision ver 1.1
Revision ver 1.2
Revision ver 1.3
Revision ver 1.4
Revision ver 1.5
Edited ver 2.0

Dr. C.V. Suresh Babu

UNIT-II

OPTIMISATION ALGORITHM

INDEX

1  Optimization problem
2  Graph search algorithms
   2.1  Generic search
   2.2  Breadth-first search
   2.3  Dijkstra's shortest path
   2.4  Depth-first search
   2.5  Recursive depth-first search
   2.6  Linear ordering of a partial order
3  Network flows and linear programming
   3.1  Hill climbing
   3.2  Primal-dual hill climbing
   3.3  Steepest-ascent hill climbing
   3.4  Linear programming
4  Recursive backtracking
   4.1  Developing recursive backtracking
   4.2  Pruning branches
   4.3  Satisfiability

1. OPTIMIZATION:

Optimization transforms a program to improve efficiency:
Performance: faster execution
Size: smaller executable, smaller memory footprint
Tradeoffs:
Performance vs. size
Compilation speed and memory

OPTIMIZATION PROBLEM:
DEFINITION:
Many important and practical problems can be expressed as optimization problems.
Such problems involve finding the best of an exponentially large set of solutions. It can be like
finding a needle in a haystack.
EXPLANATION:

Ingredients:
An optimization problem is specified by defining instances, solutions, and costs.
Instances: The instances are the possible inputs to the problem.
Solutions for Instance: Each instance has an exponentially large set of solutions. A
solution is valid if it meets a set of criteria determined by the instance at hand.
Measure of Success: Each solution has an easy-to-compute cost, value, or measure
of success that is to be minimized or maximized.

Specification of an Optimization Problem:
Preconditions: The input is one instance.
Postconditions: The output is one of the valid solutions for this instance with optimal (minimum
or maximum as the case may be) measure of success. (The solution to be outputted need not be
unique.)
EXAMPLES:

Longest Common Subsequence: This is an example for which we have a polynomial-time algorithm.
Instances: An instance consists of two sequences, e.g., X = ⟨A, B, C, B, D, A, B⟩ and Y =
⟨B, D, C, A, B, A⟩.
Solutions: A subsequence of a sequence is a subset of the elements taken in the same
order. For example, Z = ⟨B, C, A⟩ is a subsequence of X = ⟨A, B, C, B, D, A, B⟩. A solution is a
sequence Z that is a subsequence of both X and Y. For example, Z = ⟨B, C, A⟩ is a solution,
because it is a subsequence common to both X and Y (Y = ⟨B, D, C, A, B, A⟩).
Measure of Success: The value of a solution is the length of the common subsequence,
e.g., |Z| = 3.
Goal: Given two sequences X and Y, the goal is to find the longest common subsequence
(LCS for short). For the example given above, Z = ⟨B, C, B, A⟩ is a longest common
subsequence.
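The polynomial-time algorithm is not given in this section; as an illustrative sketch, here is a standard dynamic-programming LCS in Python (the function name and string representation are our own, not from the text):

```python
def lcs(X, Y):
    """Return one longest common subsequence of X and Y.

    dp[i][j] holds the LCS length of the prefixes X[:i] and Y[:j].
    """
    m, n = len(X), len(Y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    # Walk backward through the table to recover one optimal subsequence.
    i, j, out = m, n, []
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))
```

For X = ABCBDAB and Y = BDCABA this returns a common subsequence of length 4, agreeing with the example above.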

Course Scheduling: This is an example for which we do not have a polynomial-time
algorithm.
Instances: An instance consists of the set of courses offered by a university, the set
of courses that each student requests, and the set of time slots in which courses can be offered.
Solutions: A solution for an instance is a schedule that assigns each course a time slot.
Measure of Success: A conflict occurs when two courses are scheduled at the same
time even though a student requests them both. The cost of a schedule is the number of conflicts
that it has.
Goal: Given the course and student information, the goal is to find the schedule with
the fewest conflicts.

Airplane: The following is an example of a practical problem.
Instances: An instance specifies the requirements of a plane: size, speed, fuel
efficiency, etc.
Solutions: A solution for an instance is a specification of a plane, right down to every
curve and nut and bolt.
Measure of Success: The company has a way of measuring how well the specification
meets the requirements.
Goal: Given plane requirements, the goal is to find a specification that meets them in
an optimal way.

2. GRAPH SEARCH ALGORITHM:
DEFINITION:
Graph Search Algorithm is the problem of visiting all the nodes in a graph in a
particular manner, updating and/or checking their values along the way. Tree traversal is a
special case of graph traversal.

EXPLANATION:

An optimization problem requires finding the best of a large number of solutions. This
can be compared to a mouse finding cheese in a maze. Graph search algorithms provide a way of
systematically searching through this maze of possible solutions.
A "graph" in this context is made up of "vertices" or "nodes" and lines called edges that
connect them. A graph may be undirected, meaning that there is no distinction between the two
vertices associated with each edge, or its edges may be directed from one vertex to another.
A graph consists of
a set N of nodes, called vertices.
a set A of ordered pairs of nodes, called arcs or edges.

EXAMPLE:
(Figure: an example graph with labeled vertices and edges.)

TYPES OF GRAPHS:

In directed graphs, edges have a specific direction.
In undirected graphs, edges are two-way; vertices u and v are adjacent if (u, v) ∈ E.
A sparse graph has O(|V|) edges (upper bound).
A dense graph has Ω(|V|²) edges (lower bound).
A complete graph has an edge between every pair of vertices.
An undirected graph is connected if there is a path between any two vertices.

2.1 GENERIC SEARCH ALGORITHM:
DEFINITION:
A generic search algorithm is a generic algorithm whose input-output relation is
specialized as follows: the inputs are a collection of values and another value, and the
output is the boolean value true if the other value belongs to the collection, and false otherwise.

This algorithm searches for a solution path in a graph and is independent of any particular graph.

EXPLANATION:
The Reachability Problem:
Preconditions: The input is a graph G (either directed or undirected) and a source node s.
Postconditions: The output consists of all the nodes u that are reachable by a path in G
from s.
Basic Steps:
Suppose you know that node u is reachable from s (denoted as s −→ u) and that there is
an edge from u to v. Then you can conclude that v is reachable from s (i.e., s −→ u → v). You can
use such steps to build up a set of reachable nodes.
s has an edge to v4 and v9. Hence, v4 and v9 are reachable.
v4 has an edge to v7 and v3. Hence, v7 and v3 are reachable.
v7 has an edge to v2 and v8, and so on.
Difficulties:
Data Structure: How do you keep track of all this?
Exit Condition: How do you know that you have found all the nodes?
Halting: How do you avoid cycling, as in s → v4 → v7 → v2 → v4 → v7 → v2 → · · ·, forever?
Ingredients of the Loop Invariant:
Found: If you trace a path from s to a node, then we will say that the node has been
found.
Handled: At some point in time after node u has been found, you will want to follow all
the edges from u and find all the nodes v that have edges from u. When you have done that for
node u, we say that it has been handled.
Data Structure: You must maintain (1) the set of nodes foundHandled that have
been found and handled and (2) the set of nodes foundNotHandled that have
been found but not handled.

The Loop Invariant:
LI1: For each found node v, we know that v is reachable from s, because we have traced
out a path s −→ v from s to it.
LI2: If a node has been handled, then all of its neighbors have been found.

THE ORDER OF HANDLING NODES:
This algorithm specifically did not indicate which node u to select from
foundNotHandled. It did not need to, because the algorithm works no matter how this choice is
made. We will now consider specific orders in which to handle the nodes and specific
applications of these orders.
Queue (Breadth-First Search): One option is to handle nodes in the order they are
found in. This treats foundNotHandled as a queue: "first in, first out." The effect is that the
search is breadth first, meaning that all nodes at distance 1 from s are handled first, then all those
at distance 2, and so on. A byproduct of this is that we find for each node v a shortest path from s
to v.
Priority Queue (Shortest (Weighted) Paths): Another option calculates for each
node v in foundNotHandled the minimum weighted distance from s to v along any path seen so
far. It then handles the node that is closest to s according to this approximation. Because these
approximations change throughout time, foundNotHandled is implemented using a priority
queue: "highest current priority out first." Like breadth-first search, the search handles nodes that
are closest to s first, but now the length of a path is the sum of its edge weights. A byproduct of
this method is that we find for each node v the shortest weighted path from s to v.
Stack (Depth-First Search): Another option is to handle the node that was found
most recently. This method treats foundNotHandled as a stack: "last in, first out." The effect is
that the search is depth first, meaning that a particular path is followed as deeply as possible into
the graph until a dead end is reached, forcing the algorithm to backtrack.

ALGORITHM:
Code:
algorithm GenericSearch (G, s)
<pre-cond>: G is a (directed or undirected) graph, and s is one of its nodes.
<post-cond>: The output consists of all the nodes u that are reachable by a path
in G from s.
begin
    foundHandled = ∅
    foundNotHandled = {s}
    loop
        <loop-invariant>: See LI1, LI2.
        exit when foundNotHandled = ∅
        let u be some node from foundNotHandled
        for each v connected to u
            if v has not previously been found then
                add v to foundNotHandled
            end if
        end for
        move u from foundNotHandled to foundHandled
    end loop
    return foundHandled
end algorithm

procedure:
1: Search(G,S,goal)
2: Inputs
3: G: graph with nodes N and arcs A
4: S: set of start nodes
5: goal: Boolean function of states
6: Output
7: path from a member of S to a node for which goal is true
8: or ⊥ if there are no solution paths
9: Local
10: Frontier: set of paths
11: Frontier ←{⟨s⟩: s∈S}
12: while (Frontier ≠{})
13: select and remove ⟨s0,...,sk⟩ from Frontier
14: if ( goal(sk)) then
15: return ⟨s0,...,sk⟩
16: Frontier ←Frontier ∪{⟨s0,...,sk,s⟩: ⟨sk,s⟩ ∈ A}
17: return ⊥
This algorithm has a few features that should be noted:

The selection of a path at line 13 is non-deterministic. The choice of path that is selected
can affect the efficiency; a particular search strategy will determine which path is selected.
It is useful to think of the return at line 15 as a temporary return; another path to a goal
can be searched for by continuing to line 16.
If the procedure returns ⊥, no solutions exist (or there are no remaining solutions if the
procedure has been retried).
If the path chosen does not end at a goal node and the node at the end has no neighbors,
extending the path means removing the path. This outcome is reasonable because this path could
not be part of a path from a start node to a goal node.
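As a concrete sketch, the generic search above can be written in Python over an adjacency-list graph (the dictionary representation is our own assumption). The foundNotHandled set is kept as a list, so the order in which nodes are handled is arbitrary, which is exactly the freedom the generic algorithm allows:

```python
def generic_search(graph, s):
    """Return the set of all nodes reachable from s in graph.

    graph maps each node to a list of its neighbors.
    """
    found = {s}            # found = foundHandled ∪ foundNotHandled
    not_handled = [s]
    while not_handled:
        u = not_handled.pop()          # any choice of u works
        for v in graph.get(u, []):
            if v not in found:         # v has not previously been found
                found.add(v)
                not_handled.append(v)
        # u is now handled
    return found
```

Popping from the end makes this behave like a stack (depth-first); replacing `pop()` with `pop(0)` would make it a queue (breadth-first), without changing which nodes are returned.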

2.2 BREADTH-FIRST SEARCH:
DEFINITION:
Breadth-First Search (BFS) is a strategy for searching in a graph when search is
limited to essentially two operations:
(a) visit and inspect a node of a graph;
(b) gain access to visit the nodes that neighbor the currently visited node.
EXPLANATION:
The BFS begins at a root node and inspects all the neighboring nodes. Then for
each of those neighbor nodes in turn, it inspects their neighbor nodes which were unvisited, and
so on. Compare BFS with the equivalent, but more memory-efficient, iterative deepening depth-first search, and contrast it with depth-first search.
In breadth-first search the frontier is implemented as a FIFO (first-in, first-out)
queue. Thus, the path that is selected from the frontier is the one that was added earliest.
This approach implies that the paths from the start node are generated in order of
the number of arcs in the path. One of the paths with the fewest arcs is selected at each stage.
EXAMPLE:
(Figure: a step-by-step trace of breadth-first search on an example graph, Steps 1-10.)

ALGORITHM:
Code:
algorithm ShortestPath (G, s)
<pre-cond>: G is a (directed or undirected) graph, and s is one of its nodes.
<post-cond>: π specifies a shortest path from s to each node of G, and d specifies
their lengths.
begin
    foundHandled = ∅
    foundNotHandled = {s}
    d(s) = 0, π(s) = nil
    loop
        <loop-invariant>: See above.
        exit when foundNotHandled = ∅
        let u be the node in the front of the queue foundNotHandled
        for each v connected to u
            if v has not previously been found then
                add v to foundNotHandled
                d(v) = d(u) + 1
                π(v) = u
            end if
        end for
        move u from foundNotHandled to foundHandled
    end loop
    (for unfound v, d(v) = ∞)
    return <d, π>
end algorithm
Pseudocode
Input: A graph G and a root v of G

procedure BFS(G, v):
    create a queue Q
    create a set V
    enqueue v onto Q
    add v to V
    while Q is not empty:
        t ← Q.dequeue()
        if t is what we are looking for:
            return t
        for all edges e in G.adjacentEdges(t) do
            u ← G.adjacentVertex(t, e)
            if u is not in V:
                add u to V
                enqueue u onto Q
    return none

Breadth-first search is useful when
space is not a problem;
you want to find the solution containing the fewest arcs;
few solutions may exist, and at least one has a short path length; and
infinite paths may exist, because it explores all of the search space, even with infinite
paths.
It is a poor method when all solutions have a long path length or there is some heuristic
knowledge available. It is not used very often because of its space complexity.
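The ShortestPath algorithm above can be sketched in Python with a FIFO queue (the adjacency-list dictionary is our own representation). Because foundNotHandled is a queue, d(v) ends up as the minimum number of arcs from s to v, and following the predecessors π back from v traces one shortest path:

```python
from collections import deque

def bfs_shortest_paths(graph, s):
    """Breadth-first search from s.

    Returns (dist, pred): dist[v] is the fewest arcs from s to v,
    pred[v] is v's predecessor on one such shortest path.
    Unreachable nodes are simply absent (conceptually d(v) = infinity).
    """
    dist = {s: 0}
    pred = {s: None}
    q = deque([s])                 # foundNotHandled as a FIFO queue
    while q:
        u = q.popleft()            # node at the front of the queue
        for v in graph.get(u, []):
            if v not in dist:      # v has not previously been found
                dist[v] = dist[u] + 1
                pred[v] = u
                q.append(v)
    return dist, pred
```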

2.3 DIJKSTRA SHORTEST PATH
DEFINITION:
Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in
1956 and published in 1959 [1][2], is a graph search algorithm that solves the single-source
shortest-path problem for a graph with non-negative edge costs, producing a shortest-path tree.
This algorithm is often used in routing and as a subroutine in other graph algorithms.

EXPLANATION:
We will now make the shortest-path problem more general by allowing each
edge to have a different weight (length). The length of a path from s to v will be the sum of the
weights on the edges of the path. This makes the problem harder, because the shortest path to a
node may wind deep into the graph along many short edges instead of along a few long edges.
Despite this, only small changes need to be made to the algorithm. The new algorithm is called
Dijkstra’s algorithm.

Specifications of the Shortest-Weighted-Path Problem:
Preconditions: The input is a graph G (either directed or undirected) and a source
node s. Each edge <u, v> is allocated a nonnegative weight w<u,v>.
Postconditions: The output consists of d and π, where for each node v of G, d(v)
gives the length δ(s, v) of the shortest weighted path from s to v, and π defines a
shortest-weighted-path tree. (See Section 14.2.)

EXAMPLE:
(Figure: a step-by-step trace of Dijkstra's algorithm on an example weighted graph, Steps 1-9.)
ALGORITHM:
Code:
algorithm DijkstraShortestWeightedPath(G, s)
<pre-cond>: G is a weighted (directed or undirected) graph, and s is one of its nodes.
<post-cond>: π specifies a shortest weighted path from s to each node of G, and d specifies their
lengths.
begin
    d(s) = 0, π(s) = nil
    for other v, d(v) = ∞ and π(v) = nil
    handled = ∅
    notHandled = priority queue containing all nodes. Priorities given by d(v).
    loop
        <loop-invariant>: See above.
        exit when notHandled = ∅
        let u be a node from notHandled with smallest d(u)
        for each v connected to u
            foundPathLength = d(u) + w<u,v>
            if d(v) > foundPathLength then
                d(v) = foundPathLength
                π(v) = u
                (update the notHandled priority queue)
            end if
        end for
        move u from notHandled to handled
    end loop
    return <d, π>
end algorithm

Pseudocode
function Dijkstra(Graph, source):
    for each vertex v in Graph:          // Initializations
        dist[v] := infinity              // Unknown distance from source to v
        previous[v] := undefined         // Previous node in optimal path from source
    end for

    dist[source] := 0                    // Distance from source to source
    Q := the set of all nodes in Graph   // All nodes are unoptimized, thus are in Q

    while Q is not empty:                // The main loop
        u := vertex in Q with smallest distance in dist[]   // Source node in first case
        remove u from Q
        if dist[u] = infinity:
            break                        // all remaining vertices are inaccessible from source
        end if

        for each neighbor v of u:        // where v has not yet been removed from Q
            alt := dist[u] + dist_between(u, v)
            if alt < dist[v]:            // Relax (u, v)
                dist[v] := alt
                previous[v] := u
                decrease-key v in Q      // Reorder v in the queue
            end if
        end for
    end while
    return dist
end function
Applications of Dijkstra's Algorithm:

Traffic information systems are the most prominent use
Mapping (MapQuest, Google Maps)
Routing systems
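The pseudocode above can be sketched in Python with the standard-library heap. Since `heapq` has no decrease-key operation, this sketch pushes a fresh entry on each relaxation and skips stale entries when they are popped, which computes the same distances (the adjacency-list representation with (neighbor, weight) pairs is our own assumption):

```python
import heapq

def dijkstra(graph, s):
    """Single-source shortest weighted paths from s.

    graph maps each node to a list of (neighbor, weight) pairs,
    with nonnegative weights. Returns (dist, pred).
    """
    dist = {s: 0}
    pred = {s: None}
    pq = [(0, s)]                       # priority queue of (d(v), v)
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue                    # stale entry: u already handled with smaller d
        for v, w in graph.get(u, []):
            alt = d + w                 # foundPathLength
            if alt < dist.get(v, float('inf')):
                dist[v] = alt           # relax edge (u, v)
                pred[v] = u
                heapq.heappush(pq, (alt, v))
    return dist, pred
```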
2.4 DEPTH-FIRST SEARCH
Definition:
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data
structures. One starts at the root (selecting some node as the root in the graph case) and explores
as far as possible along each branch before backtracking.
Explanation:
The first strategy is depth-first search. In depth-first search, the frontier acts like a last-in
first-out stack. The elements are added to the stack one at a time. The one selected and taken off
the frontier at any time is the last element that was added.
 Explore edges out of the most recently discovered vertex v.
 When all edges of v have been explored, backtrack to explore other edges leaving the
vertex from which v was discovered (its predecessor).
 "Search as deep as possible first."
 Continue until all vertices reachable from the original source are discovered.
 If any undiscovered vertices remain, then one of them is chosen as a new source and the
search is repeated from that source.
Example:
(Figure: a step-by-step trace of depth-first search on an example graph, Steps 1-16.)

ALGORITHM:
Code:
algorithm DepthFirstSearch (G, s)
<pre-cond>: G is a (directed or undirected) graph, and s is one of its nodes.
<post-cond>: The output is a depth-first search tree of G rooted at s.
begin
    foundHandled = ∅
    foundNotHandled = {<s, 0>}
    time = 0 % Used for time stamping. See following discussion.
    loop
        <loop-invariant>: See preceding list.
        exit when foundNotHandled = ∅
        pop <u, i> off the stack foundNotHandled
        if u has an (i+1)st edge <u, v>
            push <u, i + 1> onto foundNotHandled
            if v has not previously been found then
                π(v) = u
                <u, v> is a tree edge
                push <v, 0> onto foundNotHandled
                s(v) = time; time = time + 1
            else if v has been found but not completely handled then
                <u, v> is a back edge
            else (v has been completely handled)
                <u, v> is a forward or cross edge
            end if
        else
            move u to foundHandled
            f(u) = time; time = time + 1
        end if
    end loop
    return foundHandled
end algorithm

procedure:
1   procedure DFS-iterative(G, v):
2       label v as discovered
3       let S be a stack
4       S.push(v)
5       while S is not empty
6           t ← S.peek()
7           if t is what we're looking for:
8               return t
9           for all edges e in G.adjacentEdges(t) do
10              if edge e is already labelled
11                  continue with the next edge
12              w ← G.adjacentVertex(t, e)
13              if vertex w is not discovered and not explored
14                  label e as tree-edge
15                  label w as discovered
16                  S.push(w)
17                  continue at 5
18              else if vertex w is discovered
19                  label e as back-edge
20              else
21                  // vertex w is explored
22                  label e as forward- or cross-edge
23          label t as explored
24          S.pop()

Depth-first search is appropriate when either
space is restricted;
many solutions exist, perhaps with long path lengths, particularly for the case where
nearly all paths lead to a solution; or
the order in which the neighbors of a node are added to the stack can be tuned so that solutions
are found on the first try.
Depth-first search is the basis for a number of other algorithms, such as iterative deepening.
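A minimal Python sketch of the stack-based search, returning the preorder in which vertices are visited, follows (our own simplification: nodes are marked discovered when pushed, and edge classification is omitted):

```python
def dfs_preorder(graph, s):
    """Iterative depth-first search from s; returns the visit order."""
    discovered = {s}
    order = []
    stack = [s]                    # last in, first out
    while stack:
        u = stack.pop()
        order.append(u)
        # Push neighbors in reverse so they are visited in listed order.
        for v in reversed(graph.get(u, [])):
            if v not in discovered:
                discovered.add(v)
                stack.append(v)
    return order
```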

2.5 RECURSIVE DEPTH FIRST SEARCH:
Definition:
A recursive implementation of a depth-first search algorithm, which directly
mirrors the iterative version. the only difference is that the iterative version uses a stack to keep
track of the route back to the start node, while the recursive version uses the stack of recursive
stack frames. the advantage of the recursive algorithm is that it is easier to code and easier to
understand. the iterative algorithm might run slightly faster, but a good compiler will convert the
recursive algorithm into an iterative one.

EXPLANATION:
Recursive:
Recursion is the process of repeating items in a self-similar way. For instance, when the
surfaces of two mirrors are exactly parallel with each other, the nested images that occur are a
form of infinite recursion.

Example:
A classic example of recursion is the definition of the factorial function, given here in C
code:
unsigned int factorial(unsigned int n)
{
    if (n == 0)
    {
        return 1;
    }
    else
    {
        return n * factorial(n - 1);
    }
}
Pruning Paths:
Consider the instance graph. There are two obvious paths from node S to node v.
However, there are actually an infinite number of such paths. One path of interest is the one that
starts at S, and traverses around past u up to c and then down to v. All of these equally valued
paths will be pruned from consideration, except the one that goes from S through b and u directly
to v.
Three Friends: Given this instance, we first mark our source node S with an x, and then we
recurse three times, once from each of a, b, and c.
Friend a: Our first friend marks all nodes that are reachable from its source node a = s
without passing through a previously marked node. This includes only the nodes in the leftmost
branch, because when we marked our source S, we blocked his route to the rest of the graph.
Friend b: Our second friend does the same. He finds, for example, the path that goes
from b through u directly to v. He also finds and marks the nodes back around to c.
Friend c: Our third friend is of particular interest. He finds that his source node, c, has
already been marked. Hence, he returns without doing anything. This prunes off this entire
branch of the recursion tree. The reason that he can do this is that for any path to a node that he
would consider, another path to the same node has already been considered.
Achieving the Postcondition:
Consider the component of the graph reachable from our source s without passing
through a previously marked node. (Because our instance has no marked nodes, this includes all
the nodes.) To mark the nodes within this component, we do the following. First, we mark our
source s. This partitions our component of reachable nodes into subcomponents that are still
reachable from
ALGORITHM:
Code:
algorithm DepthFirstSearch (s)
<pre-cond>: An input instance consists of a (directed or undirected) graph G
with some of its nodes marked found and a source node s.
<post-cond>: The output is the same graph G, except all nodes v reachable from
s without passing through a previously found node are now also marked as being
found. The graph G is a global variable ADT, which is assumed to be both input
and output to the routine.
begin
    if s is marked as found then
        do nothing
    else
        mark s as found
        for each v connected to s
            DepthFirstSearch (v)
        end for
    end if
end algorithm
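The recursive algorithm above transcribes almost line for line into Python; here the set `found` plays the role of the marks on the global graph (the dictionary adjacency-list representation is our own assumption):

```python
def depth_first_search(graph, s, found=None):
    """Mark every node reachable from s without passing through an
    already-found node. Returns the set of found (marked) nodes."""
    if found is None:
        found = set()
    if s in found:
        return found               # s is already marked: do nothing
    found.add(s)                   # mark s as found
    for v in graph.get(s, []):     # for each v connected to s
        depth_first_search(graph, v, found)
    return found
```

The recursion terminates even on graphs with cycles because a marked node is never expanded a second time.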
2.6 LINEAR ORDERING OF A PARTIAL ORDER:
Definition:
Finding a linear order consistent with a given partial order is one of many applications
of a depth-first search. (Hint: If a question ever mentions that a graph is directed
acyclic, always start by running this algorithm.)
Explanation:
Definition of Total Order:
A total order of a set of objects V specifies for each pair of objects u, v ∈ V either (1) that
u is before v or (2) that v is before u. It must be transitive, in that if u is before v and v is before
w, then u is before w.
Definition of Partial Order:
A partial order of a set of objects V supplies only some of the information of a total order.
For each pair of objects u, v ∈ V, it specifies either that u is before v, that v is before u, or that the
order of u and v is undefined. It must also be transitive.
For example, you must put on your underwear before your pants, and you must put on your
shoes after both your pants and your socks. According to transitivity, this means you must put
your underwear on before your shoes. However, you do have the freedom to put your underwear
and your socks on in either order. My son, Josh, when six, mistook this partial order for a total
order and refused to put on his socks before his underwear. When he was eight, he explained
to me that the reason that he could get dressed faster than I was that he had a "shortcut,"
consisting of putting his socks on before his pants. I was thrilled that he had at least partially
understood the idea of a partial order:
underwear     socks
    |           |
  pants         |
      \        /
       shoes
A partial order can be represented by a directed acyclic graph (DAG) G. The
vertices consist of the objects V, and the directed edge <u, v> indicates that u is before v. It
follows from transitivity that if there is a directed path in G from u to v, then we know that u is
before v. A cycle in G from u to v and back to u presents a contradiction, because u cannot be
both before and after v.

Specifications of the Topological Sort Problem:
Preconditions: The input is a directed acyclic graph G representing a partial order.
Postconditions: The output is a total order consistent with the partial order given by G,
that is, for all edges <u, v> ∈ G, u appears before v in the total order.
An Easy but Slow Algorithm:
The Algorithm: Start at any node v of G. If v has an outgoing edge, walk along it to one of its
neighbors. Continue walking until you find a node t that has no outgoing edges. Such a node is
called a sink. This process cannot continue forever, because the graph has no cycles. The sink t
can go after every node in G. Hence, you should put t last in the total order, delete t from G, and
recursively repeat the process on G − t.
Running Time: It takes up to n time to find the first sink, n − 1 to find the second, and so on.
The total time is Θ(n²).
Algorithm Using a Depth-First Search:
Start at any node s of G. Do a depth-first search starting at node s. After this search
completes, nodes that are considered found will continue to be considered found, and so should
not be considered again.
Let s′ be any unfound node of G. Do a depth-first search starting at node s′. Repeat
the process until all nodes have been found. Use the time stamp f(u) to keep track of the order in
which nodes are completely handled, that is, removed from the stack. Output the nodes in
reverse order. If you ever find a back edge, then stop and report that the graph has a cycle.
Proof of Correctness:
Lemma: For every edge <u, v> of G, node v is completely handled before u.
Proof of Lemma: Consider some edge <u, v> of G. Before u is completely handled,
it must be put onto the stack foundNotHandled. At this point in time,
there are three cases:
Tree Edge: v has not yet been found. Because u has an edge to v, v is put onto the top
of the stack above u before u has been completely handled. No more progress will be made
towards handling u until v has been completely handled and removed from the stack.
Back Edge: v has been found, but not completely handled, and hence is on the stack
somewhere below u. Such an edge is a back edge. This contradicts the fact that G is acyclic.
Forward or Cross Edge: v has already been completely handled and removed from the
stack. In this case, we are done: v was completely handled before u.
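The depth-first-search algorithm above, with finish-time ordering and back-edge cycle detection, can be sketched in Python as follows (a recursive sketch; names are our own):

```python
def topological_sort(graph):
    """Return a total order consistent with the DAG's partial order.

    graph maps each node to a list of its successors.
    Raises ValueError if a back edge (a cycle) is found.
    """
    order = []                 # nodes in the order they are completely handled
    state = {}                 # node -> 'found' or 'handled'

    def visit(u):
        state[u] = 'found'
        for v in graph.get(u, []):
            if state.get(v) == 'found':
                raise ValueError("graph has a cycle")   # back edge
            if v not in state:
                visit(v)
        state[u] = 'handled'   # completely handled: record finish order
        order.append(u)

    # Repeat the search from every not-yet-found node.
    for u in graph:
        if u not in state:
            visit(u)
    return order[::-1]         # output nodes in reverse finish order
```

Running it on the dressing example puts underwear before pants, and both pants and socks before shoes, while leaving the underwear/socks order free.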

3. NETWORK FLOWS AND LINEAR PROGRAMMING:
Definition:
Network flow is a classic computational problem with a surprisingly large number of
applications, such as routing trucks and matching happy couples. Think of a given directed graph
as a network of pipes starting at a source node s and ending at a sink node t. Through each pipe
water can flow in one direction at some rate up to some maximum capacity. The goal is to find
the maximum total rate at which water can flow from the source node s to the sink node t. If this
were a physical system of pipes, you could determine the answer simply by pushing as much
water through as you could. However, achieving this algorithmically is more difficult than you
might at first think, because the exponentially many paths from s to t overlap, winding forward
and backward in complicated ways.

Explanation:
Network Flow Specification:
Given an instance <G, s, t>, the goal is to find a maximum rate of flow through graph G
from node s to node t.
Precondition: We are given one of the following instances.
Instances: An instance <G, s, t> consists of a directed graph G and specific nodes s
and t. Each edge <u, v> is associated with a positive capacity c(u, v).
Postcondition: The output is a solution with maximum value and the value of that
solution.
Solutions for Instance:
A solution for the instance is a flow F, which specifies the flow F(u, v) through each edge of the
graph. The requirements of a flow are as follows.
Unidirectional Flow: For any pair of nodes, it is easiest to assume that flow does not go
in both directions between them. Hence, require that at least one of F(u, v) and F(v, u) is zero
and that neither be negative.

a) A network with its edge capacities labeled.
b) A maximum flow in this network. The first value associated with each edge is its flow,
and the second is its capacity. The total rate of the flow is 3 = 1 + 2 − 0. Note that no
more flow can be pushed along the top path, because the edge <b, c> is at capacity. Similarly
for the edge <e, f>. Note also that no flow is pushed along the bottom path, because this
would decrease the total from s to t.
c) A minimum cut in this network. The capacity of this min cut is 3 = 1 + 2. Note that the
capacity of the edge from j is not included in the capacity of the cut, because it is going in the
wrong direction.
d) (b) vs (c): The rate of the maximum flow is the same as the capacity of the min cut. The
edges crossing forward across the cut are at capacity in the flow, while those crossing backward
have zero flow. These things are not coincidences.
Edge Capacity:
The flow through any edge cannot exceed the capacity of the edge, namely F(u, v) ≤ c(u, v).
No Leaks: No water can be added at any node other than the source s, and no water can be
drained at any node other than the sink t. At each other node the total flow into the node equals
the total flow out, i.e., for all nodes u ∉ {s, t}, Σ_v F(v, u) = Σ_v F(u, v).
Measure of Success: The value of a flow F, denoted rate(F), is the total rate of flow from the
source s to the sink t. We will define this to be the total that leaves s without coming back,
rate(F) = Σ_v [F(s, v) − F(v, s)]. Agreeing with our intuition, we will later prove that because no
flow leaks or is created in between s and t, this flow equals that flowing into t without leaving it,
namely Σ_v [F(v, t) − F(t, v)].
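One standard way to compute a maximum flow is to repeatedly find augmenting paths in the residual graph. The following Python sketch uses BFS to find each path (the Edmonds-Karp variant; the dictionary-of-dictionaries capacity representation is our own assumption, not from the text):

```python
from collections import deque

def max_flow(capacity, s, t):
    """Maximum flow from s to t; capacity[u][v] is the capacity of edge <u, v>."""
    nodes = set(capacity)
    for u in capacity:
        nodes.update(capacity[u])
    # Build residual capacities, including zero-capacity reverse edges.
    res = {u: {} for u in nodes}
    for u in capacity:
        for v, c in capacity[u].items():
            res[u][v] = res[u].get(v, 0) + c
            res[v].setdefault(u, 0)
    flow = 0
    while True:
        # BFS for an augmenting path from s to t in the residual graph.
        pred = {s: None}
        q = deque([s])
        while q and t not in pred:
            u = q.popleft()
            for v, c in res[u].items():
                if c > 0 and v not in pred:
                    pred[v] = u
                    q.append(v)
        if t not in pred:
            return flow                # no augmenting path: flow is maximum
        # Find the bottleneck along the path, then push that much flow.
        path = []
        v = t
        while pred[v] is not None:
            path.append((pred[v], v))
            v = pred[v]
        bottleneck = min(res[u][v] for u, v in path)
        for u, v in path:
            res[u][v] -= bottleneck    # use up forward residual capacity
            res[v][u] += bottleneck    # allow flow to be undone later
        flow += bottleneck
```

The reverse residual edges are what let the algorithm "undo" flow pushed down a bad path, which is exactly the winding forward and backward the text warns about.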

EXAMPLE:
NETWORK FLOWS: APPLICATIONS
TRANSPORTATION PROBLEM
Each node is one of two types:
source (supply) node
destination (demand) node
Every arc has:
its tail at a supply node
its head at a demand node
Such a graph is called bipartite.
There are three basic methods:
Northwest Corner Method
Minimum Cost Method

Vogel’s Method

MINIMUM COST METHOD
The Northwest Corner Method does not utilize shipping costs. It can yield an initial basic
feasible solution (bfs) easily, but the total shipping cost may be very high. The Minimum Cost
Method uses shipping costs in order to come up with a bfs that has a lower cost. To begin the
Minimum Cost Method, first we find the decision variable with the smallest shipping cost (Xij).
Then assign Xij its largest possible value, which is the minimum of si and dj.
After that, as in the Northwest Corner Method, we should cross out row i and column j and
reduce the supply or demand of the non-crossed-out row or column by the value of Xij. Then we
will choose the cell with the minimum shipping cost from the cells that do not lie in a
crossed-out row or column, and we will repeat the procedure.
An example for the Minimum Cost Method. The unit shipping costs, supplies, and demands are:

         Col 1  Col 2  Col 3  Col 4  Supply
Row 1      2      3      5      6       5
Row 2      2      1      3      5      10
Row 3      3      8      4      6      15
Demand    12      8      4      6

Step 1: Select the cell with minimum cost. The cheapest cell is the cost-1 cell (row 2, column 2), so assign X22 = min(s2, d2) = min(10, 8) = 8.

Step 2: Cross out column 2 and reduce the supply of row 2 to 10 − 8 = 2.

Step 3: Find the new cell with minimum shipping cost, the cost-2 cell (row 2, column 1), and assign X21 = min(2, 12) = 2. Cross out row 2 and reduce the demand of column 1 to 10.

Step 4: Find the new cell with minimum shipping cost, the cost-2 cell (row 1, column 1), and assign X11 = min(5, 10) = 5. Cross out row 1 and reduce the demand of column 1 to 5.

Step 5: Find the new cell with minimum shipping cost, the cost-3 cell (row 3, column 1), and assign X31 = min(15, 5) = 5. Cross out column 1 and reduce the supply of row 3 to 10.

Step 6: The cheapest remaining cell is the cost-4 cell (row 3, column 3); assign X33 = min(10, 4) = 4 and cross out column 3.

Step 7: Finally assign 6 to the last cell, X34 = 6. The bfs is found
as: X11 = 5, X21 = 2, X22 = 8, X31 = 5, X33 = 4 and X34 = 6.
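The crossing-out procedure above can be sketched in code. This is a minimal illustration: the function name `minimumCostMethod` and the row-major tie-breaking between equal-cost cells are our own choices, not from the text.

```cpp
#include <algorithm>
#include <limits>
#include <map>
#include <utility>
#include <vector>

// Minimum Cost Method: repeatedly give the cheapest remaining cell its
// largest possible value, min(si, dj), then cross out the exhausted row or
// column. Returns the basic feasible solution as a map (row, col) -> X_ij.
std::map<std::pair<int, int>, int> minimumCostMethod(
        std::vector<std::vector<int>> cost,
        std::vector<int> supply, std::vector<int> demand) {
    std::map<std::pair<int, int>, int> bfs;
    const int m = (int)supply.size(), n = (int)demand.size();
    while (true) {
        int bi = -1, bj = -1, best = std::numeric_limits<int>::max();
        for (int i = 0; i < m; ++i)               // cheapest non-crossed-out cell
            for (int j = 0; j < n; ++j)
                if (supply[i] > 0 && demand[j] > 0 && cost[i][j] < best) {
                    best = cost[i][j]; bi = i; bj = j;
                }
        if (bi < 0) break;                        // everything is crossed out
        int x = std::min(supply[bi], demand[bj]); // largest possible X_ij
        bfs[{bi, bj}] = x;
        supply[bi] -= x;                          // reduce / cross out row i
        demand[bj] -= x;                          // and column j
    }
    return bfs;
}
```

Run on the example above, it reproduces the bfs X11 = 5, X21 = 2, X22 = 8, X31 = 5, X33 = 4, X34 = 6.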

3.1 HILL CLIMBING ALGORITHM:
In computer science, hill climbing is a mathematical optimization technique which belongs to the family
of local search. It is relatively simple to implement, making it a popular first choice. Although more
advanced algorithms may give better results, in some situations hill climbing works just as well.
Hill climbing can be used to solve problems that have many solutions, some of which are better than others. It
starts with a random (potentially poor) solution, and iteratively makes small changes to the solution, each
time improving it a little. When the algorithm cannot see any improvement anymore, it terminates. Ideally, at that
point the current solution is close to optimal, but it is not guaranteed that hill climbing will ever come close to
the optimal solution.
For example, hill climbing can be applied to the traveling salesman problem. It is easy to find a solution that visits
all the cities but is very poor compared to the optimal solution. The algorithm starts with such a
solution and makes small improvements to it, such as switching the order in which two cities are visited.
Eventually, a much better route is obtained.
Hill climbing is used widely in artificial intelligence, for reaching a goal state from a starting node. The choice of
next node and starting node can be varied to give a list of related algorithms.

Hill-climbing vs. Branch-and-bound:
Hill-climbing searches work by starting off with an initial guess of a solution, then
iteratively making local changes to it until either the solution is found or the heuristic
gets stuck in a local maximum. There are many ways to try to avoid getting stuck in
local maxima, such as running many searches in parallel, or probabilistically
choosing the successor state. In many cases, hill-climbing algorithms will rapidly
converge on the correct answer. However, none of these approaches is guaranteed
to find the optimal solution.
Branch-and-bound solutions work by cutting the search space into pieces,
exploring one piece, and then attempting to rule out other parts of the search space
based on the information gained during each search. They are guaranteed to find
the optimal answer eventually, though doing so might take a long time. For many
problems, branch-and-bound based algorithms work quite well, since a small
amount of information can rapidly shrink the search space.
In short, hill climbing isn't guaranteed to find the right answer, but often runs very
quickly and gives good approximations. Branch-and-bound always finds the right
answer, but might take a while to do so.
Hill climbing attempts to maximize (or minimize) a function f(x), where x ranges over discrete states. These states
are typically represented by vertices in a graph, where the edges encode nearness or similarity
of states. Hill climbing will follow the graph from vertex to vertex, always locally increasing (or
decreasing) the value of f, until a local maximum (or local minimum) xm is reached. Hill climbing can also
operate on a continuous space: in that case, the algorithm is called gradient ascent (or gradient descent
if the function is minimized).
Function for hill climbing:
function HILL-CLIMBING(problem) returns goal state
    inputs: problem, a problem
    local variables: current, a node
                     neighbour, a node
    current ← MAKE-NODE(INITIAL-STATE[problem])
    loop do
        neighbour ← a highest-valued successor of current
        if VALUE[neighbour] ≤ VALUE[current] then return STATE[current]
        current ← neighbour
    end loop
Problems
Local maxima
A problem with hill climbing is that it will find only local maxima. Unless the heuristic is convex, it may
not reach a global maximum. Other local search algorithms, such as stochastic hill climbing, random walks
and simulated annealing, try to overcome this problem.
Ridges
A ridge is a curve in the search space that leads to a maximum, but the orientation of the ridge
compared to the available moves that are used to climb is such that each move will lead to a smaller
point. In other words, each point on a ridge looks to the algorithm like a local maximum, even though
the point is part of a curve leading to a better optimum.
Plateau
Another problem with hill climbing is that of a plateau, which occurs when we get to a "flat" part of the
search space, i.e. we have a path where the heuristics are all very close together. This kind of flatness
can cause the algorithm to cease progress and wander aimlessly.
HILL CLIMBING TYPES
 PRIMARY
 STEEPEST ASCENT
 DUAL
3.2 PRIMARY DUAL HILL CLIMBING:
Definition:
Suppose that over the hills on which we are climbing there are an exponential
number of roofs, one on top of the other. As before, our problem is to find a place to stand on the
hills that has maximum height. We call this the primal optimization problem. An equally
challenging problem is to find the lowest roof. We call this the dual optimization problem. The
situation is such that each roof is above each place to stand. It follows trivially that the lowest
and hence optimal roof is above the highest and hence optimal place to stand, but off hand we do
not know how far above it is.
FLOW DIAGRAM:
[Diagram: the primal–dual loop. Solve the restricted primal; if it succeeds, a better primal
solution x has been found; otherwise solve the restricted dual and use it to construct a
better dual, then repeat.]

Example:
Lemma (Finds Optimal): A primal–dual hill-climbing algorithm is guaranteed to find an optimal
solution to both the primal and the dual optimization problems.
Proof: By the design of the algorithm, it only stops when it has a location L and a roof R with
matching heights, height(L) = height(R). This location must be optimal, because every other
location L′ must be below this roof and hence cannot be higher than this location; that is, ∀ L′,
height(L′) ≤ height(R) = height(L). We say that this dual solution R witnesses the fact that the
primal solution L is optimal. Similarly, L witnesses the fact that R is optimal; that is, ∀ R′,
height(R′) ≥ height(L) = height(R). This is called the duality principle.

Vertex cover and generalized matching:
1. Initialization: x = 0; y = 0.
2. When there is an uncovered edge:
(a) Pick an uncovered edge, and increase ye until some vertices go tight.
(b) Add all tight vertices to the vertex cover.
3. Output the vertex cover.
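The three steps above can be sketched for weighted vertex cover. This is an illustrative implementation under our own naming (`primalDualVertexCover`); iterating over the edge list stands in for "pick an uncovered edge".

```cpp
#include <algorithm>
#include <set>
#include <utility>
#include <vector>

// Primal-dual vertex cover, following the three steps above: for each
// uncovered edge, raise its dual variable y_e until an endpoint goes
// "tight" (its accumulated dual load reaches its weight), then add every
// tight endpoint to the cover.
std::set<int> primalDualVertexCover(
        int n, const std::vector<std::pair<int, int>>& edges,
        const std::vector<int>& weight) {
    std::vector<int> load(n, 0);  // sum of y_e over the edges touching each vertex
    std::set<int> cover;
    for (auto [u, v] : edges) {
        if (cover.count(u) || cover.count(v)) continue;  // edge already covered
        int ye = std::min(weight[u] - load[u], weight[v] - load[v]);
        load[u] += ye;
        load[v] += ye;
        if (load[u] == weight[u]) cover.insert(u);       // tight vertices join
        if (load[v] == weight[v]) cover.insert(v);       // the cover
    }
    return cover;
}
```

On a unit-weight triangle this returns a cover of size 2, which happens to be optimal; in general the method guarantees a cover of at most twice the optimal weight.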
Primal–Dual Pair and Example

Primal:
max 2000Xfancy + 1700Xfine + 1200Xnew    (profits)
s.t.  Xfancy + Xfine + Xnew ≤ 12         (van capacity)
      25Xfancy + 20Xfine + 19Xnew ≤ 280  (labor)
      Xfancy, Xfine, Xnew ≥ 0

Dual:
min 12U1 + 280U2                         (resource payments)
s.t.  U1 + 25U2 ≥ 2000                   (Xfancy)
      U1 + 20U2 ≥ 1700                   (Xfine)
      U1 + 19U2 ≥ 1200                   (Xnew)
      U1, U2 ≥ 0                         (nonnegativity)

Primal rows become dual columns.

Primal–Dual Objective Correspondence: given feasible X* and U*,
CX* ≤ U*AX* ≤ U*b, and hence CX* ≤ U*b.

Dual Variable Relation to Partial Derivative: at the optimum,
U* = CB B^-1 = ∂Z/∂b.

Complementary Slackness: at the optimal X*, U*,
U*(b − AX*) = 0 and (U*A − C)X* = 0.

Zero Profits: given optimal solutions, U*b = CX*.
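These correspondences can be checked numerically on the van example above. The candidate solutions (Xfancy, Xfine, Xnew) = (8, 4, 0) and (U1, U2) = (500, 60) are our own hand computations from the two binding constraints, offered as an illustration rather than taken from the text.

```cpp
// Weak-duality check for the primal-dual pair above. The candidate
// solutions (8, 4, 0) and (500, 60) are hand-computed from the two
// binding constraints (an assumption of this sketch).
double primalObjective(double xf, double xi, double xn) {
    return 2000 * xf + 1700 * xi + 1200 * xn;   // profits
}
double dualObjective(double u1, double u2) {
    return 12 * u1 + 280 * u2;                  // resource payments
}
bool primalFeasible(double xf, double xi, double xn) {
    return xf + xi + xn <= 12                   // van capacity
        && 25 * xf + 20 * xi + 19 * xn <= 280   // labor
        && xf >= 0 && xi >= 0 && xn >= 0;
}
bool dualFeasible(double u1, double u2) {
    return u1 + 25 * u2 >= 2000                 // Xfancy column
        && u1 + 20 * u2 >= 1700                 // Xfine column
        && u1 + 19 * u2 >= 1200                 // Xnew column
        && u1 >= 0 && u2 >= 0;
}
```

Both objectives evaluate to 22800; equal objective values witness optimality of both solutions (the duality principle), and since both primal constraints bind, complementary slackness holds with positive U1 and U2.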

Constructing Dual Solutions
Note we can construct a dual solution from an optimal primal solution without re-solving the
problem. Given the optimal primal XB* = B^-1 b and XNB* = 0, this solution must be feasible,
XB* = B^-1 b ≥ 0,
and must satisfy the nonnegative reduced cost criterion
CB B^-1 ANB − CNB ≥ 0.
Given this, try
U* = CB B^-1
as a dual solution. First, is this feasible in the dual constraints? To be feasible, we must have
U*A ≥ C and U* ≥ 0.
We know U*ANB − CNB ≥ 0, because at optimality this is equivalent to the reduced cost
criterion CB B^-1 ANB − CNB ≥ 0.
Further, for the basic variables the reduced cost is
CB B^-1 AB − CB = CB B^-1 B − CB = CB − CB = 0.
So, unifying these statements, we know that U*A ≥ C.

Max-Flow–Min-Cut Duality Principle:
The max flow and min cut problems were defined at the beginning of this chapter,
both being interesting problems in their own right. Now we see that cuts can be used as the
ceilings on the flows.
Max Flow = Min Cut: We have proved that, given any network as input, the network
flow algorithm finds a maximum flow and, almost as an accident, finds a minimum cut as well.
This proves that the maximum flow through any network equals its minimum cut.
Dual Problems: We say that the dual of the max flow problem is the min cut
problem, and conversely that the dual of the min cut problem is the max flow problem.

3.3 STEEPEST ASCENT HILL-CLIMBING:
Definition:
We have all experienced that climbing a hill can take a long time if you wind back and
forth, barely increasing your height at all. In contrast, you get there much faster if you energetically
head straight up the hill. This method, which is called the method of steepest ascent, is to
always take the step that increases your height the most. If you already know that the hill-climbing
algorithm in which you take any step up the hill works, then this new, more specific
algorithm also works. However, if you are lucky, it finds the optimal solution faster.
In our network flow algorithm, the choice of what step to take next involves choosing which
path in the augmenting graph to take. The amount the flow increases is the smallest
augmentation capacity of any edge in this path. It follows that the choice that would give us the
biggest improvement is the path whose smallest edge is the largest among all paths from s to t. Our
steepest-ascent network flow algorithm will augment such a best path in each iteration. What
remains to be done is to give an algorithm that finds such a path and to prove that a
maximum flow is found within a polynomial number of iterations.

EXPLANATION:
Finding The Augmenting Path With The Biggest Smallest Edge:
The input consists of a directed graph with positive edge weights and with special
nodes s and t. The output consists of a path from s to t through this graph whose smallest
weighted edge is as big as possible.
Easier Problem: Before attempting to develop an algorithm for this, let us consider an easier
but related problem. In addition to the directed graph, the input to the easier problem provides a
weight denoted wmin. It either outputs a path from s to t whose smallest weighted edge is at least
as big as wmin or states that no such path exists.
Using the Easier Problem: Assuming that we can solve this easier problem, we solve the
original problem by running the easier algorithm with wmin set to each edge weight in the graph,
until we find the weight for which there is a path with this smallest weight, but there is no
path with a bigger smallest weight. This is our answer.
Solving the Easier Problem: A path whose smallest weighted edge is at least as big as wmin
will obviously not contain any edge whose weight is smaller than wmin. Hence, the answer to
this easier problem will not change if we delete from the graph all edges whose weight is
smaller. Any path from s to t in the remaining graph will meet our needs. If there is no such path
then we also know there is no such path in our original graph. This solves the problem.
Implementation Details: In order to find a path from s to t in a graph, the algorithm branches
out from s using breadth-first or depth-first search marking every node reachable from s with the
predecessor of the node in the path to it from s. If in the process t is marked, then we have our
path. It seems a waste of time to have to redo this work for each wmin so let’s use an iterative
algorithm. The loop invariant will be that the work for the previous wmin has been done and is
stored in a useful way. The main loop will then complete the work for the current wmin reusing
as much of the previous work as possible. This can be implemented as follows. Sort the edges
from biggest to smallest (breaking ties arbitrarily). Consider them one at a time. When
considering wi, we must construct the graph formed by deleting all the edges with weights
smaller than wi. Denote this graph Gwi. We must mark every node reachable from s in this graph.
Suppose that we have already done these things in the graph Gwi−1. We form Gwi from Gwi−1
by adding the single edge with weight wi. Let (u, v) denote this edge. Nodes that were not
reachable from s in Gwi−1 become reachable in Gwi only if u was reachable and v was not. This new
edge then allows v to be reachable. Unmarked nodes now reachable from s via v can all be
marked reachable by starting a depth-first search from v. The algorithm will stop at the first edge
that allows t to be reached. The edge with the smallest weight in this path to t will be the edge
with weight wi added during this iteration. There is no path from s to t in the input graph with
a larger smallest weighted edge, because t was not reachable when only the larger edges had been
added. Hence, this path is a path to t whose smallest weighted edge is the largest possible.
This is the required output of this subroutine.
Running Time: Even though the algorithm for finding the path with the largest smallest edge
runs depth-first search for each weight wi, because the work done before is reused, no node in
the process is marked reached more than once and hence no edge is traversed more than once. It
follows that this process requires only O(m) time, where m is the number of edges. This time,
however, is dominated by the O(m log m) time needed to sort the edges.

ALGORITHM:
algorithm LargestShortestWeight(G, s, t)
    pre-cond: G is a weighted directed (augmenting) graph. s is the source node. t is the sink.
    post-cond: P specifies a path from s to t whose smallest edge weight is as large as possible. (u, v) is its smallest weighted edge.
begin
    Sort the edges by weight from largest to smallest
    G′ = graph with no edges
    mark s reachable
    loop
        loop-invariant: Every node reachable from s in G′ is marked reachable.
        exit when t is reachable
        (u, v) = the next largest weighted edge of G
        Add (u, v) to G′
        if( u is marked reachable and v is not ) then
            Do a depth-first search from v, marking all reachable nodes not marked before.
        end if
    end loop
    P = path from s to t in G′
    return( P, (u, v) )
end algorithm
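The algorithm above can be sketched in executable form. This version returns just the best bottleneck weight rather than the path itself, and the name `largestBottleneck` is our own.

```cpp
#include <algorithm>
#include <functional>
#include <tuple>
#include <vector>

// Incremental form of the algorithm above: insert edges from largest weight
// to smallest, re-using earlier reachability work, and return the weight at
// which t first becomes reachable from s. That weight is the largest
// possible smallest edge weight over all s-t paths (-1 if t is unreachable).
int largestBottleneck(int n, std::vector<std::tuple<int, int, int>> edges,
                      int s, int t) {  // edges are (weight, u, v), directed
    std::sort(edges.begin(), edges.end(), std::greater<>());
    std::vector<std::vector<int>> adj(n);
    std::vector<bool> reached(n, false);
    reached[s] = true;
    std::vector<int> stack;          // explicit depth-first search stack
    auto dfs = [&](int v) {          // mark everything newly reachable from v
        stack.push_back(v);
        while (!stack.empty()) {
            int x = stack.back(); stack.pop_back();
            if (reached[x]) continue;
            reached[x] = true;
            for (int y : adj[x])
                if (!reached[y]) stack.push_back(y);
        }
    };
    for (auto [w, u, v] : edges) {
        adj[u].push_back(v);
        if (reached[u] && !reached[v]) dfs(v);
        if (reached[t]) return w;    // w is the smallest edge on the found path
    }
    return -1;
}
```

Each edge is inserted once and each node is marked at most once, matching the O(m) bound, which is dominated by the O(m log m) sort.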

3.4 LINEAR PROGRAMMING:
EXPLANATION:
When I was an undergraduate, I had a summer job with a food company. Our goal was
to make cheap hot dogs. Every morning we got the prices of thousands of ingredients: pig hearts,
sawdust, etc. Each ingredient had an associated variable indicating how much of it to add to the
hot dogs. There are thousands of linear constraints on these variables: so much meat, so much
moisture, and so on. Together these constraints specify which combinations of ingredients
constitute a hot dog. The cost of the hot dog is a linear function of the quantities you put into it
and their prices. The goal is to determine what to put into the hot dogs that day to minimize the
cost. This is an example of a general class of problems referred to as linear programs.
Formal Specification:
A linear program is an optimization problem whose constraints and objective
functions are linear functions. The goal is to find a setting of variables that optimizes the
objective function, while respecting all of the constraints.
Precondition: We are given one of the following instances. Instances: An input
instance consists of (1) a set of linear constraints on a set of variables and (2) a linear objective
function.
Postcondition: The output is a solution with minimum cost and the cost of that
solution.

Solutions for Instance:
A solution for the instance is a setting of all the variables that satisfies the
constraints.
Measure of Success:
The cost or value of a solution is given by the objective function.

EXAMPLE:
Instance Example:
maximize
7x1 − 6x2 + 5x3 + 7x4
subject to
3x1 + 7x2 + 2x3 + 9x4 ≤ 258
6x1 + 3x2 + 9x3 − 6x4 ≤ 721
2x1 + 1x2 + 5x3 + 5x4 ≤ 524
3x1 + 6x2 + 2x3 + 3x4 ≤ 411
4x1 − 8x2 − 4x3 + 4x4 ≤ 685
No Small Local Maximum:
To prove that the algorithm eventually finds a global maximum, we must prove
that it will not get stuck in a small local maximum.
Convex: Because the bounded polyhedron is the intersection of straight cuts, it is what we
call convex. More formally, this means that the line between any two points in the polyhedron is
also in the polyhedron. This means that there cannot be two local maximum points, because
between these two hills there would need to be a valley, and a line between two points across this
valley would be outside the polyhedron.
The Primal–Dual Method: The primal–dual method formally proves that a global
maximum will be found. Given any linear program, defined by an optimization function and a
set of constraints, there is a way of forming its dual minimization linear program. Each solution
to this dual acts as a roof or upper bound on how high the primal solution can be. Then each
iteration either finds a better solution for the primal or provides a solution for the dual linear
program with a matching value. This dual solution witnesses the fact that no primal solution is
bigger.
Forming the Dual: If the primal linear program is to maximize a · x subject
to Mx ≤ b, then the dual is to minimize bT · y subject to MT · y ≥ aT, where bT, MT, and aT
are the transposes formed by interchanging rows and columns.
The dual of Example 15.4.1 is
minimize
258y1 + 721y2 + 524y3 + 411y4 + 685y5
subject to
3y1 + 6y2 + 2y3 + 3y4 + 4y5 ≥ 7
7y1 + 3y2 + 1y3 + 6y4 − 8y5 ≥ −6
2y1 + 9y2 + 5y3 + 2y4 − 4y5 ≥ 5
9y1 − 6y2 + 5y3 + 3y4 + 4y5 ≥ 7
The dual will have a variable for each constraint in the primal and a constraint for
each of its variables. The coefficients of the objective function become the numbers on the right-hand sides of the inequalities, and the numbers on the right-hand sides of the inequalities become
the coefficients of the objective function. Finally, "maximize" becomes "minimize." The dual of the dual is
the same as the original primal.
Upper Bound: We prove that the value of any solution to the primal linear program is
at most the value of any solution to the dual linear program. The value of the primal solution x is
a · x. The constraints MT · y ≥ aT can be

4. RECURSIVE BACKTRACKING:
DEFINITION:
Objectives:
 Understand backtracking algorithms and use them to solve problems
 Use recursive functions to implement backtracking algorithms
 See how the choice of data structures can affect the efficiency of a program
 Understand how recursion applies to backtracking problems
 Implement recursive solutions to problems involving backtracking
 Comprehend the minimax strategy as it applies to two-player games
 Appreciate the importance of developing abstract solutions that can be applied to many different problem domains

EXPLANATION:
Backtracking:
The problem space consists of states (nodes) and actions (paths that lead to new states). When
in a node, we can only see paths to connected nodes.
If a node leads only to failure, go back to its "parent" node and try other alternatives. If these
all lead to failure, then more backtracking may be necessary.

EXAMPLE:

The Eight-Queens Puzzle:
• How to place eight queens on a chess board so that no queen can take another
  – Remember that in chess, a queen can take another piece that lies on the same row,
    the same column or the same diagonal (either direction)
• There seems to be no analytical solution
• Solutions do exist
  – Requires luck coupled with trial and error
  – Or much exhaustive computation

Solve the Puzzle:
• How will you solve the problem?
  – Put the queens on the board one by one in some systematic/logical order
  – Make sure that the queen placed will not be taken by another already on the board
  – If you are lucky enough to put eight queens on the board, one solution is found
  – Otherwise, some queens need to be removed and placed elsewhere to continue the
    search for a solution

Main program
int main( )
/* Pre:  The user enters a valid board size.
   Post: All solutions to the n-queens puzzle for the selected board size are printed.
   Uses: The class Queens and the recursive function solve_from. */
{
    int board_size;
    print_information( );
    cout << "What is the size of the board? " << flush;
    cin >> board_size;
    if (board_size < 0 || board_size > max_board)
        cout << "The number must be between 0 and " << max_board << endl;
    else {
        Queens configuration(board_size);  // Initialize empty configuration.
        solve_from(configuration);         // Find all solutions extending configuration.
    }
}
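A self-contained variant of solve_from that counts solutions for general n can be sketched as follows; the names `solveFrom` and `countQueensSolutions` are ours, and each placement is checked against the rows already filled, as described above.

```cpp
#include <cstdlib>
#include <vector>

// Recursive backtracking in the spirit of solve_from above: place one queen
// per row, trying each column, and recurse only on placements that do not
// conflict with the queens already on the board.
int solveFrom(std::vector<int>& cols, int n) {   // cols[r] = column of row r's queen
    int row = (int)cols.size();
    if (row == n) return 1;                      // all n queens placed: one solution
    int count = 0;
    for (int c = 0; c < n; ++c) {
        bool safe = true;
        for (int r = 0; r < row && safe; ++r)    // same column or same diagonal?
            if (cols[r] == c || row - r == std::abs(c - cols[r])) safe = false;
        if (!safe) continue;                     // conflict: try the next column
        cols.push_back(c);
        count += solveFrom(cols, n);             // extend this partial configuration
        cols.pop_back();                         // backtrack
    }
    return count;
}

int countQueensSolutions(int n) {
    std::vector<int> cols;
    return solveFrom(cols, n);
}
```

countQueensSolutions(8) returns 92, the known number of eight-queens solutions.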

ALGORITHM:
An Algorithm as a Sequence of Decisions:
An algorithm for finding an optimal solution for your instance must make a sequence of
small decisions about the solution: "Do we include the first object in the solution or not?" "Do
we include the second?" "The third?" . . . , or "At the first fork in the road, do we go left or
right?" "At the second fork, which direction do we go?" "At the third?" . . . . As one stack frame
in the recursive algorithm, our task is to deal only with the first of these decisions.
High-Level Code: The following is the basic structure that the code will take.
algorithm Alg(I)
    pre-cond: I is an instance of the problem.
    post-cond: optSol is one of the optimal solutions for the instance I, and optCost is its cost.
begin
    if( I is small ) then
        return( brute force answer )
    else
        % Deal with the first decision by trying each of the K possibilities.
        for k = 1 to K
            % Temporarily, commit to making the first decision in the kth way.
            <optSolk, optCostk> = Recursively deal with all the remaining decisions,
                and in doing so find the best solution optSolk for our instance I
                from among those consistent with this kth way of making the first
                decision. optCostk is its cost.
        end for
        % Having the best, optSolk, for each possibility k, we keep the best of these best.
        kmin = a k that minimizes optCostk
        optSol = optSolkmin
        optCost = optCostkmin
        return <optSol, optCost>
    end if
end algorithm
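A tiny concrete instance of this structure, under our own naming (the `Node` type and `bestPathCost` are illustrative assumptions): each node is a fork in the road, we try each of the K branches, recursively obtain the best completion from a "friend", and keep the best of these best.

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Each Node is a fork in the road; branches are the K ways of making the
// first decision. bestPathCost returns the best total value of any
// root-to-leaf path, following the high-level code above.
struct Node {
    int cost;                    // value collected at this fork
    std::vector<Node> branches;  // the K possible first decisions
};

int bestPathCost(const Node& node) {
    if (node.branches.empty())               // small instance: brute-force answer
        return node.cost;
    int best = std::numeric_limits<int>::min();
    for (const Node& b : node.branches)      // try each of the K possibilities
        best = std::max(best, bestPathCost(b));
    return node.cost + best;                 // keep the best of these best
}
```

For a root of value 1 with one leaf of value 5 and one subtree worth 2 + 10, the best path is 1 + 12 = 13.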

4.2 PRUNING BRANCHES:
DEFINITION:
This recursive backtracking algorithm for SAT can be sped up. This can either be viewed
globally as a pruning off of entire branches of the classification tree or be viewed locally as
seeing that some subinstances, after they have been sufficiently reduced, are trivial to solve.
The following are typical reasons why an entire branch of the solution classification tree
can be pruned off.

Explanation:
Invalid Solutions:
Recall that in a recursive backtracking algorithm, the little bird tells the algorithm
something about the solution and then the algorithm recurses by asking a friend a question. Then
this friend gets more information about the solution from his little bird, and so on. Hence,
following a path down the recursive tree specifies more and more about a solution until a leaf of
the tree fully specifies one particular solution. Sometimes it happens that partway down the tree,
the algorithm has already received enough information about the solution to determine that it

contains a conflict or defect making any such solution invalid. The algorithm can stop recursing
at this point and backtrack. This effectively prunes off the entire subtree of solutions rooted at
this node in the tree.
Queens:
Before we try placing a queen on the square (r + 1, k), we should check to make sure
that this does not conflict with the locations of any of the queens on (1, c1), (2, c2), . . . , (r, cr).
If it does, we do not need to ask for help from this friend. Exercise 17.5.4 bounds the
resulting running time.
Time Saved:
The time savings can be huge. Reducing the number of recursive calls from two to one
decreased the running time from Θ(N) to Θ(log N), and reducing the number of recursive calls
from four to three decreased the running time from Θ(n^2) to Θ(n^1.58...).
No Highly Valued Solutions:
Similarly, when the algorithm arrives at the root of a subtree, it might realize that no
solutions within this subtree are rated sufficiently high to be optimal—perhaps because the
algorithm has already found a solution provably better than all of these. Again, the algorithm can
prune this entire subtree from its search.
Greedy Algorithms:
Greedy algorithms are effectively recursive backtracking algorithms with extreme
pruning. Whenever the algorithm has a choice as to which little bird’s answer to take, i.e., which
path down the recursive tree to take, instead of iterating through all of the options, it goes only
for the one that looks best according to some greedy criterion.
Modifying Solutions:
Let us recall why greedy algorithms are able to prune, so that we can use the same
reasoning with recursive backtracking algorithms. In each step in a greedy algorithm, the
algorithm commits to some decision about the solution. This effectively burns some of its
bridges, because it eliminates some solutions from consideration. However, this is fine as long as
it does not burn all its bridges. The prover proves that there is an optimal solution consistent with
the choices made by modifying any possible solution that is not consistent with the latest choice
into one that has at least as good a value and is consistent with this choice. Similarly, a recursive
backtracking algorithm can prune off branches in its tree when it knows that this does not
eliminate all remaining optimal solutions.
Queens: By symmetry, any solution that has the queen in the second half of the first
row can be modified into one that has this queen in the first half, simply by flipping the
solution left to right. Hence, when placing a queen in the first row, there is no need to try placing
it in the second half of the row.
Depth-First Search: Recursive depth-first search is a recursive backtracking algorithm.
A solution to the optimization problem of searching a maze for cheese is a path in the graph
starting from s. The value of a solution is the weight of the node at the end of the path. The
algorithm marks nodes that it has visited. Then, when the algorithm revisits a node, it knows that
it can prune this subtree in this recursive search, because it knows that any node reachable
from the current node has already been reached. For example, the path (s, c, u, v) is pruned because it
can be modified into the path (s, b, u, v), which is just as good.

4.3 SATISFIABILITY:
DEFINITION:
A famous optimization problem is called satisfiability, or SAT for short. It is one of the
basic problems arising in many fields. The recursive backtracking algorithm given here is
referred to as the Davis–Putnam algorithm. It is an example of an algorithm whose running time
is exponential for worst case inputs, yet in many practical situations can work well. This
algorithm is one of the basic algorithms underlying automated theorem proving and robot path
planning, among other things.

EXPLANATION:
The Satisfiability Problem:
Instances:
An instance (input) consists of a set of constraints on the assignment to the binary
variables x1, x2, . . . , xn. A typical constraint might be (x1 or ¬x3 or x8), meaning (x1 = 1 or x3 =
0 or x8 = 1), or equivalently that either x1 is true, x3 is false, or x8 is true. More generally, an
instance could be a more general circuit built with AND, OR, and NOT gates.
Solutions:
Each of the 2n assignments is a possible solution. An assignment is valid for the given
instance if it satisfies all of the constraints.
Measure of Success:
An assignment is assigned the value one if it satisfies all of the constraints, and the value
zero otherwise.
Goal: Given the constraints, the goal is to find a satisfying assignment

ALGORITHM:
Code:

algorithm DavisPutnam(c)
    pre-cond: c is a set of constraints on the assignment to x1, . . . , xn.
    post-cond: If possible, optSol is a satisfying assignment and optCost is one.
               Otherwise optCost is zero.
begin
    if( c has no constraints or no variables ) then
        % c is trivially satisfiable.
        return <∅, 1>
    else if( c has both a constraint forcing a variable xi to 0
             and one forcing the same variable to 1 ) then
        % c is trivially not satisfiable.
        return <∅, 0>
    else
        for any variable forced by a constraint to some value,
            substitute this value into c.
        let xi be the variable that appears most often in c
        % Loop over the possible bird answers.
        for k = 0 to 1 (unless a satisfying solution has been found)
            % Get help from friend.
            let c′ be the constraints c with k substituted in for xi
            <optSubSol, optSubCost> = DavisPutnam(c′)
            optSolk = <forced values, xi = k, optSubSol>
            optCostk = optSubCost
        end for
        % Take the best bird answer.
        kmax = a k that maximizes optCostk
        optSol = optSolkmax
        optCost = optCostkmax
        return <optSol, optCost>
    end if
end algorithm
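A compact executable version of the algorithm above can be sketched as follows. This is a simplification: literals use the DIMACS-style sign convention, the branch variable is simply the first literal's variable rather than the most frequently occurring one, and the forced-value substitution is folded into the recursion.

```cpp
#include <cstdlib>
#include <vector>

using Clause = std::vector<int>;  // literal +i means xi = 1, -i means xi = 0

// Substitute a chosen literal being true into the constraints: clauses
// containing it are satisfied and disappear; its negation is removed from
// the clauses that remain.
std::vector<Clause> substitute(const std::vector<Clause>& cs, int lit) {
    std::vector<Clause> out;
    for (const Clause& c : cs) {
        Clause reduced;
        bool satisfied = false;
        for (int l : c) {
            if (l == lit) { satisfied = true; break; }
            if (l != -lit) reduced.push_back(l);
        }
        if (!satisfied) out.push_back(reduced);
    }
    return out;
}

// Simplified Davis-Putnam: an empty set of clauses is trivially satisfiable,
// an empty clause is a conflict (a variable forced both ways), and otherwise
// we branch on xi = 1 and xi = 0.
bool satisfiable(const std::vector<Clause>& cs) {
    if (cs.empty()) return true;
    for (const Clause& c : cs)
        if (c.empty()) return false;
    int xi = std::abs(cs[0][0]);                 // branch variable
    return satisfiable(substitute(cs, xi)) || satisfiable(substitute(cs, -xi));
}
```

For example, {(x1 ∨ x2), (¬x1 ∨ x2)} is reported satisfiable (set x2 = 1), while {(x1 ∨ x3), (¬x1), (¬x3)} is reported unsatisfiable.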

Graph Traversal Algorithms - Breadth First Search
 
Bidirectional graph search techniques for finding shortest path in image base...
Bidirectional graph search techniques for finding shortest path in image base...Bidirectional graph search techniques for finding shortest path in image base...
Bidirectional graph search techniques for finding shortest path in image base...
 
Analysis Of Algorithms I
Analysis Of Algorithms IAnalysis Of Algorithms I
Analysis Of Algorithms I
 
Analysis of CANADAIR CL-215 retractable landing gear.
Analysis of CANADAIR CL-215 retractable landing gear.Analysis of CANADAIR CL-215 retractable landing gear.
Analysis of CANADAIR CL-215 retractable landing gear.
 
Minimal spanning tree class 15
Minimal spanning tree class 15Minimal spanning tree class 15
Minimal spanning tree class 15
 
Shortest path
Shortest pathShortest path
Shortest path
 
Backtracking & branch and bound
Backtracking & branch and boundBacktracking & branch and bound
Backtracking & branch and bound
 
Fortran induction project. DGTSV DGESV
Fortran induction project. DGTSV DGESVFortran induction project. DGTSV DGESV
Fortran induction project. DGTSV DGESV
 
(148065320) dijistra algo
(148065320) dijistra algo(148065320) dijistra algo
(148065320) dijistra algo
 
Seminar Report (Final)
Seminar Report (Final)Seminar Report (Final)
Seminar Report (Final)
 
Algorithm Design and Complexity - Course 5
Algorithm Design and Complexity - Course 5Algorithm Design and Complexity - Course 5
Algorithm Design and Complexity - Course 5
 

En vedette

Decomposition Methods in SLP
Decomposition Methods in SLPDecomposition Methods in SLP
Decomposition Methods in SLPSSA KPI
 
Interviews with professionals
Interviews with professionalsInterviews with professionals
Interviews with professionalstbrown38
 
Primal Dual
Primal DualPrimal Dual
Primal Dualcarlol
 
Global optimization
Global optimizationGlobal optimization
Global optimizationbpenalver
 
Time and space complexity
Time and space complexityTime and space complexity
Time and space complexityAnkit Katiyar
 
Karmarkar's Algorithm For Linear Programming Problem
Karmarkar's Algorithm For Linear Programming ProblemKarmarkar's Algorithm For Linear Programming Problem
Karmarkar's Algorithm For Linear Programming ProblemAjay Dhamija
 
Special Cases in Simplex Method
Special Cases in Simplex MethodSpecial Cases in Simplex Method
Special Cases in Simplex MethodDivyansh Verma
 
Linear programming - Model formulation, Graphical Method
Linear programming  - Model formulation, Graphical MethodLinear programming  - Model formulation, Graphical Method
Linear programming - Model formulation, Graphical MethodJoseph Konnully
 

En vedette (11)

Decomposition Methods in SLP
Decomposition Methods in SLPDecomposition Methods in SLP
Decomposition Methods in SLP
 
Interviews with professionals
Interviews with professionalsInterviews with professionals
Interviews with professionals
 
Primal Dual
Primal DualPrimal Dual
Primal Dual
 
Global optimization
Global optimizationGlobal optimization
Global optimization
 
Duality
DualityDuality
Duality
 
PRIMAL & DUAL PROBLEMS
PRIMAL & DUAL PROBLEMSPRIMAL & DUAL PROBLEMS
PRIMAL & DUAL PROBLEMS
 
Time and space complexity
Time and space complexityTime and space complexity
Time and space complexity
 
Karmarkar's Algorithm For Linear Programming Problem
Karmarkar's Algorithm For Linear Programming ProblemKarmarkar's Algorithm For Linear Programming Problem
Karmarkar's Algorithm For Linear Programming Problem
 
Special Cases in Simplex Method
Special Cases in Simplex MethodSpecial Cases in Simplex Method
Special Cases in Simplex Method
 
Linear Programming
Linear ProgrammingLinear Programming
Linear Programming
 
Linear programming - Model formulation, Graphical Method
Linear programming  - Model formulation, Graphical MethodLinear programming  - Model formulation, Graphical Method
Linear programming - Model formulation, Graphical Method
 

Similaire à Optimal Graph Search Algorithms

PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptx
PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptxPPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptx
PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptxRaviKiranVarma4
 
Comparison of various heuristic
Comparison of various heuristicComparison of various heuristic
Comparison of various heuristicijaia
 
unit-1-l3AI..........................ppt
unit-1-l3AI..........................pptunit-1-l3AI..........................ppt
unit-1-l3AI..........................pptShilpaBhatia32
 
Analysis of Pathfinding Algorithms
Analysis of Pathfinding AlgorithmsAnalysis of Pathfinding Algorithms
Analysis of Pathfinding AlgorithmsSigSegVSquad
 
Artificial Intelligence_Searching.pptx
Artificial Intelligence_Searching.pptxArtificial Intelligence_Searching.pptx
Artificial Intelligence_Searching.pptxRatnakar Mikkili
 
AI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptxAI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptxYousef Aburawi
 
ALGORITHMS - SHORT NOTES
ALGORITHMS - SHORT NOTESALGORITHMS - SHORT NOTES
ALGORITHMS - SHORT NOTESsuthi
 
Parallel algorithms
Parallel algorithmsParallel algorithms
Parallel algorithmsguest084d20
 
Parallel algorithms
Parallel algorithmsParallel algorithms
Parallel algorithmsguest084d20
 
Parallel algorithms
Parallel algorithmsParallel algorithms
Parallel algorithmsguest084d20
 

Similaire à Optimal Graph Search Algorithms (20)

Unit ii-ppt
Unit ii-pptUnit ii-ppt
Unit ii-ppt
 
PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptx
PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptxPPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptx
PPT ON INTRODUCTION TO AI- UNIT-1-PART-2.pptx
 
Searching techniques
Searching techniquesSearching techniques
Searching techniques
 
Comparison of various heuristic
Comparison of various heuristicComparison of various heuristic
Comparison of various heuristic
 
unit-1-l3.ppt
unit-1-l3.pptunit-1-l3.ppt
unit-1-l3.ppt
 
unit-1-l3AI..........................ppt
unit-1-l3AI..........................pptunit-1-l3AI..........................ppt
unit-1-l3AI..........................ppt
 
Analysis of Pathfinding Algorithms
Analysis of Pathfinding AlgorithmsAnalysis of Pathfinding Algorithms
Analysis of Pathfinding Algorithms
 
Artificial Intelligence_Searching.pptx
Artificial Intelligence_Searching.pptxArtificial Intelligence_Searching.pptx
Artificial Intelligence_Searching.pptx
 
Ai1.pdf
Ai1.pdfAi1.pdf
Ai1.pdf
 
AI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptxAI_03_Solving Problems by Searching.pptx
AI_03_Solving Problems by Searching.pptx
 
DAA Notes.pdf
DAA Notes.pdfDAA Notes.pdf
DAA Notes.pdf
 
chapter3part1.ppt
chapter3part1.pptchapter3part1.ppt
chapter3part1.ppt
 
DS ppt
DS pptDS ppt
DS ppt
 
ALGORITHMS - SHORT NOTES
ALGORITHMS - SHORT NOTESALGORITHMS - SHORT NOTES
ALGORITHMS - SHORT NOTES
 
DAA-Module-5.pptx
DAA-Module-5.pptxDAA-Module-5.pptx
DAA-Module-5.pptx
 
dsa.pptx
dsa.pptxdsa.pptx
dsa.pptx
 
A star algorithms
A star algorithmsA star algorithms
A star algorithms
 
Parallel algorithms
Parallel algorithmsParallel algorithms
Parallel algorithms
 
Parallel algorithms
Parallel algorithmsParallel algorithms
Parallel algorithms
 
Parallel algorithms
Parallel algorithmsParallel algorithms
Parallel algorithms
 

Plus de Dr. C.V. Suresh Babu (20)

Data analytics with R
Data analytics with RData analytics with R
Data analytics with R
 
Association rules
Association rulesAssociation rules
Association rules
 
Clustering
ClusteringClustering
Clustering
 
Classification
ClassificationClassification
Classification
 
Blue property assumptions.
Blue property assumptions.Blue property assumptions.
Blue property assumptions.
 
Introduction to regression
Introduction to regressionIntroduction to regression
Introduction to regression
 
DART
DARTDART
DART
 
Mycin
MycinMycin
Mycin
 
Expert systems
Expert systemsExpert systems
Expert systems
 
Dempster shafer theory
Dempster shafer theoryDempster shafer theory
Dempster shafer theory
 
Bayes network
Bayes networkBayes network
Bayes network
 
Bayes' theorem
Bayes' theoremBayes' theorem
Bayes' theorem
 
Knowledge based agents
Knowledge based agentsKnowledge based agents
Knowledge based agents
 
Rule based system
Rule based systemRule based system
Rule based system
 
Formal Logic in AI
Formal Logic in AIFormal Logic in AI
Formal Logic in AI
 
Production based system
Production based systemProduction based system
Production based system
 
Game playing in AI
Game playing in AIGame playing in AI
Game playing in AI
 
Diagnosis test of diabetics and hypertension by AI
Diagnosis test of diabetics and hypertension by AIDiagnosis test of diabetics and hypertension by AI
Diagnosis test of diabetics and hypertension by AI
 
A study on “impact of artificial intelligence in covid19 diagnosis”
A study on “impact of artificial intelligence in covid19 diagnosis”A study on “impact of artificial intelligence in covid19 diagnosis”
A study on “impact of artificial intelligence in covid19 diagnosis”
 
A study on “impact of artificial intelligence in covid19 diagnosis”
A study on “impact of artificial intelligence in covid19 diagnosis”A study on “impact of artificial intelligence in covid19 diagnosis”
A study on “impact of artificial intelligence in covid19 diagnosis”
 

Dernier

SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxmanuelaromero2013
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdfQucHHunhnh
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfciinovamais
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991RKavithamani
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactPECB
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdfQucHHunhnh
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Celine George
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationnomboosow
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfJayanti Pande
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesFatimaKhan178732
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 

Dernier (20)

SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
1029 - Danh muc Sach Giao Khoa 10 . pdf
1029 -  Danh muc Sach Giao Khoa 10 . pdf1029 -  Danh muc Sach Giao Khoa 10 . pdf
1029 - Danh muc Sach Giao Khoa 10 . pdf
 
Activity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdfActivity 01 - Artificial Culture (1).pdf
Activity 01 - Artificial Culture (1).pdf
 
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
Mattingly "AI & Prompt Design: Structured Data, Assistants, & RAG"
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
 
Beyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global ImpactBeyond the EU: DORA and NIS 2 Directive's Global Impact
Beyond the EU: DORA and NIS 2 Directive's Global Impact
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17Advanced Views - Calendar View in Odoo 17
Advanced Views - Calendar View in Odoo 17
 
Interactive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communicationInteractive Powerpoint_How to Master effective communication
Interactive Powerpoint_How to Master effective communication
 
Web & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdfWeb & Social Media Analytics Previous Year Question Paper.pdf
Web & Social Media Analytics Previous Year Question Paper.pdf
 
Separation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and ActinidesSeparation of Lanthanides/ Lanthanides and Actinides
Separation of Lanthanides/ Lanthanides and Actinides
 
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"Mattingly "AI & Prompt Design: The Basics of Prompt Design"
Mattingly "AI & Prompt Design: The Basics of Prompt Design"
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 

Optimal Graph Search Algorithms

unique.)

EXAMPLES:
Longest Common Subsequence: This is an example for which we have a polynomial-time algorithm.

Instances: An instance consists of two sequences, e.g., X = ⟨A, B, C, B, D, A, B⟩ and Y = ⟨B, D, C, A, B, A⟩.

Solutions: A subsequence of a sequence is a subset of the elements taken in the same order. For example, Z = ⟨B, C, A⟩ is a subsequence of X = ⟨A, B, C, B, D, A, B⟩. A solution is a sequence Z that is a subsequence of both X and Y. For example, Z = ⟨B, C, A⟩ is a solution, because it is a subsequence common to both X and Y (Y = ⟨B, D, C, A, B, A⟩).

Measure of Success: The value of a solution is the length of the common subsequence, e.g., |Z| = 3.

Goal: Given two sequences X and Y, the goal is to find the longest common subsequence (LCS for short). For the example given above, Z = ⟨B, C, B, A⟩ is a longest common subsequence.

Course Scheduling: This is an example for which we do not have a polynomial-time algorithm.

Instances: An instance consists of the set of courses specified by a university, the set of courses that each student requests, and the set of time slots in which courses can be offered.

Solutions: A solution for an instance is a schedule that assigns each course a time slot.

Measure of Success: A conflict occurs when two courses are scheduled at the same time even though a student requests them both. The cost of a schedule is the number of conflicts that it has.

Goal: Given the course and student information, the goal is to find the schedule with the fewest conflicts.

Airplane: The following is an example of a practical problem.

Instances: An instance specifies the requirements of a plane: size, speed, fuel efficiency, etc.

Solutions: A solution for an instance is a specification of a plane, right down to every curve and nut and bolt.

Measure of Success: The company has a way of measuring how well the specification meets the requirements.

Goal: Given plane requirements, the goal is to find a specification that meets them in an optimal way.
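The LCS problem above is solvable by the classic dynamic-programming algorithm in O(|X|·|Y|) time. The following Python sketch is not from the original notes; it is one standard way to fill the length table and trace back a longest common subsequence:

```python
def lcs(X, Y):
    """Return one longest common subsequence of X and Y via dynamic programming."""
    m, n = len(X), len(Y)
    # L[i][j] = length of an LCS of the prefixes X[:i] and Y[:j]
    L = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                L[i][j] = L[i - 1][j - 1] + 1
            else:
                L[i][j] = max(L[i - 1][j], L[i][j - 1])
    # Trace back through the table to recover one optimal subsequence.
    out = []
    i, j = m, n
    while i > 0 and j > 0:
        if X[i - 1] == Y[j - 1]:
            out.append(X[i - 1])
            i, j = i - 1, j - 1
        elif L[i - 1][j] >= L[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

X = ["A", "B", "C", "B", "D", "A", "B"]
Y = ["B", "D", "C", "A", "B", "A"]
print(lcs(X, Y))  # a length-4 common subsequence, e.g. ['B', 'C', 'B', 'A']
```

Several distinct subsequences of length 4 exist for this instance; which one the traceback returns depends on how ties in the table are broken.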
2. GRAPH SEARCH ALGORITHM:

DEFINITION:
Graph search is the problem of visiting all the nodes of a graph in a particular manner, updating and/or checking their values along the way. Tree traversal is a special case of graph traversal.

EXPLANATION:
An optimization problem requires finding the best of a large number of solutions. This can be compared to a mouse finding cheese in a maze. Graph search algorithms provide a way of systematically searching through this maze of possible solutions.

A "graph" in this context is made up of "vertices" or "nodes" and lines called edges that connect them. A graph may be undirected, meaning that there is no distinction between the two vertices associated with each edge, or its edges may be directed from one vertex to another. A graph consists of
 a set N of nodes, called vertices, and
 a set A of ordered pairs of nodes, called arcs or edges.

EXAMPLE: (An example graph appears as a figure in the original slides.)

TYPES OF GRAPHS:
 In directed graphs, edges have a specific direction.
 In undirected graphs, edges are two-way.
 Vertices u and v are adjacent if (u, v) ∈ E.
 A sparse graph has O(|V|) edges (upper bound).
 A dense graph has Ω(|V|²) edges (lower bound).
 A complete graph has an edge between every pair of vertices.
 An undirected graph is connected if there is a path between any two vertices.

2.1 GENERIC SEARCH ALGORITHM:

DEFINITION:
A generic search algorithm is a generic algorithm whose input–output relation is specialized as follows: the inputs are a collection of values and another value, and the output is the boolean value true if the other value belongs to the collection, and false otherwise.
This algorithm searches for a solution path in a graph and is independent of any particular graph.

EXPLANATION:

The Reachability Problem:
Preconditions: The input is a graph G (either directed or undirected) and a source node s.
Postconditions: The output consists of all the nodes u that are reachable by a path in G from s.

Basic Steps: Suppose you know that node u is reachable from s (denoted s → u) and that there is an edge from u to v. Then you can conclude that v is reachable from s (i.e., s → u → v). You can use such steps to build up a set of reachable nodes. For example:
 s has an edge to v4 and v9. Hence, v4 and v9 are reachable.
 v4 has an edge to v7 and v3. Hence, v7 and v3 are reachable.
 v7 has an edge to v2 and v8, and so on.

Difficulties:
Data Structure: How do you keep track of all this?
Exit Condition: How do you know that you have found all the nodes?
Halting: How do you avoid cycling, as in s → v4 → v7 → v2 → v4 → v7 → v2 → · · ·, forever?

Ingredients of the Loop Invariant:
Found: If you trace a path from s to a node, then we will say that the node has been found.
Handled: At some point in time after node u has been found, you will want to follow all the edges from u and find all the nodes v that have edges from u. When you have done that for node u, we say that it has been handled.
Data Structure: You must maintain (1) the set of nodes foundHandled that have been found and handled and (2) the set of nodes foundNotHandled that have been found but not handled.
The Loop Invariant:
LI1: For each found node v, we know that v is reachable from s, because we have traced out a path s → v from s to it.
LI2: If a node has been handled, then all of its neighbors have been found.

THE ORDER OF HANDLING NODES:
This algorithm specifically did not indicate which node u to select from foundNotHandled. It did not need to, because the algorithm works no matter how this choice is made. We will now consider specific orders in which to handle the nodes and specific applications of these orders.

Queue (Breadth-First Search): One option is to handle nodes in the order in which they are found. This treats foundNotHandled as a queue: "first in, first out." The effect is that the search is breadth first, meaning that all nodes at distance 1 from s are handled first, then all those at distance 2, and so on. A byproduct of this is that we find for each node v a shortest path from s to v.

Priority Queue (Shortest (Weighted) Paths): Another option calculates for each node v in foundNotHandled the minimum weighted distance from s to v along any path seen so far. It then handles the node that is closest to s according to this approximation. Because these approximations change over time, foundNotHandled is implemented using a priority queue: "highest current priority out first." Like breadth-first search, the search handles nodes that are closest to s first, but now the length of a path is the sum of its edge weights. A byproduct of this method is that we find for each node v the shortest weighted path from s to v.

Stack (Depth-First Search): Another option is to handle the node that was found most recently. This method treats foundNotHandled as a stack: "last in, first out." The effect is that the search is depth first, meaning that a particular path is followed as deeply as possible into the graph until a dead end is reached, forcing the algorithm to backtrack.
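The effect of the queue and stack choices can be seen by running the same search with foundNotHandled popped from different ends. This Python sketch is illustrative (the example graph is invented, not from the notes):

```python
from collections import deque

def generic_search(graph, s, order="queue"):
    """Generic reachability search; `order` picks how foundNotHandled is used:
    "queue" -> FIFO (breadth-first), "stack" -> LIFO (depth-first)."""
    found = {s}                    # found = foundHandled ∪ foundNotHandled
    found_not_handled = deque([s])
    handled_order = []
    while found_not_handled:
        if order == "queue":
            u = found_not_handled.popleft()   # first in, first out
        else:
            u = found_not_handled.pop()       # last in, first out
        handled_order.append(u)
        for v in graph.get(u, []):
            if v not in found:                # avoids cycling forever
                found.add(v)
                found_not_handled.append(v)
    return handled_order

# A small illustrative graph given as adjacency lists.
G = {"s": ["a", "b"], "a": ["c"], "b": ["d"], "c": [], "d": []}
print(generic_search(G, "s", "queue"))  # breadth first: ['s', 'a', 'b', 'c', 'd']
print(generic_search(G, "s", "stack"))  # depth first:   ['s', 'b', 'd', 'a', 'c']
```

Either order handles exactly the reachable nodes; only the handling sequence differs.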
ALGORITHM:

Code:
algorithm GenericSearch (G, s)
  <pre-cond>: G is a (directed or undirected) graph, and s is one of its nodes.
  <post-cond>: The output consists of all the nodes u that are reachable by a path in G from s.
begin
  foundHandled = ∅
  foundNotHandled = {s}
  loop
    <loop-invariant>: See LI1, LI2.
    exit when foundNotHandled = ∅
    let u be some node from foundNotHandled
    for each v connected to u
      if v has not previously been found then
        add v to foundNotHandled
      end if
    end for
    move u from foundNotHandled to foundHandled
  end loop
  return foundHandled
end algorithm

Procedure:
1: Search(G, S, goal)
2: Inputs
3:   G: graph with nodes N and arcs A
4:   S: set of start nodes
5:   goal: Boolean function of states
6: Output
7:   path from a member of S to a node for which goal is true
8:   or ⊥ if there are no solution paths
9: Local
10:   Frontier: set of paths
11: Frontier ← {⟨s⟩ : s ∈ S}
12: while (Frontier ≠ {})
13:   select and remove ⟨s0, ..., sk⟩ from Frontier
14:   if (goal(sk)) then
15:     return ⟨s0, ..., sk⟩
16:   Frontier ← Frontier ∪ {⟨s0, ..., sk, s⟩ : ⟨sk, s⟩ ∈ A}
17: return ⊥

This algorithm has a few features that should be noted:
 The selection of a path at line 13 is non-deterministic. The choice of path that is selected can affect the efficiency; a particular search strategy will determine which path is selected.
 It is useful to think of the return at line 15 as a temporary return; another path to a goal can be searched for by continuing to line 16.
 If the procedure returns ⊥, no solutions exist (or no solutions remain if the search has been resumed).
 If the path chosen does not end at a goal node and the node at its end has no neighbors, extending the path means removing the path. This outcome is reasonable because such a path could not be part of a path from a start node to a goal node.

2.2 BREADTH-FIRST SEARCH:

DEFINITION:
Breadth-first search (BFS) is a strategy for searching in a graph when search is limited to essentially two operations: (a) visit and inspect a node of the graph; (b) gain access to visit the nodes that neighbor the currently visited node.

EXPLANATION:
BFS begins at a root node and inspects all the neighboring nodes. Then, for each of those neighbor nodes in turn, it inspects their unvisited neighbor nodes, and so on. Compare BFS with the equivalent but more memory-efficient iterative-deepening depth-first search, and contrast it with depth-first search.

In breadth-first search the frontier is implemented as a FIFO (first-in, first-out) queue. Thus, the path that is selected from the frontier is the one that was added earliest. This approach implies that the paths from the start node are generated in order of the number of arcs in the path. One of the paths with the fewest arcs is selected at each stage.
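The FIFO frontier described above can be sketched in Python. This is a hedged, minimal version of the frontier-of-paths idea, not the notes' own code; the example graph and names are illustrative:

```python
from collections import deque

def bfs_path(graph, start, is_goal):
    """Breadth-first search: the frontier is a FIFO queue of paths, so the
    first goal path found uses the fewest arcs."""
    frontier = deque([[start]])   # frontier holds whole paths, as in the Search procedure
    visited = {start}
    while frontier:
        path = frontier.popleft()            # earliest-added path first (FIFO)
        node = path[-1]
        if is_goal(node):
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in visited:      # avoid cycling
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None                              # no solution path exists

G = {"s": ["a", "b"], "a": ["c"], "b": ["g"], "c": ["g"], "g": []}
print(bfs_path(G, "s", lambda n: n == "g"))  # ['s', 'b', 'g'] — the fewest-arcs path
```

Note that even though s → a → c → g also reaches the goal, the FIFO order guarantees the two-arc path through b is found first.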
(Steps 7–10 of the worked BFS example appear as diagrams in the original slides.)
ALGORITHM:

Code:
algorithm ShortestPath (G, s)
  <pre-cond>: G is a (directed or undirected) graph, and s is one of its nodes.
  <post-cond>: π specifies a shortest path from s to each node of G, and d specifies their lengths.
begin
  foundHandled = ∅
  foundNotHandled = {s}
  d(s) = 0, π(s) = nil
  loop
    <loop-invariant>: See above.
    exit when foundNotHandled = ∅
    let u be the node in the front of the queue foundNotHandled
    for each v connected to u
      if v has not previously been found then
        add v to foundNotHandled
        d(v) = d(u) + 1
        π(v) = u
      end if
    end for
    move u from foundNotHandled to foundHandled
  end loop
  (for unfound v, d(v) = ∞)
  return ⟨d, π⟩
end algorithm

Pseudocode:

Input: A graph G and a root v of G

procedure BFS(G, v):
    create a queue Q
    create a set V
    enqueue v onto Q
    add v to V
    while Q is not empty:
        t ← Q.dequeue()
        if t is what we are looking for:
            return t
        for all edges e in G.adjacentEdges(t) do
            u ← G.adjacentVertex(t, e)
            if u is not in V:
                add u to V
                enqueue u onto Q
    return none

Breadth-first search is useful when
 space is not a problem;
 you want to find the solution containing the fewest arcs;
 few solutions may exist, and at least one has a short path length; and
 infinite paths may exist, because it explores all of the search space, even with infinite paths.
It is a poor method when all solutions have a long path length or when some heuristic knowledge is available, and it is not used very often because of its space complexity.

2.3 DIJKSTRA'S SHORTEST PATH:

DEFINITION:
Dijkstra's algorithm, conceived by Dutch computer scientist Edsger Dijkstra in 1956 and published in 1959 [1][2], is a graph search algorithm that solves the single-source shortest-path problem for a graph with non-negative edge path costs, producing a shortest-path tree. This algorithm is often used in routing and as a subroutine in other graph algorithms.
EXPLANATION:
We now make the shortest-path problem more general by allowing each edge to have a different weight (length). The length of a path from s to v is the sum of the weights on the edges of the path. This makes the problem harder, because the shortest path to a node may wind deep into the graph along many short edges instead of along a few long edges. Despite this, only small changes need to be made to the algorithm. The new algorithm is called Dijkstra's algorithm.

Specifications of the Shortest-Weighted-Path Problem:
Preconditions: The input is a graph G (either directed or undirected) and a source node s. Each edge ⟨u, v⟩ is allocated a nonnegative weight w⟨u,v⟩.
Postconditions: The output consists of d and π, where for each node v of G, d(v) gives the length δ(s, v) of the shortest weighted path from s to v, and π defines a shortest-weighted-path tree. (See Section 14.2.)

EXAMPLE:
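Before tracing the worked example, here is a compact runnable sketch of the same idea (Python for illustration). Note this variant uses a binary heap with lazy deletion of stale entries in place of an explicit decrease-key operation, and the example graph is invented:

```python
import heapq

def dijkstra(graph, s):
    """graph: {u: [(v, w), ...]} with nonnegative weights w.
    Returns (d, pi): shortest weighted distances and predecessors from s."""
    d = {s: 0}
    pi = {s: None}
    notHandled = [(0, s)]        # min-heap keyed by tentative distance d(v)
    handled = set()
    while notHandled:
        du, u = heapq.heappop(notHandled)
        if u in handled:
            continue             # stale heap entry; skip (lazy decrease-key)
        handled.add(u)
        for v, w in graph.get(u, []):
            foundPathLength = du + w
            if v not in d or foundPathLength < d[v]:
                d[v] = foundPathLength
                pi[v] = u
                heapq.heappush(notHandled, (foundPathLength, v))
    return d, pi

# Invented example: the short edges s->b->a->t beat the direct heavy edges.
G = {"s": [("a", 4), ("b", 1)], "b": [("a", 2), ("t", 7)],
     "a": [("t", 1)], "t": []}
d, pi = dijkstra(G, "s")
print(d["t"])   # 4: the path s -> b -> a -> t has length 1 + 2 + 1
```

This matches the textbook loop: the smallest-d(u) node is handled next, and each edge out of it relaxes its neighbor's tentative distance.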
[Figure: worked example of Dijkstra's algorithm, Step 9 (diagram omitted).]

ALGORITHM:

Code:

algorithm DijkstraShortestWeightedPath(G, s)
⟨pre-cond⟩: G is a weighted (directed or undirected) graph, and s is one of its nodes.
⟨post-cond⟩: π specifies a shortest weighted path from s to each node of G, and d specifies their lengths.
begin
    d(s) = 0, π(s) = nil
    for each other v, d(v) = ∞ and π(v) = nil
    handled = ∅
    notHandled = priority queue containing all nodes, with priorities given by d(v)
    loop
        ⟨loop-invariant⟩: See above.
        exit when notHandled = ∅
        let u be a node from notHandled with smallest d(u)
        for each v connected to u
            foundPathLength = d(u) + w⟨u,v⟩
            if d(v) > foundPathLength then
                d(v) = foundPathLength
                π(v) = u
                (update the notHandled priority queue)
            end if
        end for
        move u from notHandled to handled
    end loop
    return ⟨d, π⟩
end algorithm

Pseudocode:

function Dijkstra(Graph, source):
    for each vertex v in Graph:            // Initializations
        dist[v] := infinity                // Unknown distance from source to v
        previous[v] := undefined           // Previous node in optimal path from source
    end for
    dist[source] := 0                      // Distance from source to source
    Q := the set of all nodes in Graph     // All nodes are unoptimized, and thus in Q
    while Q is not empty:                  // The main loop
        u := vertex in Q with smallest distance in dist[]
        remove u from Q
        if dist[u] = infinity:
            break                          // All remaining vertices are inaccessible from source
        end if
        for each neighbor v of u:          // Where v has not yet been removed from Q
            alt := dist[u] + dist_between(u, v)
            if alt < dist[v]:              // Relax (u, v)
                dist[v] := alt
                previous[v] := u
                decrease-key v in Q        // Reorder v in the queue
            end if
        end for
    end while
    return dist
end function

Applications of Dijkstra's Algorithm:
- Traffic information systems are the most prominent use
- Mapping (MapQuest, Google Maps)
- Routing systems

2.4 DEPTH-FIRST SEARCH

Definition:
Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. One starts at the root (selecting some node as the root in the graph case) and explores as far as possible along each branch before backtracking.

Explanation:
In depth-first search, the frontier acts like a last-in first-out stack. The elements are added to the stack one at a time, and the one selected and taken off the frontier at any time is the last element that was added.
- Explore edges out of the most recently discovered vertex v.
- When all edges of v have been explored, backtrack to explore other edges leaving the vertex from which v was discovered (its predecessor).
- "Search as deep as possible first."
- Continue until all vertices reachable from the original source are discovered.
- If any undiscovered vertices remain, choose one of them as a new source and repeat the search from that source.
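The stack discipline described in these bullets can be sketched as a short runnable example (Python for illustration; the example graph is invented):

```python
def dfs_preorder(graph, s):
    """Iterative DFS; returns nodes in the order they are first discovered."""
    order = []
    discovered = {s}
    stack = [s]                 # LIFO frontier: the last node added is extended first
    while stack:
        u = stack.pop()
        order.append(u)
        # Push neighbors in reverse so the first-listed neighbor is explored first.
        for v in reversed(graph.get(u, [])):
            if v not in discovered:
                discovered.add(v)
                stack.append(v)
    return order

# Invented example graph (adjacency lists).
G = {"s": ["a", "b"], "a": ["c"], "b": ["c"], "c": []}
print(dfs_preorder(G, "s"))   # ['s', 'a', 'c', 'b']
```

Note how the search plunges from s through a to c before ever visiting b: the most recently discovered vertex is always extended first.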
[Figure: DFS example trace, Steps 8-11 (diagrams omitted).]
[Figure: DFS example trace, Step 16 (diagram omitted).]

ALGORITHM:

Code:

algorithm DepthFirstSearch(G, s)
⟨pre-cond⟩: G is a (directed or undirected) graph, and s is one of its nodes.
⟨post-cond⟩: The output is a depth-first search tree of G rooted at s.
begin
    foundHandled = ∅
    foundNotHandled = {⟨s, 0⟩}
    time = 0    % used for time stamping; see the following discussion
    loop
        ⟨loop-invariant⟩: See the preceding list.
        exit when foundNotHandled = ∅
        pop ⟨u, i⟩ off the stack foundNotHandled
        if u has an (i+1)st edge ⟨u, v⟩
            push ⟨u, i+1⟩ onto foundNotHandled
            if v has not previously been found then
                π(v) = u
                ⟨u, v⟩ is a tree edge
                push ⟨v, 0⟩ onto foundNotHandled
                s(v) = time; time = time + 1
            else if v has been found but not completely handled then
                ⟨u, v⟩ is a back edge
            else (v has been completely handled)
                ⟨u, v⟩ is a forward or cross edge
            end if
        else
            move u to foundHandled
            f(u) = time; time = time + 1
        end if
    end loop
    return foundHandled
end algorithm

Procedure:

1   procedure DFS-iterative(G, v):
2       label v as discovered
3       let S be a stack
4       S.push(v)
5       while S is not empty
6           t ← S.peek()
7           if t is what we're looking for:
8               return t
9           for all edges e in G.adjacentEdges(t) do
10              if edge e is already labelled
11                  continue with the next edge
12              w ← G.adjacentVertex(t, e)
13              if vertex w is not discovered and not explored
14                  label e as tree-edge
15                  label w as discovered
16                  S.push(w)
17                  continue at 5
18              else if vertex w is discovered
19                  label e as back-edge
20              else
21                  // vertex w is explored
22                  label e as forward- or cross-edge
23          label t as explored
24          S.pop()

Depth-first search is appropriate when space is restricted; when many solutions exist, perhaps with long path lengths, particularly when nearly all paths lead to a solution; or when the order in which the neighbors of a node are added to the stack can be tuned so that solutions are found on the first try. Depth-first search is the basis for a number of other algorithms, such as iterative deepening.

2.5 RECURSIVE DEPTH-FIRST SEARCH:

Definition:
A recursive implementation of the depth-first search algorithm directly mirrors the iterative version. The only difference is that the iterative version uses an explicit stack to keep track of the route back to the start node, while the recursive version uses the stack of recursive stack frames. The advantage of the recursive algorithm is that it is easier to code and easier to understand. The iterative algorithm might run slightly faster, but a good compiler will convert the recursive algorithm into an iterative one.

EXPLANATION:

Recursive: Recursion is the process of repeating items in a self-similar way. For instance, when the surfaces of two mirrors are exactly parallel with each other, the nested images that occur are a form of infinite recursion.

Example: A classic example of recursion is the definition of the factorial function, given here in C code:

unsigned int factorial(unsigned int n)
{
    if (n == 0) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}

Pruning Paths: Consider the instance graph. There are two obvious paths from node S to node v. However, there are actually an infinite number of such paths. One path of interest is the one that starts at S, traverses around past u up to c, and then comes down to v. All of these equally valued paths will be pruned from consideration, except the one that goes from S through b and u directly to v.

Three Friends: Given this instance, we first mark our source node S with an x, and then we recurse three times, once from each of a, b, and c.
Friend a: Our first friend marks all nodes that are reachable from its source node a without passing through a previously marked node. This includes only the nodes in the leftmost branch, because when we marked our source S, we blocked his route to the rest of the graph.
Friend b: Our second friend does the same. He finds, for example, the path that goes from b through u directly to v. He also finds and marks the nodes back around to c.
Friend c: Our third friend is of particular interest. He finds that his source node, c, has already been marked. Hence, he returns without doing anything. This prunes off this entire branch of the recursion tree. He can do this because, for any path to a node that he would consider, another path to the same node has already been considered.

Achieving the Postcondition: Consider the component of the graph reachable from our source s without passing through a previously marked node. (Because our instance has no marked nodes, this includes all the nodes.) To mark the nodes within this component, we do the following. First, we mark our source s. This partitions our component of reachable nodes into subcomponents that are still reachable from

ALGORITHM:

Code:

algorithm DepthFirstSearch(s)
⟨pre-cond⟩: An input instance consists of a (directed or undirected) graph G with some of its nodes marked found and a source node s.
⟨post-cond⟩: The output is the same graph G, except all nodes v reachable from s without passing through a previously found node are now also marked as being found. The graph G is a global variable ADT, which is assumed to be both input and output to the routine.
begin
    if s is marked as found then
        do nothing
    else
        mark s as found
        for each v connected to s
            DepthFirstSearch(v)
        end for
    end if
end algorithm

2.6 LINEAR ORDERING OF A PARTIAL ORDER:

Definition:
Finding a linear order consistent with a given partial order is one of many applications of a depth-first search. (Hint: If a question ever mentions that a graph is directed acyclic, always start by running this algorithm.)

Explanation:
A partial order of a set of objects V supplies only some of the information of a total order. For each pair of objects u, v ∈ V, it specifies either that u is before v, that v is before u, or that the order of u and v is undefined. It must also be transitive. A partial order can be represented by a directed acyclic graph (DAG) G. The vertices consist of the objects V, and the directed edge ⟨u, v⟩ indicates that u is before v. It follows from transitivity that if there is a directed path in G from u to v, then u is before v. A cycle in G from u to v and back to u presents a contradiction, because u cannot be both before and after v.

Definition of Total Order: A total order of a set of objects V specifies for each pair of objects u, v ∈ V either (1) that u is before v or (2) that v is before u. It must be transitive: if u is before v and v is before w, then u is before w.

Definition of Partial Order: A partial order of a set of objects V supplies only some of the information of a total order. For each pair of objects u, v ∈ V, it specifies either that u is before v, that v is before u, or that the order of u and v is undefined. It must also be transitive. For example, you must put on your underwear before your pants, and you must put on your shoes after both your pants and your socks. According to transitivity, this means you must put your underwear on before your shoes. However, you do have the freedom to put your underwear and your socks on in either order.
My son, Josh, when six, mistook this partial order for a total order and refused to put on his socks before his underwear. When he was eight, he explained to me that the reason he could get dressed faster than I could was that he had a "shortcut," consisting in putting his socks on before his pants. I was thrilled that he had at least partially understood the idea of a partial order:

    underwear    socks
        |          |
      pants        |
          \       /
           shoes

A partial order can be represented by a directed acyclic graph (DAG) G. The vertices consist of the objects V, and the directed edge ⟨u, v⟩ indicates that u is before v. It follows from transitivity that if there is a directed path in G from u to v, then we know that u is before v. A cycle in G from u to v and back to u presents a contradiction, because u cannot be both before and after v.

Specifications of the Topological Sort Problem:
Preconditions: The input is a directed acyclic graph G representing a partial order.
Postconditions: The output is a total order consistent with the partial order given by G, that is, for all edges ⟨u, v⟩ ∈ G, u appears before v in the total order.

An Easy but Slow Algorithm:
The Algorithm: Start at any node v of G. If v has an outgoing edge, walk along it to one of its neighbors. Continue walking until you find a node t that has no outgoing edges. Such a node is called a sink. This process cannot continue forever, because the graph has no cycles. The sink t can go after every node in G. Hence, put t last in the total order, delete t from G, and recursively repeat the process on G − t.
Running Time: It takes up to n time to find the first sink, n − 1 to find the second, and so on. The total time is Θ(n²).

Algorithm Using a Depth-First Search: Start at any node s of G. Do a depth-first search starting at node s. After this search completes, nodes that are considered found will continue to be considered found, and so should not be considered again. Let s′ be any unfound node of G. Do a depth-first search starting at node s′. Repeat the process until all nodes have been found. Use the time stamp f(u) to keep track of the order in which nodes are completely handled, that is, removed from the stack. Output the nodes in reverse order of these time stamps. If you ever find a back edge, then stop and report that the graph has a cycle.

Proof of Correctness:
Lemma: For every edge ⟨u, v⟩ of G, node v is completely handled before u.
Proof of Lemma: Consider some edge ⟨u, v⟩ of G. Before u is completely handled, it must be put onto the stack foundNotHandled. At that point in time, there are three cases:
Tree Edge: v has not yet been found. Because u has an edge to v, v is put onto the top of the stack above u before u has been completely handled. No more progress will be made towards handling u until v has been completely handled and removed from the stack.
Back Edge: v has been found, but not completely handled, and hence is on the stack somewhere below u. Such an edge is a back edge. This contradicts the fact that G is acyclic.
Forward or Cross Edge: v has already been completely handled and removed from the stack. In this case we are done: v was completely handled before u.

3. NETWORK FLOWS AND LINEAR PROGRAMMING:

Definition:
Network flow is a classic computational problem with a surprisingly large number of applications, such as routing trucks and matching happy couples. Think of a given directed graph as a network of pipes starting at a source node s and ending at a sink node t. Through each pipe, water can flow in one direction at some rate up to some maximum capacity. The goal is to find the maximum total rate at which water can flow from the source node s to the sink node t. If this were a physical system of pipes, you could determine the answer simply by pushing as much water through as you could. However, achieving this algorithmically is more difficult than you might at first think, because the exponentially many paths from s to t overlap, winding forward and backward in complicated ways.

Explanation:

Network Flow Specification: Given an instance ⟨G, s, t⟩, the goal is to find a maximum rate of flow through graph G from node s to node t.
Precondition: We are given one of the following instances.
Instances: An instance ⟨G, s, t⟩ consists of a directed graph G and specific nodes s and t. Each edge ⟨u, v⟩ is associated with a positive capacity c⟨u,v⟩.
Postcondition: The output is a solution with maximum value and the value of that solution.
Solutions for Instance: A solution for the instance is a flow F, which specifies the flow F⟨u,v⟩ through each edge of the graph. The requirements of a flow are as follows.
Unidirectional Flow: For any pair of nodes, it is easiest to assume that flow does not go in both directions between them. Hence, require that at least one of F⟨u,v⟩ and F⟨v,u⟩ is zero and that neither be negative.

a) A network with its edge capacities labeled.
b) A maximum flow in this network. The first value associated with each edge is its flow, and the second is its capacity. The total rate of the flow is 3 = 1 + 2 − 0. Note that no more flow can be pushed along the top path, because the edge ⟨b, c⟩ is at capacity. Similarly for the edge ⟨e, f⟩. Note also that no flow is pushed along the bottom path, because this would decrease the total from s to t.
c) A minimum cut in this network. The capacity of this min cut is 3 = 1 + 2. Note that the capacity of the edge out of j is not included in the capacity of the cut, because it is going in the wrong direction.
(b) vs. (c):
d) The rate of the maximum flow is the same as the capacity of the min cut. The edges crossing forward across the cut are at capacity in the flow, while those crossing backward have zero flow. These things are not coincidences.

Edge Capacity: The flow through any edge cannot exceed the capacity of the edge, namely F⟨u,v⟩ ≤ c⟨u,v⟩.
No Leaks: No water can be added at any node other than the source s, and no water can be drained at any node other than the sink t. At each other node the total flow into the node equals the total flow out, i.e., for all nodes u ∉ {s, t}: Σv F⟨v,u⟩ = Σv F⟨u,v⟩.
Measure of Success: The value of a flow F, denoted rate(F), is the total rate of flow from the source s to the sink t. We define this to be the total that leaves s without coming back: rate(F) = Σv [F⟨s,v⟩ − F⟨v,s⟩]. Agreeing with our intuition, we will later prove that because no flow leaks or is created between s and t, this equals the flow into t that does not leave it again, namely Σv [F⟨v,t⟩ − F⟨t,v⟩].

EXAMPLE: NETWORK FLOWS:

APPLICATIONS: TRANSPORTATION PROBLEM
Each node is one of two types:
- source (supply) node
- destination (demand) node
Every arc has:
- its tail at a supply node
- its head at a demand node
Such a graph is called bipartite. There are three basic methods:
- Northwest Corner Method
- Minimum Cost Method
- Vogel's Method

MINIMUM COST METHOD
The Northwest Corner Method does not utilize shipping costs. It can yield an initial bfs (basic feasible solution) easily, but the total shipping cost may be very high. The minimum cost method uses the shipping costs in order to come up with a bfs that has a lower cost. To begin the minimum cost method, first find the decision variable Xij with the smallest shipping cost. Then assign Xij its largest possible value, which is the minimum of si and dj. After that, as in the Northwest Corner Method, cross out row i and column j and reduce the supply or demand of the non-crossed-out row or column by the value of Xij. Then choose the cell with the minimum shipping cost among the cells that do not lie in a crossed-out row or column, and repeat the procedure.

An example for the Minimum Cost Method (costs in the cells, supplies in the last column, demands in the last row):

Step 1: Select the cell with minimum cost (cost 1, cell X22) and assign X22 = min(10, 8) = 8.

       2    3    5    6  |  5
       2   [1]   3    5  | 10
       3    8    4    6  | 15
      12    8    4    6  |

Step 2: Cross out column 2; row 2's supply is reduced from 10 to 2.

       2    X    5    6  |  5
       2   (8)   3    5  |  2
       3    X    4    6  | 15
      12    X    4    6  |
Step 3: Find the new cell with minimum shipping cost, X21 (cost 2); assign X21 = min(2, 12) = 2 and cross out row 2.

       2    X    5    6  |  5
      (2)  (8)   X    X  |  X
       3    X    4    6  | 15
      10    X    4    6  |

Step 4: Find the new cell with minimum shipping cost, X11 (cost 2); assign X11 = min(5, 10) = 5 and cross out row 1.

      (5)   X    X    X  |  X
      (2)  (8)   X    X  |  X
       3    X    4    6  | 15
       5    X    4    6  |

Step 5: Find the new cell with minimum shipping cost, X31 (cost 3); assign X31 = min(15, 5) = 5 and cross out column 1.

      (5)   X    X    X  |  X
      (2)  (8)   X    X  |  X
      (5)   X    4    6  | 10
       X    X    4    6  |
Step 7: Finally, assign 6 to the last cell. The bfs is found as: X11 = 5, X21 = 2, X22 = 8, X31 = 5, X33 = 4, and X34 = 6.

      (5)   X    X    X  |  X
      (2)  (8)   X    X  |  X
      (5)   X   (4)  (6) |  X
       X    X    X    X  |

3.1 HILL CLIMBING ALGORITHM:

In computer science, hill climbing is a mathematical optimization technique which belongs to the family of local search. It is relatively simple to implement, making it a popular first choice. Although more advanced algorithms may give better results, in some situations hill climbing works just as well.

Hill climbing can be used to solve problems that have many solutions, some of which are better than others. It starts with a random (potentially poor) solution and iteratively makes small changes to the solution, each time improving it a little. When the algorithm can no longer see any improvement, it terminates. Ideally, at that point the current solution is close to optimal, but it is not guaranteed that hill climbing will ever come close to the optimal solution.

For example, hill climbing can be applied to the traveling salesman problem. It is easy to find a solution that visits all the cities but is very poor compared to the optimal solution. The algorithm starts with such a solution and makes small improvements to it, such as switching the order in which two cities are visited. Eventually, a much better route is obtained. Hill climbing is used widely in artificial intelligence for reaching a goal state from a starting node. The choice of next node and starting node can be varied to give a list of related algorithms.

Hill-climbing vs. branch-and-bound: Hill-climbing searches work by starting off with an initial guess of a solution, then iteratively making local changes to it until either the solution is found or the heuristic gets stuck in a local maximum. There are many ways to try to avoid getting stuck in local maxima, such as running many searches in parallel or probabilistically choosing the successor state. In many cases, hill-climbing algorithms will rapidly converge on the correct answer. However, none of these approaches is guaranteed to find the optimal solution. Branch-and-bound solutions work by cutting the search space into pieces, exploring one piece, and then attempting to rule out other parts of the search space based on the information gained during each search. They are guaranteed to find the optimal answer eventually, though doing so might take a long time. For many problems, branch-and-bound algorithms work quite well, since a small amount of information can rapidly shrink the search space. In short, hill climbing isn't guaranteed to find the right answer, but often runs very quickly and gives good approximations; branch-and-bound always finds the right answer, but might take a while to do so.

Hill climbing attempts to maximize (or minimize) a function f(x), where x ranges over discrete states. These states are typically represented by vertices in a graph, where edges encode nearness or similarity of states. Hill climbing follows the graph from vertex to vertex, always locally increasing (or decreasing) the value of f, until a local maximum (or local minimum) xm is reached. Hill climbing can also operate on a continuous space: in that case, the algorithm is called gradient ascent (or gradient descent if the function is minimized).

Function for hill climbing:

function HILL-CLIMBING(problem) returns a goal state
    inputs: problem (a problem)
    local variables: current (a node), neighbour (a node)
    current ← MAKE-NODE(INITIAL-STATE[problem])
    loop do
        neighbour ← a highest-valued successor of current
        if VALUE[neighbour] ≤ VALUE[current] then return STATE[current]
        current ← neighbour
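The greedy loop in HILL-CLIMBING above can be sketched as a minimal runnable example (Python for illustration; the toy objective and neighbour structure are invented):

```python
def hill_climb(initial, neighbours, value):
    """Generic hill climbing (maximization): repeatedly move to the
    best neighbour until no neighbour improves on the current state."""
    current = initial
    while True:
        best = max(neighbours(current), key=value, default=None)
        if best is None or value(best) <= value(current):
            return current      # local maximum: no improving neighbour
        current = best

# Invented toy instance: maximize f(x) = -(x - 7)^2 over the integers,
# where each state x has neighbours x - 1 and x + 1.
f = lambda x: -(x - 7) ** 2
result = hill_climb(initial=0, neighbours=lambda x: [x - 1, x + 1], value=f)
print(result)   # 7 (f is unimodal here, so the local maximum is also global)
```

On a non-unimodal f, the same loop would stop at whichever local maximum it reaches first, which is exactly the weakness discussed next.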
Problems:

Local maxima: A problem with hill climbing is that it will find only local maxima. Unless the heuristic is convex, it may not reach a global maximum. Other local search algorithms try to overcome this problem, such as stochastic hill climbing, random walks, and simulated annealing.

Ridges: A ridge is a curve in the search space that leads to a maximum, but the orientation of the ridge compared to the available moves that are used to climb is such that each move will lead to a smaller point. In other words, each point on a ridge looks to the algorithm like a local maximum, even though the point is part of a curve leading to a better optimum.

Plateau: Another problem with hill climbing is that of a plateau, which occurs when we get to a "flat" part of the search space, i.e., we have a path where the heuristic values are all very close together. This kind of flatness can cause the algorithm to cease progress and wander aimlessly.

HILL CLIMBING TYPES: PRIMAL, STEEPEST ASCENT, DUAL

3.2 PRIMAL-DUAL HILL CLIMBING:

Definition:
Suppose that over the hills on which we are climbing there are an exponential number of roofs, one on top of the other. As before, our problem is to find a place to stand on the hills that has maximum height. We call this the primal optimization problem. An equally challenging problem is to find the lowest roof. We call this the dual optimization problem. The situation is such that each roof is above each place to stand. It follows trivially that the lowest (and hence optimal) roof is above the highest (and hence optimal) place to stand, but offhand we do not know how far above it is.

[Flow diagram: each iteration either constructs a better primal solution from the restricted primal, or, when that fails, constructs a better dual solution from the restricted dual; diagram omitted.]

Example:

Lemma (Finds Optimal): A primal-dual hill-climbing algorithm is guaranteed to find an optimal solution to both the primal and the dual optimization problems.
Proof: By the design of the algorithm, it stops only when it has a location L and a roof R with matching heights, height(L) = height(R). This location must be optimal, because every other location L′ must be below this roof and hence cannot be higher than this location, that is, ∀L′, height(L′) ≤ height(R) = height(L). We say that this dual solution R witnesses the fact that the primal solution L is optimal. Similarly, L witnesses the fact that R is optimal, that is, ∀R′, height(R′) ≥ height(L) = height(R). This is called the duality principle.

Vertex cover and generalized matching:
1. Initialization: x = 0; y = 0.
2. While there is an uncovered edge:
   (a) Pick an uncovered edge, and increase ye until some vertices go tight.
   (b) Add all tight vertices to the vertex cover.
3. Output the vertex cover.

Primal-Dual Pair and Example:

Primal:
max  2000·Xfancy + 1700·Xfine + 1200·Xnew    (profits)
s.t.     Xfancy +     Xfine +     Xnew ≤ 12  (van capacity)
      25·Xfancy +  20·Xfine +  19·Xnew ≤ 280 (labor)
      Xfancy, Xfine, Xnew ≥ 0                (nonnegativity)

Dual:
min  12·U1 + 280·U2          (resource payments)
s.t.   U1 + 25·U2 ≥ 2000     (Xfancy)
       U1 + 20·U2 ≥ 1700     (Xfine)
       U1 + 19·U2 ≥ 1200     (Xnew)
       U1, U2 ≥ 0            (nonnegativity)

Primal rows become dual columns.

Primal-Dual Objective Correspondence: Given feasible X* and U*: C·X* ≤ U*·A·X* ≤ U*·b.
Complementary Slackness: At optimal X*, U*: U*·(b − A·X*) = 0 and (U*·A − C)·X* = 0.
Zero Profits: At the optimum, U*·b = C·X*; the dual solution is U* = CB·B⁻¹, the rate of change of the objective Z with respect to the right-hand sides b.

Constructing Dual Solutions: Note that we can construct a dual solution from an optimal primal solution without re-solving the problem. Given an optimal primal X*B = B⁻¹b and X*NB = 0, this solution must be feasible, X*B = B⁻¹b ≥ 0, and must satisfy the nonnegative reduced cost criterion CB·B⁻¹·ANB − CNB ≥ 0. Given this, suppose we try U* = CB·B⁻¹ as a dual solution. First, is it feasible in the dual constraints? To be feasible, we must have U*·A ≥ C and U* ≥ 0. We know U*·ANB − CNB ≥ 0, because at optimality this is equivalent to the reduced cost criterion CB·B⁻¹·ANB − CNB ≥ 0.
Further, for the basic variables the reduced cost is CB·B⁻¹·AB − CB = CB·B⁻¹·B − CB = CB − CB = 0. Unifying these statements, we know that U*·A ≥ C.

Max-Flow-Min-Cut Duality Principle: The max flow and min cut problems were defined at the beginning of this chapter, both being interesting problems in their own right. Now we see that cuts can be used as the ceilings on the flows.
Max Flow = Min Cut: We have proved that, given any network as input, the network flow algorithm finds a maximum flow and, almost as an accident, finds a minimum cut as well. This proves that the maximum flow through any network equals its minimum cut.
Dual Problems: We say that the dual of the max flow problem is the min cut problem, and conversely that the dual of the min cut problem is the max flow problem.

3.3 STEEPEST ASCENT HILL-CLIMBING:

Definition:
We have all experienced that climbing a hill can take a long time if you wind back and forth, barely increasing your height at all. In contrast, you get there much faster if you energetically head straight up the hill. This method, which is called the method of steepest ascent, is to always take the step that increases your height the most. If you already know that the hill-climbing algorithm in which you take any step up the hill works, then this new, more specific algorithm also works; and if you are lucky, it finds the optimal solution faster. In our network flow algorithm, the choice of what step to take next involves choosing which path in the augmenting graph to take. The amount the flow increases is the smallest augmentation capacity of any edge in this path. It follows that the choice giving the biggest improvement is the path whose smallest edge is largest among all paths from s to t. Our steepest-ascent network flow algorithm will augment such a best path at each iteration.
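The overall steepest-ascent flow loop can be sketched as follows (Python for illustration). Note that this sketch finds the widest augmenting path with a max-bottleneck variant of Dijkstra's algorithm rather than the sorted-edge incremental method developed in this section, and the example network is invented:

```python
import heapq

def widest_augmenting_path(cap, s, t):
    """Max-bottleneck path from s to t in the residual graph `cap`
    (dict of dicts of residual capacities), via a Dijkstra-like search."""
    best = {s: float("inf")}    # widest known bottleneck from s to each node
    prev = {}
    heap = [(-best[s], s)]      # max-heap simulated with negated widths
    done = set()
    while heap:
        width, u = heapq.heappop(heap)
        width = -width
        if u in done:
            continue
        done.add(u)
        if u == t:
            break
        for v, c in cap[u].items():
            w = min(width, c)
            if c > 0 and w > best.get(v, 0):
                best[v] = w
                prev[v] = u
                heapq.heappush(heap, (-w, v))
    if t not in best:
        return None, 0
    path, v = [t], t
    while v != s:
        v = prev[v]
        path.append(v)
    return path[::-1], best[t]

def max_flow(cap, s, t):
    """Steepest-ascent max flow: repeatedly augment along the widest path."""
    for u in list(cap):         # ensure every reverse residual edge exists
        for v in cap[u]:
            cap.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        path, width = widest_augmenting_path(cap, s, t)
        if not path:
            return flow
        flow += width
        for u, v in zip(path, path[1:]):
            cap[u][v] -= width  # use up forward residual capacity
            cap[v][u] += width  # allow this flow to be undone later

# Invented example network of capacities.
net = {"s": {"a": 3, "b": 2}, "a": {"t": 2}, "b": {"t": 3}, "t": {}}
print(max_flow(net, "s", "t"))   # 4
```

Each iteration pushes as much flow as the chosen path's smallest residual edge allows, which is precisely the "biggest improvement" step the text describes.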
What remains to be done is to give an algorithm that finds such a path and to prove that a maximum flow is found within a polynomial number of iterations.

EXPLANATION:

Finding the Augmenting Path with the Biggest Smallest Edge: The input consists of a directed graph with positive edge weights and with special nodes s and t. The output consists of a path from s to t through this graph whose smallest weighted edge is as big as possible.

Easier Problem: Before attempting to develop an algorithm for this, let us consider an easier but related problem. In addition to the directed graph, the input to the easier problem provides a weight denoted wmin. It either outputs a path from s to t whose smallest weighted edge is at least as big as wmin, or states that no such path exists.

Using the Easier Problem: Assuming that we can solve this easier problem, we solve the original problem by running the first algorithm with wmin set to every edge weight in the graph, until we find the weight for which there is a path with such a smallest weight but no path with a bigger smallest weight. This is our answer.

Solving the Easier Problem: A path whose smallest weighted edge is at least as big as wmin obviously cannot contain any edge whose weight is smaller than wmin. Hence, the answer to this easier problem does not change if we delete from the graph all edges whose weight is smaller. Any path from s to t in the remaining graph meets our needs. If there is no such path in the remaining graph, then there is no such path in the original graph. This solves the problem.

Implementation Details: In order to find a path from s to t in a graph, the algorithm branches out from s using breadth-first or depth-first search, marking every node reachable from s with the predecessor of the node in the path to it from s. If in the process t is marked, then we have our path. It seems a waste of time to redo this work for each wmin, so let us use an iterative algorithm. The loop invariant is that the work for the previous wmin has been done and is stored in a useful way. The main loop then completes the work for the current wmin, reusing as much of the previous work as possible. This can be implemented as follows. Sort the edges from biggest to smallest (breaking ties arbitrarily), and consider them one at a time. When considering wi, we must construct the graph formed by deleting all the edges with weights smaller than wi; denote this graph Gwi. We must mark every node reachable from s in this graph. Suppose that we have already done these things in the graph Gwi−1. We form Gwi from Gwi−1 by adding the single edge with weight wi; let ⟨u, v⟩ denote this edge. Nodes are reachable from s in Gwi that were not reachable in Gwi−1 only if u was reachable and v was not. This new edge then allows v to be reachable. Unmarked nodes now reachable from s via v can all be marked reachable by starting a depth-first search from v.
The algorithm will stop at the first edge that allows t to be reached. The edge with the smallest weight in this path to t will be the edge with weight wi added during this iteration. There is not a path from s to t in the input graph with a larger smallest weighted edge because t was not reachable when only the larger edges were added. Hence, this path is a path to t in the graph whose smallest weighted edge is the largest. This is the required output of this subroutine. Running Time: Even though the algorithm for finding the path with the largest smallest edge runs depth-first search for each weight wi, because the work done before is reused, no node in the process is marked reached more than once and hence no edge is traversed more than once. It follows that this process requires only O(m) time, where m is the number of edges. This time, however, is dominated by the time O(mlogm) to sort the edges. ALGORITHM: algorithmLargestShortestWeight(G, s, t ) pre-cond: G is a weighted directed (augmenting) graph. sis the source node. tis the sink. post-cond: P specifies a path from s to t whose smallest edge weight is as large as possible. u, v is its smallest weighted edge. begin Sort the edges by weight from largest to smallest
• 43. 43   G' = graph with no edges
  mark s reachable
  loop
    <loop-invariant>: Every node reachable from s in G' is marked reachable.
    exit when t is reachable
    (u, v) = the next largest weighted edge in G
    add (u, v) to G'
    if( u is marked reachable and v is not ) then
      do a depth-first search from v, marking all reachable nodes not marked before
    end if
  end loop
  P = path from s to t in G'
  return (P, (u, v))
end algorithm

3.4 LINEAR PROGRAMMING:

EXPLANATION: When I was an undergraduate, I had a summer job with a food company. Our goal was to make cheap hot dogs. Every morning we got the prices of thousands of ingredients: pig hearts, sawdust, etc. Each ingredient had an associated variable indicating how much of it to add to the hot dogs. There are thousands of linear constraints on these variables: so much meat, so much moisture, and so on. Together these constraints specify which combinations of ingredients constitute a hot dog. The cost of the hot dog is a linear function of the quantities you put into it and their prices. The goal is to determine what to put into the hot dogs that day to minimize the cost. This is an example of a general class of problems referred to as linear programs.

Formal Specification: A linear program is an optimization problem whose constraints and objective function are linear functions. The goal is to find a setting of the variables that optimizes the objective function while respecting all of the constraints.

Precondition: We are given one of the following instances.
Instances: An input instance consists of (1) a set of linear constraints on a set of variables and (2) a linear objective function.
Postcondition: The output is a solution with minimum cost and the cost of that solution.
• 44. 44 Solutions for Instance: A solution for the instance is a setting of all the variables that satisfies the constraints.

Measure of Success: The cost or value of a solution is given by the objective function.

EXAMPLE: Instance Example:
maximize 7x1 - 6x2 + 5x3 + 7x4
subject to
3x1 + 7x2 + 2x3 + 9x4 ≤ 258
6x1 + 3x2 + 9x3 - 6x4 ≤ 721
2x1 + 1x2 + 5x3 + 5x4 ≤ 524
3x1 + 6x2 + 2x3 + 3x4 ≤ 411
4x1 - 8x2 - 4x3 + 4x4 ≤ 685

No Small Local Maximum: To prove that the algorithm eventually finds a global maximum, we must prove that it will not get stuck at a small local maximum.

Convex: Because the bounded polyhedron is the intersection of straight cuts (half spaces), it is what we call convex. More formally, this means that the line between any two points in the polyhedron is also in the polyhedron. Hence there cannot be two local maximum points, because between these two hills there would need to be a valley, and a line between two points across this valley would be outside the polyhedron.

The Primal–Dual Method: The primal–dual method formally proves that a global maximum will be found. Given any linear program, defined by an optimization function and a set of constraints, there is a way of forming its dual minimization linear program. Each solution to this dual acts as a roof, or upper bound, on how high the primal solution can be. Each iteration then either finds a better solution for the primal or provides a solution for the dual linear program with a matching value. This dual solution witnesses the fact that no primal solution is bigger.

Forming the Dual: If the primal linear program is to maximize a · x subject to M · x ≤ b, then the dual is to minimize bT · y subject to MT · y ≥ aT, where bT, MT, and aT are the transposes formed by interchanging rows and columns. The dual of the example above is
minimize 258y1 + 721y2 + 524y3 + 411y4 + 685y5
• 45. 45 subject to
3y1 + 6y2 + 2y3 + 3y4 + 4y5 ≥ 7
7y1 + 3y2 + 1y3 + 6y4 - 8y5 ≥ -6
2y1 + 9y2 + 5y3 + 2y4 - 4y5 ≥ 5
9y1 - 6y2 + 5y3 + 3y4 + 4y5 ≥ 7

The dual has a variable for each constraint of the primal and a constraint for each of the primal's variables. The coefficients of the objective function become the numbers on the right-hand sides of the inequalities, and the numbers on the right-hand sides of the inequalities become the coefficients of the objective function. Finally, "maximize" becomes "minimize." The dual of the dual is the same as the original primal.

Upper Bound: We prove that the value of any solution to the primal linear program is at most the value of any solution to the dual linear program. The value of the primal solution x is a · x. The constraints MT · y ≥ aT can be combined with the primal constraints M · x ≤ b: assuming x ≥ 0 and y ≥ 0, we get a · x ≤ (MT · y)T · x = yT · (M · x) ≤ yT · b, which is the value of the dual solution y.

4. RECURSIVE BACKTRACKING:

DEFINITION:

Objectives:
• Understand backtracking algorithms and use them to solve problems
• Use recursive functions to implement backtracking algorithms
• See how the choice of data structures can affect the efficiency of a program
• Understand how recursion applies to backtracking problems
• Be able to implement recursive solutions to problems involving backtracking
• Comprehend the minimax strategy as it applies to two-player games
• Appreciate the importance of developing abstract solutions that can be applied to many different problem domains

EXPLANATION:

Backtracking: The problem space consists of states (nodes) and actions (paths that lead to new states). When in a node, the algorithm can see only the paths to connected nodes. If a node leads only to failure, go back to its "parent" node and try other alternatives. If these all lead to failure too, more backtracking may be necessary.
• 46. 46 EXAMPLE: The Eight-Queens Puzzle:
• How to place eight queens on a chess board so that no queen can take another
– Remember that in chess, a queen can take another piece that lies on the same row, the same column, or the same diagonal (in either direction)
• There seems to be no analytical solution
• Solutions do exist
– They require luck coupled with trial and error
– Or much exhaustive computation

Solve the Puzzle:
• How will you solve the problem?
– Put the queens on the board one by one in some systematic, logical order
– Make sure that each queen placed cannot be taken by another already on the board
– If you are lucky enough to put all eight queens on the board, a solution has been found
– Otherwise, some queens need to be removed and placed elsewhere to continue the search for a solution
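The row-by-row strategy just outlined can be sketched in Python (my own hedged sketch, separate from the textbook's C++ `Queens` class below; all names are mine). Column choices for the rows placed so far are kept in a list; a queen is placed only if it attacks no earlier queen, and the algorithm backtracks when a row has no safe column.

```python
# Sketch: one-queen-per-row backtracking search for the n-queens puzzle.

def solve_queens(n):
    solutions = []

    def ok(cols, c):
        # Placing in row len(cols), column c: conflict if same column
        # or same diagonal as any earlier queen.
        r = len(cols)
        return all(c != cc and abs(c - cc) != r - rr
                   for rr, cc in enumerate(cols))

    def place(cols):
        if len(cols) == n:                 # all rows filled: a solution
            solutions.append(tuple(cols))
            return
        for c in range(n):
            if ok(cols, c):                # skip columns under attack
                cols.append(c)
                place(cols)                # recurse on the next row
                cols.pop()                 # backtrack: remove the queen
        # if no column worked, returning here *is* the backtracking step

    place([])
    return solutions
```

For the standard 8x8 board this finds the well-known 92 solutions; the same code handles any board size.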
• 47. 47 Main program:

int main( )
/* Pre:  The user enters a valid board size.
   Post: All solutions to the n-queens puzzle for the selected board size are printed.
   Uses: The class Queens and the recursive function solve_from. */
{
   int board_size;
   print_information( );
   cout << "What is the size of the board? " << flush;
   cin >> board_size;
   if (board_size < 0 || board_size > max_board)
      cout << "The number must be between 0 and " << max_board << endl;
   else {
      Queens configuration(board_size);   // Initialize empty configuration.
      solve_from(configuration);          // Find all solutions extending configuration.
   }
}

ALGORITHM: An Algorithm as a Sequence of Decisions: An algorithm for finding an optimal solution for your instance must make a sequence of small decisions about the solution: "Do we include the first object in the solution or not?" "Do we include the second?" "The third?" . . . , or "At the first fork in the road, do we go left or right?" "At the second fork, which direction do we go?" "At the third?" . . . . As one stack frame in the recursive algorithm, our task is to deal only with the first of these decisions.

High-Level Code: The following is the basic structure that the code will take.

algorithm Alg(I)
  <pre-cond>: I is an instance of the problem.
  <post-cond>: optSol is one of the optimal solutions for the instance I, and optCost is its cost.
begin
  if( I is small ) then
    return( brute force answer )
  else
• 48. 48     % Deal with the first decision by trying each of the K possibilities.
    for k = 1 to K
      % Temporarily commit to making the first decision in the kth way.
      <optSolk, optCostk> = Recursively deal with all the remaining decisions, and in doing so find the best solution optSolk for our instance I from among those consistent with this kth way of making the first decision. optCostk is its cost.
    end for
    % Having the best, optSolk, for each possibility k, we keep the best of these best.
    kmin = a k that minimizes optCostk
    optSol = optSolkmin
    optCost = optCostkmin
    return <optSol, optCost>
  end if
end algorithm

4.2 PRUNING BRANCHES:

DEFINITION: This recursive backtracking algorithm for SAT can be sped up. This can be viewed either globally as pruning entire branches off the classification tree, or locally as observing that some subinstances, once they have been sufficiently reduced, are trivial to solve. The following are typical reasons why an entire branch of the solution classification tree can be pruned off.

Explanation:

Invalid Solutions: Recall that in a recursive backtracking algorithm, the little bird tells the algorithm something about the solution, and then the algorithm recurses by asking a friend a question. This friend gets more information about the solution from his little bird, and so on. Hence, following a path down the recursion tree specifies more and more about a solution, until a leaf of the tree fully specifies one particular solution. Sometimes it happens that partway down the tree, the algorithm has already received enough information about the solution to determine that it
• 49. 49 contains a conflict or defect making any such solution invalid. The algorithm can stop recursing at this point and backtrack. This effectively prunes off the entire subtree of solutions rooted at this node in the tree.

Queens: Before we try placing a queen on the square (r + 1, k), we should check that this does not conflict with the locations of any of the queens already on (1, c1), (2, c2), . . . , (r, cr). If it does, we do not need to ask for help from this friend. Exercise 17.5.4 bounds the resulting running time.

Time Saved: The time savings can be huge. Reducing the number of recursive calls from two to one decreases the running time from Θ(N) to Θ(log N), and reducing the number of recursive calls from four to three decreases the running time from Θ(n^2) to Θ(n^1.58...).

No Highly Valued Solutions: Similarly, when the algorithm arrives at the root of a subtree, it might realize that no solution within this subtree is rated sufficiently high to be optimal, perhaps because the algorithm has already found a solution provably better than all of them. Again, the algorithm can prune this entire subtree from its search.

Greedy Algorithms: Greedy algorithms are effectively recursive backtracking algorithms with extreme pruning. Whenever the algorithm has a choice as to which little bird's answer to take, i.e., which path down the recursion tree to follow, instead of iterating through all of the options it takes only the one that looks best according to some greedy criterion.

Modifying Solutions: Let us recall why greedy algorithms are able to prune, so that we can use the same reasoning with recursive backtracking algorithms. In each step of a greedy algorithm, the algorithm commits to some decision about the solution. This effectively burns some of its bridges, because it eliminates some solutions from consideration. However, this is fine as long as it does not burn all its bridges.
The prover shows that there is an optimal solution consistent with the choices made, by modifying any possible solution that is not consistent with the latest choice into one that has at least as good a value and is consistent with this choice. Similarly, a recursive backtracking algorithm can prune off branches of its tree when it knows that doing so does not eliminate all remaining optimal solutions.

Queens: By symmetry, any solution that has the queen in the second half of the first row can be modified into one that has this queen in the first half, simply by flipping the solution left to right. Hence, when placing a queen in the first row, there is no need to try placing it in the second half of the row.

Depth-First Search: Recursive depth-first search is a recursive backtracking algorithm. A solution to the optimization problem of searching a maze for cheese is a path in the graph starting from s; the value of a solution is the weight of the node at the end of the path. The algorithm marks the nodes it has visited. Then, when the algorithm revisits a node, it knows it can prune this subtree of the recursive search, because any node reachable from the current node has already been reached. For example, the path (s, c, u, v) is pruned because it can be modified into the path (s, b, u, v), which is just as good.
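The left-right symmetry pruning for the queens puzzle can be sketched as follows (a hedged Python sketch of my own; the counting helper and all names are mine). The first-row queen is tried only in the left half of the board; each solution found there is counted twice, since its mirror image has the queen in the right half. For odd boards the middle column is handled separately, because mirroring maps it to itself.

```python
# Sketch: count n-queens solutions with and without symmetry pruning
# of the first row.

def count_queens(n, first_row_cols):
    """Count solutions whose first-row queen lies in first_row_cols."""
    def count(cols):
        r = len(cols)
        if r == n:
            return 1
        allowed = first_row_cols if r == 0 else range(n)
        total = 0
        for c in allowed:
            # standard conflict check: same column or same diagonal
            if all(c != cc and abs(c - cc) != r - rr
                   for rr, cc in enumerate(cols)):
                total += count(cols + [c])
        return total
    return count([])

def count_queens_with_symmetry(n):
    # Explore only the left half of the first row; mirror doubles it.
    half = count_queens(n, range(n // 2))
    middle = count_queens(n, [n // 2]) if n % 2 == 1 else 0
    return 2 * half + middle
```

The pruned search explores roughly half the tree yet, as the modifying-solutions argument guarantees, returns the same count as the unpruned search.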
• 50. 50 4.3 SATISFIABILITY:

DEFINITION: A famous optimization problem is called satisfiability, or SAT for short. It is one of the basic problems arising in many fields. The recursive backtracking algorithm given here is referred to as the Davis–Putnam algorithm. It is an example of an algorithm whose running time is exponential on worst case inputs, yet in many practical situations works well. This algorithm is one of the basic algorithms underlying automated theorem proving and robot path planning, among other things.

EXPLANATION: The Satisfiability Problem:

Instances: An instance (input) consists of a set of constraints on the assignment to the binary variables x1, x2, . . . , xn. A typical constraint might be (x1 or not x3 or x8), meaning (x1 = 1 or x3 = 0 or x8 = 1), or equivalently that either x1 is true, x3 is false, or x8 is true. More generally, an instance could be a more general circuit built with AND, OR, and NOT gates.

Solutions: Each of the 2^n assignments is a possible solution. An assignment is valid for the given instance if it satisfies all of the constraints.

Measure of Success: An assignment is assigned the value one if it satisfies all of the constraints, and the value zero otherwise.

Goal: Given the constraints, the goal is to find a satisfying assignment.

ALGORITHM: Code:

algorithm DavisPutnam(c)
  <pre-cond>: c is a set of constraints on the assignment to x1, . . . , xn.
  <post-cond>: If possible, optSol is a satisfying assignment and optCost is one. Otherwise optCost is zero.
begin
  if( c has no constraints or no variables ) then
    % c is trivially satisfiable.
    return <∅, 1>
  else if( c has both a constraint forcing a variable xi to 0
• 51. 51     and one forcing the same variable to 1 ) then
    % c is trivially not satisfiable.
    return <∅, 0>
  else
    for any variable forced by a constraint to some value, substitute this value into c
    let xi be the variable that appears most often in c
    % Loop over the possible bird answers.
    for k = 0 to 1 (unless a satisfying solution has been found)
      % Get help from friend.
      let c' be the constraints c with k substituted in for xi
      <optSubSol, optSubCost> = DavisPutnam(c')
      optSolk = (forced values, xi = k, optSubSol)
      optCostk = optSubCost
    end for
    % Take the best bird answer.
    kmax = a k that maximizes optCostk
    optSol = optSolkmax
    optCost = optCostkmax
    return <optSol, optCost>
  end if
end algorithm
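The Davis–Putnam style search above can be sketched in Python. This is a hedged, simplified sketch of my own, not the book's algorithm verbatim: a constraint (clause) is a dict mapping a variable name to the value that would satisfy it, forced values are handled implicitly by the clause reduction, and all names are mine.

```python
from collections import Counter

def davis_putnam(clauses, assignment=None):
    """Return a satisfying assignment (dict) or None if unsatisfiable."""
    assignment = dict(assignment or {})

    # Reduce: drop satisfied clauses; fail on clauses the assignment
    # has falsified entirely (this is the prune-and-backtrack point).
    remaining = []
    for clause in clauses:
        if any(assignment.get(v) == val for v, val in clause.items()):
            continue                       # clause already satisfied
        live = {v: val for v, val in clause.items() if v not in assignment}
        if not live:
            return None                    # clause falsified: backtrack
        remaining.append(live)

    if not remaining:
        return assignment                  # every clause is satisfied

    # Branch on the variable appearing most often, trying both values.
    counts = Counter(v for clause in remaining for v in clause)
    x = counts.most_common(1)[0][0]
    for k in (0, 1):
        result = davis_putnam(remaining, {**assignment, x: k})
        if result is not None:
            return result                  # stop at the first success
    return None                            # neither value of x works
```

As in the text, the search is exponential in the worst case, but the early backtracking on falsified clauses keeps it fast on many practical instances.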