The course concentrates on algorithmic problems present in computer games. The aim of the course is to review common solution methods, analyse their usability, and describe possible improvements. The topics cover, among other things, random numbers, game trees, path finding, terrain generation, and decision-making for synthetic players.
2. Course syllabus
- credits: 5 cp (3 cu)
- prerequisites
  - fundamentals of algorithms and data structures (e.g., Cormen et al., Introduction to Algorithms)
  - knowledge of programming (e.g., with Java)
- assessment
  - examination only (no exercises)
4. Examinations 1(2)
- examination dates (to be confirmed)
  1. ?? (possibly October 2009)
  2. ?? (possibly November 2009)
  3. ?? (possibly January 2010)
- check the exact times and places at http://www.it.utu.fi/opetus/tentit/
- remember to enrol! https://ssl.utu.fi/nettiopsu/
5. Examinations 2(2)
- questions
  - based on both the lectures and the textbook
  - two questions, 5 points each
  - to pass the examination, at least 5 points (50%) are required
  - grade: g = ⌈p − 5⌉
  - questions are in English, but you can answer in English or in Finnish
10. In the beginning...
"If, when walking down the halls of MIT, you should happen to hear strange cries of 'No! No! Turn! Fire! ARRRGGGHHH!!,' do not be alarmed. Another western is not being filmed—MIT students and others are merely participating in a new sport, SPACEWAR!"
— D. J. Edwards & J. M. Graetz, "PDP-1 Plays at Spacewar", Decuscope, 1(1):2–4, April 1962
11. ...and then
- 1962: Spacewar
- 1971: Nutting: Computer Space
- 1972: Atari: Pong
- 1978: Midway: Space Invaders
- 1979: Roy Trubshaw: MUD
- 1980: Namco: Pac-Man
- 1981: Nintendo: Donkey Kong
- 1983: Commodore 64
- 1985: Alexei Pajitnov: Tetris
- 1989: Nintendo Game Boy
- 1993: id Software: Doom
- 1994: Sony PlayStation
- 1997: Origin: Ultima Online
- 2001: Microsoft Xbox
- 2006: Nintendo Wii
12. Three academic perspectives to computer games
[Diagram: the GAME at the centre, approached from three perspectives.]
- humanistic perspective: game design
  - rules
  - graphics
  - animation
  - audio
- technical perspective: game programming
  - gfx & audio
  - simulation
  - networking
  - AI
- administrative/business perspective: software development
  - design patterns
  - architectures
  - testing
  - reuse
16. §1 Introduction
- definitions: play, game, computer game
- anatomy of computer games
- synthetic players
- multiplaying
- games and story-telling
- other game design considerations
18. Components of a game
- players: willing to participate for enjoyment, diversion or amusement
- rules: define the limits of the game
- goals: give a sense of purpose
- opponents: give rise to contest and rivalry
- representation: concretizes the game
19. Components, relationships and aspects of a game
[Diagram: the rules define the goal and correspond to the representation; the opponent obstructs the goal; the player's relationships to the goal, the opponent and the representation form the aspects CHALLENGE, CONFLICT and PLAY.]
20. Definition for 'computer game'
- a game that is carried out with the help of a computer program
- roles:
  - coordinating the game process
  - illustrating the situation
  - participating as a player
- → Model–View–Controller architectural pattern
22. Synthetic players
- synthetic player = computer-generated actor in the game
  - displays human-like features
  - has a stance towards the human player
- games are anthropocentric!
23. Humanness
- human traits and characteristics
  - fear and panic (Half-Life, Halo)
- computer game comprising only synthetic players
  - semi-autonomous actors (The Sims)
  - fully autonomous actors (Core War, AOE2)
25. Enemy
- provides challenge
  - the opponent must demonstrate intelligent (or at least purposeful) behaviour
- cheating
  - quick-and-dirty methods
  - when the human player cannot observe the enemy's actions
26. Ally
- augmenting the user interface
  - hints and guides
- aiding the human player
  - reconnaissance officer
  - teammate, wingman
- should observe the human point of view
  - provide information in an accessible format
  - consistency of actions
27. Neutral
- commentator
  - highlighting events and providing background information
- camera director
  - choosing camera views, angles and cuts
- referee
  - judging the rule violations
- should observe the context and conventions
28. Studying synthetic players: AIsHockey
- simplified ice hockey:
  - official IIHF rules
  - realistic measures and weights
  - Newtonian physics engine
- distributed system
  - client/server architecture
- implemented with Java
  - source code available (under a BSD licence)
29. Example: MyAI.java

import fi.utu.cs.hockey.ai.*;

public class MyAI extends AI implements Constants {
    public void react() {
        if (isPuckWithinReach()) {
            head(headingTo(0.0, THEIR_GOAL_LINE));
            brake(0.5);
            shoot(1.0);
            say(1050L);
        } else {
            head(headingTo(puck()));
            dash(1.0);
        }
    }
}
30. Try it yourself!
- challenge: implement a team of autonomous, collaborating synthetic players
- the platform and ready-to-use teams are available at: http://www.iki.fi/smed/aishockey
31. Multiplaying
- multiple human players sharing the same game
- methods:
  - divide the screen
  - divide the playtime
  - networking
All this and more in the follow-up course Multiplayer Computer Games, starting October 27, 2009.
32. Games and story-telling
- traditional, linear story-telling
  - events remain (almost) unchangeable from one telling to the next
  - books, theatre, cinema
  - participant (reader, watcher) is passive
- interactive story-telling
  - events change and adapt to the choices the participant makes
  - computer games
  - participant (player) is active
33. A story is always told to human beings
- story-telling is not about actions but the reasons for actions
  - humans use a story (i.e., a narrative) to understand intentional behaviour
  - how can we model and generate this?
- story-telling is about humans
  - humans humanize the characters' behaviour and understand the story through themselves
  - how can we model and generate this?
All this and more in the course Interactive Storytelling, lectured in Autumn 2010.
34. Other game design considerations
- customization
- tutorial
- profiles
- modification
- replaying
→ parameterization!
35. §2 Random Numbers
- what is randomness?
- linear congruential method
  - parameter choices
  - testing
- random shuffling
- uses in computer games
36. What are random numbers good for (according to D. E. Knuth)
- simulation
- sampling
- numerical analysis
- computer programming
- decision-making
- aesthetics
- recreation
37. Random numbers?
- there is no such thing as a 'random number'
  - is 42 a random number?
- definition: a sequence of statistically independent random numbers with a uniform distribution
  - the numbers are obtained by chance
  - they have nothing to do with the other numbers in the sequence
- uniform distribution: each possible number is equally probable
38. Methods
- random selection
  - drawing balls out of a 'well-stirred urn'
- tables of random digits
  - decimals from π
- generating data
  - white noise generators
  - cosmic background radiation
- computer programs?
39. Generating random numbers with arithmetic operations
- von Neumann (ca. 1946): middle square method
  - take the square of the previous number and extract the middle digits
- example: four-digit numbers
  - r_i = 8269
  - r_{i+1} = 3763 (r_i² = 68376361)
  - r_{i+2} = 1601 (r_{i+1}² = 14160169)
  - r_{i+3} = 5632 (r_{i+2}² = 2563201)
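The example above can be reproduced with a small sketch (the class and method names are my own; the slide only describes the extraction rule):

```java
public class MiddleSquare {
    // Square the previous four-digit number and extract the middle
    // four digits of the (zero-padded) eight-digit square.
    static int next(int r) {
        long square = (long) r * r;
        String padded = String.format("%08d", square);
        return Integer.parseInt(padded.substring(2, 6));
    }

    public static void main(String[] args) {
        int r = 8269;
        for (int i = 0; i < 3; i++) {
            r = next(r);
            System.out.println(r); // 3763, 1601, 5632 as on the slide
        }
    }
}
```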
40. Truly random numbers?
- each number is completely determined by its predecessor!
- the sequence is not random but appears to be
  → pseudo-random numbers
- all random number generators based on arithmetic operations have their own in-built characteristic regularities
- hence, testing and analysis are required
41. Middle square (revisited)
- another example:
  - r_i = 6100
  - r_{i+1} = 2100 (r_i² = 37210000)
  - r_{i+2} = 4100 (r_{i+1}² = 4410000)
  - r_{i+3} = 8100 (r_{i+2}² = 16810000)
  - r_{i+4} = 6100 = r_i (r_{i+3}² = 65610000)
- how to counteract?
42. Words of the wise
- 'random numbers should not be generated with a method chosen at random'
  — D. E. Knuth
- 'Any one who considers arithmetical methods of producing random digits is, of course, in a state of sin.'
  — J. von Neumann
45. Words of the more (or less) wise
- 'We guarantee that each number is random individually, but we don't guarantee that more than one of them is random.'
  — anonymous computer centre's programming consultant (quoted in Numerical Recipes in C)
46. Other concerns
- speed of the algorithm
- ease of implementation
- parallelization techniques
- portable implementations
47. Linear congruential method
- D. H. Lehmer (1949)
- choose four integers
  - modulus: m (0 < m)
  - multiplier: a (0 ≤ a < m)
  - increment: c (0 ≤ c < m)
  - starting value (or seed): X_0 (0 ≤ X_0 < m)
- obtain a sequence ⟨X_n⟩ by setting
  X_{n+1} = (a X_n + c) mod m  (n ≥ 0)
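As a sketch, the recurrence with one well-known parameter choice (the 'minimal standard' m = 2^31 − 1, a = 16807, c = 0; the slide itself does not prescribe any particular values):

```java
public class Lcg {
    // X_{n+1} = (a * X_n + c) mod m; the parameters below are the
    // classic Park-Miller choice with a prime modulus, not the only option.
    static final long M = 2147483647L; // 2^31 - 1
    static final long A = 16807L;
    static final long C = 0L;

    private long x; // current X_n

    Lcg(long seed) { this.x = seed; }

    long next() {
        x = (A * x + C) % M;
        return x;
    }

    // U_n = X_n / m gives floating point numbers in [0, 1)
    double nextUnit() { return (double) next() / M; }
}
```

The same seed always reproduces the same sequence, which is exactly the property exploited by the game world compression slides later on.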
48. Linear congruential method (cont'd)
- let b = a − 1
- generalization:
  X_{n+k} = (a^k X_n + (a^k − 1) c / b) mod m  (k ≥ 0, n ≥ 0)
- random floating point numbers U_n ∈ [0, 1):
  U_n = X_n / m
50. Random integers from a given interval
- Monte Carlo methods
  - approximate solution
  - accuracy can be improved at the cost of running time
- Las Vegas methods
  - exact solution
  - termination is not guaranteed
- Sherwood methods
  - exact solution, termination guaranteed
  - reduce the difference between good and bad inputs
53. Choice of modulus m
- the sequence of random numbers is finite → period (repeating cycle)
- the period has at most m elements → the modulus should be large
- recommendation: m is a prime
- fast reduction modulo m: m is a power of 2
  - m = 2^i: x mod m = x AND (2^i − 1)
54. Choice of multiplier a
- aim: a period of maximum length
  - a = c = 1: X_{n+1} = (X_n + 1) mod m
  - hardly random: ..., 0, 1, 2, ..., m − 1, 0, 1, 2, ...
- results from Theorem 2.1.1
  - if m is a product of distinct primes, only a = 1 produces the full period
  - if m is divisible by a high power of some prime, there is latitude when choosing a
- rules of thumb
  - 0.01m < a < 0.99m
  - no simple, regular bit patterns in the binary representation
55. Choice of increment c
- no common factor with m
  - c = 1
  - c = a
- if c = 0, the addition operation can be eliminated
  - faster processing
  - but the period length decreases
56. Choice of starting value X_0
- determines from where in the sequence the numbers are taken
- to guarantee randomness, initialize from a varying source
  - built-in clock of the computer
  - last value from the previous run
- using the same value allows repeating the sequence
57. Tests for randomness 1(2)
- Frequency test
- Serial test
- Gap test
- Poker test
- Coupon collector's test
58. Tests for randomness 2(2)
- Permutation test
- Run test
- Collision test
- Birthday spacings test
- Spectral test
59. Spectral test
- good generators will pass it; bad generators are likely to fail it
- idea:
  - let the length of the period be m
  - take t consecutive numbers
  - construct a set of t-dimensional points:
    { (X_n, X_{n+1}, ..., X_{n+t−1}) | 0 ≤ n < m }
- when t increases, the periodic accuracy decreases
  - a truly random sequence would retain the accuracy
63. Random shuffling
- generate a random permutation, where all permutations have a uniform random distribution
- shuffling ≈ inverse sorting (!)
- ordered set S = ⟨s_1, ..., s_n⟩ to be shuffled
- naïve solution
  - enumerate all possible n! permutations
  - generate a random integer in [1, n!] and select the corresponding permutation
  - practical only when n is small
64. Random sampling without replacement
- guarantees that the distribution of permutations is uniform
  - every element has a probability 1/n of being selected into the first position
  - subsequent positions are filled with the remaining n − 1 elements
  - because the selections are independent, the probability of any generated ordered set is
    1/n · 1/(n − 1) · 1/(n − 2) · ... · 1/1 = 1/n!
  - there are exactly n! possible permutations
    → the generated ordered sets have a uniform distribution
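This sampling scheme is what the Fisher-Yates shuffle implements in place; a minimal sketch:

```java
import java.util.Random;

public class Shuffle {
    // Fill each position with an element drawn uniformly from those
    // not yet placed; every one of the n! orderings is equally likely.
    static void shuffle(int[] s, Random rnd) {
        for (int i = s.length - 1; i > 0; i--) {
            int j = rnd.nextInt(i + 1); // uniform index in [0, i]
            int tmp = s[i];
            s[i] = s[j];
            s[j] = tmp;
        }
    }
}
```

Each swap selects one of the remaining i + 1 elements for position i, so the probabilities multiply to 1/n! exactly as in the derivation above.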
72. Random numbers in games
- terrain generation
- events
- character creation
- decision-making
- game world compression
- synchronized simulation
73. Game world compression
- used in Elite (1984)
- finite and discrete galaxy
- enumerate the positions
- set the seed value
- generate a random value for each position
  - if it is smaller than a given density, create a star
  - otherwise, the space is void
- each star is associated with a randomly generated number, which is used as a seed when creating the star system details (name, composition, planets)
- can be hierarchically extended
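The steps above can be sketched as follows (the density value and the fixed-size galaxy are illustrative assumptions; Elite's actual generator differs in detail):

```java
import java.util.Random;

public class Galaxy {
    // Regenerate the same galaxy from a single seed instead of storing
    // it: one random value per enumerated position decides whether the
    // position holds a star.
    static boolean[] stars(long seed, int positions, double density) {
        Random rnd = new Random(seed); // same seed -> same galaxy
        boolean[] star = new boolean[positions];
        for (int p = 0; p < positions; p++) {
            star[p] = rnd.nextDouble() < density; // below density -> star
        }
        return star;
    }
}
```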
78. Random game world generation
- discrete game worlds
  - examples: Nethack, Age of Empires
  - rooms, passages, item placements
- continuous game worlds
  - a purely random world is not believable
  - modular segments put together randomly
- terrain generation
99. §3 Tournaments
- rank adjustment (or challenge) tournament
  - each match is a challenge for a rank exchange
  - types: ladder, hill climbing, pyramid, king of the hill
- elimination tournament (or cup)
  - each match eliminates the loser from the tournament
  - types: single elimination
- scoring tournament
  - each match rewards the winner
  - types: round robin
- hybridizations
100. Other uses
- game balancing
  - duelling synthetic players
  - adjusting point rewarding schemes
- heuristic search
  - selecting suboptimal candidates for a genetic algorithm
- group behaviour
  - modelling the pecking order
- learning player characteristics
  - managing history knowledge
101. Example: Hill climbing tournament
[Bracket: seven players (Juhani, Tuomas, Aapo, Simeoni, Timo, Lauri, Eero) play the successive matches m0–m5; the winner of each match meets the next challenger.]
104. Terms
- players: p_0, ..., p_{n−1}
- match between p_i and p_j: match(i, j)
- outcome: WIN, LOSE, TIE
- rank of p_i: rank(i)
- players with the rank r: rankeds(r)
- round: a set of (possibly) concurrent matches
- bracket: diagram of match pairings and rounds
105. Rank adjustment tournaments
- a set of already ranked players
- matches
  - independent from one another
  - the outcome affects only the participating players
- suits on-going tournaments
  - example: boxing
- matches can be limited by the rank difference
108. Hill-climbing tournament
- a.k.a.
  - top-of-the-mountain tournament
  - last man standing tournament
- specialization of the ladder tournament
  - the reigning champion defends the title against challengers
  - similarly, the king of the hill tournament specializes the pyramid tournament
- initialization
  - based on previous competitions
  - random
110. Elimination tournaments
- the loser of a match is eliminated from the tournament
  - no ties! → tiebreak competition
- the winner of a match continues to the next round
- how to assign pairings for the first round?
  - seeding
- examples
  - football cups, snooker tournaments
113. Seeding
- some match pairings will not occur in a single elimination tournament
- the pairings for the first round (i.e., the seeding) affect the future pairings
- seeding can be based on an existing ranking
  - favour the top-ranked players
  - reachability: give the best players an equal opportunity to proceed to the final rounds
114. Seeding methods
- random
  - does not favour any player
  - does not fulfil the reachability criterion
- standard and ordered standard
  - favours the top-ranked players
  - ordered standard: matches are listed in increasing order
- equitable
  - in the first round, the rank difference between the players is the same for each match
116. Byes and fairness
- the byes have the bottom ranks, so that they get paired with the best players
- the byes appear only in the first round
117. Runners-up
- we find only the champion
  - how to determine the runners-up (e.g. silver and bronze medallists)?
- random pairing can reduce the effect of seeding
  - the best players are put into different sub-brackets
  - the rest are seeded randomly
- re-seed the players before each round
  - previous matches indicate the current position
- multiple matches per round (best-of-m)
118. Double elimination tournament
- two brackets
  - winners' bracket
  - losers' (or consolation) bracket
- initially everyone is in the winners' bracket
  - if a player loses, he is moved to the losers' bracket
  - if he loses again, he is out of the tournament
- the brackets are combined at some point
  - for example, the champion of the losers' bracket gets to the semifinal in the winners' bracket
119. Scoring tournaments
- round robin: everybody meets everybody else once
- a scoring table determines the tournament winner
  - players are rewarded with scoring points for a win and for a tie
- matches are independent from one another
120. Reduction to a graph
- players → clique K_n
  - players as vertices, matches as edges
- how to organize the rounds?
  - a player has at most one match in a round
  - a round has as many matches as possible
[Figure: the clique K_5]
121. Reduction to a graph (cont'd)
- if n is odd, partition the edges of the clique to (n − 1)/2 disjoint sets
  - in each round, one player is resting
  - player p_i rests in the round i
- if n is even, reduce the problem
  - player p_{n−1} is taken out from the clique
  - solve the pairings for n − 1 players as above
  - for each round, pair the resting player p_i with player p_{n−1}
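A sketch of this construction (for the odd core of m players, p_i rests in round i, and players a and b meet in round r when a + b ≡ 2r (mod m); the class name is my own):

```java
import java.util.ArrayList;
import java.util.List;

public class RoundRobin {
    // Rounds of a round robin as on the slide: an odd-sized core is
    // scheduled by rotation, and for even n the extracted player
    // p_{n-1} is paired with each round's resting player.
    static List<List<int[]>> rounds(int n) {
        int m = (n % 2 == 0) ? n - 1 : n; // size of the odd core
        List<List<int[]>> rounds = new ArrayList<>();
        for (int r = 0; r < m; r++) {
            List<int[]> matches = new ArrayList<>();
            for (int k = 1; k <= (m - 1) / 2; k++) {
                int a = Math.floorMod(r + k, m);
                int b = Math.floorMod(r - k, m);
                matches.add(new int[]{a, b}); // a + b = 2r (mod m)
            }
            if (n % 2 == 0) {
                matches.add(new int[]{r, n - 1}); // resting p_r vs p_{n-1}
            }
            rounds.add(matches);
        }
        return rounds;
    }
}
```

For n = 6 this yields 5 rounds of 3 concurrent matches, i.e. all 15 pairings exactly once.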
125. Real-world tournament examples
- boxing
  - reigning champion and challengers
- sport wrestling
  - double elimination: consolation bracket
- professional wrestling
  - royal rumble
- World Cup
- ice hockey championship
- snooker
126. Characteristics of tournaments (n = 15)

tournament          matches  rounds  champion's matches  matches in a round
hill climbing       14       14      1...14              1
king of the hill    14       6       1...6               1...4
single elimination  14       4       3...4               1...7
round robin         105      15      14                  7
128. §4 Game Trees
- perfect information games
  - no hidden information
  - two-player, perfect information games
    - Noughts and Crosses
    - Chess
    - Go
- imperfect information games
  - Poker
  - Backgammon
  - Monopoly
- zero-sum property
  - one player's gain equals the other player's loss
129. Game tree
- all possible plays of a two-player, perfect information game can be represented with a game tree
  - nodes: positions (or states)
  - edges: moves
- players: MAX (has the first move) and MIN
- ply = the length of the path between two nodes
  - MAX has the even plies counting from the root node
  - MIN has the odd plies counting from the root node
132. Problem statement
Given a node v in a game tree, find a winning strategy for MAX (or MIN) from v, or (equivalently) show that MAX (or MIN) can force a win from v.
133. Minimax
- assumption: players are rational and try to win
- given a game tree, we know the outcome in the leaves
  - assign the leaves to win, draw, or loss (or a numeric value like +1, 0, −1) according to MAX's point of view
- at nodes one ply above the leaves, we choose the best outcome among the children (which are leaves)
  - MAX: win if possible; otherwise, draw if possible; else loss
  - MIN: loss if possible; otherwise, draw if possible; else win
- recurse through the nodes up to the root
134. Minimax rules
1. If the node is labelled to MAX, assign it the maximum value of its children.
2. If the node is labelled to MIN, assign it the minimum value of its children.
- MIN minimizes, MAX maximizes → minimax
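The two rules translate directly into a recursion over an explicit game tree (a minimal sketch; the Node representation is my own):

```java
public class Minimax {
    // A node is a leaf (value from MAX's point of view) or an
    // internal node with children.
    static class Node {
        final int value;
        final Node[] children;
        Node(int value) { this.value = value; this.children = new Node[0]; }
        Node(Node... children) { this.value = 0; this.children = children; }
    }

    // Rule 1: a MAX node takes the maximum of its children's values;
    // rule 2: a MIN node takes the minimum.
    static int minimax(Node v, boolean maxToMove) {
        if (v.children.length == 0) return v.value;
        int best = maxToMove ? Integer.MIN_VALUE : Integer.MAX_VALUE;
        for (Node c : v.children) {
            int s = minimax(c, !maxToMove);
            best = maxToMove ? Math.max(best, s) : Math.min(best, s);
        }
        return best;
    }
}
```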
138. Analysis
- simplifying assumptions
  - internal nodes have the same branching factor b
  - the game tree is searched to a fixed depth d
- time consumption is proportional to the number of expanded nodes
  - 1 — root node (the initial ply)
  - b — nodes in the first ply
  - b² — nodes in the second ply
  - b^d — nodes in the dth ply
- overall running time O(b^d)
139. Rough estimates on running times when d = 5
- suppose expanding a node takes 1 ms
- the branching factor b depends on the game
  - Draughts (b ≈ 3): t = 0.243 s
  - Chess (b ≈ 30): t = 6¾ h
  - Go (b ≈ 300): t = 77 years
- alpha-beta pruning reduces b
140. Controlling the search depth
- usually the whole game tree is too large
  → limit the search depth
  → a partial game tree
  → partial minimax
- n-move look-ahead strategy
  - stop searching after n moves
  - make the internal nodes (i.e., frontier nodes) leaves
  - use an evaluation function to 'guess' the outcome
141. Evaluation function
- a combination of numerical measurements m_i(s, p) of the game state
  - single measurement: m_i(s, p)
  - difference measurement: m_i(s, p) − m_j(s, q)
  - ratio of measurements: m_i(s, p) / m_j(s, q)
- aggregate the measurements maintaining the zero-sum property
142. Example: Noughts and Crosses
- heuristic evaluation function e:
  - count the winning lines open to MAX
  - subtract the number of winning lines open to MIN
- forced wins
  - the state is evaluated +∞ if it is a forced win for MAX
  - the state is evaluated −∞ if it is a forced win for MIN
144. Drawbacks of partial minimax
- horizon effect
  - a heuristically promising path can lead to an unfavourable situation
  - staged search: extend the search on promising nodes
  - iterative deepening: increase n until out of memory or time
  - phase-related search: opening, midgame, end game
  - however, the horizon effect cannot be totally eliminated
- bias
  - we want to have an estimate of minimax but get a minimax of estimates
  - distortion in the root: odd plies → win, even plies → loss
145. The deeper the better...?
- assumptions:
  - n-move look-ahead
  - branching factor b, depth d
  - leaves with a uniform random distribution
- minimax convergence theorem:
  - n increases → the root value converges to f(b, d)
- last player theorem:
  - root values from odd and even plies are not comparable
- minimax pathology theorem:
  - n increases → the probability of selecting a non-optimal move increases (← uniformity assumption!)
146. Alpha-beta pruning
- reduces the branching factor of nodes
- alpha value
  - associated with MAX nodes
  - represents the worst outcome MAX can achieve
  - can never decrease
- beta value
  - associated with MIN nodes
  - represents the worst outcome MIN can achieve
  - can never increase
147. Example
- in a MAX node, α = 4
  - we know that MAX can make a move which will result in at least the value 4
  - we can omit children whose value is less than or equal to 4
- in a MIN node, β = 4
  - we know that MIN can make a move which will result in at most the value 4
  - we can omit children whose value is greater than or equal to 4
149. Rules of pruning
1.
2.
Prune below any MIN node having a beta value
less than or equal to the alpha value of any of
its MAX ancestors.
Prune below any MAX node having an alpha
value greater than or equal to the beta value of
any of its MIN ancestors
Or, simply put: If α ≥ β, then prune below!
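The pruning rules amount to one extra test in the minimax recursion (a minimal sketch using the same explicit-tree representation as the minimax example):

```java
public class AlphaBeta {
    static class Node {
        final int value;
        final Node[] children;
        Node(int value) { this.value = value; this.children = new Node[0]; }
        Node(Node... children) { this.value = 0; this.children = children; }
    }

    // Stop expanding a node's children as soon as alpha >= beta:
    // the remaining children cannot affect the result.
    static int search(Node v, int alpha, int beta, boolean maxToMove) {
        if (v.children.length == 0) return v.value;
        for (Node c : v.children) {
            int s = search(c, alpha, beta, !maxToMove);
            if (maxToMove) alpha = Math.max(alpha, s);
            else beta = Math.min(beta, s);
            if (alpha >= beta) break; // prune below!
        }
        return maxToMove ? alpha : beta;
    }
}
```

Called with the full window (α = −∞, β = +∞), the result equals the plain minimax value.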
155. Best-case analysis
- omit the principal variation
- at depth d − 1, optimum pruning: each node expands one child at depth d
- at depth d − 2, no pruning: each node expands all children at depth d − 1
- at depth d − 3, optimum pruning
- at depth d − 4, no pruning, etc.
- total amount of expanded nodes: Ω(b^{d/2})
156. Principal variation search
- the alpha-beta range should be small
  - limit the range artificially → aspiration search
  - if the search fails, revert to the original range
- if we find a move between α and β, assume we have found a principal variation node
  - search the rest of the nodes assuming they will not produce a good move
  - if the assumption fails, re-search the node
- works well if the principal variation node is likely to get selected first
157. Games of chance
- minimax trees assume deterministic moves
  - what about indeterministic events like tossing a coin, casting a die or shuffling cards?
- chance nodes: *-minimax tree
- expectiminimax
  - if node v is labelled to CHANCE, multiply the probability of each child with its expectiminimax value and return the sum over all v's children
  - otherwise, act as in minimax
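The CHANCE rule can be sketched as one extra case in the recursion (the Node encoding is my own; only the weighted-sum rule comes from the slide):

```java
public class Expectiminimax {
    enum Kind { MAX, MIN, CHANCE, LEAF }

    static class Node {
        final Kind kind;
        final double value;   // used by LEAF nodes
        final double[] probs; // child probabilities of a CHANCE node
        final Node[] children;
        Node(double value) { this(Kind.LEAF, value, null, new Node[0]); }
        Node(Kind kind, double[] probs, Node... children) {
            this(kind, 0.0, probs, children);
        }
        private Node(Kind kind, double value, double[] probs, Node[] children) {
            this.kind = kind; this.value = value;
            this.probs = probs; this.children = children;
        }
    }

    static double search(Node v) {
        switch (v.kind) {
            case LEAF:
                return v.value;
            case CHANCE: {
                // probability-weighted sum over all of v's children
                double sum = 0.0;
                for (int i = 0; i < v.children.length; i++)
                    sum += v.probs[i] * search(v.children[i]);
                return sum;
            }
            case MAX: {
                double best = Double.NEGATIVE_INFINITY;
                for (Node c : v.children) best = Math.max(best, search(c));
                return best;
            }
            default: { // MIN
                double best = Double.POSITIVE_INFINITY;
                for (Node c : v.children) best = Math.min(best, search(c));
                return best;
            }
        }
    }
}
```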
158. §5 Path Finding
- a common problem in computer games
  - routing characters, troops etc.
- a computationally intensive problem
  - complex game worlds
  - high number of entities
  - dynamically changing environments
  - real-time response
159. Problem statement
- given a start point s and a goal point r, find a path from s to r minimizing a given criterion
- search problem formulation
  - find a path that minimizes the cost
- optimization problem formulation
  - minimize the cost subject to the constraint of the path
160. The three phases of path finding
1. discretize the game world
   - select the waypoints and connections
2. solve the path finding problem in a graph
   - let waypoints = vertices, connections = edges, costs = weights
   - find a minimum path in the graph
3. realize the movement in the game world
   - aesthetic concerns
   - user-interface concerns
164. Discretization
- waypoints (vertices)
  - doorways, corners, obstacles, tunnels, passages, ...
- connections (edges)
  - based on the game world geometry: are two waypoints connected?
- costs (weights)
  - distance, environment type, difference in altitude, ...
- manual or automatic process?
  - grids, navigation meshes
168. Navigation mesh
- convex partitioning of the game world geometry
  - convex polygons covering the game world
  - adjacent polygons share only two points and one edge
  - no overlapping
- polygon = waypoint
  - middle points, centres of edges
- adjacent polygons = connections
170. Solving the convex partitioning problem
- minimize the number of polygons
  - points: n
  - points with a concave interior angle (notches): r ≤ n − 3
- optimal solution
  - dynamic programming: O(r²n log n)
- Hertel–Mehlhorn heuristic
  - number of polygons ≤ 4 × optimum
  - running time: O(n + r log r)
  - requires triangulation
    - running time: O(n) (at least in theory)
    - Seidel's algorithm: O(n lg* n) (also in practice)
173. Path finding in a graph
- after discretization, form a graph G = (V, E)
  - waypoints = vertices (V)
  - connections = edges (E)
  - costs = weights of edges (weight: E → R+)
- next, find a path in the graph
175. Heuristic improvements
- best-first search
  - order the vertices in the neighbourhood according to a heuristic estimate of their closeness to the goal
  - returns an optimal solution
- beam search
  - order the vertices but expand only the most promising candidates
  - can return a suboptimal solution
177. Evaluation function
- expand the vertex minimizing
  f(v) = g(s ~> v) + h(v ~> r)
- g(s ~> v) estimates the minimum cost from the start vertex to v
- h(v ~> r) estimates (heuristically) the cost from v to the goal vertex
- if we had the exact evaluation function f*, we could solve the problem without expanding any unnecessary vertices
178. Cost function g
- actual cost from s to v along the cheapest path found so far
  - exact cost if G is a tree
  - can never underestimate the cost if G is a general graph
- f(v) = g(s ~> v) and unit cost → breadth-first search
- f(v) = −g(s ~> v) and unit cost → depth-first search
179. Heuristic function h
- carries information from outside the graph
- defined for the problem domain
- the closer it is to the actual cost, the fewer superfluous vertices are expanded
- f(v) = g(s ~> v) → cheapest-first search
- f(v) = h(v ~> r) → best-first search
181. Admissibility
- let Algorithm A be a best-first search using the evaluation function f
- a search algorithm is admissible if it finds the minimal path (if it exists)
  - if f = f*, Algorithm A is admissible
- Algorithm A* = Algorithm A using an estimate function h
  - A* is admissible if h does not overestimate the actual cost
182. Monotonicity
- h is locally admissible → h is monotonic
- a monotonic heuristic is also admissible
- the actual cost is never less than the heuristic cost
  → f will never decrease
- monotonicity → A* finds the shortest path to any vertex the first time it is expanded
  - if a vertex is rediscovered, the path will not be shorter
  - simplifies the implementation
184. Informedness
- the more closely h approximates h*, the better A* performs
- if A_1 using h_1 never expands a vertex that is not also expanded by A_2 using h_2, then A_1 is more informed than A_2
- informedness → no other search strategy with the same amount of outside knowledge can do less work than A* and be sure of finding the optimal solution
185. Algorithm A*
- because of monotonicity
  - all weights must be positive
  - the closed list can be omitted
- the path is constructed from the mapping π starting from the goal vertex
  - s → ... → π(π(π(r))) → π(π(r)) → π(r) → r
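A compact sketch of A* on a grid discretization (4-neighbour moves of unit cost, Manhattan distance as the monotonic heuristic h; the encoding details are my own, not from the course material). The path is read back through the predecessor mapping π, as above:

```java
import java.util.*;

public class AStar {
    // Returns the grid cells of a shortest path from s to r, each as {y, x}.
    static List<int[]> find(boolean[][] blocked, int[] s, int[] r) {
        int h = blocked.length, w = blocked[0].length;
        int[][] g = new int[h][w];                 // cheapest cost found so far
        for (int[] row : g) Arrays.fill(row, Integer.MAX_VALUE);
        int[][] pi = new int[h][w];                // predecessor, encoded y * w + x
        for (int[] row : pi) Arrays.fill(row, -1);
        PriorityQueue<int[]> open =                // entries: {f, y, x}
            new PriorityQueue<>(Comparator.comparingInt(a -> a[0]));
        g[s[0]][s[1]] = 0;
        open.add(new int[]{heur(s, r), s[0], s[1]});
        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!open.isEmpty()) {
            int[] cur = open.poll();
            int y = cur[1], x = cur[2];
            if (y == r[0] && x == r[1]) break;     // goal expanded: done
            for (int[] m : moves) {
                int ny = y + m[0], nx = x + m[1];
                if (ny < 0 || ny >= h || nx < 0 || nx >= w || blocked[ny][nx])
                    continue;
                int ng = g[y][x] + 1;              // unit edge cost
                if (ng < g[ny][nx]) {
                    g[ny][nx] = ng;
                    pi[ny][nx] = y * w + x;
                    open.add(new int[]{ng + heur(new int[]{ny, nx}, r), ny, nx});
                }
            }
        }
        // walk pi from the goal back to the start
        List<int[]> path = new ArrayList<>();
        for (int v = r[0] * w + r[1]; v != -1; v = pi[v / w][v % w])
            path.add(0, new int[]{v / w, v % w});
        return path;
    }

    static int heur(int[] a, int[] b) {            // Manhattan distance
        return Math.abs(a[0] - b[0]) + Math.abs(a[1] - b[1]);
    }
}
```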
194. Practical considerations
- computing h
  - despite the extra vertices expanded, a less informed h may yield a computationally less intensive implementation
- suboptimal solutions
  - by allowing overestimation, A* becomes inadmissible, but the results may be good enough for practical purposes
195. Realizing the movement
- movement straight through the waypoints
  - unrealistic: does not follow the game world geometry
  - aesthetically displeasing: straight lines and sharp turns
- improvements
  - line-of-sight testing
  - obstacle avoidance
- combining path finding with the user interface
  - real-time response
198. Recapitulation
1. discretization of the game world
   - grid, navigation mesh
   - waypoints, connections, costs
2. path finding in a graph
   - Algorithm A*
3. realizing the movement
   - geometric corrections
   - aesthetic improvements
199. Alternatives?
- although this is the de facto approach in (commercial) computer games, are there alternatives?
- possible answers
  - AI processors (unrealistic?)
  - robotics: reactive agents (unintelligent?)
  - analytical approaches (inaccessible?)
200. §6 Decision-Making
- decision-making and games
  - levels of decision-making
  - modelled knowledge
  - methods
- example methods
  - finite state machines
  - flocking algorithms
  - influence maps
- this will not be a comprehensive guide to decision-making!
201. MVC (revisited)
[Diagram: the Model-View-Controller pattern in a game. The model holds the state instance (core structures) and the configuration (instance data, script, proto-view). The controller's control logic changes the state through actions, driven by the human player's input device or by a synthetic player. The view renders the state through an output device for the human player, while a synthetic view provides a synthetic player with perceptions and options.]
203. Three perspectives for decision-making in computer games
- level of decision-making
  - strategic, tactical, operational
- use of the modelled knowledge
  - prediction, production
- methods
  - optimization, adaptation
205. Strategic level
- long-term decisions
- infrequent → can be computed offline or in the background
- large amount of data, which is filtered to bring forth the essentials
  - quantization problem?
- speculative (what-if scenarios)
- the cost of a wrong decision is high
206. Tactical level
- medium-term decisions
- intermediary between the strategic and operational levels
  - follows the plan made on the strategic level
  - conveys the feedback from the operational level
- considers a group of entities
  - a selected set of data to be scrutinized
  - co-operation within the group
207. Operational level
- short-term decisions
  - reactive, real-time response
- concrete and closely connected to the game world
- considers individual entities
- the cost of a wrong decision is relatively low
  - of course, not to the entity itself
208. Use of the modelled knowledge
- time series data
- world = a generator of events and states, which can be labelled with symbols
- prediction
  - what will the generator produce next?
- production
  - simulating the output of the generator
- how to cope with uncertainty?
211. Decision-making methods
- optimization
  - find an optimal solution for a given objective function
  - the affecting factors can be modelled
- adaptation
  - find a function behind the given solutions
  - the affecting factors are unknown or dynamic
216. Finite state machine (FSM)
- components:
  - states
  - transitions
  - events
  - actions
- state chart: fully connected directed graph
  - vertices = states
  - edges = transitions
218. Properties of FSM
1.
acceptor
n
2.
transducer
n
3.
what is the corresponding output sequence for a
given input sequence?
computator
n
n
does the input sequence fulfil given criteria?
what is the sequence of actions for a given input
sequence?
these properties are independent!
220. Mealy and Moore machines
- theoretical categories for FSMs
- Mealy machine
  - actions are in transitions
  - the next action is determined by the current state and the occurring event
  - more compact but harder to comprehend
- Moore machine
  - actions are in states
  - the next action is determined by the next state
  - helps to understand and use state machines in UML
222. Implementation
- design by contract
  - two parties: the supplier and the client
  - formal agreement using interfaces
- FSM software components
  - environment: view to the FSM (client)
  - context: handles the dynamic aspects of the FSM (supplier)
  - structure: maintains the representation of the FSM (supplier)
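A minimal sketch of the context/structure split: the transition table is the structure, and the object holding the current state is the context. The states and events are illustrative assumptions, not taken from the course platform:

```java
import java.util.Collections;
import java.util.EnumMap;
import java.util.Map;

public class PatrolFsm {
    enum State { PATROL, CHASE, FLEE }
    enum Event { SEE_ENEMY, ENEMY_LOST, LOW_HEALTH }

    private State current = State.PATROL;                 // context
    private final Map<State, Map<Event, State>> transitions =
        new EnumMap<>(State.class);                       // structure

    PatrolFsm() {
        add(State.PATROL, Event.SEE_ENEMY, State.CHASE);
        add(State.CHASE, Event.ENEMY_LOST, State.PATROL);
        add(State.CHASE, Event.LOW_HEALTH, State.FLEE);
        add(State.FLEE, Event.ENEMY_LOST, State.PATROL);
    }

    private void add(State from, Event on, State to) {
        transitions.computeIfAbsent(from, k -> new EnumMap<>(Event.class))
                   .put(on, to);
    }

    // Events with no transition from the current state are ignored.
    State handle(Event e) {
        State next = transitions
            .getOrDefault(current, Collections.<Event, State>emptyMap())
            .get(e);
        if (next != null) current = next;
        return current;
    }
}
```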
224. Noteworthy
- the structure is static
  - a memoryless representation of all possible walks from the initial state
  - states are mutually exclusive: one state at a time
- reactivity
  - not for continuous or multivalued values
  - combinatorial explosion, if the states and events are independent
- hard to modify
  - risk of total rewriting
  - high cohesion of actions
225. Flocking
n C. W. Reynolds: “Flocks, herds, and schools: A distributed behavioral model” (1987)
n a flock seems to react as an autonomous entity although it is a collection of individual beings
n the flocking algorithm emulates this phenomenon
n results resemble various natural group movements
n boid = an autonomous agent in a flock
226. Rules of flocking
1. Separation: Do not crowd flockmates.
2. Alignment: Move in the same direction as flockmates.
3. Cohesion: Stay close to flockmates.
4. Avoidance: Avoid obstacles and enemies.
→ the boid’s behavioural urges
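The first three (flock-internal) urges can be sketched as simple vector arithmetic; a hypothetical minimal version, where avoidance, neighbourhood limits, and the equal default weights are illustrative assumptions:

```python
# Sketch of combining separation, alignment and cohesion into one
# steering vector (avoidance omitted). 2-D vectors as tuples; the
# equal default weights are an illustrative assumption.

def centroid(points):
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def steering(pos, vel, mates_pos, mates_vel, w_sep=1.0, w_ali=1.0, w_coh=1.0):
    c = centroid(mates_pos)                        # centre of visible flockmates
    v = centroid(mates_vel)                        # their average velocity
    separation = (pos[0] - c[0], pos[1] - c[1])    # away from the crowd
    alignment = (v[0] - vel[0], v[1] - vel[1])     # match the common heading
    cohesion = (c[0] - pos[0], c[1] - pos[1])      # towards the centre
    return (w_sep * separation[0] + w_ali * alignment[0] + w_coh * cohesion[0],
            w_sep * separation[1] + w_ali * alignment[1] + w_coh * cohesion[1])
```

With equal weights, separation and cohesion cancel and only alignment remains; in practice the weights and the visible neighbourhood are tuned per application.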
233. Other uses for flocking
n swarm algorithms
n solution candidate = boid
n solution space = flying space
n separation prevents crowding the local optima
n obstacle avoidance in path finding
n steer away from obstacles along the path
234. Influence maps
n discrete representation of the synthetic player’s knowledge of the world
n strategic and tactical information
n frontiers, control points, weaknesses
n influence
n type
n repulsiveness/alluringness
n recall path finding and terrain generation
235. Assumptions
n a regular grid over the game world
n each tile holds numeric information of the corresponding area
n positive values: alluringness
n negative values: repulsiveness
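A minimal sketch of such a grid: each influence source spreads its value over nearby tiles, here decaying linearly with Manhattan distance. The decay rule and the radius are illustrative assumptions; the slides fix only the sign convention (positive = alluring, negative = repulsive).

```python
# Influence map sketch: seeds spread their value over the grid,
# decaying linearly with Manhattan distance. Decay rule and radius
# are illustrative assumptions.

def influence_map(width, height, seeds, radius=3):
    """seeds: iterable of (x, y, value) influence sources."""
    grid = [[0.0] * width for _ in range(height)]
    for sx, sy, value in seeds:
        for y in range(height):
            for x in range(width):
                d = abs(x - sx) + abs(y - sy)
                if d <= radius:
                    grid[y][x] += value * (radius - d) / radius
    return grid
```

Influences from several sources simply sum per tile, so an alluring resource next to a repulsive enemy partially cancel each other out.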
240. Evaluation
n static features: compute beforehand
n periodical updates
n categorize the maps based on the rate of change
n lazy evaluation
244. Key questions for synthetic players
n how to achieve real-time response?
n how to distribute the synthetic players in a network?
n how autonomous should the synthetic players be?
n how to communicate with other synthetic players?
245. §7 Modelling Uncertainty
n probabilistic uncertainty
n probability of an outcome
n dice, shuffled cards
n statistical reasoning
n Bayesian networks, Dempster–Shafer theory
n possibilistic uncertainty
n possibility of classifying an object
n sorites paradoxes
n fuzzy sets
246. Probabilistic or possibilistic uncertainty?
n Is the vase broken?
n Is the vase broken by a burglar?
n Is there a burglar in the closet?
n Is the burglar in the closet a man?
n Is the man in the closet a burglar?
247. Bayes’ theorem
n hypothesis H
n evidence E
n probability of the hypothesis P(H)
n probability of the evidence P(E)
n probability of the hypothesis based on the evidence:
P(H|E) = (P(E|H) · P(H)) / P(E)
248. Example
n H — there is a bug in the code
n E — a bug is detected in the test
n E|H — a bug is detected in the test given that there is a bug in the code
n H|E — there is a bug in the code given that a bug is detected in the test
249. Example (cont’d)
n P(H) = 0.10
n P(E|H) = 0.90
n P(E|¬H) = 0.10
n P(E) = P(E|H) · P(H) + P(E|¬H) · P(¬H) = 0.18
n from Bayes’ theorem: P(H|E) = 0.5
n conclusion: a detected bug has a fifty-fifty chance of not being an actual bug in the code
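The arithmetic of the example can be checked directly; the helper function below is just the theorem with P(E) expanded by the law of total probability:

```python
# The slide's bug-detection numbers run through Bayes' theorem.

def bayes(p_h, p_e_given_h, p_e_given_not_h):
    """P(H|E) = P(E|H)·P(H) / P(E), with P(E) by total probability."""
    p_e = p_e_given_h * p_h + p_e_given_not_h * (1.0 - p_h)
    return p_e_given_h * p_h / p_e

p_h_given_e = bayes(0.10, 0.90, 0.10)   # 0.09 / 0.18 = 0.5
```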
250. Bayesian networks
n describe cause-and-effect relationships with a directed graph
n vertices = propositions or variables
n edges = dependencies as probabilities
n propagation of the probabilities
n problems:
n the relationships between the evidence and hypotheses must be known
n establishing and updating the probabilities
253. Dempster–Shafer theory
n belief about a proposition as an interval [ belief, plausibility ] ⊆ [0, 1]
n belief supporting A: Bel(A)
n plausibility of A: Pl(A) = 1 − Bel(¬A)
n example: Bel(intruder) = 0.3, Pl(intruder) = 0.8
n Bel(no intruder) = 0.2
n 0.5 of the probability range is indeterminate
255. Example 1(5)
n hypotheses: animal, weather, trap, enemy
n Θ = { A, W, T, E }
n task: assign a belief value for each hypothesis
n evidence can affect one or more hypotheses
n mass function m(H) = current belief in the set H of hypotheses
n in the beginning m(Θ) = 1
n evidence ‘noise’ supports A, W and E
n mass function mn({ A, W, E }) = 0.6, mn(Θ) = 0.4
257. Example 2(5)
n evidence ‘footprints’ supports A, T and E
n mass function mf({ A, T, E }) = 0.8, mf(Θ) = 0.2
n combination with Dempster’s rule:
n mnf({ A, E }) = 0.48, mnf({ W, A, E }) = 0.12, mnf({ A, T, E }) = 0.32, mnf(Θ) = 0.08
n enemy, trap, trap or enemy, weather, or animal?
n Bel(E) = 0, Pl(E) = 1
n Bel(T) = 0, Pl(T) = 0.4
n Bel({ T, E }) = 0, Pl({ T, E }) = 1
n Bel(W) = 0, Pl(W) = 0.2
n Bel(A) = 0, Pl(A) = 1
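The combined masses above can be reproduced with a short implementation of Dempster's rule; representing focal sets as frozensets is an implementation choice of this sketch:

```python
# Dempster's rule of combination for the 'noise' and 'footprints'
# evidence. Focal sets are frozensets over Θ = {A, W, T, E}; here no
# intersection is empty, so the normalization divides by 1.0.

THETA = frozenset("AWTE")

def combine(m1, m2):
    combined, conflict = {}, 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + ma * mb
            else:
                conflict += ma * mb   # mass that would go to the empty set
    return {c: v / (1.0 - conflict) for c, v in combined.items()}

m_noise = {frozenset("AWE"): 0.6, THETA: 0.4}
m_foot = {frozenset("ATE"): 0.8, THETA: 0.2}
m_nf = combine(m_noise, m_foot)
```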
259. Fuzzy sets
n element x has a membership in the set A defined by a membership function μA(x)
n not in the set: μA(x) = 0
n fully in the set: μA(x) = 1
n partially in the set: 0 < μA(x) < 1
n contrast with classical ‘crisp’ sets
n not in the set: χA(x) = 0
n in the set: χA(x) = 1
262. How to assign membership functions?
n real-world data
n physical measurements
n statistical data
n subjective evaluation
n human experts’ cognitive knowledge
n questionnaires, psychological tests
n adaptation
n neural networks, genetic algorithms
→ simple functions usually work well enough as long as they model the general trend
263. Fuzzy operations
n union: μC(x) = max{ μA(x), μB(x) }
n intersection: μC(x) = min{ μA(x), μB(x) }
n complement: μC(x) = 1 − μA(x)
n note: the operations can be defined differently
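The standard min/max operations from the slide, sketched on fuzzy sets represented as dicts from element to membership degree (the example sets are illustrative):

```python
# Standard (min/max) fuzzy set operations.

def fuzzy_union(a, b):
    return {x: max(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def fuzzy_intersection(a, b):
    return {x: min(a.get(x, 0.0), b.get(x, 0.0)) for x in set(a) | set(b)}

def fuzzy_complement(a):
    return {x: 1.0 - mu for x, mu in a.items()}

# Illustrative fuzzy sets over two elements.
tall = {"alice": 0.8, "bob": 0.3}
fast = {"alice": 0.5, "bob": 0.9}
```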
268. Uses for fuzzy sets
approximate reasoning
n fuzzy constraint satisfaction problem
n fuzzy numbers
n almost any ‘crisp’ method can be fuzzified!
n
269. Constraint satisfaction problem
n constraint satisfaction problem (CSP):
n a set of n variables X
n a domain Di for each variable xi in X
n a set of constraints restricting the feasibility of the tuples (x0, x1, …, xn−1) ∈ D0 × … × Dn−1
n solution: an assignment of a value to each variable so that every constraint is satisfied
n no objective function → not an optimization problem
270. Example: n queens problem as a CSP
n problem: place n queens on an n × n chessboard so that they do not threaten one another
n CSP formulation
n variables: xi for each row i
n domain: Di = { 1, 2, …, n }
n constraints (for all i ≠ j):
n xi ≠ xj
n xi − xj ≠ i − j
n xj − xi ≠ i − j
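A brute-force check of this formulation: enumerate candidate assignments and keep the ones satisfying every constraint. This is only a sketch for small n; real CSP solvers prune the search space instead.

```python
# n queens as a CSP, solved by brute force. Using permutations of
# the columns enforces xi != xj; the diagonal constraints are tested
# explicitly.

from itertools import permutations

def n_queens(n):
    solutions = []
    for x in permutations(range(1, n + 1)):   # x[i] = column of queen on row i
        if all(x[i] - x[j] != i - j and x[j] - x[i] != i - j
               for i in range(n) for j in range(i + 1, n)):
            solutions.append(x)
    return solutions
```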
272. Fuzzy constraint satisfaction problem
n a fuzzy constraint satisfaction problem (FCSP) is a five-tuple P = 〈 V, Cµ, W, T, U 〉
n V: variables
n U: universes (domains) for the variables
n Cµ: constraints as membership functions
n W: weighting scheme
n T: aggregation function
274. Dog Eat Dog: Modelling the criteria as fuzzy sets
n if the visual observation of the enemy is reliable, then avoid the enemy
n if the visual observation of the prey is reliable, then chase the prey
n if the olfactory observation of the pond is reliable, then go to the pond
n if the visual observation of the enemy is reliable, then stay in the centre of the play field
278. Dog Eat Dog: Weighting the criteria importances
n fuzzy criterion Ci has a weight wi ∈ [0, 1]
n a greater value wi corresponds to a greater importance
n the weighted value comes from the implication wi → Ci
n classical definition (A → B ⇔ ¬A ∨ B): max{ (1 − wi), Ci }
n Yager’s weighting scheme: the weighted membership value
μCw(x) = ⎧ 1, if μC(x) = 0 and w = 0
         ⎩ (μC(x))^w, otherwise
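Yager's scheme is a one-liner plus the corner case; note that a zero weight makes any criterion fully satisfied, since a fully unimportant criterion should not constrain the decision:

```python
# Yager's weighting scheme: raise the membership value to the power
# of the weight, with the 0^0 corner case defined as 1.

def yager_weight(mu, w):
    if mu == 0.0 and w == 0.0:
        return 1.0
    return mu ** w
```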
279. Dog Eat Dog: Aggregating the criteria
n the aggregator should have compensatory properties
n the effect of a poorly satisfied criterion is not so drastic
n mean-based operators instead of conjunction
n ordered weighted averaging (OWA)
280. Ordered weighted averaging (OWA)
n weight sequence W = (w0, w1, …, wn−1)^T
n ∀ wi ∈ [0, 1] and Σ wi = 1
n F(a0, a1, …, an−1) = Σ wj bj
n bj is the (j + 1)th largest element of the sequence A = 〈 a0, a1, …, an−1 〉
n by setting the weight sequence we can get
n conjunction: W = (0, 0, …, 1) gives min{ A }
n disjunction: W = (1, 0, …, 0) gives max{ A }
n average: W = (1/n, 1/n, …, 1/n)
n soft-and operator: wi = 2(i + 1) / (n(n + 1))
n example: n = 4, W = (0.1, 0.2, 0.3, 0.4)
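A minimal sketch of OWA: the weights attach to ranks rather than to particular criteria, so the arguments are sorted in decreasing order before weighting.

```python
# OWA sketch: sort the arguments in decreasing order, then take the
# weighted sum; soft_and_weights reproduces the slide's example.

def soft_and_weights(n):
    return [2 * (i + 1) / (n * (n + 1)) for i in range(n)]

def owa(weights, values):
    ordered = sorted(values, reverse=True)   # b_j = (j+1)th largest a
    return sum(w * b for w, b in zip(weights, ordered))
```

With W = (0, …, 0, 1) only the smallest argument survives (conjunction/min); with W = (1, 0, …, 0) only the largest (disjunction/max).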
287. Examinations (cont’d)
n questions
n based on both lectures and the textbook
n two questions, à 5 points
n to pass the examination, at least 5 points (50%) are required
n grade: g = ⎡p − 5⎤
n questions are in English, but you can answer in English or in Finnish
288. Follow-up course: Multiplayer Computer Games
n focus: networking in computer games
n credits: 5 cp (3 cu)
n schedule:
n October 27 – November 26, 2009
n Tuesdays 10–12 a.m. and Thursdays 10–12 p.m.
n web page: http://www.iki.fi/smed/mcg