3. Introduction
Motivation
Increasing demand for service robots ("Robot, go to the living room")
◮ a robot has to be programmed according to the task to be
accomplished
◮ programming a robot to perform tasks is difficult, and users are
usually not programmers or robotics experts
◮ one possibility is for the user to show the robot what to do
A mobile robot that learns to navigate and to identify gestures
(Robotics Lab, INAOE, Mexico) CIARP 09, November 18, 2009
4. Introduction
Motivation
Sequences are used to describe different problems in many fields
◮ The robot can learn from sequences of actions generated by the
user
◮ In this work, the idea is to learn grammars from sequences of
actions that can be used to execute tasks or as classifiers
[Figure: a navigation task toward a goal around a dynamic obstacle,
with actions such as turn-right, turn-left, orient, go-forward, and
leave-a-trap]
Grammars are used as control programs and as classifiers
5. Grammar learning
Grammar learning
FOSeq (First Order Sequence learning)
Input: a set of sequences of first order state-action pairs. FOSeq:
1. Induces a grammar for each sequence, identifying repeated
elements (n-grams, i.e., sub-sequences of n items; in our case,
n predicates).
2. Evaluates the sequences with each learned grammar.
3. Selects the best evaluated grammar and a list of candidates to
improve the evaluation of the best grammar.
4. Applies a generalization process to the best grammar.
Output: A grammar that can be applied as controller or as classifier.
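The four steps above can be wired together as a small pipeline sketch. The function names and the toy set-based "grammars" in the demo are illustrative assumptions, not the paper's implementation:

```python
def foseq(sequences, induce, evaluate, generalize, threshold=0.9):
    """High-level sketch of the FOSeq loop described above.
    `induce`, `evaluate`, and `generalize` are supplied by the
    caller; this function only wires the four steps together."""
    # 1. Induce one grammar per input sequence.
    grammars = [induce(s) for s in sequences]
    # 2-3. Score every grammar on all sequences; keep the best,
    # and the rest become candidates for generalization.
    scored = sorted(grammars, key=lambda g: evaluate(g, sequences), reverse=True)
    best, candidates = scored[0], scored[1:]
    # 4. Generalize the best grammar against the candidates until a
    # coverage threshold is met or no candidate helps.
    for cand in candidates:
        if evaluate(best, sequences) >= threshold * len(sequences):
            break
        merged = generalize(best, cand)
        if evaluate(merged, sequences) > evaluate(best, sequences):
            best = merged
    return best

# Toy demo: a "grammar" is just a set of symbols; it parses the
# symbols it contains, and generalization is set union.
seqs = [list("abab"), list("abc")]
best = foseq(
    seqs,
    induce=set,
    evaluate=lambda g, ss: sum(sum(x in g for x in s) / len(s) for s in ss),
    generalize=lambda g1, g2: g1 | g2,
)
print(sorted(best))
```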
6. Grammar learning
Input
The user steers the robot to execute the task, and the raw sensor
data are transformed into predicates.
The input is a sequence of first order state-action pairs like the
following:
pred1(State1,action1),pred2(State2,action1),pred1(State3,action2),. . .
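As a sketch, such a trace can be encoded as a list of (predicate, state, action) tuples; this encoding is a hypothetical choice for illustration, not the paper's:

```python
# Hypothetical encoding of a first-order state-action trace.
# Each item mirrors predN(StateN, actionM) from the slide:
# (predicate symbol, state argument, action argument).
trace = [
    ("pred1", "State1", "action1"),
    ("pred2", "State2", "action1"),
    ("pred1", "State3", "action2"),
]

# The induction step only cares about item identity, so one
# hashable tuple per item is enough for n-gram search.
for pred, state, action in trace:
    print(f"{pred}({state},{action})")
```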
7. Grammar learning Induction
Induction
◮ The algorithm looks for n-grams that appear at least twice in the
sequence.
◮ Candidate n-grams are searched incrementally by length.
◮ In each iteration, the n-gram with the highest frequency is
selected, generating a new grammar rule; all occurrences of the
n-gram in the sequence are replaced with a new non-terminal
symbol.
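A minimal runnable sketch of this induction loop (simplified; the tie-break toward longer n-grams and the collapsing of adjacent repeats are assumptions inferred from the worked example in these slides):

```python
from collections import Counter

def induce_grammar(seq, max_n=6):
    """Greedy n-gram induction sketch (FOSeq-style, simplified):
    repeatedly replace the most frequent repeated n-gram with a
    fresh non-terminal, then collapse adjacent repeated symbols."""
    rules = {}
    counter = 0
    while True:
        # Count all n-grams (n >= 2) occurring at least twice.
        grams = Counter()
        for n in range(2, max_n + 1):
            for i in range(len(seq) - n + 1):
                grams[tuple(seq[i:i + n])] += 1
        repeated = {g: c for g, c in grams.items() if c >= 2}
        if not repeated:
            break
        # Highest frequency wins; ties broken by n-gram length.
        best = max(repeated, key=lambda g: (repeated[g], len(g)))
        counter += 1
        name = f"R{counter}"
        rules[name] = list(best)
        # Replace every occurrence with the new non-terminal.
        out, i = [], 0
        while i < len(seq):
            if tuple(seq[i:i + len(best)]) == best:
                out.append(name)
                i += len(best)
            else:
                out.append(seq[i])
                i += 1
        # "Removing repeated items": collapse adjacent duplicates.
        seq = [s for k, s in enumerate(out) if k == 0 or s != out[k - 1]]
    return seq, rules

start, rules = induce_grammar(list("abcbcbcbabcbdbebc"))
print(start, rules)
```

Run on the sequence from the example slide, this reproduces the rules R1 → b c and R2 → a R1 b and the start rule R2 d b e R1.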
8. Grammar learning Induction
Induction: example
Initial sequence:
S → a b c b c b c b a b c b d b e b c
Most frequent n-gram: b c (5)
Add: R1 → b c
S1 → a R1 R1 R1 b a R1 b d b e R1
Removing repeated items:
S1 → a R1 b a R1 b d b e R1
Candidate n-grams: R1 b (2), a R1 (2), a R1 b (2)
Add: R2 → a R1 b
S2 → R2 R2 d b e R1
Removing repeated items:
S2 → R2 d b e R1
Final grammar:
S2 → R2 d b e R1
R1 → b c
R2 → a R1 b
9. Grammar learning Induction
Induction: predicates
When the items of the sequence are first order predicates, the learned
grammar is a definite clause grammar (DCG). DCGs:
◮ are an extension of context free grammars,
◮ can have arguments, and
◮ are expressed and executed in Prolog.
10. Grammar learning Evaluation
Evaluation
◮ Every learned grammar is used to parse all the sequences in the
set of traces provided by the user.
◮ Each grammar is evaluated over the whole set of sequences using
the following function:

    eval(Gi) = Σ_{j=1}^{n} cj / (cj + fj)        (1)

where Gi is the grammar being evaluated, and cj and fj are the
numbers of items of sequence j that the grammar is able and unable
to parse, respectively.
◮ The best evaluated grammar is selected.
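Equation (1) can be computed directly; the data structure for the per-sequence parse results is a hypothetical choice:

```python
def eval_grammar(parse_results):
    """Eq. (1): sum over sequences of c_j / (c_j + f_j), where
    c_j / f_j are the parsed / unparsed item counts of sequence j.
    `parse_results` is a list of (c_j, f_j) pairs."""
    return sum(c / (c + f) for c, f in parse_results)

# Toy example: a full parse of one sequence (10 of 10 items) plus a
# partial parse of another (6 of 8) gives 1.0 + 0.75 = 1.75.
score = eval_grammar([(10, 0), (6, 2)])
print(score)
```

A grammar that fully parses all n sequences scores n, so higher is better.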
11. Grammar learning Generalization
Generalization
The key idea is to obtain a new grammar that improves the coverage of
the best grammar.
1. Select the grammar that provides the largest number of different
instantiations of predicates.
2. Compute the lgg* between this grammar and the best grammar.
3. Accept the new grammar rule if it improves the original coverage;
otherwise discard it.
4. The process continues until a coverage threshold is reached or the
generalization process no longer improves coverage.
*Least general generalization (lgg) [Plotkin,1970]
12. Grammar learning Generalization
lgg example
c1 = pred(State,action1) ←
        cond1(State,action1),
        cond2(State,action2).

c2 = pred(State,action3) ←
        cond1(State,action1),
        cond2(State,action3).

Computing lgg(c1,c2) yields the clause:

pred(State,Action) ←
        cond1(State,action1),
        cond2(State,Action).

where the constants action2 and action3 of predicate cond2 are
replaced by the variable Action.
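A minimal Plotkin-style anti-unification sketch over aligned body literals (the data structures are hypothetical; note that, unlike the informal example above, strict lgg gives each distinct pair of differing constants its own fresh variable):

```python
def lgg_term(t1, t2, subst):
    """Anti-unify two arguments: equal terms stay; differing
    constants map (consistently) to a fresh variable."""
    if t1 == t2:
        return t1
    key = (t1, t2)
    if key not in subst:
        subst[key] = f"V{len(subst) + 1}"
    return subst[key]

def lgg_body(body1, body2):
    """lgg of two clause bodies with literals aligned pairwise.
    Each literal is a tuple (functor, arg1, arg2, ...); a shared
    substitution keeps variable introduction consistent."""
    subst = {}
    out = []
    for (f1, *a1), (f2, *a2) in zip(body1, body2):
        assert f1 == f2 and len(a1) == len(a2), "incompatible literals"
        out.append((f1, *(lgg_term(x, y, subst) for x, y in zip(a1, a2))))
    return out

# The bodies of c1 and c2 from the slide:
c1 = [("cond1", "State", "action1"), ("cond2", "State", "action2")]
c2 = [("cond1", "State", "action1"), ("cond2", "State", "action3")]
print(lgg_body(c1, c2))
```

Here the pair (action2, action3) becomes the fresh variable V1, matching the role of Action in the example.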
13. Experiments
Experiments
◮ Learning navigation tasks
◮ Classification of dynamic gestures
[Figures: a navigation environment and example gestures "Stop" and "Left"]
16. Experiments Navigation
Navigation experiments: Markovito (1/2)
A service robot (ActivMedia) equipped with a sonar ring, a SICK
LMS200 laser rangefinder, and a stereo vision system. Tasks:
◮ navigating to several places in the environment, where each place
has a previously defined name (e.g., kitchen, sofa),
◮ following a human under user commands,
◮ finding one of a set of different objects in a house, and
◮ delivering messages and/or objects between different people.
18. Experiments Gesture recognition
Learning gesture grammars
Goal: learn grammars from sequences of dynamic gestures and use
them to classify new sequences.
[Figure: the ten dynamic gestures, from [Avilés, 2006]:
(a) initial/final position, (b) attention, (c) come, (d) left, (e) right,
(f) stop, (g) turn-right, (h) turn-left, (i) waving-hand, (j) point]
19. Experiments Gesture recognition
Representation
Each sequence is a vector of sets of seven attributes describing the
executed gesture. An example of a sequence is:
(+ + − − T F F), (+ + − + T F F), (+ + + 0 T F F), (+ + + + T F F) ...
Motion features:
◮ ∆area, ∆x, ∆y: each takes one of three possible values {+,−,0},
indicating increment, decrement, or no change.
Posture features:
◮ form: horizontal (−), vertical (+), tilted (0),
◮ above (the head),
◮ right (to the head),
◮ torso (hand over the torso).
In predicate form:
hmov(State,right), vmov(State,up), size(State,inc), forma(State,vertical),
right(State,yes), above face(State,no), . . .
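A hypothetical converter from a feature vector to the predicate form shown above; the attribute order, predicate names, and value names are assumptions based on these slides, not the paper's exact code:

```python
# Maps from feature symbols to predicate values (assumed).
MOTION = {"+": "inc", "-": "dec", "0": "same"}   # size (delta area)
HMOV = {"+": "right", "-": "left", "0": "none"}  # delta x
VMOV = {"+": "up", "-": "down", "0": "none"}     # delta y
FORM = {"+": "vertical", "-": "horizontal", "0": "tilted"}
BOOL = {"T": "yes", "F": "no"}

def to_predicates(state, feats):
    """Turn one 7-attribute vector (Darea, Dx, Dy, form, above,
    right, torso) into (predicate, state, value) tuples."""
    darea, dx, dy, form, above, right, torso = feats
    return [
        ("size", state, MOTION[darea]),
        ("hmov", state, HMOV[dx]),
        ("vmov", state, VMOV[dy]),
        ("form", state, FORM[form]),
        ("above_head", state, BOOL[above]),
        ("right", state, BOOL[right]),
        ("over_torso", state, BOOL[torso]),
    ]

preds = to_predicates("State1", ("+", "+", "-", "-", "T", "F", "F"))
print(preds)
```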
21. Experiments Gesture recognition
Example of gesture grammars
Come:
S → ..., R8, size(dec), R6, R1, R12, R7
R1 → above head(State,no), over torso(State,yes)
R8 → hmov(State,right), vmov(State,down)
...

Point:
S → ..., R3, size(dec), form(horizontal), right(si), above head(no),
over torso(no)
R1 → above head(State,no), over torso(State,yes)
R3 → hmov(State,right), vmov(State,down)

Relational grammars help to find similarities between different gestures
(e.g., rule R1 is shared by both grammars)
22. Conclusions
Conclusions
◮ We have introduced FOSeq, an algorithm that takes sequences of
states and actions and induces a grammar able to parse and
reproduce those sequences.
◮ FOSeq learns a grammar for each sequence, followed by a
generalization process between the best evaluated grammar and
other grammars to produce a generalized grammar covering most
of the sequences.
◮ FOSeq has been applied to learn navigation tasks and to learn
grammars from gesture sequences with very competitive results.
23. Future work
Future work
◮ Learn more TRPs (teleo-reactive programs) to solve other robotic tasks,
◮ extend the experiments with gestures to obtain a general grammar
for a gesture performed by more than one person,
◮ reproduce gestures with a manipulator.
Thank you for your attention