Why Robots may need to be self-aware, before we can really trust them - Alan Winfield
1. Why Robots may need to be self-aware, before we can really trust them
Alan FT Winfield, Bristol Robotics Laboratory
Awareness Summer School, Lucca, 26 June 2013
2. Outline
• The safety problem
• The central proposition of this talk
• Introducing Internal Models in robotics
• A generic Internal Modelling architecture, for safety
– worked example: a scenario with safety hazards
• Towards an ethical robot
– worked example: a hazardous scenario with a human and a robot
• The major challenges
• How self-aware would the robot be?
• A hint of neuroscientific plausibility
3. The safety problem
• For any engineered system to be trusted, it must be safe
– We already have many examples of complex engineered systems that are trusted; passenger airliners, for instance
– These systems are trusted because they are designed, built, verified and operated to very stringent design and safety standards
– The same will need to apply to autonomous systems
4. The safety problem
• The problem of safe autonomous systems in unstructured or unpredictable environments, i.e.
– robots designed to share human workspaces and physically interact with humans must be safe,
– yet guaranteeing safe behaviour is extremely difficult because the robot’s human-centred working environment is, by definition, unpredictable
– it becomes even more difficult if the robot is also capable of learning or adaptation
5. The proposition
In unknown or unpredictable environments, safety cannot be achieved without self-awareness
6. What is an internal model?
• It is an internal mechanism for representing both the system itself and its environment
– example: a robot with a simulation of itself and its currently perceived environment, inside itself
• The mechanism might be centralized, distributed, or emergent
“..an internal model allows a system to look ahead to the future consequences of current actions, without actually committing itself to those actions” John Holland (1992), Complex Adaptive Systems, Daedalus.
7. Using internal models
• Internal models can provide a minimal level of functional self-awareness
– sufficient to allow complex systems to ask what-if questions about the consequences of their next possible actions, for safety
• Following Dennett, an internal model can generate and test what-if hypotheses (a minimal sketch of such a loop follows below):
– what if I carry out action x..?
– of several possible next actions xi, which should I choose?
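The Python sketch below is one minimal, purely illustrative rendering of this generate-and-test idea, not an implementation from the talk: InternalModel, simulate, Outcome and the "stop" fallback are all hypothetical names, and the selection rule (discard actions whose simulated outcome is a collision, then prefer the action leaving the robot closest to its goal) is one plausible choice among many.

from dataclasses import dataclass


@dataclass
class Outcome:
    robot_collides: bool      # did the simulated robot hit anything?
    distance_to_goal: float   # how far from its target did it end up?


class InternalModel:
    """Stand-in for a simulation of the robot and its perceived environment."""

    def simulate(self, action: str, horizon_s: float) -> Outcome:
        # Run the internal simulation forward by horizon_s seconds and
        # report what happened; a real IM would be a physics simulator.
        raise NotImplementedError


def choose_action(im: InternalModel, candidate_actions: list[str],
                  horizon_s: float) -> str:
    """Test each possible next action in the internal model; reject any
    that ends in a collision, and of the safe ones prefer the action
    that leaves the robot closest to its goal."""
    best_action, best_distance = None, float("inf")
    for action in candidate_actions:
        outcome = im.simulate(action, horizon_s)   # "what if I do x?"
        if outcome.robot_collides:
            continue                               # unsafe: discard
        if outcome.distance_to_goal < best_distance:
            best_action, best_distance = action, outcome.distance_to_goal
    # If every candidate proves unsafe, fall back to stopping.
    return best_action if best_action is not None else "stop"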
8. Dennett’s Tower of Generate and Test
– Darwinian Creatures: Natural Selection
– Skinnerian Creatures: Individual (Reinforcement) Learning
– Popperian Creatures: Internal Modelling
Dennett, D. (1995). Darwin’s Dangerous Idea, London, Penguin.
9. Examples 1
• A robot using self-simulation to plan a safe route with incomplete knowledge
Vaughan, R. T. and Zuluaga, M. (2006). Use your illusion: Sensorimotor self-simulation allows complex agents to plan with incomplete self-knowledge, in Proceedings of the International Conference on Simulation of Adaptive Behaviour (SAB), pp. 298–309.
10. Examples 2
• A robot with an internal model that can learn how to control itself
Bongard, J., Zykov, V. and Lipson, H. (2006). Resilient machines through continuous self-modeling. Science, 314: 1118–1121.
11. Examples 3
• ECCE-Robot
– A robot with a complex body uses an internal model as a ‘functional imagination’
Marques, H. and Holland, O. (2009). Architectures for functional imagination, Neurocomputing 72, 4–6, pp. 743–759.
Diamond, A., Knight, R., Devereux, D. and Holland, O. (2012). Anthropomimetic robots: Concept, construction and modelling, International Journal of Advanced Robotic Systems 9, pp. 1–14.
12. Examples 4
• A distributed system in which each robot has an internal model of itself and the whole system
– Robot controllers and the internal simulator are co-evolved
O’Dowd, P., Winfield, A. and Studley, M. (2011). The Distributed Co-Evolution of an Embodied Simulator and Controller for Swarm Robot Behaviours, in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2011), San Francisco, September 2011.
25. Challenges and open questions
• Fidelity: to model both the system and its environment with sufficient fidelity;
• Connection: to connect the IM with the system’s real sensors and actuators (or equivalent), as sketched below;
• Timing and data flows: to synchronize the internal model with both changing perceptual data, and efferent actuator data;
• Validation, i.e. of the consequence rules.
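To make the connection and data-flow challenges concrete, here is a minimal Python sketch of a single IM cycle, purely illustrative: robot, read_sensors, seed_from and send_command are hypothetical names standing in for whatever real interfaces link the IM to the system’s sensors and actuators, and choose_action is the generate-and-test selection sketched earlier.

def im_cycle(robot, im, candidate_actions, horizon_s):
    """One pass of the generate-and-test loop against live data: seed
    the internal model from current percepts, test the candidate
    actions in simulation, then command the real actuators."""
    percepts = robot.read_sensors()    # afferent flow: world + self state
    im.seed_from(percepts)             # synchronize the IM with perception
    action = choose_action(im, candidate_actions, horizon_s)
    robot.send_command(action)         # efferent flow: chosen actuation
    return action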
26. Major challenges: performance
• Example – imagine placing this Webots simulation inside each NAO robot:
Note the simulated robot’s eye view of its world
27. A science of simulation: the CoSMoS approach
The Complex Systems Modelling and Simulation (CoSMoS) process, from Susan Stepney et al., Engineering Simulations as Scientific Instruments: a pattern language, Springer, in preparation.
The CoSMoS Process Version 0.1: A Process for the Modelling and Simulation of Complex Systems, Paul S. Andrews et al., Dept of Computer Science, University of York, Number YCS-2010-453.
28. Major challenges: timing
• When and how often do we need to initiate the generate-and-test loop (IM cycle)?
– Maybe when the object tracker senses a nearby object starting to move..?
• How far ahead should the IM simulate?
– Let us call this time ts. If ts is too short the IM will not encounter the hazard; too long will slow down the robot.
– Ideally ts and its upper limit should be adaptive (one possible scheme is sketched below).
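As one illustration of how ts might be made adaptive, the sketch below lengthens the horizon when a simulated hazard appears near the end of the current window (suggesting the IM may be under-looking) and gradually shortens it when runs stay hazard-free, keeping the robot responsive. The thresholds, gains and bounds are arbitrary assumptions for illustration, not values from the talk.

def adapt_horizon(ts, hazard_time, ts_min=0.5, ts_max=10.0):
    """Adjust the IM lookahead ts (seconds) after each generate-and-test
    pass. hazard_time is when the first simulated hazard occurred
    within the window, or None if the run stayed hazard-free."""
    if hazard_time is None:
        ts *= 0.9             # no hazard found: relax toward a shorter ts
    elif hazard_time > 0.8 * ts:
        ts *= 1.5             # hazard near the horizon edge: look further
    return min(max(ts, ts_min), ts_max)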
29. How self-aware would this robot be?
• The robot would not pass the mirror test
– Haikonen (2007), Reflections of consciousness
• However, I argue this robot would be minimally but sufficiently self-aware to merit the label
– But this would have to be demonstrated by the robot behaving in interesting ways, that were not pre-programmed, in response to novel situations
– Validating any claims to self-awareness would be very challenging
30. Some neuroscientific plausibility?
• Libet’s famous experimental result showed that initiation of action occurs before the conscious decision to take that action
– Libet, B. (1985). Unconscious cerebral initiative and the role of conscious will in voluntary action, Behavioral and Brain Sciences, 8, 529–539.
• Although controversial, there appears to be a growing body of opinion toward consciousness as a mechanism for vetoing actions
– Libet coined the term: free won’t
31. In conclusion
• I strongly suspect that self-awareness via internal models might prove to be the only way to guarantee safety in robots, and by extension autonomous systems, in unknown and unpredictable environments
– and just maybe provide ethical behaviours too
Thank you!
Reference for the work of this talk: Winfield, A. F. T., Robots with Internal Models: A Route to Self-Aware and Hence Safer Robots, accepted for The Computer After Me, eds. Jeremy Pitt and Julia Schaumeier, Imperial College Press, 2013.