1. A History of Autonomous Agents
From Thinking Machines to Machines for Thinking
S. Costantini & *F. Gobbo
University of L’Aquila
CiE2013, Univ. Milano-Bicocca,
Milan, Italy, July 1-5, 2013
1 of 20
7. ...of Good Old-Fashioned Artificial Intelligence
Autonomous Agents were designed to interact mainly with humans:
their behaviour is meant to be human-like – fooling observers into
believing they were human;
their ability to manipulate symbols matters more than their
physical implementation;
they often speak or write in a natural language;
they can play games – above all, chess.
The defining metaphor of Autonomous Agents is the thinking
machine.
4 of 20
8. What is an Autonomous Agent? The new answer...
source: ‘Fast, Cheap and Out of Control’ by R. A. Brooks and A. M. Flynn (1989)
13. ...of nouvelle Artificial Intelligence
Autonomous Agents were designed to interact with the
environment:
their behaviour is action-driven, inspired by Nature (animals like
ants or bees);
their physical implementation is at least as important as their
ability to manipulate symbols;
they do things in the physical world;
they go where humans do not (yet) go – e.g., planetary rovers.
The ‘thinking machine’ metaphor enters a crisis, while the agents’
environment gains importance.
6 of 20
15. The word ‘agent’ is inherently ambiguous
Firstly, agent researchers do not own this term in the same way
as fuzzy logicians/AI researchers own the term fuzzy logic – it is
one that is used widely in everyday parlance as in travel
agents, estate agents, etc. Secondly, even within the software
fraternity, the word agent is really an umbrella term for a
heterogeneous body of research and development [Nwana 1996,
my emphasis].
[agent is] one who, or which, exerts power or produces an
effect [Wooldridge et al. 1995, my emphasis].
People in the field need a new defining metaphor.
7 of 20
20. A minimal but operative definition of agenthood
[Wooldridge et al. 1995] characterises an agent as a computer system
with the following fundamental properties:
autonomy, i.e., being in control of its own actions;
reactivity, i.e., it reacts to events from the environment;
And possibly:
proactivity, the complement of reactivity, i.e., the ability to act on
its own initiative;
sociality, the ability to interact with other agents.
Sociality also presumes a multi-agent system!
8 of 20
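These four properties can be sketched as a toy Python class; the class and method names below are hypothetical illustrations, not an API from the agent literature.

```python
# Autonomy, reactivity, proactivity and sociality in a few lines. All
# names here are hypothetical illustrations, not an established API.

class MinimalAgent:
    def __init__(self, name):
        self.name = name
        self.inbox = []           # messages from other agents (sociality)
        self.goals = ["explore"]  # internal goals (proactivity)
        self.log = []

    def perceive(self, event):
        # reactivity: respond to events from the environment
        self.log.append(f"{self.name} reacts to {event}")

    def act(self):
        # autonomy + proactivity: the agent takes the initiative from its
        # own goals rather than from external commands
        if self.goals:
            self.log.append(f"{self.name} pursues goal {self.goals[0]}")

    def tell(self, other, message):
        # sociality: interaction with other agents in a common language
        other.inbox.append((self.name, message))

a, b = MinimalAgent("a"), MinimalAgent("b")
a.perceive("obstacle")
a.act()
a.tell(b, "obstacle ahead")
```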
22. Autonomous Agents as a programming paradigm
[Shoham 1990] is Agenthood Degree Zero. In that paper, a new
programming paradigm was defined, called agent-orientation:
agents are pieces of software – possibly but not necessarily
embodied in robots;
their behaviour is regulated by:
constraints like ‘honesty, consistency’;
parameters like ‘beliefs, commitments, capabilities, choices’.
they show a degree of autonomy in the environment:
they react promptly to changes that occur around them;
they exhibit a goal-directed behaviour by taking the initiative;
they interact with other entities through a common language;
they choose a plan in order to reach goals, preferably by learning
from past experience.
10 of 20
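Shoham's parameters (beliefs, commitments, capabilities) can be illustrated with a minimal sketch; this is not AGENT-0 syntax, just a hypothetical rendering of the idea in Python.

```python
# Beliefs, capabilities and commitments as plain Python state. Honesty
# and consistency are approximated by refusing commitments outside the
# agent's capabilities. Names are hypothetical, not Shoham's syntax.

class AOPAgent:
    def __init__(self, capabilities):
        self.beliefs = set()
        self.capabilities = capabilities   # actions the agent can perform
        self.commitments = []              # actions it has committed to

    def observe(self, fact):
        self.beliefs.add(fact)             # timely reaction: update beliefs

    def commit(self, action):
        # consistency constraint: only commit to actions it is capable of
        if action in self.capabilities:
            self.commitments.append(action)
            return True
        return False

    def step(self):
        # goal-directed behaviour: carry out the oldest pending commitment
        return self.commitments.pop(0) if self.commitments else None

robot = AOPAgent(capabilities={"move", "grasp"})
robot.observe("door_open")
accepted = robot.commit("move")   # within capabilities
rejected = robot.commit("fly")    # not a capability
```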
23. Springtime again for Artificial Intelligence?
The success of the agent-oriented paradigm is great and rapid, with
different architectures and models:
Belief, Desire, Intention (BDI) [Rao & Georgeff 1991];
Agent Logic Programming (ALP) [Kowalski & Sadri 1999];
Declarative Logic programming Agent-oriented Language (DALI)
[Costantini 1999];
Knowledge, Goals and Plans (KGP) [Kakas et al. 2004].
ALP, DALI and KGP use Computational Logic, showing that
agenthood can also be successfully implemented outside the
object-oriented programming paradigm.
11 of 20
24. How the concept of intention is re-engineered
An example from the foundation of the BDI architecture:
My desire to play basketball this afternoon is merely a
potential influencer of my conduct this afternoon. It must vie
with my other relevant desires [. . . ] before it is settled what I
will do. In contrast, once I intend to play basketball this
afternoon, the matter is settled: I normally need not continue to
weigh the pros and cons. When the afternoon arrives, I will
normally just proceed to execute my intention. [Bratman 1990,
my emphasis]
Formally, an intention is a desire which can be satisfied in practice.
12 of 20
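Bratman's distinction can be sketched as a one-shot deliberation step: desires vie with one another, the winner is settled as the intention and is no longer weighed. All names and utilities below are made up for illustration.

```python
# Desires are potential influencers; deliberation settles one of them
# as the intention, which is then executed without re-weighing.
# Names and utilities are hypothetical.

def deliberate(desires):
    """Pick the feasible desire with the highest utility as the intention."""
    feasible = [d for d in desires if d["feasible"]]
    return max(feasible, key=lambda d: d["utility"], default=None)

desires = [
    {"name": "play_basketball", "utility": 8, "feasible": True},
    {"name": "finish_report",   "utility": 6, "feasible": True},
    {"name": "fly_to_the_moon", "utility": 9, "feasible": False},
]

# Once settled, the matter is closed: the agent just executes it.
intention = deliberate(desires)
```

Note how infeasible desires are filtered out first, matching the slide's reading that an intention is a desire which can be satisfied in practice.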
25. Autonomous agents as machines for thinking
Desires and intentions – basic modalities of human thinking – are
clearly distinguished in BDI and put into relation in a formal way.
All agent-oriented architectures are formalisations of the human
way of thinking. None of them exhausts human thinking as a whole,
but they help us understand ourselves through formalisation,
implementation and testing, especially in virtual societies formed by
many agents.
13 of 20
26. Multi-Agent Systems as simulations of societies
The human species is social, and therefore agent-based simulations of
societies through Multi-Agent Systems (MAS) become even more
interesting:
there is no global system control – agents must communicate and
coordinate their activities;
MAS can highlight both egoistic and collective interests;
MAS are serious games (e.g., for educational purposes, or
economic simulations);
MAS emerged as a distinctive research area out of separate
communities, and so they profit from different perspectives.
14 of 20
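The absence of global control can be illustrated with a toy ant-colony simulation: each agent follows only local rules and coordinates through a shared trail. Everything below is a hypothetical sketch, not an established MAS framework.

```python
# A toy multi-agent simulation with no global system control: each ant
# follows only local rules and coordinates through a shared pheromone
# trail. Purely illustrative.

class Ant:
    def __init__(self, ident, scout=False):
        self.ident = ident
        self.scout = scout   # scouts may lay the first trail
        self.food = 0

    def step(self, trail):
        if trail:
            self.food += 1            # follow the trail to food
            trail.append(self.ident)  # reinforce it (local coordination)
        elif self.scout:
            trail.append(self.ident)  # lay the initial trail

colony = [Ant(0, scout=True)] + [Ant(i) for i in range(1, 4)]
trail = []
for _ in range(2):                    # two rounds, no central coordinator
    for ant in colony:
        ant.step(trail)

total_food = sum(ant.food for ant in colony)  # collective outcome
```

The collective outcome (food gathered, trail reinforced) emerges from purely local decisions, which is the point of the slide: no agent sees the whole system.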
28. The emergence of hybrid environments...
Many authors (among them, Castells and Floridi) noted that the
tendency is to have hybrid environments, shared by human agents
and autonomous agents, where they meet, fight, communicate,
interact on the same level. Two cases are possible:
Multi-User Dungeons (MUDs) or environments such as Second
Life: human agents become virtual through avatars, playing
together with autonomous ones;
Robots acting in the real world, where human agents are
physically present.
MAS are reasonable – although rather simplified – models of Nature
and human societies, putting information first.
16 of 20
29. ...puts autonomous agents as machines for thinking
The Paradox:
Distributed Artificial Intelligence
as a way to study
the Natural way of thinking!
17 of 20
30. Essential references (1/2)
Bratman, M. E.: What is intention?. In Cohen, P. R., Morgan, J. L., and Pollack,
M. E. (editors), Intentions in Communication, pages 15-32. The MIT Press:
Cambridge, MA (1990).
Costantini, S.: Towards Active Logic Programming. In: A. Brogi and P.M. Hill (eds),
Proc. of the 2nd International Workshop on Component-based Software Development in
Computational Logic (COCL’99), PLI’99, Indexed by CiteSeerX (1999).
Kakas, A.C., Mancarella, P., Sadri, F., Stathis, K., Toni, F.: The KGP model of
agency. In: Proc. ECAI-2004. (2004)
Kowalski, R., Sadri, F.: From Logic Programming towards Multi-agent Systems.
Annals of Mathematics and Artificial Intelligence, 25, 391–419 (1999)
18 of 20
31. Essential references (2/2)
Nwana, H. S.: Software Agents: An Overview. Knowledge Eng. Review, 11(3), 1–40
(1996)
Rao, A. S., Georgeff, M.: Modeling rational agents within a BDI-architecture. In:
Allen, J., Fikes, R., Sandewall, E. (eds). Proc. of the Second International
Conference on Principles of Knowledge Representation and Reasoning (KR’91),
473–484 (1991)
Shoham, Y.: Agent Oriented Programming. Technical Report STAN-CS-90-1335,
Computer Science Department, Stanford University (1990)
Wooldridge, M. J., Jennings, N. R.: Agent Theories, Architectures, and Languages:
a Survey. In: Wooldridge, M. J., Jennings, N. R. Intelligent Agents. Lecture Notes in
Computer Science, Volume 890, Berlin: Springer-Verlag. 1–39 (1995)
19 of 20
32. Thanks for your attention!
Questions?
For proposals, ideas & comments:
federico.gobbo@univaq.it
Download & share these slides here:
http://federicogobbo.name/en/2013.html
CC BY
Federico Gobbo 2013
20 of 20