R. Akerkar
American University of Armenia
      Yerevan, Armenia




Outline

1. History and perspectives on multiagents
2. Agent Architecture
3. Agent Oriented Software Engineering
4. Mobility
5. Autonomy and Teaming




Definitions
  An agent is an entity whose state is viewed as consisting of mental components such as beliefs, capabilities, choices, and commitments. [Yoav Shoham, 1993]

  An entity is a software agent if and only if it communicates correctly in an agent communication language. [Genesereth and Ketchpel, 1994]

  Intelligent agents continuously perform three functions: perception of dynamic conditions in the environment; action to affect conditions in the environment; and reasoning to interpret perceptions, solve problems, draw inferences, and determine actions. [Hayes‐Roth, 1995]
Definitions

   An agent is anything that can be viewed as (a) perceiving its environment, and (b) acting upon that environment. [Russell and Norvig, 1995]

   A computer system that is situated in some environment and is capable of autonomous action in its environment to meet its design objectives. [Wooldridge, 1999]




Agents: A working definition 


An agent is a computational system that interacts with one or more counterparts or real‐world systems, with the following key features to varying degrees:
• Autonomy
• Reactiveness
• Pro‐activeness
• Social abilities
e.g., autonomous robots, human assistants, service agents
The need is for automation and distributed use of online resources

Test of Agenthood [Huhns and Singh, 1998]



     “A system of distinguished agents should substantially change semantically if a distinguished agent is added.”




Agents vs. Objects

   “Objects with attitude” [Bradshaw, 1997]

 Agents are similar to objects since they are computational units that encapsulate a state and communicate via message passing.

 Agents differ from objects since they have a strong sense of autonomy and are active versus passive.
Agent Oriented Programming, Yoav Shoham
AOP principles:

1. The state of an object in OO programming has no generic structure. The state of an agent has a “mentalistic” structure: it consists of mental components such as beliefs and commitments.

2. Messages in object-oriented programming are coded in an application-specific, ad-hoc manner. A message in AOP is coded as a “speech act” according to a standard agent communication language that is application-independent (see the sketch below).
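
To make the speech‐act idea concrete, the following is a minimal sketch in plain Python (illustrative names only; not Shoham's AGENT0 nor any standard ACL) of a message whose meaning is carried by a standardized performative rather than an application‐specific format:

```python
from dataclasses import dataclass
from typing import Any

# Illustrative performative set; real agent communication languages
# (e.g., KQML, FIPA-ACL) standardize their own.
PERFORMATIVES = {"inform", "request", "unrequest"}

@dataclass
class SpeechActMessage:
    performative: str   # the communicative act, independent of the application
    sender: str
    receiver: str
    content: Any        # application-level payload

    def __post_init__(self) -> None:
        if self.performative not in PERFORMATIVES:
            raise ValueError(f"unknown performative: {self.performative}")

# Example: ask another agent to perform an action.
msg = SpeechActMessage("request", "agent_a", "agent_b", {"action": "open_valve"})
```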

Agent Oriented Programming 
    Extends Peter Chen’s ER model, Gerd Wagner
•   Different entities may belong to different epistemic categories. There are 
    agents, events, actions, commitments, claims, and objects.

•   We distinguish between physical and communicative actions/events. 
    Actions create events, but not all events are created by actions.

•   Some of these modeling concepts are indexical, that is, they depend on 
    the perspective chosen: in the perspective of a particular agent, actions 
    of other agents are viewed as events, and commitments of other agents 
    are viewed as claims against them.



Agent Oriented Programming 
     Extends Peter Chen’s ER model, Gerd Wagner
•   In the internal perspective of an agent, a commitment refers to a specific action 
    to be performed in due time, while a claim refers to a specific event that is 
    created by an action of another agent, and has to occur in due time.
•   Communication is viewed as asynchronous point‐to‐point message passing. 
    We take the expressions receiving a message and sending a message as 
    synonyms of perceiving a communication event and performing a 
    communication act.
•   There are six designated relationships in which specifically agents, but not 
    objects, participate: only an agent perceives environment events, receives and 
    sends messages, does physical actions, has Commitment to perform some 
    action in due time, and has Claim that some action event will happen in due 
    time.


Agent Oriented Programming 
Extends Peter Chen’s ER model, Gerd Wagner
An institutional agent consists of a certain number of (institutional, artificial and 
human) internal agents acting on behalf of it. An institutional agent can only 
perceive and act through its internal agents.
Within an institutional agent, each internal agent has certain rights and duties.
There are three kinds of duties: an internal agent may have the duty to fulfil commitments of a certain type, the duty to monitor claims of a certain type, or the duty to react to events of a certain type on behalf of the organization.
A right refers to an action type such that the internal agent is permitted to perform actions of that type on behalf of the organization.




Agent Typology
Human agents: Person, Employee, Student, Nurse, or Patient
Artificial agents: owned and run by a legal entity 
Institutional agents: a bank or a hospital
Software agents: Agents designed with software
Information agents: Databases and the internet
Autonomous agents: Non-trivial independence
Interactive/Interface agents: Designed for interaction
Adaptive agents: Non-trivial ability for change
Mobile agents: Code and logic mobility




Agent Typology
Collaborative/Coordinative agents: Non-trivial ability
for coordination, autonomy, and sociability
Reactive agents: No internal state and shallow
reasoning
Hybrid agents: a combination of deliberative and
reactive components
Heterogeneous agents: A system with various agent sub-components
Intelligent/smart agents: Reasoning and intentional
notions
Wrapper agents: Facility for interaction with non-
agents

Multi‐agency

A multi‐agent system is a system that is made up of multiple agents, with the following key features among agents to varying degrees of commonality and adaptation:
• Social rationality
• Normative patterns
• System of Values


e.g., HVAC, eCommerce, space missions, Soccer, Intelligent Home, “talk” monitor
The motivation is coherence and distribution of resources.


Applications of Multiagent Systems
 Electronic commerce: B2B, InfoFlow, eCRM

 Network and system management agents: E.g., the telecommunications companies

 Real‐time monitoring and control of networks: ATM

 Modeling and control of transportation systems: Delivery

 Information retrieval: online search

 Automatic meeting scheduling

 Electronic entertainment: eDog



Applications of Multiagent Systems (cont.)

 Decision and logistic support agents: Military and Utility Companies

 Interest matching agents: Commercial sites like Amazon.com

 User assistance agents: E.g., MS office assistant

 Organizational structure agents: Supply‐chain ops

 Industrial manufacturing and production: manufacturing cells

 Personal agents: emails

 Investigation of complex social phenomena such as evolution of 
   roles, norms, and organizational structures


Summary of Business Benefits


• Modeling existing organizations and dynamics


• Modeling and Engineering E‐societies


• New tools for distributed knowledge‐ware




Three views of Multi‐agency

Constructivist: Agents are rational in the sense of Newell’s principle of individual rationality. They only perform goals which bring them a positive net benefit, without regard to other agents. These are self‐interested agents.


Sociality: Agents are rational in the sense of Jennings’ principle of social rationality. They perform actions whose joint benefit is greater than their joint loss. These are self‐less, responsible agents.


Reductionist: Agents which accept all goals they are capable of performing. These are benevolent agents.

Multi‐agency: allied fields

DAI (Distributed AI)
  • MAS: (1) online social laws, (2) agents may adopt goals and adapt beyond any problem
  • DPS: offline social laws
      • CPS: (1) agents are a ‘team’, (2) agents ‘know’ the shared goal


• In DAI, a problem is automatically decomposed among distributed nodes, whereas in multi‐agent systems each agent chooses whether to participate.
• Distributed planning is distributed and decentralized action selection, whereas in multi‐agent systems agents keep their own copies of a plan that might include others.
Multi‐agent assumptions and goals

• Agents have their own intentions and the system has distributed intentionality
• Agents model other agents’ mental states in their own decision making
• Agent internals are less central than agent interactions
• Agents deliberate over their interactions
• Emergence at the agent level and at the interaction level is desirable
• The goal is to find principles for, or principled ways to explore, interactions
Origins of Multi‐agent systems

 • Carl Hewitt’s Actor model, 1970


 • Blackboard Systems: Hearsay (1975), BB1, GBB 


 • Distributed Vehicle Monitoring System (DVMT, 1983)


 • Distributed AI


 • Distributed OS
MAS Orientations
[Diagram: Distributed Problem Solving at the intersection of allied fields – Computational Organization Theory, Sociology, Economics, Psychology, Systems Theory, Databases, Formal AI, Cognitive Science, and Distributed Computing]

Multi‐agents in the large versus in the small

• In the small: (Distributed AI) A handful of “smart” agents, with emergence in the agents


• In the large: 100+ “simple” agents, with emergence in the group: Swarms (Bugs) http://www.swarm.org/




Outline

1. History and perspectives on multiagents
2. Agent Architecture
3. Agent Oriented Software Engineering
4. Mobility
5. Autonomy and Teaming




Abstract Architecture


[Diagram: the agent observes environment states and selects actions that it performs on the Environment]

Architectures

• Deduction/logic-based
• Reactive
• BDI
• Layered (hybrid)




Abstract Architectures


 An abstract model: <S, A, action : S* → A>

 An abstract view
   S = {s1, s2, …} – the set of environment states
   A = {a1, a2, …} – the set of possible actions

 This allows us to view an agent as a function

                     action : S* → A
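
A minimal sketch of this abstract view in plain Python (illustrative only): an agent is simply a function from the sequence of environment states observed so far to an action.

```python
from typing import Callable, Sequence

State = str    # illustrative: states and actions are plain strings here
Action = str

# An agent maps a history of environment states (S*) to an action (A).
Agent = Callable[[Sequence[State]], Action]

def example_agent(history: Sequence[State]) -> Action:
    # Reacts to the most recent state; a history-sensitive agent could
    # inspect the full sequence instead.
    return "heat" if history and history[-1] == "cold" else "idle"

print(example_agent(["warm", "cold"]))  # -> heat
```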
Logic‐Based Architectures
 These agents have internal state
 See and next functions, and model decision making by a set of deduction rules for inference

                                   see : S → P
                                 next : D × P → D
                                  action : D → A

 Use logical deduction to try to prove the next action to take
 Advantages
   Simple, elegant, logical semantics
 Disadvantages
   Computational complexity
   Representing the real world
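
A minimal sketch of the see/next/action cycle in plain Python (illustrative names; real logic‐based agents run theorem proving over a logical database rather than the hand‐written rules used here):

```python
from typing import FrozenSet

Percept = str
Fact = str
Database = FrozenSet[Fact]  # the agent's internal logical state D

def see(state: str) -> Percept:
    return state  # illustrative: perceive the state directly

def next_db(db: Database, percept: Percept) -> Database:
    return db | {f"observed({percept})"}  # record what was perceived

def action(db: Database) -> str:
    # Stand-in for deduction: "prove" an action from the facts in D.
    return "clean" if "observed(dirty)" in db else "wait"

db: Database = frozenset()
for s in ["ok", "dirty"]:
    db = next_db(db, see(s))
print(action(db))  # -> clean
```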



Reactive Architectures

 Reactive architectures do not use
   symbolic world models
   symbolic reasoning

 An example is Rod Brooks’s subsumption architecture
 Advantages
   Simplicity, computational tractability, robustness, elegance
 Disadvantages
   Modeling limitations, correctness, realism


Reflexive Architectures: 
 simplest type of reactive 
 architecture
     Reflexive agents decide what to do without regard to history – purely reflexive

                     action : P → A

     Example ‐ thermostat

     action(s) = off,  if temp = OK
                 on,   otherwise

Reflex agent without state
        (Russell and Norvig, 1995)




Reflex agent with state (Russell and Norvig, 1995)




Goal‐oriented agent: 
 a more complex reactive agent (Russell and Norvig, 1995)




Utility‐based agent: 
a complex reactive agent (Russell and Norvig, 1995)




BDI: a Formal Method

• Belief: states, facts, knowledge, data
• Desire: wish, goal, motivation (these might conflict)
• Intention: a) select actions, b) perform actions, c) explain choices of action (no conflicts)
• Commitment: persistence of intentions and trials

• Know‐how: having the procedural knowledge for carrying out a task




Belief-Desire-Intention

[Diagram: BDI architecture – the agent senses the Environment; belief revision updates Beliefs; Beliefs generate options (Desires); a filter over Beliefs, Desires, and Intentions produces Intentions; Intentions drive action on the Environment]

Why is BDI a Formal Method?

• BDI is typically specified in the language of modal logic with possible‐world semantics.
• Possible worlds capture the various ways the world might develop. Since the formalism in [Wooldridge 2000] assumes at least a KD axiomatization for each of B, D, and I, each of the sets of possible worlds representing B, D, and I must be consistent.
• A KD45 logic with the following axioms:
   • K: BDI(a, φ → ψ, t) → (BDI(a, φ, t) → BDI(a, ψ, t))
   • D: BDI(a, φ, t) → ¬BDI(a, ¬φ, t)
   • 4: B(a, φ, t) → B(a, B(a, φ, t), t)
   • 5: ¬B(a, φ, t) → B(a, ¬B(a, φ, t), t)
• K & D is the normal modal system
A simplified BDI agent algorithm

1.  B := B0;
2.  I := I0;
3.  while true do
4.      get next percept ρ;
5.      B := brf(B, ρ);             // belief revision
6.      D := options(B, D, I);      // determination of desires
7.      I := filter(B, D, I);       // determination of intentions
8.      π := plan(B, I);            // plan generation
9.      execute π
10. end while
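
A minimal runnable rendering of this control loop in plain Python (the brf/options/filter/plan bodies are placeholders; a real BDI interpreter is far richer):

```python
def brf(beliefs, percept):                  # belief revision (placeholder)
    return beliefs | {percept}

def options(beliefs, desires, intentions):  # determination of desires (placeholder)
    return {f"handle:{b}" for b in beliefs}

def filter_intentions(beliefs, desires, intentions):  # commit to a subset of desires
    return set(sorted(desires)[:1]) or intentions

def plan(beliefs, intentions):              # plan generation (placeholder)
    return [f"act({i})" for i in intentions]

def bdi_loop(percepts):
    beliefs, desires, intentions = set(), set(), set()
    for p in percepts:                      # stands in for "while true do"
        beliefs = brf(beliefs, p)
        desires = options(beliefs, desires, intentions)
        intentions = filter_intentions(beliefs, desires, intentions)
        for step in plan(beliefs, intentions):
            print("executing", step)

bdi_loop(["door_open", "alarm"])
```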

Correspondences


• Belief-Goal compatibility:
                     Des φ → Bel φ
• Goal-Intention compatibility:
                     Int φ → Des φ
• Volitional commitment:
                     Int Do(a) → Do(a)
• Awareness of goals and intentions:
                    Des φ → Bel Des φ
                     Int φ → Bel Int φ
Layered Architectures

 Layering is based on the division of behaviors into automatic and controlled.

 Layering may be horizontal (i.e., I/O at each layer) or vertical (i.e., I/O is dealt with by a single layer). A sketch of the two layouts follows below.

 Advantages: these are popular and offer a fairly intuitive modeling of behavior.

 Disadvantages: they can be complex, with non‐uniform representations.
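
A minimal sketch in plain Python (illustrative only) contrasting the two layouts: in the horizontal layout every layer sees the percept and a mediator picks among the proposals, while in the vertical layout the percept passes through one layer at a time until some layer acts.

```python
from typing import Callable, List, Optional

Percept = str
Action = Optional[str]
Layer = Callable[[Percept], Action]

def reactive_layer(p: Percept) -> Action:
    return "dodge" if p == "obstacle" else None

def planning_layer(p: Percept) -> Action:
    return "follow_route"

LAYERS: List[Layer] = [reactive_layer, planning_layer]

def horizontal(p: Percept) -> Action:
    # Every layer gets the percept; a mediator picks the first proposal.
    proposals = [layer(p) for layer in LAYERS]
    return next((a for a in proposals if a is not None), None)

def vertical(p: Percept) -> Action:
    # Control passes through the layers one at a time until one acts.
    for layer in LAYERS:
        a = layer(p)
        if a is not None:
            return a
    return None

print(horizontal("obstacle"), vertical("clear"))  # -> dodge follow_route
```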

Outline

1. History and perspectives on multiagents
2. Agent Architecture
3. Agent Oriented Software Engineering
4. Mobility
5. Autonomy and Teaming




Agent‐Oriented Software 
   Engineering
 AOSE is an approach to developing software using agent‐oriented abstractions that model high‐level interactions and relationships.

 Agents are used to model run‐time decisions about the nature and scope of interactions that are not known ahead of time.




Designing Agents:
Recommendations from H. Van Dyke Parunak’s (1996) “Go to the Ant”: Engineering Principles from Natural Multi-
Agent Systems, Annals of Operations Research, special issue on AI and Management Science.
1. Agents should correspond to things in the problem domain rather than to abstract functions.
2. Agents should be small in mass (a small fraction of the total system), time (able to forget), and scope (avoiding global knowledge and action).
3. The agent community should be decentralized, without a single point of control or failure.
4. Agents should be neither homogeneous nor incompatible, but diverse. Randomness and repulsion are important tools for establishing and maintaining this diversity.
5. Agent communities should include a dissipative mechanism to whose flow they can orient themselves, thus leaking entropy away from the macro level at which they do useful work.
6. Agents should have ways of caching and sharing what they learn about their environment, whether at the level of the individual, the generational chain, or the overall community organization.
7. Agents should plan and execute concurrently rather than sequentially.

Organizations

Human organizations are several agents, engaged in multiple goal‐directed tasks, with distinct knowledge, culture, memories, history, and capabilities, and separate legal standing from that of individual agents.


Computational Organization Theory (COT) models information 
production and manipulation in organizations of human and 
computational agents



Management of Organizational 
Structure
 Organizational constructs are modeled as entities in multiagent systems

 Multiagent systems have built‐in mechanisms for flexibly forming, maintaining, and abandoning organizations

 Multiagent systems can provide a variety of 
  stable intermediary forms in rapid systems 
  development
7.2.1 Agent and Agency




AOSE Considerations
 What, how many, structure of agent?


 Model of the environment?


 Communication? Protocols? Relationships? 
  Coordination?




Stages of Agent‐Oriented 
Software Engineering
  A. Requirements: provided by user

  B. Analysis: objectives and invariants

  C. Design: Agents and Interactions


  D. Implementation: Tools and techniques


KAoS ‐ Bradshaw, et al
Knowledge (Facts) represents Beliefs in which the agent has confidence.
Facts and Beliefs may be held privately or be shared.
Desires represent goals and preferences that motivate the agent to act.
Intentions represent a commitment to perform an action.
There is no exact description of capabilities.
Life cycle: birth, life, and death (also a Cryogenic state)
Agent Types: KAoS, Mediation (between KAoS and the outside), Proxy (mediator between two KAoS agents), Domain Manager (agent registration), and Matchmaker (mediator of services)
Omitted: Emotions, Learning, agent relationships, Fraud, Trust, Security


Gaia ‐ Wooldridge, et al
The Analysis phase:
 Roles model (a sketch of a role schema follows below):
    - Permissions (resources)
    - Responsibilities (safety properties and liveness properties)
    - Protocols
 Interactions model: purpose, initiator, responder, inputs, outputs, and processing of the conversation
The Design phase:
    Agent model
    Services model
    Acquaintance model

Omitted: Trust, Fraud, Commitment, and Security.
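
A minimal sketch in plain Python of how a Gaia role might be recorded during analysis (field names follow the role schema informally; the example values are invented and only loosely echo the coffee-filling role used in Gaia expositions):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class GaiaRole:
    """Informal rendering of a Gaia role schema (illustrative only)."""
    name: str
    protocols: List[str] = field(default_factory=list)    # interactions the role takes part in
    permissions: List[str] = field(default_factory=list)  # resources it may read or change
    liveness: List[str] = field(default_factory=list)     # "something good happens" properties
    safety: List[str] = field(default_factory=list)       # invariants that must always hold

coffee_filler = GaiaRole(
    name="CoffeeFiller",
    protocols=["InformWorkers"],
    permissions=["reads coffeeStatus", "changes coffeeStock"],
    liveness=["(Fill . InformWorkers . CheckStock)*"],
    safety=["coffeeStock >= 0"],
)
print(coffee_filler)
```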

TAEMS: Keith Decker and Victor Lesser

 The agents are simple processors.


 Internal structure of agents includes (a) beliefs (knowledge) about task structure, (b) states, (c) actions, and (d) a strategy, constantly being updated, of what methods the agent intends to execute at what time.

 Omitted: Roles, Skills or Resources.




BDI based Agent-Oriented Methodology
(KGR) Kinny, Georgeff, and Rao

 External viewpoint: the social system structure and dynamics.
      Agent Model + Interaction Model.
      Independent of the agent cognitive model and communication.
 Internal viewpoint: the Belief Model, the Goal Model, and the Plan Model.
      Beliefs: the environment, internal state, the action repertoire
      Goals: possible goals, desired events
      Plans: state charts

MaSE – Multi-agent Systems Engineering, DeLoach


 Domain Level Design (use AgML for the Agent Type Diagram, Communication Hierarchy Diagram, and Communication Class Diagrams)
 Agent Level Design (use AgDL for agent conversations)
 Component Design: AgDL
 System Design: AgML
 Languages:
   AgML (Agent Modeling Language – a graphical language)
   AgDL (Agent Definition Language – the system‐level behavior and the internal behavior of the agent)
 Rich in communication, poor in social structures
Scott DeLoach’s MaSE
[Diagram: MaSE flow – Roles, Tasks, and Sequence Diagrams feed the Agent Class Diagram and Conversation Diagram, which feed the Internal Agent Diagram and, finally, the Deployment Diagram]
The TOVE Project (1998) ; Mark Fox, et al.
 • Organizational hierarchy: Divisions and sub-divisions
 • Goals, sub-goals, their hierarchy (using AND & OR)
 • Roles, their relations to skills, goals, authority, processes, policies
 • Skills, and their link to roles
 • Agents, their affiliation with teams and divisions; Commitment; Empowerment
 • Communication links between agents: sending and receiving information.
 Communication at three levels: information, intentions (ask, tell, deny…), and conventions (semantics). Levels 2 & 3 are designed using speech acts.
 • Teams as temporary group of agents
 • Activities and their states, the connection to resources and the constraints.
 • Resources and their relation to activities and activities states
 • Constraints on activities (what activities can occur at a specific situation and
 a specific time)
 • Time and the duration of activities. Actions occur at a point in time and they
 have duration.
 • Situation
 Shortcomings: central decision making

Agent-Oriented Programming (AOP): Yoav Shoham


• AGENT0 is the first AOP and the logical component of this
language is a quantified multi-modal logic.

• Mental state: beliefs, capabilities, and commitments (or obligations).

• Communication: ‘request’ (to perform an action), ‘unrequest’
(to refrain from action), and ‘inform’ (to pass information).
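
An illustrative sketch in plain Python (not actual AGENT0 syntax) of how the three primitives might update the mental state: 'inform' adds a belief, 'request' adds a commitment if the agent has the capability, and 'unrequest' withdraws it.

```python
beliefs = set()
capabilities = {"open_valve"}
commitments = set()

def handle(performative, content):
    if performative == "inform":
        beliefs.add(content)                 # accept the information
    elif performative == "request" and content in capabilities:
        commitments.add(content)             # take on the obligation
    elif performative == "unrequest":
        commitments.discard(content)         # refrain from the action

handle("inform", "pressure_high")
handle("request", "open_valve")
print(beliefs, commitments)  # -> {'pressure_high'} {'open_valve'}
```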




The MADKIT Agent Platform Architecture: 
Olivier Gutknecht, Jacques Ferber
 Three core concepts : agent, group, and role.


 Interaction language


 Organizations: a set of groups




Outline

1. History and perspectives on multiagents
2. Agent Architecture
3. Agent Oriented Software Engineering
4. Mobility
5. Autonomy and Teaming

Mobile Agents
[Singh, 1999] A computation that can change its location of execution (given a suitable underlying execution environment), both
 code
 program state

 [Papaioannou, 1999] A software agent that is able to migrate from one host to another in a computer network is a mobile agent.

 [IBM] Mobile network agents are programs that can be dispatched from one computer and transported to a remote computer for execution. Arriving at the remote computer, they present their credentials and obtain access to local services and data. The remote computer may also serve as a broker by bringing together agents with similar interests and compatible goals, thus providing a meeting place at which agents can interact.




Mobile Agent Origins

‐ Batch Jobs

‐ Distributed Operating System (migration is 
    transparent to the user.)

‐ Telescript [General Magic, Inc. USA, 1994] 
    migration of an executing program for 
    use of local resources


A paradigm shift:
        Distributed Systems versus mobile code
Instead of masking the physical location of a component, mobile code 
infrastructures make it evident.

Code mobility is geared for Internet‐scale systems ... unreliable

Programming is location aware ... location is available to the programmer

Mobility is a choice ...migration is controlled by the programmer or at runtime by the
agent

Load balancing is not the driving force ...instead flexibility, autonomy and 
disconnected operations are key factors



A paradigm comparison: 
  2 Components, 2 Hosts, a Logic, a Resource, Messages, a Task

Remote Computation
In remote computation, components in the system are static, whereas logic can be mobile. For example, component A, at host HA, contains the required logic L to perform a particular task T, but does not have access to the required resources R to complete the task. R can be found at HB, so A forwards the logic to component B, which also resides at HB. B then executes the logic before returning the result to A. E.g., batch entries.

[Diagram: A at HA holds L and the task T; the resource R is at HB. A sends L to B at HB, B computes, and the result is returned to A.]
A paradigm comparison: 
  2 Components, 2 Hosts, a Logic, a Resource, Messages, a Task

Code on Demand
In Code on Demand, component A already has access to resource R. However, A (or any other component at host HA) has no idea of the logic required to perform task T. Thus, A sends a request to B for it to forward the logic L. Upon receipt, A is then able to perform T. An example of this abstraction is a Java applet, in which a piece of code is downloaded from a web server by a web browser and then executed.

[Diagram: A at HA holds R; the logic L is at HB. A requests L from B, B sends L, and A then computes locally.]
A paradigm comparison: 
2 Components, 2 Hosts, a Logic, a Resource, Messages, a Task

Mobile Agents
With the mobile agent paradigm, component A already has the logic L required to perform task T, but again does not have access to resource R. This resource can be found at HB. This time, however, instead of forwarding/requesting L to/from another component, component A itself is able to migrate to the new host and interact locally with R to perform T. This method is quite different from the previous two examples: in this instance an entire component migrates, along with its associated data and logic. This is potentially the most interesting example of all the mobile code abstractions. There are currently no contemporary examples of this approach, but we examine its capabilities in the next section. (A sketch contrasting this with client/server follows the figure below.)

[Diagram: A at HA holds L; R is at HB. Component A itself moves to HB, computes with R locally, and then returns to HA.]
A paradigm comparison: 
 2 Components, 2 Hosts, a Logic, a Resource, Messages, a Task

Client/Server
Client/Server is a well known architectural abstraction that has been employed since the first computers began to communicate. In this example, B has the logic L to carry out Task T, and has access to resource R. Component A has none of these, and is unable to transport itself. Therefore, for A to obtain the result of T, it must resort to sending a request to B, prompting B to carry out Task T. The result is then communicated back to A when completed.

[Diagram: A at HA has neither L nor R; B at HB has both. A sends a request, B computes Task T, and the result is returned to A.]
Problems in distributed 
Systems: J. Waldo
Latency: most obvious, least worrisome
Memory access: unable to use pointers; because memory is both local and remote, call types have to differ; no possibility of shared memory
Partial failure: a defining problem of distributed computing; not possible in local computing
Concurrency: adds significant overhead to the programming model; no programmer control of method invocation order

We should treat local and remote objects differently.
Waldo, J., Wyant, G., Wollrath, A., Kendall, S., “A note on distributed computing”, Sun Microsystems Technical Report SML 94‐29, 1994.

Mobile Agent Toolkit from IBM: 
Basic concepts
 Aglet. An aglet is a mobile Java object that visits aglet‐enabled hosts in a computer network. It is autonomous, since it runs in its own thread of execution after arriving at a host, and reactive, because of its ability to respond to incoming messages.
 Proxy. A proxy is a representative of an aglet. It serves as a shield for the aglet that protects the aglet from direct access to its public methods. The proxy also provides location transparency for the aglet; that is, it can hide the aglet's real location.
 Context. A context is an aglet's workplace. It is a stationary object that provides a means for maintaining and managing running aglets in a uniform execution environment where the host system is secured against malicious aglets. One node in a computer network may run multiple servers and each server may host multiple contexts. Contexts are named and can thus be located by the combination of their server's address and their name.
 Message. A message is an object exchanged between aglets. It allows for synchronous as well as asynchronous message passing between aglets. Message passing can be used by aglets to collaborate and exchange information in a loosely coupled fashion.
 Future reply. A future reply is used in asynchronous message sending as a handler to receive a result later asynchronously.
 Identifier. An identifier is bound to each aglet. This identifier is globally unique and immutable throughout the lifetime of the aglet.
Mobile Agent Toolkit from 
IBM: Basic operations
Creation. The creation of an aglet takes place in a context. The new aglet is assigned an identifier, inserted into the context, and initialized. The aglet starts executing as soon as it has been successfully initialized.
Cloning. The cloning of an aglet produces an almost identical copy of the original aglet in the same context. The only differences are the assigned identifier and the fact that execution restarts in the new aglet. Note that execution threads are not cloned.
Dispatching. Dispatching an aglet from one context to another will remove it from its current context and insert it into the destination context, where it will restart execution (execution threads do not migrate). We say that the aglet has been “pushed” to its new context.
Retraction. The retraction of an aglet will pull (remove) it from its current context and insert it into the context from which the retraction was requested.
Activation and deactivation. The deactivation of an aglet is the ability to temporarily halt its execution and store its state in secondary storage. Activation of an aglet will restore it in a context.
Disposal. The disposal of an aglet will halt its current execution and remove it from its current context.
Messaging. Messaging between aglets involves sending, receiving, and handling messages synchronously as well as asynchronously.
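
A minimal sketch of these lifecycle operations as a toy state machine in plain Python (purely illustrative; this is not the IBM Aglets Java API):

```python
from enum import Enum, auto

class AgletState(Enum):
    ACTIVE = auto()
    DEACTIVATED = auto()
    DISPOSED = auto()

class ToyAglet:
    """Illustrative lifecycle only: create, dispatch, retract, (de)activate, dispose."""
    def __init__(self, identifier, context):
        self.identifier = identifier          # globally unique and immutable
        self.context = context                # the aglet's current workplace
        self.state = AgletState.ACTIVE        # starts executing after creation

    def dispatch(self, destination):
        self.context = destination            # "pushed" to the destination context

    def retract(self, requester_context):
        self.context = requester_context      # pulled back by the requester

    def deactivate(self):
        self.state = AgletState.DEACTIVATED   # state stored in secondary storage

    def activate(self):
        self.state = AgletState.ACTIVE        # restored in a context

    def dispose(self):
        self.state = AgletState.DISPOSED      # halted and removed from its context

a = ToyAglet("aglet-001", "contextA")
a.dispatch("contextB")
print(a.context, a.state.name)  # -> contextB ACTIVE
```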

Outline

1. History and perspectives on multiagents
2. Agent Architecture
3. Agent Oriented Software Engineering
4. Mobility
5. Autonomy and Teaming

Autonomy
•Target and Context: Autonomy is only meaningful in terms of
specific targets and within given contexts.


•Capability: Autonomy only makes sense if an agent has a capability toward a target. E.g., a rock is not autonomous.


•Sources of Autonomy:
       Endogenous: Self liberty, Desire, Experience, Motivations
       Exogenous: Social, Deontic liberty, Environments


•Implementations: Off-line and by design, online with fixed cost analysis, online learning

Perspectives on Autonomy



[Diagram: perspectives on autonomy – Cognitive Science and AI, Communication, Organizational Science, and Software Engineering]




Autonomy and Communication
Detection and expression of autonomies requires a shared understanding of social roles and personal relationships among the participating agents; e.g., agents with positive relationships would change their autonomies to accommodate one another.

The form of the directive holds clues for autonomy, e.g., the specificity in “Do x with a wrench and slowly.”

The content of the directive and the responses to it contribute to the autonomy, e.g., “Do x soon.”

An agent’s internal mechanism for autonomy determination affects the detection, expression, and harmony of autonomies, e.g., an agent’s moods, drives, temperaments, …
Situated Autonomy and Action Selection
[Diagram: situated autonomy and action selection – enablers, sensory data, and communications feed beliefs; beliefs feed situated autonomy, which weighs a physical goal against a communication goal and yields a physical act intention or a communication intention]
Shared Autonomy between an Air Traffic Control assistant agent and the human operator - 1999




Autonomy Computation

Collision:
Autonomy = (CollisionPriority / 4.0) + (((|CollisionPriority – 4.0|) * t) / T)

Landing:
If 3.0 < LandingPriority <= 4.0:
    Autonomy = 1.0

If LandingPriority < 3.0:
    Autonomy = (LandingPriority / 4.0) + (((|LandingPriority – 4.0|) * t) / 2)
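
A direct transcription of these formulas into plain Python (t, T, and the 0–4 priority scale are taken as given by the slide; the case LandingPriority = 3.0 is left undefined, as in the original):

```python
def collision_autonomy(collision_priority, t, T):
    return (collision_priority / 4.0) + (abs(collision_priority - 4.0) * t) / T

def landing_autonomy(landing_priority, t):
    if 3.0 < landing_priority <= 4.0:
        return 1.0
    if landing_priority < 3.0:
        return (landing_priority / 4.0) + (abs(landing_priority - 4.0) * t) / 2
    raise ValueError("priority outside the ranges given on the slide")

print(collision_autonomy(2.0, t=5.0, T=10.0))  # -> 1.5
print(landing_autonomy(2.0, t=1.0))            # -> 1.5
```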
Team- Building Intuition
•Drivers on the road are generally not a team

•Race driving in a “draft” is a team

•11 soccer players declaring to be a team are a
team

•Herding sheep is generally a team
Agents change their autonomy, roles, coordination strategies


•A String Quartet is a team
Well organized and practiced
Team- Phil Cohen, et al
Phil Cohen, et al:
Shared goal and shared mental states
Communication in the form of Speech Acts is required for team formation

Steps to become a team:
1. Weak Achievement Goal (WAG) relative to q and with respect to a team to bring about p, if either of these conditions holds:
•The agent has a normal achievement goal to bring about p; that is, the agent does not yet believe that p is true and has p eventually being true as a goal.
•The agent believes that p is true, will never be true, or is irrelevant (that is, q is false), but has as a goal that the status of p be mutually believed by all the team members.
2. Joint Persistent Goal (or JPG) relative to q to achieve p just in case:
1. They mutually believe that p is currently false;
2. They mutually know they all want p to eventually be true;
3. It is true (and mutual knowledge) that until they come to mutually believe either that p is true, that p will never be true, or that q is false, they will continue to mutually believe that they each have p as a weak achievement goal.
Team- Phil Cohen, et al

•Requiring Speech Act Communication is too strong

•Requiring Mutual Knowledge is too strong

•Requiring agents to remain in a team until everyone knows about the team-qualifying condition is too strong




Team- Michael Wooldridge
With respect to agent i’s desire φ, there is potential for cooperation iff:
1. there is some group g such that i believes that g can jointly achieve φ; and either
2. i can’t achieve φ in isolation; or
3. i believes that for every action α that it can perform that achieves φ, it has a desire of not performing α.

i performs speech act FormTeam to form a team iff:
1. i informs team g that the team J-can φ; and
2. i requests team g to perform φ

Team g is a PreTeam iff:
1. g mutually believes that it J-can φ
2. g mutually intends φ
Team- Michael Wooldridge

•Onset of cooperative attitude is independent of knowing about
specific individuals

•Assuming the agent knows about g is too simplistic

•Requiring Speech Act Communication is too strong

•Requiring Mutual Knowledge is too strong




Team- Munindar Singh

<agents, social commitments, coordination relationships>

Social commitments: <debtor, creditor, context, discharge condition> (see the sketch below)
Operators: Create, Discharge, Cancel, Release, Delegate, Assign

Coordination relationships about events:
e is required by f
e disables f
e feeds or enables f
e conditionally feeds f
…
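
A minimal sketch of the commitment tuple and a few of its operators in plain Python (illustrative field and method names, not Singh's formal notation):

```python
from dataclasses import dataclass

@dataclass
class SocialCommitment:
    debtor: str               # agent that owes the commitment
    creditor: str             # agent the commitment is owed to
    context: str              # e.g., the team or institution within which it holds
    discharge_condition: str  # what must come about for it to be discharged
    active: bool = True

    def discharge(self) -> None:           # the condition has been brought about
        self.active = False

    def cancel(self) -> None:              # the debtor withdraws
        self.active = False

    def delegate(self, new_debtor: str) -> None:
        self.debtor = new_debtor

    def assign(self, new_creditor: str) -> None:
        self.creditor = new_creditor

c = SocialCommitment("agent_a", "agent_b", "team_x", "report_delivered")
c.delegate("agent_c")
print(c)
```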

Agent as a member of a group...

[Diagram: the agent handles roles and honors obligations; roles relate to goals and plans; the agent partakes in an institution and an organization, which specify norms and values (terminal goals); norms are shared by, and relied on within, the group of which the agent is a member]
The big picture
[Diagram: the big picture – relationships among Norms, Values, Obligations (i.e., responsibility), Autonomy, Dependence, Delegation, Control, Trust, and Power, mediated by consent, perfect and weak agreement, and coordination]
Concluding Remarks

 There are many uses for
   Agents
   Agent‐based Systems
   Agent Frameworks
 Many open problem areas are available
   Theoretical issues for modeling social elements such as autonomy, power, trust, dependency, norms, preference, responsibilities, security, …
   Adaptation and learning issues
   Communication and conversation issues

Multiagent systems (and their use in industry)
 
Study and development of methods and tools for testing, validation and verif...
 Study and development of methods and tools for testing, validation and verif... Study and development of methods and tools for testing, validation and verif...
Study and development of methods and tools for testing, validation and verif...
 
Multi agency working
Multi agency workingMulti agency working
Multi agency working
 

Similaire à Multi-agent systems

Unit 4 Artificial Intelligent Agent.pptx
Unit 4 Artificial Intelligent Agent.pptxUnit 4 Artificial Intelligent Agent.pptx
Unit 4 Artificial Intelligent Agent.pptxssuser40ae5e
 
Artificial intelligence(03)
Artificial intelligence(03)Artificial intelligence(03)
Artificial intelligence(03)Nazir Ahmed
 
A.i lecture 04
A.i lecture 04A.i lecture 04
A.i lecture 04yarafghani
 
leewayhertz.com-Auto-GPT Unleashing the power of autonomous AI agents.pdf
leewayhertz.com-Auto-GPT Unleashing the power of autonomous AI agents.pdfleewayhertz.com-Auto-GPT Unleashing the power of autonomous AI agents.pdf
leewayhertz.com-Auto-GPT Unleashing the power of autonomous AI agents.pdfKristiLBurns
 
Lecture 1 about the Agents in AI & .pptx
Lecture 1 about the Agents in AI & .pptxLecture 1 about the Agents in AI & .pptx
Lecture 1 about the Agents in AI & .pptxbk996051
 
Advanced user agent v clean
Advanced user agent v cleanAdvanced user agent v clean
Advanced user agent v cleanSTIinnsbruck
 
Group 1 (3009, 01, 02, 03, 04) interacting with agents, direct manipulation t...
Group 1 (3009, 01, 02, 03, 04) interacting with agents, direct manipulation t...Group 1 (3009, 01, 02, 03, 04) interacting with agents, direct manipulation t...
Group 1 (3009, 01, 02, 03, 04) interacting with agents, direct manipulation t...Prateek Soni
 
Expert System Lecture Notes Chapter 1,2,3,4,5 - Dr.J.VijiPriya
 Expert System Lecture Notes Chapter 1,2,3,4,5 - Dr.J.VijiPriya Expert System Lecture Notes Chapter 1,2,3,4,5 - Dr.J.VijiPriya
Expert System Lecture Notes Chapter 1,2,3,4,5 - Dr.J.VijiPriyaVijiPriya Jeyamani
 
Intelligent agent
Intelligent agent Intelligent agent
Intelligent agent Arvind sahu
 

Similaire à Multi-agent systems (20)

Agent-based System - Introduction
Agent-based System - IntroductionAgent-based System - Introduction
Agent-based System - Introduction
 
Agent-based System - Introduction
Agent-based System - IntroductionAgent-based System - Introduction
Agent-based System - Introduction
 
Unit 4 Artificial Intelligent Agent.pptx
Unit 4 Artificial Intelligent Agent.pptxUnit 4 Artificial Intelligent Agent.pptx
Unit 4 Artificial Intelligent Agent.pptx
 
Artificial intelligence(03)
Artificial intelligence(03)Artificial intelligence(03)
Artificial intelligence(03)
 
Agents(1).ppt
Agents(1).pptAgents(1).ppt
Agents(1).ppt
 
Software agents
Software agentsSoftware agents
Software agents
 
Agent uml
Agent umlAgent uml
Agent uml
 
Norms Brmas08 V2
Norms Brmas08 V2Norms Brmas08 V2
Norms Brmas08 V2
 
Intro to Agent-based System
Intro to Agent-based SystemIntro to Agent-based System
Intro to Agent-based System
 
Introductionto agents
Introductionto agentsIntroductionto agents
Introductionto agents
 
A.i lecture 04
A.i lecture 04A.i lecture 04
A.i lecture 04
 
Ao03302460251
Ao03302460251Ao03302460251
Ao03302460251
 
leewayhertz.com-Auto-GPT Unleashing the power of autonomous AI agents.pdf
leewayhertz.com-Auto-GPT Unleashing the power of autonomous AI agents.pdfleewayhertz.com-Auto-GPT Unleashing the power of autonomous AI agents.pdf
leewayhertz.com-Auto-GPT Unleashing the power of autonomous AI agents.pdf
 
Presentation_DAI
Presentation_DAIPresentation_DAI
Presentation_DAI
 
Lecture 1 about the Agents in AI & .pptx
Lecture 1 about the Agents in AI & .pptxLecture 1 about the Agents in AI & .pptx
Lecture 1 about the Agents in AI & .pptx
 
Advanced user agent v clean
Advanced user agent v cleanAdvanced user agent v clean
Advanced user agent v clean
 
Lecture 2 Agents.pptx
Lecture 2 Agents.pptxLecture 2 Agents.pptx
Lecture 2 Agents.pptx
 
Group 1 (3009, 01, 02, 03, 04) interacting with agents, direct manipulation t...
Group 1 (3009, 01, 02, 03, 04) interacting with agents, direct manipulation t...Group 1 (3009, 01, 02, 03, 04) interacting with agents, direct manipulation t...
Group 1 (3009, 01, 02, 03, 04) interacting with agents, direct manipulation t...
 
Expert System Lecture Notes Chapter 1,2,3,4,5 - Dr.J.VijiPriya
 Expert System Lecture Notes Chapter 1,2,3,4,5 - Dr.J.VijiPriya Expert System Lecture Notes Chapter 1,2,3,4,5 - Dr.J.VijiPriya
Expert System Lecture Notes Chapter 1,2,3,4,5 - Dr.J.VijiPriya
 
Intelligent agent
Intelligent agent Intelligent agent
Intelligent agent
 

Plus de R A Akerkar

Rajendraakerkar lemoproject
Rajendraakerkar lemoprojectRajendraakerkar lemoproject
Rajendraakerkar lemoprojectR A Akerkar
 
Big Data and Harvesting Data from Social Media
Big Data and Harvesting Data from Social MediaBig Data and Harvesting Data from Social Media
Big Data and Harvesting Data from Social MediaR A Akerkar
 
Can You Really Make Best Use of Big Data?
Can You Really Make Best Use of Big Data?Can You Really Make Best Use of Big Data?
Can You Really Make Best Use of Big Data?R A Akerkar
 
Big data in Business Innovation
Big data in Business Innovation   Big data in Business Innovation
Big data in Business Innovation R A Akerkar
 
What is Big Data ?
What is Big Data ?What is Big Data ?
What is Big Data ?R A Akerkar
 
Connecting and Exploiting Big Data
Connecting and Exploiting Big DataConnecting and Exploiting Big Data
Connecting and Exploiting Big DataR A Akerkar
 
Linked open data
Linked open dataLinked open data
Linked open dataR A Akerkar
 
Semi structure data extraction
Semi structure data extractionSemi structure data extraction
Semi structure data extractionR A Akerkar
 
Big data: analyzing large data sets
Big data: analyzing large data setsBig data: analyzing large data sets
Big data: analyzing large data setsR A Akerkar
 
Description logics
Description logicsDescription logics
Description logicsR A Akerkar
 
artificial intelligence
artificial intelligenceartificial intelligence
artificial intelligenceR A Akerkar
 
Case Based Reasoning
Case Based ReasoningCase Based Reasoning
Case Based ReasoningR A Akerkar
 
Semantic Markup
Semantic Markup Semantic Markup
Semantic Markup R A Akerkar
 
Intelligent natural language system
Intelligent natural language systemIntelligent natural language system
Intelligent natural language systemR A Akerkar
 
Knowledge Organization Systems
Knowledge Organization SystemsKnowledge Organization Systems
Knowledge Organization SystemsR A Akerkar
 
Rational Unified Process for User Interface Design
Rational Unified Process for User Interface DesignRational Unified Process for User Interface Design
Rational Unified Process for User Interface DesignR A Akerkar
 
Unified Modelling Language
Unified Modelling LanguageUnified Modelling Language
Unified Modelling LanguageR A Akerkar
 

Plus de R A Akerkar (20)

Rajendraakerkar lemoproject
Rajendraakerkar lemoprojectRajendraakerkar lemoproject
Rajendraakerkar lemoproject
 
Big Data and Harvesting Data from Social Media
Big Data and Harvesting Data from Social MediaBig Data and Harvesting Data from Social Media
Big Data and Harvesting Data from Social Media
 
Can You Really Make Best Use of Big Data?
Can You Really Make Best Use of Big Data?Can You Really Make Best Use of Big Data?
Can You Really Make Best Use of Big Data?
 
Big data in Business Innovation
Big data in Business Innovation   Big data in Business Innovation
Big data in Business Innovation
 
What is Big Data ?
What is Big Data ?What is Big Data ?
What is Big Data ?
 
Connecting and Exploiting Big Data
Connecting and Exploiting Big DataConnecting and Exploiting Big Data
Connecting and Exploiting Big Data
 
Linked open data
Linked open dataLinked open data
Linked open data
 
Semi structure data extraction
Semi structure data extractionSemi structure data extraction
Semi structure data extraction
 
Big data: analyzing large data sets
Big data: analyzing large data setsBig data: analyzing large data sets
Big data: analyzing large data sets
 
Description logics
Description logicsDescription logics
Description logics
 
Data Mining
Data MiningData Mining
Data Mining
 
Link analysis
Link analysisLink analysis
Link analysis
 
artificial intelligence
artificial intelligenceartificial intelligence
artificial intelligence
 
Case Based Reasoning
Case Based ReasoningCase Based Reasoning
Case Based Reasoning
 
Semantic Markup
Semantic Markup Semantic Markup
Semantic Markup
 
Intelligent natural language system
Intelligent natural language systemIntelligent natural language system
Intelligent natural language system
 
Data mining
Data miningData mining
Data mining
 
Knowledge Organization Systems
Knowledge Organization SystemsKnowledge Organization Systems
Knowledge Organization Systems
 
Rational Unified Process for User Interface Design
Rational Unified Process for User Interface DesignRational Unified Process for User Interface Design
Rational Unified Process for User Interface Design
 
Unified Modelling Language
Unified Modelling LanguageUnified Modelling Language
Unified Modelling Language
 

Dernier

Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentInMediaRes1
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 
PSYCHIATRIC History collection FORMAT.pptx
PSYCHIATRIC   History collection FORMAT.pptxPSYCHIATRIC   History collection FORMAT.pptx
PSYCHIATRIC History collection FORMAT.pptxPoojaSen20
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdfssuser54595a
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991RKavithamani
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptxVS Mahajan Coaching Centre
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Educationpboyjonauth
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)eniolaolutunde
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsKarinaGenton
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingTechSoup
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppCeline George
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdfSoniaTolstoy
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Celine George
 

Dernier (20)

Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
Alper Gobel In Media Res Media Component
Alper Gobel In Media Res Media ComponentAlper Gobel In Media Res Media Component
Alper Gobel In Media Res Media Component
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
PSYCHIATRIC History collection FORMAT.pptx
PSYCHIATRIC   History collection FORMAT.pptxPSYCHIATRIC   History collection FORMAT.pptx
PSYCHIATRIC History collection FORMAT.pptx
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdfTataKelola dan KamSiber Kecerdasan Buatan v022.pdf
TataKelola dan KamSiber Kecerdasan Buatan v022.pdf
 
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
18-04-UA_REPORT_MEDIALITERAСY_INDEX-DM_23-1-final-eng.pdf
 
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
Industrial Policy - 1948, 1956, 1973, 1977, 1980, 1991
 
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions  for the students and aspirants of Chemistry12th.pptxOrganic Name Reactions  for the students and aspirants of Chemistry12th.pptx
Organic Name Reactions for the students and aspirants of Chemistry12th.pptx
 
Introduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher EducationIntroduction to ArtificiaI Intelligence in Higher Education
Introduction to ArtificiaI Intelligence in Higher Education
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)Software Engineering Methodologies (overview)
Software Engineering Methodologies (overview)
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
Science 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its CharacteristicsScience 7 - LAND and SEA BREEZE and its Characteristics
Science 7 - LAND and SEA BREEZE and its Characteristics
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdfBASLIQ CURRENT LOOKBOOK  LOOKBOOK(1) (1).pdf
BASLIQ CURRENT LOOKBOOK LOOKBOOK(1) (1).pdf
 
Staff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSDStaff of Color (SOC) Retention Efforts DDSD
Staff of Color (SOC) Retention Efforts DDSD
 
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
Incoming and Outgoing Shipments in 1 STEP Using Odoo 17
 

Multi-agent systems

  • 8. Agent Oriented Programming, Yoav Shoham. AOP principles: 1. The state of an object in OO programming has no generic structure. The state of an agent has a "mentalistic" structure: it consists of mental components such as beliefs and commitments. 2. Messages in object-oriented programming are coded in an application-specific, ad-hoc manner. A message in AOP is coded as a "speech act" according to a standard agent communication language that is application-independent.
  • 9. Agent Oriented Programming Extends Peter Chen's ER model, Gerd Wagner. • Different entities may belong to different epistemic categories. There are agents, events, actions, commitments, claims, and objects. • We distinguish between physical and communicative actions/events. Actions create events, but not all events are created by actions. • Some of these modeling concepts are indexical, that is, they depend on the perspective chosen: in the perspective of a particular agent, actions of other agents are viewed as events, and commitments of other agents are viewed as claims against them.
  • 10. Agent Oriented Programming Extends Peter Chen's ER model, Gerd Wagner. • In the internal perspective of an agent, a commitment refers to a specific action to be performed in due time, while a claim refers to a specific event that is created by an action of another agent and has to occur in due time. • Communication is viewed as asynchronous point-to-point message passing. We take the expressions receiving a message and sending a message as synonyms of perceiving a communication event and performing a communication act. • There are six designated relationships in which specifically agents, but not objects, participate: only an agent perceives environment events, receives and sends messages, does physical actions, has a Commitment to perform some action in due time, and has a Claim that some action event will happen in due time.
  • 11. Agent Oriented Programming Extends Peter Chen's ER model, Gerd Wagner. An institutional agent consists of a certain number of (institutional, artificial and human) internal agents acting on behalf of it. An institutional agent can only perceive and act through its internal agents. Within an institutional agent, each internal agent has certain rights and duties. There are three kinds of duties: an internal agent may have the duty to fulfil commitments of a certain type, the duty to monitor claims of a certain type, or the duty to react to events of a certain type on behalf of the organization. A right refers to an action type such that the internal agent is permitted to perform actions of that type on behalf of the organization.
  • 12. Agent Typology. Human agents: Person, Employee, Student, Nurse, or Patient. Artificial agents: owned and run by a legal entity. Institutional agents: a bank or a hospital. Software agents: agents designed with software. Information agents: databases and the internet. Autonomous agents: non-trivial independence. Interactive/Interface agents: designed for interaction. Adaptive agents: non-trivial ability for change. Mobile agents: code and logic mobility.
  • 13. Agent Typology. Collaborative/Coordinative agents: non-trivial ability for coordination, autonomy, and sociability. Reactive agents: no internal state and shallow reasoning. Hybrid agents: a combination of deliberative and reactive components. Heterogeneous agents: a system with various agent sub-components. Intelligent/smart agents: reasoning and intentional notions. Wrapper agents: facility for interaction with non-agents.
  • 14. Multi-agency. A multi-agent system is a system that is made up of multiple agents with the following key features among agents to varying degrees of commonality and adaptation: • Social rationality • Normative patterns • System of values. E.g., HVAC, eCommerce, space missions, soccer, Intelligent Home, "talk" monitor. The motivation is coherence and distribution of resources.
  • 15. Applications of Multiagent Systems. Electronic commerce: B2B, InfoFlow, eCRM. Network and system management agents: e.g., the telecommunications companies. Real-time monitoring and control of networks: ATM. Modeling and control of transportation systems: delivery. Information retrieval: online search. Automatic meeting scheduling. Electronic entertainment: eDog.
  • 16. Applications of Multiagent Systems (cont.). Decision and logistic support agents: military and utility companies. Interest matching agents: commercial sites like Amazon.com. User assistance agents: e.g., MS Office assistant. Organizational structure agents: supply-chain ops. Industrial manufacturing and production: manufacturing cells. Personal agents: emails. Investigation of complex social phenomena such as evolution of roles, norms, and organizational structures.
  • 17. Summary of Business Benefits. • Modeling existing organizations and dynamics • Modeling and engineering e-societies • New tools for distributed knowledge-ware.
  • 18. Three views of Multi-agency. Constructivist: agents are rational in the sense of Newell's principle of individual rationality. They only perform goals which bring them a positive net benefit, without regard to other agents. These are self-interested agents. Sociality: agents are rational in the sense of Jennings' principle of social rationality. They perform actions whose joint benefit is greater than the joint loss. These are selfless, responsible agents. Reductionist: agents which accept all goals they are capable of performing. These are benevolent agents.
  • 19. Multi-agency: allied fields. DAI. MAS: (1) online social laws, (2) agents may adopt goals and adapt beyond any problem. DPS: offline social laws. CPS: (1) agents are a 'team', (2) agents 'know' the shared goal. • In DAI, a problem is automatically decomposed among distributed nodes, whereas in a multi-agent system each agent chooses whether to participate. • Distributed planning is distributed and decentralized action selection, whereas in multi-agent systems agents keep their own copies of a plan that might include others.
  • 20. Multi-agent assumptions and goals. • Agents have their own intentions and the system has distributed intentionality • Agents model other agents' mental states in their own decision making • Agent internals are less central than agent interactions • Agents deliberate over their interactions • Emergence at the agent level and at the interaction level is desirable • The goal is to find some principles for, or principled ways to explore, interactions.
  • 21. Origins of Multi-agent systems. • Carl Hewitt's Actor model, 1970 • Blackboard systems: Hearsay (1975), BB1, GBB • Distributed Vehicle Monitoring Testbed (DVMT, 1983) • Distributed AI • Distributed OS.
  • 22. MAS Orientations (diagram): Computational Organization Theory, Databases, Sociology, Formal AI, Economics, Distributed Problem Solving, Cognitive Science, Psychology, Systems Theory, Distributed Computing.
  • 23. Multi-agents in the large versus in the small. • In the small (Distributed AI): a handful of "smart" agents, with emergence in the agents. • In the large: 100+ "simple" agents, with emergence in the group: Swarms (Bugs) http://www.swarm.org/
  • 25. Abstract Architecture (figure): the agent maps environment states to actions and acts on the Environment.
  • 26. Architectures. • Deduction/logic-based • Reactive • BDI • Layered (hybrid).
  • 27. Abstract Architectures. An abstract model: <S, A, action>. An abstract view: S = {s1, s2, ...} is the set of environment states; A = {a1, a2, ...} is the set of possible actions. This allows us to view an agent as a function action : S* -> A.
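The abstract view above can be written down directly. A minimal Python sketch, with state and action names chosen purely for illustration: the agent is just a function from a run (a finite sequence of environment states) to an action.

```python
from typing import Sequence

def action(run: Sequence[str]) -> str:
    """A trivial instance of action : S* -> A: react only to the most recent state."""
    return "a1" if run and run[-1] == "s1" else "a2"

print(action(["s0", "s1"]))  # -> a1
```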
  • 28. Logic-Based Architectures. These agents have internal state. The see and next functions, together with a set of deduction rules for inference, model decision making: see : S -> P; next : D x P -> D; action : D -> A. Use logical deduction to try to prove the next action to take. Advantages: simple, elegant, logical semantics. Disadvantages: computational complexity; representing the real world.
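As a toy illustration of the see/next/action decomposition, here is a hedged Python sketch of a deduction-style agent: percepts are added to an internal database and simple rules stand in for the deduction step. The domain, predicates and rules are invented for the example, not part of any particular logic-based architecture.

```python
# Toy sketch of a deduction-style agent: see : S -> P, next : D x P -> D, action : D -> A.

def see(state):
    """Map a raw environment state to a percept (a set of ground facts)."""
    return {("dirt", state)} if state.startswith("dirty") else set()

def next_(db, percept):
    """Update the internal database with the new percept."""
    return db | percept

RULES = [  # "deduction rules": if the premise holds of the database, do the action
    (lambda db: any(fact[0] == "dirt" for fact in db), "clean"),
    (lambda db: True, "wander"),
]

def action(db):
    """Try each rule in order and return the first action that can be 'proved'."""
    for premise, act in RULES:
        if premise(db):
            return act

db = set()
for state in ["clean-room", "dirty-hall"]:
    db = next_(db, see(state))
    print(action(db))  # prints "wander", then "clean"
```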
  • 29. Reactive Architectures. Reactive architectures do not use a symbolic world model or symbolic reasoning. An example is Rod Brooks's subsumption architecture. Advantages: simplicity, computational tractability, robustness, elegance. Disadvantages: modeling limitations, correctness, realism.
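A small sketch in the spirit of subsumption: behaviours are ordered layers, and a higher-priority layer that fires suppresses everything below it. This is only an illustration of the control idea with invented behaviours, not Brooks's actual implementation.

```python
# Toy layered (subsumption-style) controller: highest-priority behaviour wins.

def avoid_obstacle(percept):
    return "turn-away" if percept.get("obstacle") else None

def pick_up_sample(percept):
    return "grab" if percept.get("sample") else None

def wander(percept):
    return "move-randomly"

LAYERS = [avoid_obstacle, pick_up_sample, wander]  # highest priority first

def act(percept):
    for behaviour in LAYERS:
        output = behaviour(percept)
        if output is not None:  # this layer fires and subsumes the layers below
            return output

print(act({"obstacle": True, "sample": True}))  # -> turn-away
print(act({"sample": True}))                    # -> grab
```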
  • 30. Reflexive Architectures: the simplest type of reactive architecture. Reflexive agents decide what to do without regard to history; they are purely reflexive: action : P -> A. Example: a thermostat, with action(s) = off if temp = OK, on otherwise.
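The thermostat rule on this slide, written out as code:

```python
# The purely reflexive rule action : P -> A from the thermostat example.
def thermostat(temp_ok: bool) -> str:
    return "off" if temp_ok else "on"

print(thermostat(True), thermostat(False))  # off on
```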
  • 31. Reflex agent without state (figure from Russell and Norvig, 1995).
  • 33. Goal-oriented agent: a more complex reactive agent (figure from Russell and Norvig, 1995).
  • 35. BDI: a Formal Method. • Belief: states, facts, knowledge, data • Desire: wish, goal, motivation (these might conflict) • Intention: (a) select actions, (b) perform actions, (c) explain choices of action (no conflicts) • Commitment: persistence of intentions and trials • Know-how: having the procedural knowledge for carrying out a task.
  • 36. Belief-Desire-Intention (figure): sensing the Environment feeds belief revision into Beliefs; generating options yields Desires; filtering yields Intentions, which drive action on the Environment.
  • 37. Why is BDI a Formal Method? • BDI is typically specified in the language of modal logic with possible-worlds semantics. • Possible worlds capture the various ways the world might develop. Since the formalism in [Wooldridge 2000] assumes at least a KD axiomatization for each of B, D and I, each of the sets of possible worlds representing B, D and I must be consistent. • A KD45 logic with the following axiom schemas (writing the modality as BDI(a, φ, t)): K: BDI(a, φ -> ψ, t) -> (BDI(a, φ, t) -> BDI(a, ψ, t)); D: BDI(a, φ, t) -> ¬BDI(a, ¬φ, t); 4: B(a, φ, t) -> B(a, B(a, φ, t), t); 5: ¬B(a, φ, t) -> B(a, ¬B(a, φ, t), t). • K and D together give the normal modal system KD.
  • 38. A simplified BDI agent algorithm:
  1. B := B0;
  2. I := I0;
  3. while true do
  4.   get next percept ρ;
  5.   B := brf(B, ρ);        // belief revision
  6.   D := options(B, D, I); // determination of desires
  7.   I := filter(B, D, I);  // determination of intentions
  8.   π := plan(B, I);       // plan generation
  9.   execute π
  10. end while
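The loop above can be sketched directly in Python. Only the control flow mirrors the slide; the bodies of brf, options, filter and plan below are toy placeholders invented for the example, not part of the original algorithm.

```python
# Minimal executable sketch of the simplified BDI loop (toy placeholder functions).

def brf(beliefs, percept):
    """Belief revision: fold the new percept into the belief set."""
    return beliefs | {percept}

def options(beliefs, desires, intentions):
    """Generate candidate desires from the current beliefs."""
    return {f"handle:{b}" for b in beliefs}

def filter_(beliefs, desires, intentions):
    """Commit to a subset of the desires as intentions."""
    return set(sorted(desires)[:1]) or intentions

def plan(beliefs, intentions):
    """Produce a (trivial) plan: one action per intention."""
    return [f"act-on({i})" for i in intentions]

def bdi_loop(percepts, B0=frozenset(), I0=frozenset()):
    B, D, I = set(B0), set(), set(I0)
    for p in percepts:          # stands in for "while true: get next percept"
        B = brf(B, p)           # belief revision
        D = options(B, D, I)    # determination of desires
        I = filter_(B, D, I)    # determination of intentions
        pi = plan(B, I)         # plan generation
        for act in pi:
            print("executing", act)

if __name__ == "__main__":
    bdi_loop(["door-open", "alarm"])
```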
  • 39. Correspondences. • Belief-goal compatibility: Des φ -> Bel φ • Goal-intention compatibility: Int φ -> Des φ • Volitional commitment: Int Do(a) -> Do(a) • Awareness of goals and intentions: Des φ -> Bel Des φ; Int φ -> Bel Int φ.
  • 40. Layered Architectures. Layering is based on a division of behaviors into automatic and controlled. Layering may be horizontal (i.e., I/O at each layer) or vertical (i.e., I/O is dealt with by a single layer). Advantages: these architectures are popular and give a fairly intuitive modeling of behavior. Disadvantages: they are complex and use non-uniform representations.
  • 41. Outline. 1. History and perspectives on multiagents 2. Agent Architecture 3. Agent Oriented Software Engineering 4. Mobility 5. Autonomy and Teaming.
  • 42. Agent-Oriented Software Engineering. AOSE is an approach to developing software using agent-oriented abstractions that model high-level interactions and relationships. Agents are used to model run-time decisions about the nature and scope of interactions that are not known ahead of time.
  • 43. Designing Agents: Recommendations from H. Van Dyke Parunak's (1996) "Go to the Ant": Engineering Principles from Natural Multi-Agent Systems, Annals of Operations Research, special issue on AI and Management Science.
  1. Agents should correspond to things in the problem domain rather than to abstract functions.
  2. Agents should be small in mass (a small fraction of the total system), time (able to forget), and scope (avoiding global knowledge and action).
  3. The agent community should be decentralized, without a single point of control or failure.
  4. Agents should be neither homogeneous nor incompatible, but diverse. Randomness and repulsion are important tools for establishing and maintaining this diversity.
  5. Agent communities should include a dissipative mechanism to whose flow they can orient themselves, thus leaking entropy away from the macro level at which they do useful work.
  6. Agents should have ways of caching and sharing what they learn about their environment, whether at the level of the individual, the generational chain, or the overall community organization.
  7. Agents should plan and execute concurrently rather than sequentially.
  • 44. Organizations. Human organizations are several agents engaged in multiple goal-directed tasks, with distinct knowledge, culture, memories, history, and capabilities, and separate legal standing from that of individual agents. Computational Organization Theory (COT) models information production and manipulation in organizations of human and computational agents.
  • 45. Management of Organizational Structure. • Organizational constructs are modeled as entities in multiagent systems • Multiagent systems have built-in mechanisms for flexibly forming, maintaining, and abandoning organizations • Multiagent systems can provide a variety of stable intermediary forms in rapid systems development.
  • 46. 7.2.1 Agent and Agency.
  • 47. AOSE Considerations. • What, how many, and what structure of agents? • Model of the environment? • Communication? Protocols? Relationships? • Coordination?
  • 48. Stages of Agent-Oriented Software Engineering. A. Requirements: provided by the user. B. Analysis: objectives and invariants. C. Design: agents and interactions. D. Implementation: tools and techniques.
  • 49. KAoS, Bradshaw, et al. Knowledge (facts) represents beliefs in which the agent has confidence; facts and beliefs may be held privately or be shared. Desires represent goals and preferences that motivate the agent to act. Intentions represent a commitment to perform an action. There is no exact description of capabilities. Life cycle: birth, life and death (also a cryogenic state). Agent types: KAoS, Mediation (between KAoS and the outside), Proxy (mediator between two KAoS agents), Domain Manager (agent registration), and Matchmaker (mediator of services). Omitted: emotions, learning, agent relationships, fraud, trust, security.
  • 50. Gaia, Wooldridge, et al. The Analysis phase. Roles model: permissions (resources); responsibilities (safety properties and liveness properties); protocols. Interactions model: purpose, initiator, responder, inputs, outputs, and processing of the conversation. The Design phase: agent model, services model, acquaintance model. Omitted: trust, fraud, commitment, and security.
  • 51. TAEMS: Keith Decker and Victor Lesser. The agents are simple processors. The internal structure of agents includes (a) beliefs (knowledge) about task structure, (b) states, (c) actions, and (d) a strategy, constantly being updated, of what methods the agent intends to execute at what time. Omitted: roles, skills, or resources.
  • 52. BDI-based Agent-Oriented Methodology (KGR), Kinny, Georgeff and Rao. External viewpoint: the social system structure and dynamics; Agent Model + Interaction Model; independent of the agent cognitive model and communication. Internal viewpoint: the Belief Model, the Goal Model, and the Plan Model. Beliefs: the environment, internal state, the action repertoire. Goals: possible goals, desired events. Plans: state charts.
  • 53. MaSE, Multi-agent Systems Engineering, DeLoach. Domain Level Design (use AgML for the Agent Type Diagram, Communication Hierarchy Diagrams and Communication Class Diagrams). Agent Level Design (use AgDL for agent conversations). Component Design: AgDL. System Design: AgML. Languages: AgML (Agent Modeling Language, a graphical language); AgDL (Agent Definition Language, the system-level behavior and the internal behavior of the agent). Rich in communication, poor in social structures.
  • 54. Scott DeLoach's MaSE (figure): Sequence Diagrams, Roles and Tasks feed the Agent Class Diagram, the Conversation Diagram, the Internal Agent Diagram, and the Deployment Diagram.
  • 55. The TOVE Project (1998); Mark Fox, et al. • Organizational hierarchy: divisions and sub-divisions • Goals, sub-goals, and their hierarchy (using AND & OR) • Roles, their relations to skills, goals, authority, processes, policies • Skills, and their link to roles • Agents, their affiliation with teams and divisions; commitment, empowerment • Communication links between agents: sending and receiving information. Communication at three levels: information, intentions (ask, tell, deny, ...), and conventions (semantics). Levels 2 & 3 are designed using speech acts. • Teams as temporary groups of agents • Activities and their states, the connection to resources and the constraints • Resources and their relation to activities and activity states • Constraints on activities (what activities can occur in a specific situation and at a specific time) • Time and the duration of activities. Actions occur at a point in time and they have duration. • Situation. Shortcomings: central decision making.
  • 56. Agent-Oriented Programming (AOP): Yoav Shoham. • AGENT0 is the first AOP language and its logical component is a quantified multi-modal logic. • Mental state: beliefs, capabilities, and commitments (or obligations). • Communication: 'request' (to perform an action), 'unrequest' (to refrain from an action), and 'inform' (to pass information).
  • 57. The MADKIT Agent Platform Architecture: Olivier Gutknecht, Jacques Ferber. Three core concepts: agent, group, and role. Interaction language. Organizations: a set of groups.
  • 58. Outline. 1. History and perspectives on multiagents 2. Agent Architecture 3. Agent Oriented Software Engineering 4. Mobility 5. Autonomy and Teaming.
  • 59. Mobile Agents. [Singh, 1999] A computation that can change its location of execution (given a suitable underlying execution environment), both code and program state. [Papaioannou, 1999] A software agent that is able to migrate from one host to another in a computer network is a mobile agent. [IBM] Mobile network agents are programs that can be dispatched from one computer and transported to a remote computer for execution. Arriving at the remote computer, they present their credentials and obtain access to local services and data. The remote computer may also serve as a broker by bringing together agents with similar interests and compatible goals, thus providing a meeting place at which agents can interact.
  • 60. Mobile Agent Origins. • Batch jobs • Distributed operating systems (migration is transparent to the user) • Telescript [General Magic, Inc. USA, 1994]: migration of an executing program for use of local resources.
  • 61. A paradigm shift: distributed systems versus mobile code. Instead of masking the physical location of a component, mobile code infrastructures make it evident. Code mobility is geared for Internet-scale systems ... unreliable. Programming is location aware ... location is available to the programmer. Mobility is a choice ... migration is controlled by the programmer or at runtime by the agent. Load balancing is not the driving force ... instead flexibility, autonomy and disconnected operations are key factors.
  • 62. A paradigm comparison: 2 components, 2 hosts, a logic, a resource, messages, a task. Remote computation: components in the system are static, whereas logic can be mobile. For example, component A, at host HA, contains the required logic L to perform a particular task T, but does not have access to the required resource R to complete the task. R can be found at HB, so A forwards the logic to component B, which also resides at HB. B then executes the logic before returning the result to A. E.g., batch entries.
  • 63. A paradigm comparison (cont.). Code on demand: component A already has access to resource R. However, A (or any other component at host HA) has no idea of the logic required to perform task T. Thus, A sends a request to B for it to forward the logic L. Upon receipt, A is then able to perform T. An example of this abstraction is a Java applet, in which a piece of code is downloaded from a web server by a web browser and then executed.
  • 64. A paradigm comparison (cont.). Mobile agents: with the mobile agent paradigm, component A already has the logic L required to perform task T, but again does not have access to resource R. This resource can be found at HB. This time, however, instead of forwarding/requesting L to/from another component, component A itself is able to migrate to the new host and interact locally with R to perform T. This method is quite different from the previous two examples; in this instance an entire component is migrating, along with its associated data and logic. This is potentially the most interesting example of all the mobile code abstractions. There are currently no contemporary examples of this approach, but we examine its capabilities in the next section.
  • 65. A paradigm comparison (cont.). Client/Server: a well-known architectural abstraction that has been employed since the first computers began to communicate. In this example, B has the logic L to carry out task T and has access to resource R. Component A has none of these and is unable to transport itself. Therefore, for A to obtain the result of T, it must resort to sending a request to B, prompting B to carry out task T. The result is then communicated back to A when completed.
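The four paradigms on slides 62-65 can be contrasted with a toy sketch. The Host/component setup and all names below are invented for illustration and do not correspond to any real mobile-code framework; the point is only where the logic L runs relative to the resource R.

```python
# Toy illustration of the four mobility paradigms (hypothetical names).

class Host:
    def __init__(self, name, resource=None):
        self.name, self.resource = name, resource

def LOGIC(resource):                     # the logic L for task T
    return f"result computed from {resource}"

def remote_computation(ha, hb):
    """A (at HA) holds L but not R: it ships L to B (at HB), which runs it."""
    return LOGIC(hb.resource)            # L executes where R lives; the result returns to A

def code_on_demand(ha, hb):
    """A (at HA) holds R but not L: it fetches L from B, then runs it locally."""
    fetched_logic = LOGIC                # e.g., an applet downloaded from HB
    return fetched_logic(ha.resource)

def mobile_agent(ha, hb):
    """A carries L (and its state) and migrates to HB to use R locally."""
    agent_state = {"logic": LOGIC, "home": ha.name}
    return agent_state["logic"](hb.resource)

def client_server(ha, hb):
    """A holds neither L nor R: it simply asks B (which has both) for the result."""
    return LOGIC(hb.resource)            # computed entirely at HB on A's request

if __name__ == "__main__":
    HA, HB = Host("HA", resource="local data"), Host("HB", resource="remote data")
    for paradigm in (remote_computation, code_on_demand, mobile_agent, client_server):
        print(paradigm.__name__, "->", paradigm(HA, HB))
```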
  • 66. Problems in distributed systems: J. Waldo. Latency: most obvious, least worrisome. Memory access: unable to use pointers; because memory is both local and remote, call types have to differ; no possibility of shared memory. Partial failure: a defining problem of distributed computing, not possible in local computing. Concurrency: adds significant overhead to the programming model; no programmer control of method invocation order. We should treat local and remote objects differently. Waldo, J., Wyant, G., Wollrath, A., Kendall, S., "A note on distributed computing", Sun Microsystems Technical Report SML 94-29, 1994.
  • 67. Mobile Agent Toolkit from IBM: basic concepts. Aglet: an aglet is a mobile Java object that visits aglet-enabled hosts in a computer network. It is autonomous, since it runs in its own thread of execution after arriving at a host, and reactive, because of its ability to respond to incoming messages. Proxy: a proxy is a representative of an aglet. It serves as a shield that protects the aglet from direct access to its public methods. The proxy also provides location transparency for the aglet; that is, it can hide the aglet's real location. Context: a context is an aglet's workplace. It is a stationary object that provides a means for maintaining and managing running aglets in a uniform execution environment where the host system is secured against malicious aglets. One node in a computer network may run multiple servers and each server may host multiple contexts. Contexts are named and can thus be located by the combination of their server's address and their name. Message: a message is an object exchanged between aglets. It allows for synchronous as well as asynchronous message passing between aglets. Message passing can be used by aglets to collaborate and exchange information in a loosely coupled fashion. Future reply: a future reply is used in asynchronous message sending as a handler to receive a result later, asynchronously. Identifier: an identifier is bound to each aglet. This identifier is globally unique and immutable throughout the lifetime of the aglet.
  • 68. Mobile Agent Toolkit from IBM: basic operations. Creation: the creation of an aglet takes place in a context. The new aglet is assigned an identifier, inserted into the context, and initialized. The aglet starts executing as soon as it has been successfully initialized. Cloning: the cloning of an aglet produces an almost identical copy of the original aglet in the same context. The only differences are the assigned identifier and the fact that execution restarts in the new aglet. Note that execution threads are not cloned. Dispatching: dispatching an aglet from one context to another will remove it from its current context and insert it into the destination context, where it will restart execution (execution threads do not migrate). We say that the aglet has been "pushed" to its new context. Retraction: the retraction of an aglet will pull (remove) it from its current context and insert it into the context from which the retraction was requested. Activation and deactivation: the deactivation of an aglet is the ability to temporarily halt its execution and store its state in secondary storage; activation of an aglet will restore it in a context. Disposal: the disposal of an aglet will halt its current execution and remove it from its current context. Messaging: messaging between aglets involves sending, receiving, and handling messages synchronously as well as asynchronously.
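For illustration only, here is a small Python sketch of these lifecycle operations on an in-process "context". It is not the IBM Aglets Java API; the class and method names are invented to mirror the operations listed above (creation, cloning, dispatching, retraction, disposal).

```python
# Toy in-process mimic of aglet-style lifecycle operations (not the Aglets API).
import copy
import itertools

_ids = itertools.count(1)

class Agent:
    def __init__(self, name):
        self.id = next(_ids)   # globally unique identifier, never changed afterwards
        self.name = name
        self.state = {}

class Context:
    """A named workplace that hosts running agents."""
    def __init__(self, name):
        self.name, self.agents = name, {}

    def create(self, name):
        agent = Agent(name)
        self.agents[agent.id] = agent        # insert and "start" the agent
        return agent

    def clone(self, agent):
        twin = copy.deepcopy(agent)          # same state, but a fresh identifier
        twin.id = next(_ids)
        self.agents[twin.id] = twin
        return twin

    def dispatch(self, agent, destination):
        """Push the agent (state included) to another context."""
        del self.agents[agent.id]
        destination.agents[agent.id] = agent

    def retract(self, agent, source):
        """Pull the agent back from the context it currently lives in."""
        source.dispatch(agent, self)

    def dispose(self, agent):
        del self.agents[agent.id]

if __name__ == "__main__":
    home, remote = Context("home"), Context("remote")
    a = home.create("worker")
    home.dispatch(a, remote)   # the agent migrates to the remote context
    home.retract(a, remote)    # and is pulled back home
    home.dispose(a)
```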
  • 69. Outline. 1. History and perspectives on multiagents 2. Agent Architecture 3. Agent Oriented Software Engineering 4. Mobility 5. Autonomy and Teaming.
  • 70. Autonomy. • Target and context: autonomy is only meaningful in terms of specific targets and within given contexts. • Capability: autonomy only makes sense if an agent has a capability toward a target; e.g., a rock is not autonomous. • Sources of autonomy: endogenous (self-liberty, desire, experience, motivations); exogenous (social, deontic liberty, environments). • Implementations: off-line and by design; online with fixed cost analysis; online learning.
  • 71. Perspectives on Autonomy: Communication; Cognitive Science and AI; Organizational Science; Software Engineering.
  • 72. Autonomy and Communication. Detection and expression of autonomies requires a shared understanding of social roles and personal relationships among the participating agents; e.g., agents with positive relationships would change their autonomies to accommodate one another. The form of the directive holds clues for autonomy, e.g., specificity in "Do x with a wrench and slowly." The content of the directive and the responses to it contribute to the autonomy, e.g., "Do x soon." An agent's internal mechanism for autonomy determination affects the detection, expression, and harmony of autonomies, e.g., an agent's moods, drives, temperaments, ...
  • 73. Situated Autonomy and Action Selection (figure): enablers such as sensory data, communications, and beliefs feed situated autonomy, which mediates between physical and communication goals and the corresponding intentions, physical acts, and communications.
  • 74. Shared Autonomy between an Air Traffic Control assistant agent and the human operator, 1999 (figure).
  • 75. Autonomy Computation. Collision: Autonomy = (CollisionPriority / 4.0) + ((|CollisionPriority - 4.0|) * t) / T. Landing: if 3.0 < LandingPriority <= 4.0: Autonomy = 1.0; if LandingPriority < 3.0: Autonomy = (LandingPriority / 4.0) + ((|LandingPriority - 4.0|) * t) / 2.
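A direct transcription of these formulas into Python. The roles of t and T follow the slide's notation; their exact semantics (e.g., elapsed time versus a time horizon) are not specified there, so they are left as plain parameters.

```python
# Transcription of the autonomy formulas on slide 75.

def collision_autonomy(collision_priority: float, t: float, T: float) -> float:
    return (collision_priority / 4.0) + (abs(collision_priority - 4.0) * t) / T

def landing_autonomy(landing_priority: float, t: float) -> float:
    if 3.0 < landing_priority <= 4.0:
        return 1.0
    if landing_priority < 3.0:
        return (landing_priority / 4.0) + (abs(landing_priority - 4.0) * t) / 2
    raise ValueError("landing_priority outside the ranges covered on the slide")

if __name__ == "__main__":
    print(collision_autonomy(2.0, t=5.0, T=10.0))
    print(landing_autonomy(2.5, t=1.0))
```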
  • 76. Team-Building Intuition. • Drivers on the road are generally not a team • Race driving in a "draft" is a team • 11 soccer players declaring to be a team are a team • Herding sheep is generally a team: agents change their autonomy, roles, and coordination strategies • A string quartet is a team: well organized and practiced.
  • 77. Team, Phil Cohen, et al: shared goal and shared mental states. Communication in the form of speech acts is required for team formation. Steps to become a team: 1. Weak Achievement Goal (WAG) relative to q and with respect to a team: to bring about p if either of these conditions holds: • the agent has a normal achievement goal to bring about p; that is, the agent does not yet believe that p is true and has p eventually being true as a goal; • the agent believes that p is true, will never be true, or is irrelevant (that is, q is false), but has as a goal that the status of p be mutually believed by all the team members. 2. Joint Persistent Goal (JPG) relative to q to achieve p just in case: 1. they mutually believe that p is currently false; 2. they mutually know they all want p to eventually be true; 3. it is true (and mutual knowledge) that until they come to mutually believe either that p is true, that p will never be true, or that q is false, they will continue to mutually believe that they each hold p as a weak achievement goal.
  • 78. Team, Phil Cohen, et al. • Requiring speech-act communication is too strong • Requiring mutual knowledge is too strong • Requiring agents to remain in a team until everyone knows about the team-qualifying condition is too strong.
  • 79. Team, Michael Wooldridge. With respect to agent i's desires there is potential for cooperation over φ iff: 1. there is some group g such that i believes that g can jointly achieve φ; and either 2. i can't achieve φ in isolation; or 3. i believes that for every action α that it can perform that achieves φ, it has a desire of not performing α. i performs the speech act FormTeam to form a team iff: 1. i informs team g that the team J-can φ; and 2. i requests team g to perform φ. Team g is a PreTeam iff: 1. g mutually believes that it J-can φ; 2. g mutually intends φ.
  • 80. Team, Michael Wooldridge. • Onset of the cooperative attitude is independent of knowing about specific individuals • Assuming the agent knows about g is too simplistic • Requiring speech-act communication is too strong • Requiring mutual knowledge is too strong.
  • 81. Team, Munindar Singh: <agents, social commitments, coordination relationships>. Social commitments: <debtor, creditor, context, discharge condition>. Operators: Create, Discharge, Cancel, Release, Delegate, Assign. Coordination relationships about events: e is required by f; e disables f; e feeds or enables f; e conditionally feeds f; ...
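A minimal sketch of this commitment structure as data plus the six operators. The field and function names are illustrative choices, not a standard API, and the operator bodies only record the state change each operator stands for.

```python
# Sketch of Singh-style social commitments: the tuple
# <debtor, creditor, context, discharge condition> plus the six operators.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Commitment:
    debtor: str
    creditor: str
    context: str
    discharge_condition: Callable[[Dict], bool]
    active: bool = True

def create(debtor, creditor, context, condition):
    return Commitment(debtor, creditor, context, condition)

def discharge(c: Commitment, world: Dict):
    if c.discharge_condition(world):   # condition satisfied: commitment fulfilled
        c.active = False
    return c

def cancel(c: Commitment):             # the debtor withdraws
    c.active = False
    return c

def release(c: Commitment):            # the creditor lets the debtor off
    c.active = False
    return c

def delegate(c: Commitment, new_debtor):   # another agent takes on the debt
    c.debtor = new_debtor
    return c

def assign(c: Commitment, new_creditor):   # the benefit is transferred
    c.creditor = new_creditor
    return c

if __name__ == "__main__":
    c = create("courier", "customer", "delivery", lambda w: w.get("delivered", False))
    discharge(c, {"delivered": True})
    print(c)
```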
  • 82. Agent as a member of a group (figure): relations among agent, roles, obligations, goals, plans, institution, norms, values (terminal goals), group, and organization (honors, handles, specifies, partakes, member of, shares, relies on, inherits, borrows, contains).
  • 83. The big picture (figure): relations among Norms, Values, Obligations_ab (i.e., responsibility), Autonomy_a, Autonomy_b, Dependence_ba, Delegation_ba, Control_ab, Trust_ba, and Power_ab, ranging from consent and perfect agreement through weak agreement and coordination to deficiency.
  • 84. Concluding Remarks. There are many uses for: agents, agent-based systems, agent frameworks. Many open problems are available: theoretical issues for modeling social elements such as autonomy, power, trust, dependency, norms, preference, responsibilities, security, ...; adaptation and learning issues; communication and conversation issues.