Trust & Reputation
    in Multi-Agent Systems

Dr. Jordi Sabater Mir                              Dr. Javier Carbó
jsabater@iiia.csic.es                           jcarbo@inf.uc3m.es



EASSS 2012, Valencia, Spain
Dr. Jordi Sabater-Mir

IIIA – Artificial Intelligence Research Institute
   CSIC – Spanish National Research Council
Outline
• Introduction
• Approaches to control the interaction
• Computational reputation models
   – eBay
   – ReGreT
• A cognitive perspective to computational reputation
  models
   – A cognitive view on Reputation
   – Repage, a computational cognitive reputation model
   – [Properly] Integrating a [cognitive] reputation model into a
     [cognitive] agent architecture
   – Arguing about reputation concepts
Trust


 “A complete absence of trust
would prevent [one] even getting
      up in the morning.”
                      Niklas Luhmann, 1979
Trust
A couple of definitions that I like:


“Trust begins where knowledge [certainty] ends: trust provides a
basis for dealing with uncertain, complex, and threatening images of
the future.” (Luhmann, 1979)


“Trust is the outcome of observations leading to the belief that the
actions of another may be relied upon, without explicit guarantee,
to achieve a goal in a risky situation.” (Elofson, 2001)
Trust

Epistemic:

“The subjective probability by which an individual, A,
expects that another individual, B, performs a given
action on which its welfare depends” [Gambetta]

“An expectation about an uncertain behaviour” [Marsh]


Motivational:

“The decision and the act of relying on, counting on,
depending on [the trustee]” [Castelfranchi & Falcone]
Reputation


"After death, a tiger leaves behind
  his skin, a man his reputation"

                         Vietnamese proverb
Reputation
“What a social entity says about a target regarding his/her behaviour”

It is always associated with a specific behaviour/property.

• The social evaluation linked to the reputation is not necessarily a
belief of the issuer.
• Reputation cannot exist without communication.

Social entity: a set of individuals plus a set of social relations among
these individuals, or properties that identify them as a group in front of
its own members and society at large.
What is reputation good for?

• Reputation is one of the elements that allows
  us to build trust.
• Reputation also has a social dimension. It is
  not only useful for the individual but also for
  the society, as a mechanism for social order.
But... why do we need computational
   models of these concepts?
What we are talking about...

[Figure: Mr. Yellow, a potential partner.]

Two years ago...   Trust based on... direct experiences
(our own past interactions with Mr. Yellow).

Trust based on... third party information
(what Mr. Pink tells us about Mr. Yellow).

Trust based on... third party information
(what Mr. Green and Mr. Pink tell us about Mr. Yellow).

Trust based on... reputation
(what is generally said about Mr. Yellow).

[Figure: Mr. Yellow, and finally an unknown agent: ?]
Characteristics of computational trust and
        reputation mechanisms
• Each agent is a norm enforcer and is also under
  surveillance by the others. No central authority
  needed.

• Their nature allows them to reach where laws and
  central authorities cannot.

• Punishment is usually based on ostracism. Therefore,
  exclusion must actually be a punishment for the outsider.
Characteristics of computational trust and
        reputation mechanisms

• Bootstrap problem.

• Not all kinds of environments are suitable for these
  mechanisms: a social environment is necessary.
Approaches to control the
      interaction
Different approaches to control the interaction

• Security approach
     Agent identity validation.
     Integrity, authenticity of messages.
     ...

• Institutional approach

• Social approach
     Trust and reputation mechanisms are at this level.

[Figure: the three approaches drawn as stacked layers, with the security
approach at the base, the institutional approach above it, and the social
approach on top. They are complementary and cover different aspects of
interaction.]
Computational reputation
       models
Classification dimensions

• Paradigm type
   • Mathematical approach
   • Cognitive approach

• Information sources
   • Direct experiences
   • Witness information
   • Sociological information
   • Prejudice

• Visibility types
   • Subjective
   • Global

• Model’s granularity
   • Single context
   • Multi context

• Agent behaviour assumptions
   • Cheating is not considered
   • Agents can hide or bias the information but they never lie

• Type of exchanged information
Subjective vs Global
• Global
    • The reputation is maintained as a centralized resource.
    • All the agents in that society have access to the same reputation values.

    Advantages:
    • Reputation information is available even if you are a newcomer, and does
    not depend on how well connected you are or how good your informants are.
    • Agents can be simpler because they don’t need to calculate reputation
    values, just use them.

    Disadvantages:
    • Particular mental states of the agent or its singular situation are not taken
    into account when reputation is calculated. Therefore, a global view is only
    possible when we can assume that all the agents think and behave similarly.
    • It is not always desirable for an agent to make public information about its
    direct experiences or to submit that information to an external authority.
    • Therefore, a high degree of trust in the central institution managing
    reputation is essential.
Subjective vs Global
• Subjective
    • The reputation is maintained by each agent and is calculated according to its
    own direct experiences, information from its contacts, its social relations...

    Advantages:
    • Reputation values can be calculated taking into account the current state of
    the agent and its individual particularities.

    Disadvantages:
    • The models are more complex, usually because they can use extra sources of
    information.
    • Each agent has to worry about getting the information to build reputation
    values.
    • Less information is available so the models have to be more accurate to
    avoid noise.
A global reputation model: eBay

Model oriented to support trust between buyer and seller.

• Completely centralized.

• Buyers and sellers may leave comments about each other
after transactions.

• Comment: a line of text + numeric evaluation (-1,0,1)

• Each eBay member has a Feedback score that is the
sum of the numerical evaluations received.
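
As a minimal sketch (ours, purely illustrative, not eBay’s actual
implementation), the Feedback score reduces to a sum over ratings in
{−1, 0, +1}:

    def feedback_score(ratings):
        """Sum of numeric evaluations, each in {-1, 0, +1}."""
        assert all(r in (-1, 0, 1) for r in ratings)
        return sum(ratings)

    # Example: three positives, one neutral, one negative -> score of 2.
    print(feedback_score([1, 1, 1, 0, -1]))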
eBay model
 Specifically oriented to scenarios with the following
characteristics:
   • A lot of users (we are talking about millions)
   • Few chances of repeating interaction with the same partner
   • Easy to change identity
   • Human oriented

• Considers reputation as a global property and uses a single
value that is not dependent on the context.

• A great number of opinions that “dilute” false or biased
information is the only way to increase the reliability of the
reputation value.
A subjective reputation model: ReGreT



    What is the ReGreT system?

    It is a modular trust and reputation system
    oriented to complex e-commerce environments
    where social relations among individuals play
    an important role.
The ReGreT system

[Figure: the ReGreT architecture. Three databases (ODB, IDB, SDB) feed the
modules of the system: Direct Trust, Credibility, Witness reputation,
Neighbourhood reputation and System reputation. The three reputation modules
combine into the Reputation model, which together with Direct Trust produces
the final Trust value.]
Outcomes and Impressions
  Outcome:
  The initial contract
   – to take a particular course of actions
   – to establish the terms and conditions of a transaction.
                              AND
  The actual result of the contract.

Example:
   Contract:    Price =c 2000, Quality =c A, Quantity =c 300
   Fulfillment: Price =f 2000, Quality =f C, Quantity =f 295
Outcomes and Impressions

   Outcome:
   Price =c 2000, Quality =c A, Quantity =c 300
   Price =f 2000, Quality =f C, Quantity =f 295

Each attribute of the outcome is linked to a behavioural aspect: the price
attributes to offers_good_prices, the quantity attributes to
maintains_agreed_quantities.
Outcomes and Impressions
 Impression:
 The subjective evaluation of an outcome from a specific
 point of view.

   A single outcome (Price =c 2000, Quality =c A, Quantity =c 300;
   Price =f 2000, Quality =f C, Quantity =f 295) generates one
   impression per point of view: Imp(o, φ1), Imp(o, φ2), Imp(o, φ3).
The ReGreT system

[Figure: the ReGreT architecture, with the Direct Trust module highlighted.]

Reliability of the value based on:
• Number of outcomes.
• Deviation: the greater the variability in the rating values, the more
volatile the other agent will be in the fulfillment of its agreements.
Direct Trust
   Trust relationship calculated directly from an agent’s
   outcomes database.

    DT_{a→b}(φ) = Σ_{oi ∈ ODB^{a,b}_{gr(φ)}} ρ(t, ti) · Imp(oi, φ)

   where the recency weight of each outcome is

    ρ(t, ti) = f(ti, t) / Σ_{oj ∈ ODB^{a,b}_{gr(φ)}} f(tj, t),
    with f(ti, t) = ti / t

   so that outcomes closer to the current time t receive more weight.
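
A minimal Python sketch of this computation, assuming impressions are
values in [−1, 1] tagged with a timestamp (the names are ours, not
ReGreT’s API):

    def direct_trust(outcomes, now):
        """outcomes: list of (t_i, impression) pairs from the ODB for one
        behavioural aspect phi. Weights grow linearly with recency,
        f(t_i, t) = t_i / t, and are normalised so they sum to 1."""
        total = sum(t_i / now for t_i, _ in outcomes)
        return sum(((t_i / now) / total) * imp for t_i, imp in outcomes)

    # A recent bad experience outweighs older good ones.
    odb = [(10, 0.9), (50, 0.8), (95, -0.6)]
    print(round(direct_trust(odb, now=100), 3))  # -0.052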
Direct Trust
DT reliability

    DTRL_{a→b}(φ) = No(ODB^{a,b}_{gr(φ)}) · (1 − Dv(ODB^{a,b}_{gr(φ)}))

Number of outcomes (No): grows with the size of the outcomes database and
saturates once an “intimate” level of interaction itm is reached
(itm = 10 in the example).

Deviation (Dv): the greater the variability in the rating values, the more
volatile the other agent will be in the fulfillment of its agreements.
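
A sketch of the reliability measure under two assumptions of ours: No is a
sine ramp that saturates at itm outcomes, and Dv is a plain mean absolute
deviation rescaled to [0, 1] (ReGreT weights it by recency):

    import math

    def no_factor(n_outcomes, itm=10):
        """Grows with the number of outcomes, saturating at itm."""
        if n_outcomes >= itm:
            return 1.0
        return math.sin((math.pi / (2 * itm)) * n_outcomes)

    def deviation(impressions):
        """Impressions live in [-1, 1], so |x - mean| <= 2; rescale to [0, 1]."""
        mean = sum(impressions) / len(impressions)
        return sum(abs(x - mean) for x in impressions) / (2 * len(impressions))

    def dt_reliability(impressions, itm=10):
        return no_factor(len(impressions), itm) * (1 - deviation(impressions))

    print(round(dt_reliability([0.9, 0.8, -0.6]), 3))  # few, noisy outcomes -> ~0.308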
Witness reputation

  The reputation that an agent builds about another agent based
  on the beliefs gathered from society members (witnesses).

Problems of witness information:
   • It can be false.
   • It can be incomplete.
   • It may suffer from the “correlated evidence” problem.
[Figure: a sociogram of trade relations among the agents of the example
society (a1, a2, b1, b2, c1, c2, d1, d2 and users u1–u9).]
[Figure: the cooperation sociogram of the same society.]

Cooperation implies a big exchange of sincere information and some kind of
predisposition to help if it is possible.
[Figure: the competition sociogram of the same society.]

In a competitive relation, agents tend to use all the available mechanisms
to take some advantage from their competitors.
Witness reputation

Step 1: Identifying the witnesses.
• Initial set of witnesses: the agents that have had a trade relation with
the target agent.

[Figure: the trade sociogram restricted to the target agent’s partners.]
Witness reputation

Grouping agents with frequent interactions among them and considering each
one of these groups as a single source of reputation values:
• Minimizes the correlated evidence problem.
• Reduces the number of queries to agents that would probably give us more
or less the same information.

To group agents, ReGreT relies on sociograms.
Witness reputation

Heuristic to identify groups and the best agents to represent them:
1. Identify the components of the graph.
2. For each component, find the set of cut-points.
3. For each component that does not have any cut-point, select a central
   point (the node with the largest degree).

[Figure: a cooperation sociogram with a central-point and a cut-point marked.]
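
The heuristic maps directly onto standard graph routines. A sketch with
networkx (the sociogram data below is invented for illustration):

    import networkx as nx

    def representative_witnesses(sociogram):
        """Cut-points of each component; for components without any
        cut-point, fall back to the node with the largest degree."""
        reps = []
        for nodes in nx.connected_components(sociogram):
            component = sociogram.subgraph(nodes)
            cut_points = list(nx.articulation_points(component))
            if cut_points:
                reps.extend(cut_points)
            else:
                reps.append(max(component.degree, key=lambda nd: nd[1])[0])
        return reps

    g = nx.Graph([("u3", "u2"), ("u2", "b2"), ("b2", "u5"),
                  ("u5", "u4"), ("u4", "b2"), ("u7", "u6")])
    print(representative_witnesses(g))  # e.g. ['u2', 'b2', 'u7'] (order may vary)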
Witness reputation

Step 1: Identifying the witnesses.
• Initial set of witnesses: the agents that have had a trade relation with
the target agent.
• Grouping and selecting the most representative witnesses.

[Figure: the witness set reduced to the representative nodes of each group.]
Witness reputation

Step 2: Who can I trust?
The selected witnesses report their evaluations of the target b2, e.g.
⟨Trust_{u2→b2}(φ), TrustRL_{u2→b2}(φ)⟩ and ⟨Trust_{u5→b2}(φ), TrustRL_{u5→b2}(φ)⟩,
and the agent must decide how much credibility to give to each witness.
Credibility model
   Two methods are used to evaluate the credibility of
witnesses (witnessCr): the analysis of the social relations
among the agents (socialCr) and the past history of the
witness as an informant (infoCr).
Credibility model
• socialCr(a,w,b): the credibility that agent a assigns to agent w when
w is giving information about b, considering the social structure
(cooperative and competitive relations) among w, b and a itself.

[Figure: the nine possible patterns of cooperative/competitive relations
among the source agent a, the witness w and the target agent b.]
Credibility model
    ReGreT uses fuzzy rules to calculate how the structure of
    social relations influences the credibility of the information.

               IF coop(w,b) is high (h)
               THEN socialCr(a,w,b) is very_low (vl)

[Figure: the fuzzy membership functions over [0,1]: low (l), moderate (m)
and high (h) for the antecedent; very_low (vl), low (l), moderate (m),
high (h) and very_high (vh) for the consequent.]
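
A toy sketch of how one such rule could be evaluated, with triangular
membership functions and a simplistic defuzzification (both are our
assumptions, not ReGreT’s exact machinery):

    def tri(x, a, b, c):
        """Triangular membership function over [a, c], peaking at b."""
        if x <= a or x >= c:
            return 0.0
        return (x - a) / (b - a) if x < b else (c - x) / (c - b)

    def social_credibility(coop_wb):
        # Firing strength: degree to which coop(w, b) is 'high'.
        high = tri(coop_wb, 0.5, 1.0, 1.5)
        # Interpolate from a neutral credibility (0.5) towards the
        # 'very_low' credibility value (0.0) as the rule fires harder.
        return (1 - high) * 0.5 + high * 0.0

    print(social_credibility(0.9))  # strong cooperation with the target -> 0.1
    print(social_credibility(0.1))  # little cooperation -> stays neutral, 0.5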
Neighbourhood reputation

 The trust in the agents that are in the “neighbourhood” of
 the target agent, and their relation with it, are the elements
 used to calculate what we call the Neighbourhood reputation.


  ReGreT uses fuzzy rules to model this reputation.


  IF DTan (offers_good_quality ) is X AND coop(b,ni)  low
          i

  THEN Rab (offers_good_quality) is X
          n   i




  IF DTRLan (offers_good_quality) is X’ AND coop(b,ni) is Y’
                      i

  THEN RLab (offers_good_quality) is T(X’,Y’)
          n       i
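
In spirit, a neighbour’s direct trust transfers to the target to the degree
that the neighbour and the target cooperate. A sketch (using min() as the
T-norm and this particular weighting scheme are our assumptions):

    def neighbourhood_reputation(neighbours):
        """neighbours: list of (direct_trust, dt_reliability, coop_with_b)
        triples, one per neighbour n_i of the target b.
        Returns a (reputation, reliability) pair for b."""
        rated = [(dt, min(rl, coop)) for dt, rl, coop in neighbours
                 if coop > 0.0]                  # require coop(b, n_i) >= low
        if not rated:
            return 0.0, 0.0
        total = sum(w for _, w in rated)
        reputation = sum(dt * w for dt, w in rated) / total
        reliability = total / len(rated)
        return reputation, reliability

    print(neighbourhood_reputation([(0.8, 0.9, 0.7), (-0.4, 0.5, 0.2)]))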
System reputation
 The idea behind the System reputation is to use the
 common knowledge about social groups and the role that
 the agent is playing in the society as a mechanism to assign
 reputation values to other agents.

 The knowledge necessary to calculate a system reputation
 is usually inherited from the group or groups to which the
 agent belongs.
Trust
  If the agent has a reliable direct trust value, it will use that
  as a measure of trust. If that value is not so reliable then it
  will use reputation.

                                                                Neigh-
                                     Witness
                                                              bourhood
                                    reputation
                                                              reputation

        Direct                                   Reputation
        Trust                                      model



                                                   System
                                                 reputation
                          Trust
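
A sketch of that decision rule; the threshold and the linear blend are
illustrative assumptions of ours:

    def trust(direct_trust, dt_reliability, reputation, threshold=0.7):
        """Rely on direct trust when it is reliable enough; otherwise
        blend in the reputation value."""
        if dt_reliability >= threshold:
            return direct_trust
        # The less reliable the direct trust, the more weight reputation gets.
        return dt_reliability * direct_trust + (1 - dt_reliability) * reputation

    print(trust(0.9, 0.8, 0.2))  # reliable experience dominates -> 0.9
    print(trust(0.9, 0.3, 0.2))  # weak experience -> mostly reputation, 0.41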
A cognitive perspective to computational
               reputation models

• A cognitive view on Reputation

• Repage, a computational cognitive reputation model

•    [Properly] Integrating a [cognitive] reputation model into a
    [cognitive] agent architecture



• Arguing about reputation concepts
Social evaluation
• A social evaluation, as the name suggests, is the evaluation by a social
entity of a property related to a social aspect.

• Social evaluations may concern physical, mental, and social properties of
targets.

• A social evaluation includes at least three sets of agents:
      a set E of agents who share the evaluation (evaluators)
      a set T of evaluation targets
      a set B of beneficiaries

We can find examples where the different sets intersect totally, partially,
etc...

e (e in E) may evaluate t (t in T) with regard to a state of the world that is
in b’s (b in B) interest, but of which b is not necessarily aware.
     Example: the quality of TV programs during the children’s time slot.
Image and Reputation
• Both are social evaluations.

• They concern other agents' (targets) attitudes toward socially desirable
behaviour but...

...whereas image consists of a set of evaluative beliefs about the
characteristics of a target,

reputation concerns the voice that is circulating on the same target.




                                         Reputation in artificial societies
                                         [Rosaria Conte, Mario Paolucci]
Image
“An evaluative belief; it tells whether the target is good or bad with respect
to a given behaviour” [Conte & Paolucci]

Image is the result of an internal reasoning on different sources of
information that leads the agent to create a belief about the behaviour
of another agent.

   Beliefs: B(φ)      The agent has accepted the social evaluation φ as
                      something true, and its decisions from now on will
                      take this into account.
Reputation
• A voice is something that “is said”, a piece of information that is being
transmitted.

• Reputation: a voice about a social evaluation that is recognised by the
members of a group to be circulating among them.

   Beliefs: B(S(φ))   • The agent believes that the social evaluation φ is
                      communicated (said, S).
                      • This does not imply that the agent believes that φ
                      is true.
Reputation
Implications:

    • The agent that spreads a reputation, since it is not implied that it
    believes the associated social evaluation, takes no responsibility for
    that social evaluation (another matter is the responsibility associated
    with the action of spreading that reputation).

    • This fact allows reputation to circulate more easily than image
    (less/no fear of retaliation).

    • Notice that if an agent believes “what people say”, image and
    reputation collapse.

    • This distinction has important advantages from a technical point of
    view.
Gossip
• In order for reputation to exist, it has to be transmitted. We cannot have
reputation without communication.

• Gossip currently has the meaning of idle talk or rumour, especially
about the personal or private affairs of others. It usually has a bad
connotation, but in fact it is an essential element of human nature.

• The antecedent of gossip is grooming.

• Studies from evolutionary psychology have found gossip to be very
important as a mechanism to spread reputation [Sommerfeld et al. 07, Dunbar 04]

• Gossip and reputation complement social norms: reputation evolves
along with implicit norms to encourage socially desirable conducts, such as
benevolence or altruism, and discourage socially unacceptable ones, like
cheating.
Outline

• A cognitive view on Reputation

• Repage, a computational cognitive reputation model

•   [Properly] Integrating  a [cognitive] reputation model into a
    [cognitive] agent architecture



• Arguing about reputation concepts
RepAge

What is the RepAge model?

It is a reputation model that evolved from a
cognitive theory by Conte and Paolucci.
The model is designed with special attention
to the internal representation of the elements
used to build images and reputations, as well
as the inter-relations of these elements.
RepAge memory

[Figure: the RepAge memory is organised as a network of predicates (P)
arranged in levels; higher-level predicates such as Img and Rep are built
from lower-level ones, and each predicate carries an evaluative value and
a strength (e.g., Strength: 0.6).]
Outline

• A cognitive view on Reputation

• Repage, a computational cognitive reputation model

•   [Properly] Integrating  a [cognitive] reputation model into a
    [cognitive] agent architecture



• Arguing about reputation concepts
What do you mean by “properly”?

 Current models

[Figure: the trust & reputation system is a separate box beside the agent’s
planner and decision mechanism; its connection to them is undefined (?).
It receives its inputs through communication.]

Black box. Reactive.
What do you mean by “properly”?

 Current models

[Figure: the same architecture, where the only thing the planner and the
decision mechanism obtain from the trust & reputation system is a value.]

Black box. Reactive.
What do you mean by “properly”?

The next generation?

[Figure: the trust & reputation system is directly connected to the planner
and the decision mechanism.]
What do you mean by “properly”?

The next generation?

[Figure: the trust & reputation system is no longer a separate box; it is
integrated within the agent architecture itself.]

Not only reactive...
... proactive
BDI model
• A very popular model in the multiagent community.

• It has its origins in the theory of human practical reasoning
  [Bratman] and the notion of intentional systems [Dennett].

• The main idea is that we can talk about computer programs as if
  they have a “mental state”.

• Specifically, the BDI model is based on three mental attitudes:
      Beliefs - what the agent thinks it is true about the world.

      Desires - world states the agent would like to achieve.

      Intentions - world states the agent is putting efforts to achieve.
BDI model
• The agent is described in terms of these mental attitudes.

• The decision-making model underlying the BDI model is known as
  practical reasoning.


• In short, practical reasoning is what allows the agent to go from
  beliefs, desires and intentions to actions.
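
A minimal sketch of such a practical-reasoning control loop; the method
names are placeholders of ours, not the API of any particular BDI platform:

    def practical_reasoning_loop(agent):
        """Classic observe / deliberate / act cycle of a BDI agent."""
        while True:
            percept = agent.sense()                    # observe the world
            agent.beliefs.update(percept)              # belief revision
            agent.desires = agent.options()            # option generation
            agent.intentions = agent.deliberate(       # commit to a subset
                agent.beliefs, agent.desires, agent.intentions)
            plan = agent.plan(agent.beliefs, agent.intentions)
            agent.execute(plan)                        # means-ends reasoning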
Multicontext systems
               • Declarative languages, each with a set of
   Logics        axioms and a number of rules of inference.



               • Structural entities representing the main
                architecture components. Each unit has a
   UNITS        single logic associated with it.




               • Rules of inference which relate formulae
 Bridge Rules    in different units.



               • Sets of formulae written in the logic
  Theories       associated with a unit.
Example of a bridge rule between units U1, U2 and U3:

      U1: b  ,  U2: d
     _________________
           U3: a

[Figure: an animation over three units. Once the formula b is derived in
unit U1 and the formula d is derived in unit U2, the bridge rule fires and
the formula a is added to unit U3.]
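
A toy sketch of bridge-rule firing, with units represented as plain sets of
formulae (this representation is an assumption made for illustration):

    # Each unit is a set of formulae; a bridge rule is a list of
    # (unit, formula) premises plus one (unit, formula) conclusion.
    units = {"U1": {"b"}, "U2": {"d"}, "U3": set()}
    rules = [([("U1", "b"), ("U2", "d")], ("U3", "a"))]

    def fire_bridge_rules(units, rules):
        changed = True
        while changed:                       # iterate to a fixed point
            changed = False
            for premises, (unit, formula) in rules:
                if all(f in units[u] for u, f in premises) \
                        and formula not in units[unit]:
                    units[unit].add(formula)
                    changed = True

    fire_bridge_rules(units, rules)
    print(units["U3"])  # {'a'}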
Multicontext
Repage integration in a BDI architecture
BC-LOGIC
Grounding Image and Reputation to BC-Logic
Repage integration in a BDI architecture
Desire and Intention context
Generating Realistic Desires
Generating Intentions
Repage integration in a BDI architecture
Outline

• A cognitive view on Reputation

• Repage, a computational cognitive reputation model

•   [Properly] Integrating  a [cognitive] reputation model into a
    [cognitive] agent architecture



• Arguing about reputation concepts
Arguing about Reputation Concepts
Goal: Allow agents to participate in argumentation-based dialogs regarding
reputation elements in order to:

    - Decide on the acceptance of a communicated social evaluation based
    on its reliability.
        “Is the argument associated to a communicated social evaluation (and according to
        my knowledge) strong enough to consider its inclusion in the knowledge base of my
        reputation model?”
    - Help in the process of trust alignment.

What we need:
   • A language that allows the exchange of reputation-related
   information.
   • An argumentation framework that fits the requirements imposed by
   the particular nature of reputation.
   • A dialog protocol to allow agents to establish information-seeking
   dialogs.
The language: LRep

  LREP: a first-order sorted language with
  special predicates representing the
  typology of social evaluations we use:
  Img, Rep, ShV, ShE, DE, Comm.

  • SF: set of constant formulas.
  Allows LREP formulas to be nested in communications.

  • SV: set of evaluative values.
  Example: a set of linguistic labels over the graded
  values {φ0, φ1, φ2, φ3, φ4}.
The reputation argumentation framework

• Given the nature of social evaluations (the values of a social evaluation
are graded), we need an argumentation framework that allows us to weight
the attacks.
         Example: We have to be able to differentiate between Img(j,seller,VG)
         being attacked by Img(j,seller,G) or being attacked by Img(j,seller,VB).

• Specifically, we instantiate the Weighted Abstract Argumentation
Framework defined in

         P.E. Dunne, A. Hunter, P. McBurney, S. Parsons, and M. Wooldridge,
         ‘Inconsistency tolerance in weighted argument systems’, in
         AAMAS’09, pp. 851–858, (2009).

• Basically, this framework introduces the notions of strength and
inconsistency budgets (defined as the amount of “inconsistency” that the
system can tolerate regarding attacks) into a classical Dung framework.
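
For intuition, the strength of an attack between two graded evaluations can
be sketched as the normalised distance between their values; mapping the
labels {VB, B, N, G, VG} onto {0..4} is our assumption, not the paper’s
exact definition:

    LABELS = {"VB": 0, "B": 1, "N": 2, "G": 3, "VG": 4}

    def attack_strength(value_a, value_b):
        """Normalised distance between two evaluative labels."""
        return abs(LABELS[value_a] - LABELS[value_b]) / (len(LABELS) - 1)

    print(attack_strength("VG", "G"))   # 0.25: a mild disagreement
    print(attack_strength("VG", "VB"))  # 1.0: a head-on attack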
Building Argumentative Theories

[Figure: each agent builds an argumentative theory from its reputation
theory. At the argumentation level, agents interact through a simple shared
consequence relation over reputation-related information; underneath, each
agent applies its own consequence relation (its reputation model), which is
specific to that agent.]

  Reputation theory: the set of ground elements (expressed in LREP)
  gathered by agent j through interactions and communications.
Attack and Strength

[Figure: two arguments whose conclusions are social evaluations on the
graded scale {φ0 ... φ4} attack each other; the strength of the attack is
determined by the distance between the evaluative values.]
Example of argumentative dialog

• Agent i: proponent
• Agent j: opponent

[Figure: the roles involved: seller, specialised into sell(q) (quality)
and sell(dt) (delivery time), and informant (Inf).]

      • Each agent is equipped with a Reputation Weighted Argument System.
Example of argumentative dialog

[Figure: a sequence of moves between j and i. Each communicated social
evaluation can be attacked by a counter-argument, and the strength of each
attack is given by the distance between the evaluative values involved.]

Using Inconsistency Budgets

[Figure: the same dialog, where an inconsistency budget lets an agent
tolerate sufficiently weak attacks instead of withdrawing its argument.]
Outline
+ PART II: Trust Computing Approaches
     Security
     Institutional
     Social
   Evaluation of Trust and Reputation Models
Dr. Javier Carbó

GIAA – Group of Applied Artificial Intelligence
         Univ. Carlos III de Madrid
Trust in Information Security
           Same Word, Different World

The security approach tackles the “hard” problems of trust.
It views trust as an objective, universal and
  verifiable property of agents.
Its trust problems have solutions:
• False identity
• Reading/modification of messages by third parties
• Repudiation of messages
• Certificates of accomplishing tasks/services
  according to standards
An example: Public Key Infrastructure

[Figure: PKI message flow: 1. the client proves its identity to the
Registration authority; 2. the private key is sent; 3. the public key is
sent to the Certificate authority; 4. the certificate is published in an
LDAP directory; 5. the certificate is sent.]
Trust in I&S, limitations
Their trust relies on central entities:
   – Authorities, Trusted Third Parties
   – Partially solved using hierarchies of TTPs.
They ignore part of the problem:
- The top authority has to be trusted by some other means.
Their scope is far from real-life trust issues:
   – lies, defection, collusions, social norm violations, …
Institutional approach
Institutions have proved to successfully regulate human
   societies for a long time:
- created to achieve particular goals while complying with norms.
- responsible for defining the rules of the game (norms),
   enforcing them and assessing penalties in case of violation.
Examples: auction houses, parliaments, stock exchange
   markets, …
The institutional approach is focused on the existence of
   organizations:
• Providing an execution infrastructure
• Controlling access to resources
• Sanctioning/rewarding agents’ behaviors
An example: e-institutions

[Figure: the infrastructure of an electronic institution.]
Institutional approach, limitations
They view trust as a partially objective, local and verifiable
   property of agents.
Intrusive control over the agents (modification of the
   execution resources, process killing, …).
They require a shared agreement defining what is
   expected (norm compliance, case laws, …).
They require a central entity and global supervision:
    – Repositories and access control entities have to be
      centralised.
    – Low scalability if every agent is observed by the
      institution.
They assume that the institution itself is trusted.
Social approach
The social approach builds on the idea of a self-organized society
  (Adam Smith’s invisible hand).
Each agent has its own evaluation criteria of what is expected:
  no social norms, just individual norms.
Each agent is in charge of rewards and punishments (often in
  terms of more/less future cooperative interactions).
No central entity at all: it consists of a completely distributed
  social control of malicious agents.
Trust is an emergent property.
It avoids the privacy issues caused by centralized approaches.
Social approach, limitations
Unlimited, but undefined and unexpected trust scope:
We view trust as a subjective, local and unverifiable
   property of agents.
Exclusion/isolation is the typical punishment for
   malicious agents → difficult to enforce in open and
   dynamic societies of agents.
Malicious behaviors may still occur; they are supposed to be
   prevented by the lack of incentives and by punishments.
It is difficult to define which domain and society are appropriate
   to test this social approach.
Ways to evaluate any system

• Integration on real applications
• Using real data from public datasets
• Using realistic data generated artificially
• Using ad-hoc simulated data with no
  justification/motivation
• None of the above
Ways to evaluate T&R in agent systems

• Integration of T&R on real agent applications
• Using real T&R data from public datasets
• Using realistic T&R data generated artificially
• Using ad-hoc simulated data with no
  justification/motivation
• None of the above
Real Applications using T&R in an agent
                  system
• What real application are we looking for?
• Trust and reputation:
   – System that uses (for something) and exchanges
     subjective opinions about other participants →
     Recommender Systems
• Agent System:
   – Distributed view, no central entity collects, aggregates
     and publishes a final valuation → ???
Real Applications using T&R in an agent
                  system
• Desiderata of application domains:
            (To be filled by students)
Real data & public datasets
• Assuming real agent applications exist, would data
  be publicly available?
   – Privacy concerns
   – Lack of incentives to save data over time
   – Distribution of data. A Heisenberg-style uncertainty
     principle: if users knew their subjective opinions
     would be collected by a central entity, they would
     not behave as if their opinions had just a private
     (supposed-to-be friendly) reader.
• No agents, no distribution → public datasets from
  recommender systems
A view on privacy concerns

• Anonymity: use of arbitrary/secure pseudonyms
• Using concordance: similarity between users within a
  single context, measured as the mean of differences when
  rating a set of items; users tend to agree. (Private
  Collaborative Filtering using estimated concordance
  measures, N. Lathia, S. Hailes, L. Capra, 2007)
• Secure pair-wise comparison of fuzzy ratings
  (Introducing newcomers into a fuzzy reputation agent
  system, J. Carbo, J.M. Molina, J. Davila, 2002)
Real Data & Public Datasets

• MovieLens, www.grouplens.org: Two datasets:
   – 100,000 ratings for 1682 movies by 943 users.
   – 1 million ratings for 3900 movies by 6040 users.
• These are the “standard” datasets that many
  recommendation system papers use in their evaluation
My paper with MovieLens

• We selected users among those who had rated 70 or more
  movies, and we also selected the movies that were
  evaluated more than 35 times, in order to avoid the
  sparsity problem.
• Finally we had 53 users and 28 movies.
• The average number of votes per user is approximately 18,
  so the sparsity of the selected set of users and movies is
  under 35%.
     “Agent-based collaborative filtering based on fuzzy
    recommendations” J. Carbó, J.M. Molina, IJWET v1 n4,
                             2004
Real Data & Public Datasets

BookCrossing (BX) dataset:
• www.informatik.uni-freiburg.de/~cziegler/BX
• collected by Cai-Nicolas Ziegler in a 4-week crawl (August
  / September 2004) from the Book-Crossing community.
• It contains 278,858 users providing 1,149,780 ratings
  (explicit / implicit) about 271,379 books.
Real Data & Public Datasets

Last.fm Dataset
• top artists played by all users:
   – contains <user, artist-mbid, artist-name, total-plays>
   – tuples for ~360,000 users about 186,642 artists.
• full listening history of 1000 users:
   – Tuples of <user-id, timestamp, artist-mbid, artist-
      name, song-mbid, song-title>
• Collected by Oscar Celma, Univ. Pompeu Fabra
• www.dtic.upf.edu/~ocelma/MusicRecommendationDatas
  et
Real Data & Public Datasets

Jester Joke Data Set:
• Ken Goldberg from UC Berkeley released a dataset from
   Jester Joke Recommender System.
• 4.1 million continuous ratings (-10.00 to +10.00) of 100
   jokes from 73,496 users.
• www.ieor.berkeley.edu/~goldberg/jester-data/
• It differentiates itself from other datasets by having a
   much smaller number of rateable items.
Real Data & Public Datasets

Epinions dataset, collected by P. Massa:
• in a 5-week crawl (November/December 2003) from the
  Epinions.com
• Not just ratings about items, also trust statements:
   – 49,290 users who rated a total of
   – 139,738 different items at least once, writing 664,824
      reviews.
   – 487,181 issued trust statements.
• Contains only positive trust statements, not negative ones.
Real Data & Public Datasets

Advogato: www.trustlet.org
• A weighted dataset. Opinions are aggregated (centrally) on a
  3-level basis: Apprentice, Journeyer, and Master.
• Tuples of: minami -> polo [level="Journeyer"];
• Used to test trust propagation in social networks
  (assuming trust transitivity).
• A trust metric (by P. Massa) uses this information in order
  to assign to every user a final certification level
  aggregating weighted opinions.
Real Data & Public Datasets

MoviePilot dataset: www.moviepilot.com
• This dataset contains information related to concepts
  from the world of cinema, e.g. single movies, movie
  universes (such as the world of Harry Potter movies),
  upcoming details (trailers, teasers, news, etc.).
• RecSysChallenge: a live evaluation session will take place
  where algorithms trained on offline data will be
  evaluated online, on real users.
Mendeley dataset: www.mendeley.com
• Recommendations to users about scientific papers that
  they might be interested in.
Real Data & Public Datasets

• No agents, no distribution → public datasets from
  recommender systems.
• Authors have to distribute opinions to participants in
  some way.
• Ratings are about items, not trust statements.
• The ratio of # of ratings to # of items is too low.
• The ratio of # of ratings to # of users is too low.
• No time-stamps.
• Papers intend to be based on real data, but the required
  transformation from centralized to distributed
  aggregation distorts the reality of these data.
Realistic Data

• We need to generate realistic data to test trust and
  reputation in agent systems.
• Several technical/design problems arise:
   – Which # of users, ratings and items do we need?
   – How dynamic should the society of agents be?
• But the hardest part is the psychological/sociological
  one:
   – How do individuals take trust decisions? Which types of
     individuals are there?
   – How does a real society of humans trust? How many of each
     individual type belong to a real human society?
Realistic Data

• Large-scale simulation with Netlogo
  (http://ccl.northwestern.edu/netlogo/)
• Others: MASON (https://mason.dev.java.net/), RePast
  (http://repast.sourceforge.net/)
• But these are mainly ad hoc simulations which are
  difficult for third parties to repeat.
• Many of them use unrealistic agents with a binary
  altruist/egoist behaviour based on game-theoretic views.
Examples of Ad Hoc Simulations

• Convergence of the reputation image to the real behaviour of
  agents. Static behaviours, no recommendations, agents just
  consume/provide services. Worst case.
• Maximum influence of cooperation. Free and honest
  recommendations from every agent based on consumed
  services. Best case.
• Inclusion of dynamic behaviours, different % of malicious
  agents in society, collusions between recommenders and
  providers, etc. Compare results with the previous ones.
“Avoiding malicious agents using fuzzy recommendations” J.
   Carbo, J. M. Molina, J. Dávila. Journal of Organizational
     Computing & Electronic Commerce, vol. 17, num. 1
Technical/Design Problems to generate
             simulated data
• Lessons learned from the ART testbed experience.
• http://megatron.iiia.csic.es/art-testbed/
• A testbed would help to compute fair comparisons:
  “Researchers can perform easily-repeatable experiments
  in a common environment against accepted
  benchmarks”
• Relative Success:
   – 3 international competitions jointly with AAMAS 06-
      08.
   – Over 15 participants in each competition.
   – Several journal and conference publications use it.
Art Domain
the ART testbed
ART Interface




The agent system is displayed as a topology on the left, while
two panels on the right show the details of particular agent
statistics and of global system statistics.
The ART testbed

• The simulation creates opinions according to an error
  distribution with zero mean and a standard deviation s:
                       s = (s∗ + α / cg) · t
• where s∗, unique for each era, is assigned to an appraiser
  from a uniform distribution.
• t is the true value of the painting to be appraised.
• α is a hidden value, fixed for all appraisers, that balances
  opinion-generation cost and final accuracy.
• cg is the cost an appraiser decides to pay to generate an
  opinion. Therefore, the minimum achievable error
  distribution standard deviation is s∗ · t.
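
Read as a generative process, the step is small enough to sketch in a few
lines (the parameter values below are invented for illustration):

    import random

    # Sketch of ART's opinion-generation step: an opinion about a painting
    # of true value t is drawn with zero-mean error and standard deviation
    # s = (s* + alpha / c_g) * t.
    def generate_opinion(t, s_star, alpha, c_g):
        s = (s_star + alpha / c_g) * t
        return random.gauss(t, s)

    # Paying more to generate the opinion (larger c_g) shrinks the error
    # towards the floor s* * t.
    print(generate_opinion(t=100.0, s_star=0.1, alpha=0.5, c_g=10.0))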
The ART testbed

• Each appraiser a’s actual client share ra takes into
  account the appraiser’s client share from the previous
  timestep:
                   ra = q · ra’ + (1 − q) · r̃a
• where ra’ is appraiser a’s client share in the previous
  timestep and r̃a is the share earned in the current one.
• q is a value that reflects the influence of the previous client
  share size on the next client share size (thus the volatility in
  client share magnitudes due to frequent accuracy
  oscillations may be reduced).
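
The update is a one-liner in code, shown here with invented numbers:

    def client_share(prev_share, new_share, q=0.8):
        """Damped update: a high q smooths oscillations in client share
        caused by short-term accuracy swings."""
        return q * prev_share + (1 - q) * new_share

    print(client_share(prev_share=0.30, new_share=0.10))  # 0.26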
2006 ART Competition

2006 Competition setup:
• Clients per agent: 20, Painting eras: 10, games with 5
  agents
• Costs 100/10/1, Sensing-Cost-Accuracy = 0.5. Winner: IAM
  from Southampton Univ.
Post competition discussion notes:
• Larger number of agents required, Definition of dummy
  agents, Relate # of eras with # of agents, More fair
  distribution of expertise (just uniform), More abrupt
  change in # of clients (greater q), Improving expertise
  over time?
2006 ART Winner conclusions
    “The ART of IAM: The Winning Strategy for the 2006
      Competition”, Luke Teacy et al, Trust WS, AAMAS 07.
• It is generally more economical for an agent to purchase
  opinions from a number of third parties than it is to
  invest heavily in its own opinion
• There is little apparent advantage to reputation sharing;
  reputation is most valuable in cases where direct
  experience is relatively more difficult to acquire.
• The final lesson is that although trust can be viewed as a
  sociological concept, and inspiration for computational
  models of trust can be drawn from multiple disciplines,
  the problem of combining estimates of unknown
  variables (such as trustee behaviour) is fundamentally a
  statistical one.
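
That last lesson can be illustrated with a standard inverse-variance weighting of bought opinions; a sketch under my own assumptions, not the winner's actual code:

def combine(opinions):
    # opinions: list of (estimate, variance) pairs bought from third parties.
    weights = [1.0 / var for _, var in opinions]
    total = sum(weights)
    return sum(w * est for (est, _), w in zip(opinions, weights)) / total

# Three noisy appraisals of the same painting; the most reliable one dominates:
print(combine([(100.0, 4.0), (110.0, 16.0), (90.0, 25.0)]))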
2007 ART Competition
2007 Competition Setup:
• Costs 100/10/0.1; all agents have an equal sum of expertise
  values; painting eras: static but unknown; expertise
  assignments may change during the course of the game;
  dummy agents included; games with 25 agents.
2007 Competition Discussion Notes:
• It needs to facilitate reputation exchange.
• It shouldn’t produce all changes at the same time:
  gradual changes.
• Studying barriers to entry, i.e. how a new agent joins an
  existing MAS: cold start vs. hot start (exploration vs.
  exploitation).
• More competitive dummy agents.
• Relationship between opinion-generation cost and accuracy.
2008 ART Competition

2008 Competition Setup:
• Each agent is limited in the number of certainty and opinion
  requests it can send.
• Certainty requests have a cost.
• The use of self opinions is denied.
• Wider range of expertise values.
• Every time step, select randomly a number of eras to
  change, and add a given amount of positive change
  (increase value). For every positive change, apply also a
  negative change of the same amount, so that the average
  expertise of the agent is not modified.
Evaluation criteria

• There is a lack of criteria on which of the very different trust
  decisions should be considered, and how.
Conte and Paolucci 02:
• Epistemic decisions: those about updating and
  generating trust opinions from received reputations.
• Pragmatic-strategic decisions: decisions on how to
  behave with partners using this reputation-based trust.
• Memetic decisions: decisions on how and
  when to share reputation with others.
Main Evaluation Criteria of the ART testbed
• The winning agent is selected as the appraiser with the
  highest bank account balance in the direct confrontation
  of appraiser agents, repeated X times.
• In other words, the appraiser who is able to:
   – estimate the value of its paintings most accurately,
   – purchase information most prudently.
• An ART iteration involves 19 steps (11 decisions, 8
  interactions) to be taken by an agent.
Trust decisions in ART testbed
1. How should our agent aggregate reputation information
   about others?
2. How should the trust weights of providers and
   recommenders be updated afterwards?
3. How many agents should our agent ask for reputation
   information about other agents?
4. How many reputation and opinion requests from other
   agents should our agent answer?
5. How many agents should our agent ask for opinions about our
   assigned paintings?
6. How much time (economic value) should our agent spend
   building the requested opinions about the paintings of the other
   agents?
7. How much time (economic value) should our agent spend
   building the appraisals of its own paintings?
   (AUTOPROVIDER!)
…
Limitations of Main Evaluation Criteria of the ART testbed
From my point of view:
• It evaluates all trust decisions jointly: should participants
  play the provider and consumer roles jointly, or just the role
  of opinion consumers?
• Is the direct confrontation of competitor agents the right
  scenario to compare them?
Providers vs. Consumers

• Playing games with two participants of the 2007 competition
  (iam2 and afras) and another 8 dummy agents.
• The dummy agents were implemented ad hoc to be the sole
  opinion providers; they do not request any service from the
  2007 participants.
• Neither of the two 2007 participants ever provides
  opinions/reputations; they are just consumers.
• ⇒ Differences between both agents were much smaller than
  the official competition stated (absolutely and relatively).
“An extension of a fuzzy reputation agent trust model in the
  ART testbed”, Soft Computing, vol. 14, issue 8, 2010.
Trust Strategies in Evolutive Agent Societies
• An evolutionarily stable strategy (ESS) is a strategy which,
  if adopted by a population of players, cannot be invaded
  by any alternative strategy
• An evolutionarily stable trust strategy is a strategy which,
  if it becomes dominant (adopted by a majority of agents),
  cannot be defeated by any alternative trust strategy.
• Justification: the goal of trust strategies is to establish
  some kind of social control over malicious/distrustful
  agents.
• Assumption: agents may change their trust strategy. Agents
  with a failing trust strategy would get rid of it and
  adopt a successful trust strategy in the future.
An evolutive view of ART games

• We consider as the failing trust strategy the one that lost
  (earned less money than the others) the last ART game.
• We consider as the successful trust strategy the one that
  won the last ART game (earned more money than the
  others).
• In this way, in consecutive games the participant who lost
  the previous game is replaced by the one who won it.
• We have applied this to the 16 participant agents of the
  2007 ART competition.
[Diagram: starting from the 16 participants of the 2007
competition, consecutive ART games are played; after each game
the loser is replaced by a copy of the winner, and so on.]
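
A minimal sketch of this replacement dynamic; the payoff model below is an invented stand-in for a full ART game, chosen only to make the loop runnable:

import random
from collections import Counter

def play_art_game(population):
    # Stand-in for an ART game: each strategy has an assumed mean payoff.
    MEAN = {"iam2": 100.0, "artgente": 110.0, "other": 80.0}
    return {i: random.gauss(MEAN[s], 30.0) for i, s in enumerate(population)}

population = ["iam2"] * 5 + ["artgente"] * 5 + ["other"] * 6   # 16 agents

for game in range(50):
    earnings = play_art_game(population)
    loser = min(earnings, key=earnings.get)
    winner = max(earnings, key=earnings.get)
    population[loser] = population[winner]    # loser adopts winner's strategy

print(Counter(population))    # strategy distribution after repeated games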
Game      Winner       Earnings        Loser      Earnings
  1        iam2         17377         xerxes        -8610
  2        iam2         14321         lesmes       -13700
  3        iam2         10360          reneil      -14757
  4        iam2         10447        blizzard       -7093
  5    agentevicente    8975            Rex         -5495
  6        iam2         8512         alatriste       -999
  7      artgente       8994      agentevicente      2011
  8      artgente       10611     agentevicente      1322
  9      artgente       8932           novel          424
 10        iam2         9017           IMM           1392
 11      artgente       7715         marmota         1445
 12      artgente       8722         spartan         2083
 13      artgente       8966       zecariocales      1324
 14      artgente       8372           iam2          2599
 15      artgente       7475           iam2          2298
 16      artgente       8384           UNO           2719
 17      artgente       7639           iam2          2878
 18        iam2         6279           JAM           3486
 19        iam2         14674        artgente        2811
 20      artgente       8035           iam2          3395
Results of repeated games
     The 2007 winner is not an evolutionarily stable strategy.
• Although the strategy of the winner of 2007 spreads
  in the society of agents (up to 6 iam2 agents out of 16), it
  never becomes dominant (no majority of iam2 agents).
• The iam2 strategy is defeated by the artgente strategy, which
  becomes dominant (11 artgente agents out of 16).
  Therefore its superiority as winner of the 2007 competition
  is, at least, relative.
• The equilibrium of trust strategies that forms an
  evolutionarily stable society is composed of 10-11
  artgente agents and 6-5 iam2 agents, respectively.
CompetitionRank   EvolutionRank      Agent        ExcludedInGame
      6                1            artgente            -
      1                2              iam2              -
      2                3              JAM               18
      7                4             UNO                16
      4                5          zecariocales          13
      5                6             spartan            12
      9                7            marmota             11
      13               8              IMM               10
      10               9             novel              9
      15               10         agentevicente         8
      11               11           alatriste           6
      12               12              rex              5
      3                13           Blizzard            4
      8                14            reneil             3
      14               15            lesmes             2
      16               16            xerxes             1
Other Evaluation Criteria of the ART testbed
• The testbed also provides functionality to compute:
   – the average accuracy of the appraisers’ final appraisals
     (final appraisal error mean),
   – the consistency of that accuracy (final appraisal error
     standard deviation),
   – the quantities of each type of message passed
     between appraisers (all messages are recorded).
• Could we take into account other relevant evaluation
  criteria?
Evaluation criteria from the agent-based view
Characterization and Evaluation of Multi-Agent Systems, P.
      Davidsson, S. Johansson, M. Svahnberg. In Software
  Engineering for Multi-Agent Systems IV, LNCS 3914, 2006.
9 quality attributes:
1. Reactivity: How fast are opinions re-evaluated when
   there are changes in expertise?
2. Load balancing: How evenly is the load balanced
   between the appraisers?
3. Fairness: Are all the providers treated equally?
4. Utilization of resources: Are the available
   abilities/information utilized as much as possible?
Evaluation criteria from the agent-based view
5. Responsiveness: How long does it take for an appraiser
   to get a response to an individual request?
6. Communication overhead: How much extra
   communication is needed by the appraisers?
7. Robustness: How vulnerable is the agent to the absence
   of responses?
8. Modifiability: How easy is it to change the behaviour of
   the agent under very different conditions?
9. Scalability: How good is the system at handling large
   numbers of providers and consumers?
Evaluation criteria from the agent-based view
Evaluation of Multi-Agent Systems: The Case of Interaction,
   H. Joumaa, Y. Demazeau, J.M. Vincent. 3rd Int. Conf. on
   Information & Communication Technologies: from Theory to
   Applications, IEEE Computer Society, Los Alamitos (2008).

• An evaluation at the interaction level, based on the
  weight of the information brought by a message.
• A function Φ is defined in order to calculate the weight of
  pertinent messages.
Evaluation criteria from the agent-based
                     view
• The relation between a received message m and its
  effects on the agent is studied in order to calculate the
  Φ(m) value. According to the model, two kinds of
  functions are considered:
   – a function that associates a weight to the message
     according to its type,
   – a function that associates a weight to the message
     according to the change provoked on the internal
     state and the actions triggered by its reception.
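
A hedged illustration of these two kinds of Φ functions (the field and function names are mine, not from the cited paper):

def phi_by_type(message):
    # Weight assigned purely from the message type.
    weights = {"reputation": 3.0, "opinion": 2.0, "certainty": 1.0}
    return weights.get(message["type"], 0.0)

def phi_by_effect(state_before, state_after, actions_triggered):
    # Weight from the change the message provokes on the internal state
    # and from the actions triggered by its reception.
    change = sum(abs(state_after[k] - state_before[k]) for k in state_before)
    return change + len(actions_triggered)

msg = {"type": "reputation", "target": "agent_b", "value": 0.7}
print(phi_by_type(msg))                                                # -> 3.0
print(phi_by_effect({"trust_b": 0.5}, {"trust_b": 0.7}, ["query_b"]))  # -> 1.2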
Consciousness Scale

• Too much quantification (AI is not just statistics…).
• Compare agents qualitatively → measure their level of
  consciousness.
• A scale of 13 consciousness levels according to the cognitive
  skills of an agent, the “Cognitive Power” of an agent.
• The higher the level obtained, the more the behaviour of
  the agent resembles a human’s.
• www.consscale.com
Bio-inspired order of Cognitive Skills

• From the point of view of emotions (Damasio, 1999):
  “Emotion” → “Feeling” → “Feeling of a Feeling” → “Fake Emotions”
Bio-inspired order of Cognitive Skills

• From the point of view of perception and action (Perner,
  1999):
  “Perception” → “Adaptation” → “Attention” → “Set Shifting” →
  “Planning” → “Imagination”
Bio-inspired order of Cognitive Skills

• From the point of view of Theory of Mind (Lewis, 2003):
  “I Know” → “I Know I Know” → “I Know You Know” →
  “I Know You Know I Know”
Consciousness Levels
Super-Conscious
Human-like
Social
Empathic
Self-Conscious
Emotional
Executive
Attentional
Adaptive
Reactive
Evaluating agents with ConsScale
Thank you!




 EASSS 2010, Saint-Etienne, France   172

Contenu connexe

Tendances

Expert systems Artificial Intelligence
Expert systems Artificial IntelligenceExpert systems Artificial Intelligence
Expert systems Artificial Intelligence
itti rehan
 
Issues in knowledge representation
Issues in knowledge representationIssues in knowledge representation
Issues in knowledge representation
Sravanthi Emani
 

Tendances (20)

Expert systems Artificial Intelligence
Expert systems Artificial IntelligenceExpert systems Artificial Intelligence
Expert systems Artificial Intelligence
 
Architecture of Mobile Computing
Architecture of Mobile ComputingArchitecture of Mobile Computing
Architecture of Mobile Computing
 
knowledge representation using rules
knowledge representation using rulesknowledge representation using rules
knowledge representation using rules
 
Corba concepts & corba architecture
Corba concepts & corba architectureCorba concepts & corba architecture
Corba concepts & corba architecture
 
The structure of agents
The structure of agentsThe structure of agents
The structure of agents
 
Spell checker using Natural language processing
Spell checker using Natural language processing Spell checker using Natural language processing
Spell checker using Natural language processing
 
State Space Search in ai
State Space Search in aiState Space Search in ai
State Space Search in ai
 
Principle source of optimazation
Principle source of optimazationPrinciple source of optimazation
Principle source of optimazation
 
Ontology engineering
Ontology engineering Ontology engineering
Ontology engineering
 
Issues in knowledge representation
Issues in knowledge representationIssues in knowledge representation
Issues in knowledge representation
 
Goal based and utility based agents
Goal based and utility based agentsGoal based and utility based agents
Goal based and utility based agents
 
State Space Representation and Search
State Space Representation and SearchState Space Representation and Search
State Space Representation and Search
 
NEGOTIATION AND BARGAINING.pptx
NEGOTIATION AND BARGAINING.pptxNEGOTIATION AND BARGAINING.pptx
NEGOTIATION AND BARGAINING.pptx
 
Predicate logic
 Predicate logic Predicate logic
Predicate logic
 
Replication in Distributed Systems
Replication in Distributed SystemsReplication in Distributed Systems
Replication in Distributed Systems
 
Taxonomy for bugs
Taxonomy for bugsTaxonomy for bugs
Taxonomy for bugs
 
Artificial Intelligence: Knowledge Acquisition
Artificial Intelligence: Knowledge AcquisitionArtificial Intelligence: Knowledge Acquisition
Artificial Intelligence: Knowledge Acquisition
 
ELEMENTS OF TRANSPORT PROTOCOL
ELEMENTS OF TRANSPORT PROTOCOLELEMENTS OF TRANSPORT PROTOCOL
ELEMENTS OF TRANSPORT PROTOCOL
 
AI: AI & Problem Solving
AI: AI & Problem SolvingAI: AI & Problem Solving
AI: AI & Problem Solving
 
Knowledge representation In Artificial Intelligence
Knowledge representation In Artificial IntelligenceKnowledge representation In Artificial Intelligence
Knowledge representation In Artificial Intelligence
 

En vedette

Trust and reputation in mobile environments
Trust and reputation in mobile environmentsTrust and reputation in mobile environments
Trust and reputation in mobile environments
Andrada Astefanoaie
 
Trust Based Routing In wireless sensor Network
  Trust Based  Routing In wireless sensor Network  Trust Based  Routing In wireless sensor Network
Trust Based Routing In wireless sensor Network
Anjan Mondal
 
Electronic Negotiation and Mediation Support
Electronic Negotiation and Mediation SupportElectronic Negotiation and Mediation Support
Electronic Negotiation and Mediation Support
Matteo Damiani
 

En vedette (13)

Trust and reputation in mobile environments
Trust and reputation in mobile environmentsTrust and reputation in mobile environments
Trust and reputation in mobile environments
 
Less Conservative Consensus of Multi-agent Systems with Generalized Lipschitz...
Less Conservative Consensus of Multi-agent Systems with Generalized Lipschitz...Less Conservative Consensus of Multi-agent Systems with Generalized Lipschitz...
Less Conservative Consensus of Multi-agent Systems with Generalized Lipschitz...
 
Trust Based Routing In wireless sensor Network
  Trust Based  Routing In wireless sensor Network  Trust Based  Routing In wireless sensor Network
Trust Based Routing In wireless sensor Network
 
Using coaching in a leadership role
Using coaching in a leadership roleUsing coaching in a leadership role
Using coaching in a leadership role
 
ISA2008
ISA2008ISA2008
ISA2008
 
BBL multi agent systems
BBL multi agent systemsBBL multi agent systems
BBL multi agent systems
 
Electronic Negotiation and Mediation Support
Electronic Negotiation and Mediation SupportElectronic Negotiation and Mediation Support
Electronic Negotiation and Mediation Support
 
Automated Negotiation
Automated NegotiationAutomated Negotiation
Automated Negotiation
 
Introduction to Agents and Multi-agent Systems (lecture slides)
Introduction to Agents and Multi-agent Systems (lecture slides)Introduction to Agents and Multi-agent Systems (lecture slides)
Introduction to Agents and Multi-agent Systems (lecture slides)
 
MAS course - Lect 9
MAS course - Lect 9 MAS course - Lect 9
MAS course - Lect 9
 
MAS course Lect13 industrial applications
MAS course Lect13 industrial applicationsMAS course Lect13 industrial applications
MAS course Lect13 industrial applications
 
Lect7MAS-Coordination
Lect7MAS-CoordinationLect7MAS-Coordination
Lect7MAS-Coordination
 
Introduction to agents and multi-agent systems
Introduction to agents and multi-agent systemsIntroduction to agents and multi-agent systems
Introduction to agents and multi-agent systems
 

Similaire à T9. Trust and reputation in multi-agent systems

Irreversibility of communication syndicate3
Irreversibility of communication syndicate3Irreversibility of communication syndicate3
Irreversibility of communication syndicate3
ankita_slide
 
A key contribution for leveraging trustful interactions
A key contribution for leveraging trustful interactionsA key contribution for leveraging trustful interactions
A key contribution for leveraging trustful interactions
Sónia
 
20 06-2014
20 06-201420 06-2014
20 06-2014
Sónia
 
Speed Of Trust
Speed Of TrustSpeed Of Trust
Speed Of Trust
GMR Group
 
Trust, Justice, & EthicsMost Trusted Companies100 Best.docx
Trust, Justice, & EthicsMost Trusted Companies100 Best.docxTrust, Justice, & EthicsMost Trusted Companies100 Best.docx
Trust, Justice, & EthicsMost Trusted Companies100 Best.docx
turveycharlyn
 
Neron India Values Introduction
Neron India Values IntroductionNeron India Values Introduction
Neron India Values Introduction
Neron
 
Building transactional trust quick guide
Building transactional trust quick guideBuilding transactional trust quick guide
Building transactional trust quick guide
Dave Neuman
 

Similaire à T9. Trust and reputation in multi-agent systems (20)

Social life in digital societies: Trust, Reputation and Privacy EINS summer s...
Social life in digital societies: Trust, Reputation and Privacy EINS summer s...Social life in digital societies: Trust, Reputation and Privacy EINS summer s...
Social life in digital societies: Trust, Reputation and Privacy EINS summer s...
 
Tireless and Talkable Campaigns
Tireless and Talkable CampaignsTireless and Talkable Campaigns
Tireless and Talkable Campaigns
 
Trust from a Human Computer Interaction perspective
Trust from a Human Computer Interaction perspective Trust from a Human Computer Interaction perspective
Trust from a Human Computer Interaction perspective
 
CVS Surveyors |Hows build-up trust in Business | Presentation
CVS Surveyors |Hows build-up trust in Business | PresentationCVS Surveyors |Hows build-up trust in Business | Presentation
CVS Surveyors |Hows build-up trust in Business | Presentation
 
Irreversibility of communication syndicate3
Irreversibility of communication syndicate3Irreversibility of communication syndicate3
Irreversibility of communication syndicate3
 
Communicating Vision and Value
Communicating Vision and ValueCommunicating Vision and Value
Communicating Vision and Value
 
Developing Social Engagement Strategy
Developing Social Engagement StrategyDeveloping Social Engagement Strategy
Developing Social Engagement Strategy
 
Future Workplace
Future WorkplaceFuture Workplace
Future Workplace
 
Hr roundtable 120323
Hr roundtable 120323Hr roundtable 120323
Hr roundtable 120323
 
The Architecture of Social Websites: Reputation
The Architecture of Social Websites: ReputationThe Architecture of Social Websites: Reputation
The Architecture of Social Websites: Reputation
 
A key contribution for leveraging trustful interactions
A key contribution for leveraging trustful interactionsA key contribution for leveraging trustful interactions
A key contribution for leveraging trustful interactions
 
20 06-2014
20 06-201420 06-2014
20 06-2014
 
Speed Of Trust
Speed Of TrustSpeed Of Trust
Speed Of Trust
 
Trust, Justice, & EthicsMost Trusted Companies100 Best.docx
Trust, Justice, & EthicsMost Trusted Companies100 Best.docxTrust, Justice, & EthicsMost Trusted Companies100 Best.docx
Trust, Justice, & EthicsMost Trusted Companies100 Best.docx
 
How to study trust online
How to study trust onlineHow to study trust online
How to study trust online
 
Neron India Values Introduction
Neron India Values IntroductionNeron India Values Introduction
Neron India Values Introduction
 
Recoding black-mirror2017-irrefutable-history-of-you
Recoding black-mirror2017-irrefutable-history-of-youRecoding black-mirror2017-irrefutable-history-of-you
Recoding black-mirror2017-irrefutable-history-of-you
 
Building transactional trust quick guide
Building transactional trust quick guideBuilding transactional trust quick guide
Building transactional trust quick guide
 
Credibility Advantage 2.0
Credibility Advantage 2.0Credibility Advantage 2.0
Credibility Advantage 2.0
 
SNCR new comm forum 2010
SNCR new comm forum 2010SNCR new comm forum 2010
SNCR new comm forum 2010
 

Plus de EASSS 2012 (9)

T7 Embodied conversational agents and affective computing
T7 Embodied conversational agents and affective computingT7 Embodied conversational agents and affective computing
T7 Embodied conversational agents and affective computing
 
T14 Argumentation for agent societies
T14	Argumentation for agent societiesT14	Argumentation for agent societies
T14 Argumentation for agent societies
 
T12 Distributed search and constraint handling
T12	Distributed search and constraint handlingT12	Distributed search and constraint handling
T12 Distributed search and constraint handling
 
T4 Introduction to the modelling and verification of, and reasoning about mul...
T4 Introduction to the modelling and verification of, and reasoning about mul...T4 Introduction to the modelling and verification of, and reasoning about mul...
T4 Introduction to the modelling and verification of, and reasoning about mul...
 
T13 Agent coordination in planning and scheduling
T13	Agent coordination in planning and schedulingT13	Agent coordination in planning and scheduling
T13 Agent coordination in planning and scheduling
 
T3 Agent oriented programming languages
T3 Agent oriented programming languagesT3 Agent oriented programming languages
T3 Agent oriented programming languages
 
T0. Multiagent Systems and Electronic Institutions
T0. Multiagent Systems and Electronic InstitutionsT0. Multiagent Systems and Electronic Institutions
T0. Multiagent Systems and Electronic Institutions
 
T2. Organization and Environment oriented programming
T2. Organization and Environment oriented programmingT2. Organization and Environment oriented programming
T2. Organization and Environment oriented programming
 
T11. Normative multi-agent systems
T11. Normative multi-agent systemsT11. Normative multi-agent systems
T11. Normative multi-agent systems
 

Dernier

Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
ZurliaSoop
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
QucHHunhnh
 

Dernier (20)

Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
 
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptxSKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
SKILL OF INTRODUCING THE LESSON MICRO SKILLS.pptx
 
Spatium Project Simulation student brief
Spatium Project Simulation student briefSpatium Project Simulation student brief
Spatium Project Simulation student brief
 
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
Jual Obat Aborsi Hongkong ( Asli No.1 ) 085657271886 Obat Penggugur Kandungan...
 
General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...
 
On National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan FellowsOn National Teacher Day, meet the 2024-25 Kenan Fellows
On National Teacher Day, meet the 2024-25 Kenan Fellows
 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
 
PROCESS RECORDING FORMAT.docx
PROCESS      RECORDING        FORMAT.docxPROCESS      RECORDING        FORMAT.docx
PROCESS RECORDING FORMAT.docx
 
Food safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdfFood safety_Challenges food safety laboratories_.pdf
Food safety_Challenges food safety laboratories_.pdf
 
Micro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdfMicro-Scholarship, What it is, How can it help me.pdf
Micro-Scholarship, What it is, How can it help me.pdf
 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentation
 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
 
1029-Danh muc Sach Giao Khoa khoi 6.pdf
1029-Danh muc Sach Giao Khoa khoi  6.pdf1029-Danh muc Sach Giao Khoa khoi  6.pdf
1029-Danh muc Sach Giao Khoa khoi 6.pdf
 
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
Explore beautiful and ugly buildings. Mathematics helps us create beautiful d...
 
Key note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdfKey note speaker Neum_Admir Softic_ENG.pdf
Key note speaker Neum_Admir Softic_ENG.pdf
 
Asian American Pacific Islander Month DDSD 2024.pptx
Asian American Pacific Islander Month DDSD 2024.pptxAsian American Pacific Islander Month DDSD 2024.pptx
Asian American Pacific Islander Month DDSD 2024.pptx
 
Unit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptxUnit-IV; Professional Sales Representative (PSR).pptx
Unit-IV; Professional Sales Representative (PSR).pptx
 
Grant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy ConsultingGrant Readiness 101 TechSoup and Remy Consulting
Grant Readiness 101 TechSoup and Remy Consulting
 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptx
 
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17  How to Extend Models Using Mixin ClassesMixin Classes in Odoo 17  How to Extend Models Using Mixin Classes
Mixin Classes in Odoo 17 How to Extend Models Using Mixin Classes
 

T9. Trust and reputation in multi-agent systems

  • 1. Trust & Reputation in Multi-Agent Systems Dr. Jordi Sabater Mir Dr. Javier Carbó jsabater@iiia.csic.es jcarbo@inf.uc3m.es EASSS 2012, Valencia, Spain 1
  • 2. Dr. Jordi Sabater-Mir IIIA – Artificial Intelligence Research Institute CSIC – Spanish National Research Council
  • 3. Outline • Introduction • Approaches to control the interaction • Computational reputation models – eBay – ReGreT • A cognitive perspective to computational reputation models – A cognitive view on Reputation – Repage, a computational cognitive reputation model – [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture – Arguing about reputation concepts
  • 4. Trust “A complete absence of trust would prevent [one] even getting up in the morning.” Niklas Luhman - 1979
  • 5. Trust A couple of definitions that I like: “Trust begins where knowledge [certainty] ends: trust provides a basis dealing with uncertain, complex, and threatening images of the future.” (Luhmann,1979) “Trust is the outcome of observations leading to the belief that the actions of another may be relied upon, without explicit guarantee, to achieve a goal in a risky situation.” (Elofson, 2001)
  • 6. Trust Epistemic “The subjective probability by which an individual, A, expects that another individual, B, performs a given action on which its welfare depends” [Gambetta] “An expectation about an uncertain behaviour” [Marsh] “The decision and the act of relying on, counting on, depending on [the trustee]” [Castelfranchi & Falcone] Motivational 6
  • 7. Reputation "After death, a tiger leaves behind his skin, a man his reputation" Vietnamese proverb
  • 8. Reputation “What a social entity says about a target regarding his/her behavior” It is always associated to a specific behaviour/property • The social evaluation linked to the reputation is not necessarily a belief of the issuer. • Reputation cannot exist without communication. Set of individuals plus a set of social relations among these individuals or properties that identify them as a group in front of its own members and the society at large.
  • 9. What is reputation good for? • Reputation is one of the elements that allows us to build trust. • Reputation has also a social dimension. It is not only useful for the individual but also for the society as a mechanism for social order.
  • 10. But... why we need computational models of those concepts?
  • 11. What we are talking about... Mr. Yellow
  • 12. What we are talking about... Two years ago... Trust based on... Direct experiences Mr. Yellow
  • 13. What we are talking about... Trust based on... Third party information Mr. Pink Mr. Yellow
  • 14. What we are talking about... Trust based on... Third party information Mr. Green Mr. Pink Mr. Yellow
  • 15. What we are talking about... Trust based on... Reputation Mr. Yellow
  • 16. What we are talking about... Mr. Yellow
  • 17. What we are talking about... ?
  • 18. Characteristics of computational trust and reputation mechanisms • Each agent is a norm enforcer and is also under surveillance by the others. No central authority needed. • Their nature allows to arrive where laws and central authorities cannot. • Punishment is based usually in ostracism. Therefore, exclusion must be a punishment for the outsider.
  • 19. Characteristics of computational trust and reputation mechanisms • Bootstrap problem. • Not all kind of environments are suitable to apply these mechanisms. It is necessary a social environment.
  • 20. Approaches to control the interaction
  • 21. Different approaches to control the interaction Security approach
  • 22. Different approaches to control the interaction • Security approach Agent identity validation. Integrity, authenticity of messages. ...
  • 23. Different approaches to control the interaction Institutional approach Security approach
  • 24. Different approaches to control the interaction • Institutional approach
  • 25. Different approaches to control the interaction Trust and reputation Social approach mechanisms are at this level. Institutional approach Security approach They are complementary and cover different aspects of interaction.
  • 27. Classification dimensions • Paradigm type • Model’s granularity • Mathematical approach • Single context • Cognitive approach • Multi context • Information sources • Agent behaviour assumptions • Cheating is not considered • Direct experiences • Agents can hide or bias the • Witness information information but they never lie • Sociological information • Type of exchanged information • Prejudice • Visibility types • Subjective • Global
  • 28. Subjective vs Global • Global • The reputation is maintained as a centralized resource. • All the agents in that society have access to the same reputation values. Advantages: • Reputation information is available even if you are a newcomer and do not depend on how well connected or good informants you have. • Agents can be simpler because they don’t need to calculate reputation values, just use them. Disadvantages: • Particular mental states of the agent or its singular situation are not taken into account when reputation is calculated. Therefore, a global view it is only possible when we can assume that all the agents think and behave similar. • Not always is desireable for an agent to make public information about the direct experiences or submit that information to an external authority. • Therefore, a high trust on the central institution managing reputation is essential.
  • 29. Subjective vs Global • Subjective • The reputation is maintained by each agent and is calculated according to its own direct experiences, information from its contacts, its social relations... Advantages: • Reputation values can be calculated taking into account the current state of the agent and its individual particularities. Disadvantages: • The models are more complex, usually because they can use extra sources of information. • Each agent has to worry about getting the information to build reputation values. • Less information is available so the models have to be more accurate to avoid noise.
  • 30. A global reputation model: eBay Model oriented to support trust between buyer and seller. • Completely centralized. • Buyers and sellers may leave comments about each other after transactions. • Comment: a line of text + numeric evaluation (-1,0,1) • Each eBay member has a Feedback score that is the summation of the numerical evaluations.
  • 32. eBay model Specifically oriented to scenarios with the following characteristics: • A lot of users (we are talking about milions) • Few chances of repeating interaction with the same partner • Easy to change identity • Human oriented • Considers reputation as a global property and uses a single value that is not dependent on the context. • A great number of opinions that “dilute” false or biased information is the only way to increase the reliability of the reputation value.
  • 33. A subjective reputation model: ReGreT What is the ReGreT system? It is a modular trust and reputation system oriented to complex e-commerce environments where social relations among individuals play an important role.
  • 34. The ReGreT ODB IDB SDB system Credibility Neigh- Witness bourhood reputation reputation Direct Reputation Trust model System reputation Trust
  • 35. The ReGreT ODB IDB SDB system Credibility Neigh- Witness bourhood reputation reputation Direct Reputation Trust model System reputation Trust
  • 36. Outcomes and Impressions Outcome: The initial contract – to take a particular course of actions – to establish the terms and conditions of a transaction. AND The actual result of the contract. Example: Prize =c 2000 Quality =c A Contract Quantity =c 300 Outcome Prize =f 2000 Quality =f C Fulfillment Quantity =f 295
  • 37. Outcomes and Impressions Outcome Prize =c 2000 offers_good_prices Quality =c A Quantity =c 300 maintains_agreed_quantities Prize =f 2000 Quality =f C Quantity =f 295
  • 38. Outcomes and Impressions Impression: The subjective evaluation of an outcome from a specific point of view. Imp(o, 1 ) Outcome Prize =c 2000 Quality =c A Quantity =c 300 Imp(o,  2 ) Prize =f 2000 Quality =f C Quantity =f 295 Imp(o,  3 )
  • 39. The ReGreT ODB IDB SDB system Credibility Neigh- Witness bourhood reputation reputation Reliability of the value based on: Direct Reputation Trust • Number of outcomes model • Deviation: The greater the variability in the rating values the more volatile will be System the other agent in the fulfillment of its reputation agreements. Trust
  • 40. Direct Trust Trust relationship calculated directly from an agent’s outcomes database. DTa b ( )    (t , t )  Imp(o ,  ) i i oi ODB gr,b ) a ( f (ti , t )  (t , ti )  o IDBa ,b f (t j , t ) gr (  ) ti f (ti , t )  j t
  • 41. Direct Trust DT reliability a ,b a ,b DTRLab ( )  No ( ODBgr (  ) )  (1  Dv ( ODBgr (  ) ) Number of Deviation outcomes (Dv) (No) The greater the variability in the rating values the more volatile will be the other agent in the fulfillment of its agreements. a ,b No ( ODB gr (  ) ), itm  10
  • 42. The ReGreT ODB IDB SDB system Credibility Neigh- Witness bourhood reputation reputation Direct Reputation Trust model System reputation Trust
  • 43. Witness reputation Reputation that an agent builds on another agent based on the beliefs gathered from society members (witnesses). Problems of witness information: • Can be false. • Can be incomplete. • It may suffer from the “correlated evidence” problem.
  • 44. B o C o A o # u7 + D + + a1 # ^ o^ c1 o+ b2 c1 b1 u6 a2 o # o+ c2 #^ d1 u3 + d1 + a1 u2 u8 + d2 o u4 ^ u1 u5 c2 u2 u9 #^ b2 u9 u1 # u3 u8 u6 u5 u4 u7 d2 ^ o a2 b1 o + o + # # + trade ^
  • 45. B o C o A o # + D + + # ^ ^ b2 c1 b1 u1 # c2 o+ u4 #^ a2 o d1 + u9 + a1 d2 o u4 u3 ^ u1 u2 u5 u5 u2 u9 u3 u8 u6 u7 u8 u6 u7 cooperation o o Big exchange of sincere infor- # + # mation and some kind of predispo- + sition to help if it is possible. ^
  • 46. B o C o A o # + D + + # ^ ^ b2 c1 b1 # o+ c2u3 u9 a2 o #^ d1 + u1 + a1 d2 o u4 u2 ^ u1 u5 u2 u9 u7 u8 u3 u8 u5 u6 u4 u6 u7 competition o o Agents tend to use all the available # + # mechanisms to take some advantage + from their competitors. ^
  • 47. Witness u7 reputation # a1 o c1 o+ Step 1: Identifying u6 the witnesses u3 d1 • Initial set of witnesses: u2 u8 + ? Agents that have had a trade Relation with the target agent c2 #^ b2 u9 u1 # u5 u4 d2 b1 ^ a2 + o trade
  • 48. Witness u7 Grouping agents with frequent interactions reputation them and considering each one of these among groups as a single source of reputation values: Step 1: Identifying u6 • Minimizes u3 correlated evidence problem. the witnesses the • Initial set of witnesses: u2 u8 • Reduces the number of queries to agents that Agents that have had probably will give us more or less the same a trade Relation with the target agent information. b2 # To group agents ReGreT relies on sociograms. u5 u4 trade
  • 49. Witness reputation u7 Heuristic to identify groups and Central-point the best agents to represent them: u6 u3 1. Identify the components of u2 u8 the graph. 2. For each component, find the set of cut-points. b2 # 3. For each component that does not have any cut-point, u5 u4 select a central point (node with larger degree). Cut-point cooperation
  • 50. Witness u7 reputation Step 1: Identifying u6 the witnesses u3 • Initial set of witnesses: u2 u8 Agents that have had a trade Relation with the target agent b2 • Grouping and selecting # the most representative witnesses u5 u4 trade
  • 51. Witness reputation Step 1: Identifying the witnesses u3 • Initial set of witnesses: u2 Agents that have had a trade Relation with the target agent b2 • Grouping and selecting # the most representative witnesses u5 trade
  • 52. Witness reputation  Trustu 2b 2 ( ), TrustRLu 2b 2 ( )  u2 Step 1: Identifying u3 the witnesses u5 Step 2: Who can I  Trustu 5b 2 ( ), TrustRLu 5b 2 ( )  trust?
  • 53. The ReGreT ODB IDB SDB system Credibility Neigh- Witness bourhood reputation reputation Direct Reputation Trust model System reputation Trust
  • 54. Credibility model Two methods are used to evaluate the credibility of witnesses: Credibility (witnessCr) Social relations Past history (socialCr) (infoCr)
  • 55. Credibility model • socialCr(a,w,b): credibility that agent a assigns to agent w when w is giving information about b and considering the social structure among w, b and himself. a a a w w w b b b a a a w w w b b b a a a w w w b b b w - witness competitive relation b - target agent cooperative relation a - source agent
  • 56. Credibility model Regret uses fuzzy rules to calculate how the structure of social relations influences the credibility on the information. IF coop(w,b) is h THEN socialCr(a,w,b) is vl 1 1 0 0 0 1 0 1 low moderate high very_low low moderate high very_high (l) (m) (h) (vl) (l) (m) (h) (vh)
  • 57. The ReGreT ODB IDB SDB system Credibility Neigh- Witness bourhood reputation reputation Direct Reputation Trust model System reputation Trust
  • 58. Neighbourhood reputation The trust on the agents that are in the “neighbourhood” of the target agent and their relation with it are the elements used to calculate what we call the Neighbourhood reputation. ReGreT uses fuzzy rules to model this reputation. IF DTan (offers_good_quality ) is X AND coop(b,ni)  low i THEN Rab (offers_good_quality) is X n i IF DTRLan (offers_good_quality) is X’ AND coop(b,ni) is Y’ i THEN RLab (offers_good_quality) is T(X’,Y’) n i
  • 59. The ReGreT ODB IDB SDB system Credibility Neigh- Witness bourhood reputation reputation Direct Reputation Trust model System reputation Trust
  • 60. System reputation The idea behind the System reputation is to use the common knowledge about social groups and the role that the agent is playing in the society as a mechanism to assign reputation values to other agents. The knowledge necessary to calculate a system reputation is usually inherited from the group or groups to which the agent belongs to.
  • 61. Trust If the agent has a reliable direct trust value, it will use that as a measure of trust. If that value is not so reliable then it will use reputation. Neigh- Witness bourhood reputation reputation Direct Reputation Trust model System reputation Trust
  • 62. A cognitive perspective to computational reputation models • A cognitive view on Reputation • Repage, a computational cognitive reputation model • [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture • Arguing about reputation concepts
  • 63. Social evaluation • A social evaluation, as the name suggests, is the evaluation by a social entity of a property related to a social aspect. • Social evaluations may concern physical, mental, and social properties of targets. • A social evaluation includes at least three sets of agents:  a set E of agents who share the evaluation (evaluators)  a set T of evaluation targets  a set B of beneficiaries We can find examples where the different sets intersect totally, partially, etc... e (e in E) may evaluate t (t in T) with regard to a state of the world that is in b’s (b in B) interest, but of which b not necessarily is aware. Example: quality of TV programs during children’s timeshare
  • 64. Image and Reputation • Both are social evaluations. • They concern other agents' (targets) attitudes toward socially desirable behaviour but... ...whereas image consists of a set of evaluative beliefs about the characteristics of a target, reputation concerns the voice that is circulating on the same target. Reputation in artificial societies [Rosaria Conte, Mario Paolucci]
  • 65. Image “An evaluative belief; it tells whether the target is good or bad with respect to a given behaviour” [Conte & Paolucci] Is the result of an internal reasoning on different sources of information that leads the agent to create a belief about the behaviour of another agent. Beliefs The agent has accepted φ as something true and its decisions from now on will take this B into account. Social evaluation 
  • 66. Reputation • A voice is something that “it is said”, a piece of information that is being transmitted. • Reputation: a voice about a social evaluation that is recognised by the members of a group to be circulating among them. Beliefs • The agent believes that the social B(S(f)) evaluation f is communicated. • This does not imply that the agent believes that f is true.
  • 67. Reputation Implications: • The agent that spreads a reputation, because it is not implicit that it believes the associated social evaluation, takes no responsibility about that social evaluation (another thing is the responsibility associated to the action of spreading that reputation). • This fact allows reputation to circulate more easily than image (less/no fear of retaliation). • Notice that if an agent believes “what people say”, image and reputation colapse. • This distinction has important advantages from a technical point of view.
  • 68. Gossip • In order for reputation to exist, it has to be transmitted. We cannot have reputation without communication. • Gossip currently has the meaning of an idle talk or rumour, especially about the personal or private affairs of others. Usually has a bad connotation. But in fact is an essential element in human nature. • The antecedents of gossip is grooming. • Studies from evolutionary psicology have found gossip to be very important as a mechanism to spread reputation [Sommerfeld et al. 07, Dunbar 04] • Gossip and reputation complement social norms: Reputation evolves along with implicit norms to encourage socially desirable conducts, such as benevolence or altruism and discourage socially unacceptable ones, like cheating.
  • 69. Outline • A cognitive view on Reputation • Repage, a computational cognitive reputation model • [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture • Arguing about reputation concepts
  • 70. RepAge What is the RepAge model? It is a reputation model evolved from a cognitive theory by Conte and Paolucci. The model is designed with an special attention to the internal representation of the elements used to build images and reputations as well as the inter-relations of these elements.
  • 71. RepAge memory Value: Rep Img P P P Strength: 0.6 P P P P P P P P
  • 73.
  • 74. Outline • A cognitive view on Reputation • Repage, a computational cognitive reputation model • [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture • Arguing about reputation concepts
  • 75. What do you mean by “properly”? Current models Planner Trust & Reputation system ? Inputs Decision mechanism Comm Black box Agent Reactive
  • 76. What do you mean by “properly”? Current models Planner Trust & Reputation system Value Inputs Decision mechanism Comm Black box Agent Reactive
  • 77. What do you mean by “properly”? The next generation? Planner Trust & Reputation system Inputs Decision mechanism Comm Agent
  • 78. What do you mean by “properly”? The next generation? Planner Inputs Decision mechanism Comm Agent Not only reactive... ... proactive
  • 79. BDI model • Very popular model in the multiagent community. • Has the origins in the theory of human practical reasoning [Bratman] and the notion of intentional systems [Dennett]. • The main idea is that we can talk about computer programs as if they have a “mental state”. • Specifically, the BDI model is based on three mental attitudes: Beliefs - what the agent thinks it is true about the world. Desires - world states the agent would like to achieve. Intentions - world states the agent is putting efforts to achieve.
  • 80. BDI model • The agent is described in terms of these mental attitudes. • The decision-making model underlying the BDI model is known as practical reasoning. • In short, practical reasoning is what allows the agent to go from beliefs, desires and intentions to actions.
  • 81. Multicontext systems • Declarative languages, each with a set of Logics axioms amd a number of rules of inference. • Structural entities representing the main architecture components. Each unit has a UNITS single logic associated with it. • Rules of inference wich relate formulae Bridge Rules in different units. • Sets of formulae written in the logic Theories associated with a unit
  • 82. U1 U2 d U1:b , U2:d U3:a U3
  • 83. U1 U2 b d U1:b , U2:d U3:a U3
  • 84. U1 U2 b d U1:b , U2:d U3:a U3
  • 85. U1 U2 b d U1:b , U2:d U3:a U3 a
  • 87. Repage integration in a BDI architecture
  • 89. Grounding Image and Reputation to BC-Logic
  • 90.
  • 91. Repage integration in a BDI architecture
  • 95.
  • 96. Repage integration in a BDI architecture
  • 97. Outline • A cognitive view on Reputation • Repage, a computational cognitive reputation model • [Properly] Integrating a [cognitive] reputation model into a [cognitive] agent architecture • Arguing about reputation concepts
  • 98. Arguing about Reputation Concepts Goal: Allow agents to participate in argumentation-based dialogs regarding reputation elements in order to: - Decide on the acceptance of a communicated social evaluation based on its reliability. “Is the argument associated to a communicated social evaluation (and according to my knowledge) strong enough to consider its inclusion in the knwoledge base of my reputation model?” - Help in the process of trust alignment. What we need: • A language that allows the exchange of reputation-related information. • An argumentation framework that fits the requirements imposed by the particular nature of reputation. • A dialog protocol to allow agents establish information seeking dialogs.
  • 99. The language: LRep LREP : First-order sorted languange with special predicates representing the typology of social evaluations we use: Img, Rep, ShV, ShE, DE, Comm. Ex 2: Linguistic Labels •SF: Set of constant formulas Allows LREP formulas to be nested in communications • SV: Set of evaluative values f: { 0 , 1, 2 , 3 , 4 }
  • 100. The reputation argumentation framework • Given the nature of social evaluations (the values of a social evaluation are graded) we need an argumentation framework that allows to weight the attacks. Example: We have to be able to differentiate between Img(j,seller,VG) being attacked by Img(j,seller,G) or being attacked by Img(j,seller,VB). • Specifically we instantiate the Weighted Abstract Argumentation Framework defined in P.E. Dunne, A. Hunter, P. McBurney, S. Parsons, and M. Wooldridge, ‘Inconsistency tolerance in weighted argument systems’, in AAMAS’09, pp. 851–858, (2009). • Basically, this framework introduces the notions of strength and inconsistency budgets (defined as the amount of “inconsistency” that the system can tolerate regarding attacks) in a classical Dung’s framework.
  • 101. Building Argumentative Theories Argumentative theory (Build from the Simple shared consequence relation reputation theory) Argumentation level ? ? Reputation-related information Consequence relation Reputation theory: set of ground (Reputation model) elements (expressed in LREP) gathered Specific to each agent by j through interactions and communications.
  • 102. Attack and Strength f: { 0 , 1, 2 , 3 , 4 } Strength of the attack
  • 103. Example of argumentative dialog Role: seller Role: Inf informant Role: sell(q) Role: sell(dt) • Agent i: proponent quality delivery time • Agent j: opponent j i • Each agent is equipped with a Reputation Weighted Argument System
  • 105. Example of argumentative dialog j i Strength of the attack
  • 111. Outline + PART II:  Trust Computing Approaches Security Institutional Social  Evaluation of Trust and Reputation Models EASSS 2010, Saint-Etienne, France 111
  • 112. Dr. Javier Carbó GIAA – Group of Applied Artificial Intelligence Univ. Carlos III de Madrid
  • 113. Trust in Information Security Same Word, Different World Security approach tackles “hard” problems of trust. They view trust as an objective, universal and verifiable property of agents. Their trust problems have solutions: • False identity • Reading/modification of messages by third parties • Repudiation of messages • Certificates of accomplishing tasks/services according to standards EASSS 2010, Saint-Etienne, France 113
  • 114. An example, Public Key Infrastructure LDAP directory Certificate authority 4. Publication of certificate 3. Public key 5. Certificate sent sent 2. Private key sent 1. Client identity Registration authority EASSS 2010, Saint-Etienne, France 114
  • 115. Trust in I&S, limitations Their trust relies on central entities: – Authorities, Trust Third Parties – Partially solved using hierarchies of TTPs. They ignore part of the problem: - Top authority should be trusted by any other way Their scope is far away from Real Life Trust issues: – lies, defection, collusions, social norm violations, … EASSS 2010, Saint-Etienne, France 115
  • 116. Institutional approach Institutions have proved to successfully regulate human societies for a long time: - created to achieve particular goals while complying norms. - responsible for defining the rules of the game (norms), to enforce them and assess penalties in case of violation. Examples: auction houses, parliaments, stock exchange markets,.… Institutional approach is focused on the existence of organizations: • Providing an execution infrastructure • Controlling the resources access • Sanctionning/rewarding agents’ behaviors EASSS 2010, Saint-Etienne, France 116
  • 117. An example: e-institutions EASSS 2010, Saint-Etienne, France 117
  • 118. Institutional approach, limitations They view trust as an partially objective, local and verifiable property of agents. Intrusive control on the agents (modification on the execution resources, process killing, …) They require a shared agreeement to define of what is expected (norm compliance, case laws…) They require a central entity and global supervision – Repositories, access control entities should be centralised – Low scalability if every agent is observed by the institution Assumes that the institution itself is trusted EASSS 2010, Saint-Etienne, France 118
  • 119. Social approach Social approach consists in the idea of an auto-organized society (Adam Smith’s invisible hand) Each agent has its own evaluation criteria of what is expected: no social norms, just individual norms Each agent is in charge of rewards and punishments (often in terms of more/less future cooperative interactions) No central entity at all, it consists of a completely distributed social control of malicious agents. Trust as an emergent property Avoids Privacy issues caused by centralized approaches EASSS 2010, Saint-Etienne, France 119
  • 120. Social approach, limitations Unlimited, but undefined and unexpected trust scope: We view trust as a subjective, local and unverifiable property of agents. Exclusion/Isolation is the typical punishment for the malicious agents  Difficult to enforce it in open and dynamical societies of agents Malicious behaviors may occur, they are supposed to be prevented due to the lack of incentives and punishments. Difficult to define which domain and society is appropriate to test this social approach. EASSS 2010, Saint-Etienne, France 120
  • 121. Ways to evaluate any system  Integration on real applications  Using real data from public datasets  Using realistic data generated artificially  Using ad-hoc simulated data with no justification/motivation  None of above
  • 122. Ways to evaluate T&R in agent systems  Integration of T&R on real agent applications  Using real T&R data from public datasets  Using realistic T&R data generated artificially  Using ad-hoc simulated data with no justification/motivation  None of above
• 123. Real Applications using T&R in an agent system
• What real application are we looking for?
• Trust and reputation:
– A system that uses (for something) and exchanges subjective opinions about other participants → Recommender Systems
• Agent system:
– Distributed view; no central entity collects, aggregates and publishes a final valuation → ???
  • 124. Real Applications using T&R in an agent system • Desiderata of application domains: (To be filled by students)
• 125. Real data & public datasets
• Assuming real agent applications exist, would data be publicly available?
– Privacy concerns
– Lack of incentives to save data over time
– Distribution of data. Heisenberg uncertainty principle: if users knew their subjective opinions would be collected by a central entity, they would not behave as if their opinions had just a private (supposed-to-be friendly) reader.
• No agents, no distribution → public datasets from recommender systems
• 126. A view on privacy concerns
• Anonymity: use of arbitrary/secure pseudonyms
• Using concordance: similarity between users within a single context, computed as the mean of differences when rating a set of items; users tend to agree. (“Private Collaborative Filtering using estimated concordance measures”, N. Lathia, S. Hailes, L. Capra, 2007)
• Secure pair-wise comparison of fuzzy ratings (“Introducing newcomers into a fuzzy reputation agent system”, J. Carbo, J.M. Molina, J. Davila, 2002)
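As a rough illustration of the concordance idea (a simplified reading, not Lathia et al.'s exact measure), the sketch below scores two users by the mean absolute difference of their ratings over co-rated items, mapped into [0, 1]:

    def concordance(ratings_a: dict, ratings_b: dict, rating_range: float = 4.0) -> float:
        common = set(ratings_a) & set(ratings_b)          # co-rated items only
        if not common:
            return 0.0                                    # no evidence either way
        mean_diff = sum(abs(ratings_a[i] - ratings_b[i]) for i in common) / len(common)
        return 1.0 - mean_diff / rating_range             # 1.0 = perfect agreement

    alice = {"item1": 5, "item2": 3, "item3": 1}
    bob   = {"item2": 4, "item3": 1, "item4": 2}
    print(concordance(alice, bob))                        # 0.875 on a 1-5 scale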
• 127. Real Data & Public Datasets
MovieLens, www.grouplens.org, two datasets:
– 100,000 ratings for 1,682 movies by 943 users
– 1 million ratings for 3,900 movies by 6,040 users
These are the “standard” datasets that many recommender system papers use in their evaluation.
• 128. My paper with MovieLens
• We selected users who had rated 70 or more movies, and movies that were evaluated more than 35 times, in order to avoid the sparsity problem.
• Finally we had 53 users and 28 movies.
• The average number of votes per user is approximately 18, so the sparsity of the selected set of users and movies is under 35%.
“Agent-based collaborative filtering based on fuzzy recommendations”, J. Carbó, J.M. Molina, IJWET v1 n4, 2004
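A sketch of this kind of density filter on the MovieLens 100K file (u.data is tab-separated: user id, item id, rating, timestamp). The thresholds follow the slide, though the paper's exact filtering procedure may have differed, so the resulting subset need not be exactly 53 x 28:

    import pandas as pd

    cols = ["user_id", "item_id", "rating", "timestamp"]
    df = pd.read_csv("u.data", sep="\t", names=cols)

    ratings_per_user = df.groupby("user_id").size()
    ratings_per_item = df.groupby("item_id").size()

    # Keep users with >= 70 ratings and movies rated > 35 times.
    dense = df[df["user_id"].isin(ratings_per_user[ratings_per_user >= 70].index)
               & df["item_id"].isin(ratings_per_item[ratings_per_item > 35].index)]

    n_users = dense["user_id"].nunique()
    n_items = dense["item_id"].nunique()
    sparsity = 1 - len(dense) / (n_users * n_items)
    print(n_users, n_items, round(sparsity, 2))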
  • 129. Real Data & Public Datasets BookCrossing (BX) dataset: • www.informatik.uni-freiburg.de/~cziegler/BX • collected by Cai-Nicolas Ziegler in a 4-week crawl (August / September 2004) from the Book-Crossing community. • It contains 278,858 users providing 1,149,780 ratings (explicit / implicit) about 271,379 books.
• 130. Real Data & Public Datasets
Last.fm dataset:
• top artists played by all users:
– tuples of <user, artist-mbid, artist-name, total-plays> for ~360,000 users about 186,642 artists
• full listening history of 1,000 users:
– tuples of <user-id, timestamp, artist-mbid, artist-name, song-mbid, song-title>
• collected by Oscar Celma, Univ. Pompeu Fabra
• www.dtic.upf.edu/~ocelma/MusicRecommendationDataset
• 131. Real Data & Public Datasets
Jester Joke dataset: www.ieor.berkeley.edu/~goldberg/jester-data/
– released by Ken Goldberg (UC Berkeley) from the Jester Joke Recommender System
– 4.1 million continuous ratings (-10.00 to +10.00) of 100 jokes from 73,496 users
– it differentiates itself from other datasets by having a much smaller number of rateable items
• 132. Real Data & Public Datasets
Epinions dataset, collected by P. Massa:
• in a 5-week crawl (November/December 2003) from Epinions.com
• not just ratings about items, but also trust statements:
– 49,290 users who rated a total of 139,738 different items at least once, writing 664,824 reviews
– 487,181 issued trust statements
• contains only positive trust statements, not negative ones
• 133. Real Data & Public Datasets
Advogato: www.trustlet.org
• a weighted dataset; opinions are aggregated (centrally) on a 3-level scale: Apprentice, Journeyer and Master
• tuples of the form: minami -> polo [level="Journeyer"];
• used to test trust propagation in social networks (assuming trust transitivity)
• a trust metric (by P. Massa) uses this information to assign every user a final certification level by aggregating weighted opinions
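A minimal sketch of parsing such trust statements; the regex is an assumption based only on the tuple shape shown above:

    import re

    STATEMENT = re.compile(r'(\w+)\s*->\s*(\w+)\s*\[level="(\w+)"\];')
    LEVELS = {"Apprentice": 1, "Journeyer": 2, "Master": 3}

    def parse(line: str):
        # Returns (source, target, numeric level) or None if the line does not match.
        m = STATEMENT.match(line.strip())
        if not m:
            return None
        source, target, level = m.groups()
        return source, target, LEVELS[level]

    print(parse('minami -> polo [level="Journeyer"];'))  # ('minami', 'polo', 2)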
• 134. Real Data & Public Datasets
MoviePilot dataset: www.moviepilot.com
• contains information related to concepts from the world of cinema, e.g. single movies, movie universes (such as the world of Harry Potter movies) and upcoming details (trailers, teasers, news, etc.)
• RecSysChallenge: a live evaluation session where algorithms trained on offline data are evaluated online, on real users
Mendeley dataset: www.mendeley.com
• recommendations to users about scientific papers they might be interested in
• 135. Real Data & Public Datasets
• No agents, no distribution → public datasets from recommender systems
• Authors have to distribute opinions to participants in some way (see the sketch below).
• Ratings are about items; they are not trust statements.
• The ratio of # of ratings to # of items is too low.
• The ratio of # of ratings to # of users is too low.
• No timestamps.
• Papers intend to be based on real data, but the required transformation from centralized to distributed aggregation distorts the reality of these data.
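One hypothetical way to perform that centralized-to-distributed transformation: each rating becomes a private observation of its author, and other agents can only obtain it by asking, which is exactly where the distortion (and the possibility of lying) enters:

    import random
    from collections import defaultdict

    ratings = [("u1", "item1", 5), ("u2", "item1", 2), ("u1", "item2", 4)]

    local_store = defaultdict(dict)          # agent -> its own private ratings
    for user, item, score in ratings:
        local_store[user][item] = score

    def ask_opinion(asker: str, provider: str, item: str, honest: bool = True):
        # Simulated message exchange replacing the central aggregator.
        opinion = local_store[provider].get(item)
        if opinion is None:
            return None
        return opinion if honest else random.randint(1, 5)   # lying is now possible

    print(ask_opinion("u2", "u1", "item2"))  # 4, obtained only via communication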
• 136. Realistic Data
• We need to generate realistic data to test trust and reputation in agent systems.
• Several technical/design problems arise:
– Which # of users, ratings and items do we need?
– How dynamic should the society of agents be?
• But the hardest part is the psychological/sociological one:
– How do individuals make trust decisions? Which types of individuals are there?
– How do real societies of humans trust? How many of each individual type belong to a real human society?
• 137. Realistic Data
• Large-scale simulation with NetLogo (http://ccl.northwestern.edu/netlogo/)
• Others: MASON (https://mason.dev.java.net/), RePast (http://repast.sourceforge.net/)
• But these are mainly ad hoc simulations which are difficult for third parties to repeat.
• Many of them use unrealistic agents with a binary altruist/egoist behaviour based on game-theoretic views.
• 138. Examples of Ad Hoc Simulations
• Convergence of reputation image to the real behaviour of agents. Static behaviours, no recommendations, agents just consume/provide services. Worst case. (A minimal sketch of this scenario follows below.)
• Maximum influence of cooperation. Free and honest recommendations from every agent based on consumed services. Best case.
• Inclusion of dynamic behaviours, different % of malicious agents in society, collusions between recommenders and providers, etc. Compare results with the previous ones.
“Avoiding malicious agents using fuzzy recommendations”, J. Carbo, J.M. Molina, J. Dávila, Journal of Organizational Computing & Electronic Commerce, vol. 17, num. 1
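A minimal sketch of the first, worst-case scenario: static providers, direct experiences only, with illustrative numbers rather than the paper's actual setup. The consumer's image of each provider converges to the provider's true behaviour:

    import random

    random.seed(1)
    providers = {"good": 0.9, "bad": 0.2}     # true probability of a good service
    image = {p: 0.5 for p in providers}       # consumer starts neutral
    ALPHA = 0.1                               # learning rate for image updates

    for step in range(200):
        p = random.choice(list(providers))                    # consume a service
        outcome = 1.0 if random.random() < providers[p] else 0.0
        image[p] += ALPHA * (outcome - image[p])              # drift toward behaviour

    print({p: round(v, 2) for p, v in image.items()})  # approx {'good': 0.9, 'bad': 0.2}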
• 139. Technical/Design Problems of Generating Simulated Data
• Lessons learned from the ART testbed experience: http://megatron.iiia.csic.es/art-testbed/
• A testbed helps to compute fair comparisons: “Researchers can perform easily-repeatable experiments in a common environment against accepted benchmarks”
• Relative success:
– 3 international competitions jointly with AAMAS 06-08
– Over 15 participants in each competition
– Several journal and conference publications use it
• 143. ART Interface
The agent system is displayed as a topology on the left, while two other panels show the details of particular agent statistics and of global system statistics.
• 144. The ART testbed
• The simulation creates opinions according to an error distribution with zero mean and a standard deviation s:
s = (s* + α / cg) · t
• where s*, unique for each era, is assigned to an appraiser from a uniform distribution;
• t is the true value of the painting to be appraised;
• α is a hidden value, fixed for all appraisers, that balances opinion-generation cost against final accuracy;
• cg is the cost an appraiser decides to pay to generate an opinion.
Therefore, the minimum achievable standard deviation of the error distribution is s* · t.
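A small Python sketch of this opinion-generation model, showing how spending more (larger cg) drives the error standard deviation down toward the floor s* · t; the numeric values are illustrative only:

    import random

    def generate_opinion(t: float, s_star: float, alpha: float, cg: float) -> float:
        s = (s_star + alpha / cg) * t          # more spending -> smaller error
        return random.gauss(t, s)              # zero-mean error around the true value

    t, s_star, alpha = 100.0, 0.1, 1.0
    for cg in (0.1, 1.0, 10.0):
        print(cg, round(generate_opinion(t, s_star, alpha, cg), 1))
    # as cg grows, s approaches the floor s* * t (here 10.0)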
• 145. The ART testbed
• Each appraiser a’s actual client share ra takes into account the appraiser’s client share from the previous timestep:
ra = q · ra’ + (1 − q) · r̃a
• where ra’ is appraiser a’s client share in the previous timestep and r̃a is the share indicated by its current performance;
• q is a value that reflects the influence of the previous client share size on the next one (thus the volatility in client share magnitudes due to frequent accuracy oscillations can be reduced).
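The update is plain exponential smoothing; a minimal sketch with illustrative values:

    def next_share(r_prev: float, r_tilde: float, q: float) -> float:
        # q weighs the old share; (1 - q) weighs the share "deserved" this timestep.
        return q * r_prev + (1 - q) * r_tilde

    share = 0.5
    for r_tilde in (0.9, 0.9, 0.2):            # accuracy-driven target shares
        share = next_share(share, r_tilde, q=0.7)
        print(round(share, 3))                 # 0.62, 0.704, 0.553: high q damps oscillations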
• 146. 2006 ART Competition
2006 competition setup:
• Clients per agent: 20; painting eras: 10; games with 5 agents
• Costs 100/10/1; Sensing-Cost-Accuracy = 0.5; winner: IAM, from Southampton Univ.
Post-competition discussion notes:
• A larger number of agents is required; definition of dummy agents; relate the # of eras to the # of agents; a fairer distribution of expertise (currently just uniform); more abrupt changes in # of clients (greater q); improving expertise over time?
• 147. 2006 ART Winner conclusions
“The ART of IAM: The Winning Strategy for the 2006 Competition”, Luke Teacy et al., Trust WS, AAMAS 07.
• It is generally more economical for an agent to purchase opinions from a number of third parties than to invest heavily in its own opinion.
• There is little apparent advantage to reputation sharing; reputation is most valuable in cases where direct experience is relatively more difficult to acquire.
• The final lesson is that although trust can be viewed as a sociological concept, and inspiration for computational models of trust can be drawn from multiple disciplines, the problem of combining estimates of unknown variables (such as trustee behaviour) is fundamentally a statistical one.
• 148. 2007 ART Competition
2007 competition setup:
• Costs 100/10/0.1; all agents have an equal sum of expertise values; painting eras: static but unknown; expertise assignments may change during the course of the game; dummy agents included; games with 25 agents
2007 competition discussion notes:
• It needs to facilitate reputation exchange
• It doesn’t have to produce all changes at the same time: gradual changes
• Studying barriers to entry, i.e. how a new agent joins an existing MAS: cold start vs. hot start (exploration vs. exploitation)
• More competitive dummy agents
• Relationship between opinion generation cost and accuracy
• 149. 2008 ART Competition
2008 competition setup:
• Agents are limited in the number of certainty and opinion requests that they can send
• Certainty requests have a cost
• The use of self opinions is denied
• Wider range of expertise values
• Every timestep, a number of eras is randomly selected to change, adding a given amount of positive change (increased value); for every positive change, a negative change of the same amount is also applied, so that the average expertise of the agent is not modified
• 150. Evaluation criteria
• Lack of criteria on which trust decisions should be considered, and how.
Conte and Paolucci 02:
• epistemic decisions: those about updating and generating trust opinions from received reputations
• pragmatic-strategic decisions: decisions about how to behave with partners using these reputation-based trust opinions
• memetic decisions: decisions about how and when to share reputation with others
• 151. Main Evaluation Criteria of the ART testbed
• The winning agent is the appraiser with the highest bank account balance in the direct confrontation of appraiser agents, repeated X times.
• In other words, the appraiser who is able to:
– estimate the value of its paintings most accurately
– purchase information most prudently
• An ART iteration involves 19 steps (11 decisions, 8 interactions) to be taken by an agent.
• 152. Trust decisions in the ART testbed
1. How should our agent aggregate reputation information about others?
2. How should our agent update the trust weights of providers and recommenders afterwards?
3. How many agents should our agent ask for reputation information about other agents?
4. How many reputation and opinion requests from other agents should our agent answer?
5. How many agents should our agent ask for opinions about our assigned paintings?
6. How much time (economic value) should our agent spend building requested opinions about the paintings of the other agents?
7. How much time (economic value) should our agent spend building the appraisals of its own paintings? (AUTOPROVIDER!)
…
• 153. Limitations of the Main Evaluation Criteria of the ART testbed
From my point of view:
• It evaluates all trust decisions jointly: should participants play provider and consumer roles jointly, or just the role of opinion consumers?
• Is the direct confrontation of competitor agents the right scenario to compare them?
• 154. Providers vs. Consumers
• Playing games with two participants of the 2007 competition (iam2 and afras) and 8 other dummy agents.
• The dummy agents were implemented ad hoc to be the sole opinion providers; they do not ask the 2007 participants for any service.
• Neither of the two 2007 participants ever provides opinions/reputations; they are just consumers.
• → Differences between both agents were much smaller than the official competition stated (absolutely and relatively).
“An extension of a fuzzy reputation agent trust model in the ART testbed”, Soft Computing v14, issue 8, 2010
• 155. Trust Strategies in Evolutive Agent Societies
• An evolutionarily stable strategy (ESS) is a strategy which, if adopted by a population of players, cannot be invaded by any alternative strategy.
• An evolutionarily stable trust strategy is a strategy which, if it becomes dominant (adopted by a majority of agents), cannot be defeated by any alternative trust strategy.
• Justification: the goal of trust strategies is to establish some kind of social control over malicious/distrustful agents.
• Assumption: agents may change trust strategy. Agents with a failing trust strategy would get rid of it and adopt a successful trust strategy in the future.
• 156. An evolutive view of ART games
• We consider the failing trust strategy to be the one that lost the last ART game (earning less money than the others).
• We consider the successful trust strategy to be the one that won the last ART game (earning more money than the others).
• In this way, in consecutive games the participant who lost the game is replaced by the one who won it (see the sketch below).
• We have applied this to the 16 participant agents of the 2007 ART competition.
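A sketch of these replacement dynamics; play_art_game is a placeholder for running one full ART game and returning per-agent earnings, so only the loop structure is meaningful here:

    import random
    from collections import Counter

    def play_art_game(population):
        # Placeholder: run one full ART game and return earnings per agent slot.
        return {i: random.random() for i in range(len(population))}

    population = [f"strategy_{k:02d}" for k in range(16)]   # the 16 participants

    for game in range(50):
        earnings = play_art_game(population)
        winner = max(earnings, key=earnings.get)
        loser = min(earnings, key=earnings.get)
        population[loser] = population[winner]   # the loser adopts the winning strategy

    # A strategy is evolutionarily stable if, once dominant, it cannot be displaced.
    print(Counter(population).most_common(3))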
• 157. [Diagram: starting from the 16 participants (including the winner of the 2007 competition), a sequence of ART games is played; after each game the loser is replaced by the winner, and so on.]
• 158. Results per game:
Game  Winner         Earnings   Loser          Earnings
1     iam2           17377      xerxes         -8610
2     iam2           14321      lesmes         -13700
3     iam2           10360      reneil         -14757
4     iam2           10447      blizzard       -7093
5     agentevicente  8975       Rex            -5495
6     iam2           8512       alatriste      -999
7     artgente       8994       agentevicente  2011
8     artgente       10611      agentevicente  1322
9     artgente       8932       novel          424
10    iam2           9017       IMM            1392
11    artgente       7715       marmota        1445
12    artgente       8722       spartan        2083
13    artgente       8966       zecariocales   1324
14    artgente       8372       iam2           2599
15    artgente       7475       iam2           2298
16    artgente       8384       UNO            2719
17    artgente       7639       iam2           2878
18    iam2           6279       JAM            3486
19    iam2           14674      artgente       2811
20    artgente       8035       iam2           3395
• 159. Results of repeated games
The 2007 winner is not an evolutionarily stable strategy.
• Although the strategy of the winner of 2007 spreads in the society of agents (up to 6 iam2 agents out of 16), it never becomes dominant (no majority of iam2 agents).
• The iam2 strategy is defeated by the artgente strategy, which becomes dominant (11 artgente agents out of 16). Therefore its superiority as winner of the 2007 competition is, at least, relative.
• The equilibrium of trust strategies that forms an evolutionarily stable society is composed of 10-11 artgente agents and 5-6 iam2 agents.
• 160. Competition rank vs. evolution rank:
CompetitionRank  EvolutionRank  Agent          ExcludedInGame
6                1              artgente       -
1                2              iam2           -
2                3              JAM            18
7                4              UNO            16
4                5              zecariocales   13
5                6              spartan        12
9                7              marmota        11
13               8              IMM            10
10               9              novel          9
15               10             agentevicente  8
11               11             alatriste      6
12               12             rex            5
3                13             Blizzard       4
8                14             reneil         3
14               15             lesmes         2
16               16             xerxes         1
• 161. Other Evaluation Criteria of the ART testbed
• The testbed also provides functionality to compute:
– the average accuracy of the appraiser’s final appraisals (final appraisal error mean)
– the consistency of that accuracy (final appraisal error standard deviation)
– the quantities of each type of message passed between appraisers
• Could we take other relevant evaluation criteria into account?
• 162. Evaluation criteria from the agent-based view
“Characterization and Evaluation of Multi-agent System”, P. Davidsson, S. Johanson, M. Svahnberg, in Software Engineering for Multi-Agent Systems IV, LNCS 3914, 2006. 9 quality attributes:
1. Reactivity: how fast are opinions re-evaluated when there are changes in expertise?
2. Load balancing: how evenly is the load balanced between the appraisers?
3. Fairness: are all the providers treated equally?
4. Utilization of resources: are the available abilities/information utilized as much as possible?
• 163. Evaluation criteria from the agent-based view
5. Responsiveness: how long does it take for an appraiser to get a response to an individual request?
6. Communication overhead: how much extra communication is needed by the appraisers?
7. Robustness: how vulnerable is the agent to the absence of responses?
8. Modifiability: how easy is it to change the behaviour of the agent under very different conditions?
9. Scalability: how good is the system at handling large numbers of providers and consumers?
• 164. Evaluation criteria from the agent-based view
“Evaluation of Multi-Agent Systems: The Case of Interaction”, H. Joumaa, Y. Demazeau, J.M. Vincent, 3rd Int. Conf. on Information & Communication Technologies: from Theory to Applications, IEEE Computer Society, Los Alamitos (2008)
• An evaluation at the interaction level, based on the weight of the information brought by a message.
• A function Φ is defined in order to calculate the weight of pertinent messages.
• 165. Evaluation criteria from the agent-based view
• The relation between a received message m and its effects on the agent is studied in order to calculate the value Φ(m). According to the model, two kinds of functions are considered (illustrated in the sketch below):
– a function that assigns a weight to the message according to its type
– a function that assigns a weight to the message according to the change provoked in the internal state and the actions triggered by its reception
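An illustrative sketch of the two kinds of weight functions; the paper's concrete definitions are not given above, so the type weights and the effect measure below are both assumptions:

    TYPE_WEIGHTS = {"inform": 1.0, "request": 0.5, "ack": 0.1}   # assumed values

    def phi_by_type(message: dict) -> float:
        # Weight depends only on the performative/type of the message.
        return TYPE_WEIGHTS.get(message["type"], 0.0)

    def phi_by_effect(state_before: dict, state_after: dict, n_actions: int) -> float:
        # Weight grows with the belief changes and actions the message triggers.
        changed = sum(1 for k in state_after if state_after[k] != state_before.get(k))
        return changed + n_actions

    msg = {"type": "inform", "content": "seller5 is unreliable"}
    print(phi_by_type(msg))                                                  # 1.0
    print(phi_by_effect({"trust_seller5": 0.8}, {"trust_seller5": 0.2}, 1))  # 2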
• 166. Consciousness Scale
• Too much quantification (AI is not just statistics…)
• Compare agents qualitatively → measure their level of consciousness
• A scale of 13 consciousness levels according to the cognitive skills of an agent: the “Cognitive Power” of an agent
• The higher the level obtained, the more the behavior of the agent resembles humans
• www.consscale.com
• 167. Bio-inspired order of Cognitive Skills
• From the point of view of emotions (Damasio, 1999): “Emotion” → “Feeling” → “Feeling of a Feeling” → “Fake Emotions”
• 168. Bio-inspired order of Cognitive Skills
• From the point of view of perception and action (Perner, 1999): “Perception” → “Adaptation” → “Attention” → “Set Shifting” → “Planning” → “Imagination”
• 169. Bio-inspired order of Cognitive Skills
• From the point of view of Theory of Mind (Lewis, 2003): “I Know” → “I Know I Know” → “I Know You Know” → “I Know You Know I Know”
• 170. Consciousness Levels (from highest to lowest): Super-Conscious, Human-like, Social, Empathic, Self-Conscious, Emotional, Executive, Attentional, Adaptive, Reactive
• 172. Thank you!