HARM: A Hybrid Rule-based Agent Reputation Model based on Temporal Defeasible Logic
1. HARM
A Hybrid Rule-based Agent Reputation Model based on Temporal Defeasible Logic
Kalliopi Kravari, Nick Bassiliades
Department of Informatics
Aristotle University of Thessaloniki
Thessaloniki, Greece
2. OVERVIEW
Agents in the SW (Semantic Web) interact under uncertain and risky situations.
Whenever they have to interact with partners of whom they know nothing,
they have to make decisions involving risk.
Thus:
Their success may depend on their ability to choose reliable partners.
Solution:
Reliable trust and/or reputation models.
(Figure: the evolution of the SW towards intelligent agents, with a SW Trust Layer.)
Nick Bassiliades RuleML 2012, Montpellier, Aug 27-29
3. OVERVIEW
Trust is the degree of confidence that can be placed in a certain agent.
Reputation is the opinion of the public towards an agent.
Reputation (trust) models provide the means to quantify reputation and trust. They:
- help agents to decide whom to trust
- encourage trustworthy behavior
- deter dishonest participation
Current computational reputation models are usually built either on
interaction trust (an agent's direct experience) or witness reputation
(reports provided by others).
4. APPROACHES' LIMITATIONS
- If the reputation estimation is based only on direct experience, it requires
  a long time for an agent to reach a satisfying estimation level.
  Why? Because when an agent enters an environment for the first time,
  it has no history of interactions with the other agents in the environment.
- If the reputation estimation is based only on witness reports, it cannot
  guarantee a reliable estimation.
  Why? Because self-interested agents may be unwilling or unable
  to sacrifice their resources in order to provide reports.
5. HYBRID MODELS
Hybrid models combine both interaction trust and witness reputation.
We propose HARM:
an incremental reputation model that combines
- the advantages of hybrid reputation models
- the benefits of temporal defeasible logic (a rule-based approach)
6. (Temporal) Defeasible Logic
Temporal defeasible logic (TDL) is an extension of defeasible logic (DL).
DL is a form of non-monotonic reasoning.
Why defeasible logic?
- Rule-based, deterministic (without disjunction)
- Enhanced representational capabilities
- Classical negation used in rule heads and bodies
- Negation-as-failure can be emulated
- Rules may support conflicting conclusions
- Skeptical: conflicting rules do not fire
- Priorities on rules resolve conflicts among rules
- Low computational complexity
7. Defeasible Logic
- Facts: e.g. student(Sofia)
- Strict Rules: e.g. student(X) → person(X)
- Defeasible Rules: e.g. r: person(X) ⇒ works(X)
  r': student(X) ⇒ ¬works(X)
- Priority Relation between rules, e.g. r' > r
- Proof theory example:
  A literal q is defeasibly provable if:
  - it is supported by a rule whose premises are all defeasibly provable, AND
  - ¬q is not definitely provable, AND
  - each attacking rule is non-applicable or defeated by a superior
    counter-attacking rule
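This skeptical proof condition can be sketched in a few lines of Python, using the slide's works/student example. The dictionary encoding of rules, the rule names, and the `~` negation marker are illustrative assumptions, not the authors' reasoner:

```python
# A minimal sketch of skeptical defeasible provability: a literal holds if
# some applicable rule supports it and every attacking rule is defeated by
# a superior supporting rule.

def defeasibly_provable(literal, facts, rules, superiority):
    supporting = [r for r in rules if r["head"] == literal and r["body"] <= facts]
    if not supporting:
        return False
    # the complementary literal, using "~" as the classical-negation marker
    neg = literal[1:] if literal.startswith("~") else "~" + literal
    attacking = [r for r in rules if r["head"] == neg and r["body"] <= facts]
    # skeptical: every attacker must be beaten by some superior supporter
    return all(
        any((s["name"], a["name"]) in superiority for s in supporting)
        for a in attacking
    )

facts = {"person(sofia)", "student(sofia)"}   # strict rule already applied
rules = [
    {"name": "r",  "body": {"person(sofia)"},  "head": "works(sofia)"},
    {"name": "r2", "body": {"student(sofia)"}, "head": "~works(sofia)"},
]
superiority = {("r2", "r")}   # r' > r

print(defeasibly_provable("~works(sofia)", facts, rules, superiority))  # True
print(defeasibly_provable("works(sofia)", facts, rules, superiority))   # False
```

Because the student rule is superior, the conflicting conclusion works(sofia) is blocked, exactly as the slide's priority relation prescribes.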
8. Temporal Defeasible Logic
Two types of temporal literals:
- expiring temporal literals l:t (a literal l is valid for t time instances)
- persistent temporal literals l@t (a literal l is active after t time
  instances have passed and is valid thereafter)
- temporal rules: a1:d1, ..., an:dn ⇒d b:db
  where d is the delay between the cause a1:d1, ..., an:dn and the effect b:db
Example:
(r1) ⇒ a@1
(r2) a@1 ⇒7 b:3
Literal a is created due to r1 and becomes active at time offset 1.
It causes the head of r2 to be fired at time 8 (after a delay of 7).
The result b lasts only until time 10. Thereafter, only the fact a remains.
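The example above can be replayed in a short sketch. The `holds` function and its dictionary encoding are assumptions used only to illustrate this reading of `@t` (persistent) and `:d` (expiring) literals:

```python
# A sketch of the two temporal-literal kinds from the slide: a@1 is
# persistent from time 1 onward; rule r2 has delay 7, so b is fired at
# time 8 and, with duration 3, lasts through time 10.

PERSISTENT_FROM = {"a": 1}          # r1: => a@1
RULE = {"body": "a", "delay": 7, "head": "b", "duration": 3}  # r2: a@1 =>7 b:3

def holds(literal, t):
    if literal in PERSISTENT_FROM:                 # persistent literal l@t0
        return t >= PERSISTENT_FROM[literal]
    if literal == RULE["head"]:                    # expiring conclusion l:d
        start = PERSISTENT_FROM[RULE["body"]] + RULE["delay"]
        return start <= t < start + RULE["duration"]
    return False

print([t for t in range(12) if holds("b", t)])   # [8, 9, 10]
print(holds("a", 11))                            # True: the fact a persists
```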
9. HARM - Evaluated Abilities
- Evaluates four agent abilities:
  validity, completeness, correctness and response time.
- An agent is valid if it is both sincere and credible.
  - Sincere: believes what it says
  - Credible: what it believes is true in the world
- An agent is complete if it is both cooperative and vigilant.
  - Cooperative: says what it believes
  - Vigilant: believes what is true in the world
- An agent is correct if its provided service is correct with
  respect to a specification.
- Response time is the time that an agent needs to complete
  the transaction.
10. HARM - Ratings
Agent A establishes an interaction with agent B:
(A) Truster: the evaluating agent
(B) Trustee: the evaluated agent
The truster's rating value (r) in HARM has 8 coefficients:
- 2 IDs: Truster, Trustee
- 4 abilities: Validity, Completeness, Correctness, Response time
- 2 weights: Confidence, Transaction value
  - Confidence: how confident the agent is about the rating
  - Transaction value: how important the transaction was for the agent
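The 8-coefficient rating can be sketched as a small record; the field names below are illustrative, not the authors' exact schema:

```python
# A sketch of the 8-coefficient rating described on this slide:
# 2 IDs + 4 ability scores + 2 weights.
from dataclasses import dataclass

@dataclass
class Rating:
    truster: str              # evaluating agent ID
    trustee: str              # evaluated agent ID
    validity: float
    completeness: float
    correctness: float
    response_time: float
    confidence: float         # how confident the truster is in this rating
    transaction_value: float  # how important the transaction was

r = Rating("A", "B", 0.9, 0.8, 0.95, 0.7, 0.85, 0.6)
print(r.trustee)  # B
```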
11. HARM - Experience Types
- Direct Experience (PRAX)
- Indirect Experience
  - reports provided by strangers (SRAX)
  - reports provided by known agents (e.g. friends) due to
    previous interactions (KRAX)
- Both
Final reputation value of an agent X, required by an agent A:
RAX = {PRAX, KRAX, SRAX}
12. HARM - Experience Types
- Sometimes one or more rating categories are missing.
  - e.g. a newcomer has no personal experience
- A user is much more likely to believe statements from a trusted
  acquaintance than from a stranger.
  - Thus, personal opinion (AX) is more valuable than the opinion of
    previously trusted partners (KX), which in turn is more valuable
    than strangers' opinion (SX).
- Superiority relationship among rating categories (lattice of combinations):
  {AX, KX, SX}
  {AX, KX}  {AX, SX}  {KX, SX}
  {AX}  {KX}  {SX}
13. HARM - Final reputation value
- RAX is a function that combines each available category:
  - personal opinion (PRAX)
  - previously trusted partners' opinion (KRAX)
  - strangers' opinion (SRAX)
  RAX = f(PRAX, KRAX, SRAX)
- HARM allows agents to define the weights of the ratings' coefficients
  (personal preferences):

  RAX = AVG( (Σ_{i=1..4} w_i · log pr_AX,i) / (Σ_{i=1..4} w_i),
             (Σ_{i=1..4} w_i · log kr_AX,i) / (Σ_{i=1..4} w_i),
             (Σ_{i=1..4} w_i · log sr_AX,i) / (Σ_{i=1..4} w_i) )

  where i ranges over the four ability coefficients:
  validity, completeness, correctness, response_time.
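Under the assumption that each category's score is a weighted average of log-scaled ability coefficients, and that RAX averages whichever categories are available, the computation can be sketched as follows. Weight and rating values here are made up for illustration:

```python
# A sketch of the per-category weighted log-average and the final
# reputation value over the available categories (None = category missing).
from math import log

COEFFS = ["validity", "completeness", "correctness", "response_time"]

def category_score(rating, weights):
    """Weighted average of log-scaled ability coefficients."""
    num = sum(weights[c] * log(rating[c]) for c in COEFFS)
    return num / sum(weights[c] for c in COEFFS)

def reputation(categories, weights):
    """AVG over whichever of the PR/KR/SR ratings are available."""
    scores = [category_score(r, weights) for r in categories if r is not None]
    return sum(scores) / len(scores)

# personal preferences: validity matters twice as much to this agent
w = {"validity": 2, "completeness": 1, "correctness": 1, "response_time": 1}
pr = {"validity": 8, "completeness": 6, "correctness": 7, "response_time": 5}
print(round(reputation([pr, None, None], w), 3))
```

With only direct experience available, RAX degenerates to the PR category's score, mirroring the "missing categories" case of slide 12.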
15. HARM
Rule-based Decision Making Mechanism
- Confidence and transaction value allow us to decide
  how much attention we should pay to each rating.
- It is important to take into account ratings that were
  made by confident trusters, since their ratings are more
  likely to be right.
- Confident trusters that were interacting in a transaction
  important to them are even more likely to report truthful ratings.
16. HARM
Which ratings "count"?

r1: count_rating(rating→?idx, truster→?a, trustee→?x) :=
    confidence_threshold(?conf),
    transaction_value_threshold(?tran),
    rating(id→?idx, confidence→?confx, transaction_value→?tranx),
    ?confx >= ?conf, ?tranx >= ?tran.

r2: count_rating(…) :=
    …
    ?confx >= ?conf.

r3: count_rating(…) :=
    …
    ?tranx >= ?tran.

r1 > r2 > r3

- If both the truster's confidence and the transaction importance are high (r1),
  the rating is counted during the estimation process.
- If the transaction value is lower than the threshold, it does not matter
  so much, as long as the truster's confidence is high (r2).
- If there are only ratings with a high transaction value (r3), they should
  still be taken into account.
- In any other case, the rating is omitted.
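The r1 > r2 > r3 priority can be read procedurally: try the rules in order and let the first applicable one decide. A hypothetical sketch (function name, thresholds and values are illustrative, not the authors' code):

```python
# A sketch of the priority r1 > r2 > r3 over the counting rules:
# return the name of the highest-priority applicable rule, or None
# when the rating should be omitted.

def which_rule_counts(confx, tranx, conf, tran):
    if confx >= conf and tranx >= tran:
        return "r1"            # confident truster AND important transaction
    if confx >= conf:
        return "r2"            # high confidence alone suffices
    if tranx >= tran:
        return "r3"            # only the high transaction value is available
    return None                # in any other case the rating is omitted

print(which_rule_counts(0.9, 0.8, 0.7, 0.5))  # r1
print(which_rule_counts(0.9, 0.2, 0.7, 0.5))  # r2
print(which_rule_counts(0.3, 0.2, 0.7, 0.5))  # None
```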
17. HARM
Conflicting Literals
- All the previous rules conclude positive literals.
- These literals conflict with each other for the same pair of
  agents (truster and trustee).
  - In the presence of e.g. personal experience, we want to omit
    strangers' ratings.
  - That is why there is also a superiority relationship between the rules.
- The conflict set is formally determined as follows:
  C[count_rating(truster→?a, trustee→?x)] =
    { ¬count_rating(truster→?a, trustee→?x) } ∪
    { count_rating(truster→?a1, trustee→?x1) | ?a ≠ ?a1 ∧ ?x ≠ ?x1 }
18. HARM
Determining Experience Types

r4: known(agent1→?a, agent2→?y) :-
    count_rating(rating→?id, truster→?a, trustee→?y).

r5: count_prAX(agent→?a, truster→?a, trustee→?x, rating→?id) :-
    count_rating(rating→?id, truster→?a, trustee→?x).

r6: count_krAX(agent→?a, truster→?k, trustee→?x, rating→?id) :-
    known(agent1→?a, agent2→?k),
    count_rating(rating→?id, truster→?k, trustee→?x).

r7: count_srAX(agent→?a, truster→?s, trustee→?x, rating→?id) :-
    count_rating(rating→?id, truster→?s, trustee→?x),
    not(known(agent1→?a, agent2→?s)).

- r4 determines which agents are considered as known.
- r5-r7 categorize the counted ratings into direct (pr), known (kr)
  and stranger (sr) experience.
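Rules r4-r7 can be sketched procedurally; the tuple encoding of counted ratings is an assumption used only for illustration:

```python
# A sketch of the categorization rules: an agent is "known" to ?a if ?a has
# a counted rating with that agent as trustee (r4); counted ratings are then
# split into direct (r5), known-agent (r6) and stranger (r7) reports.

def categorize(agent, ratings):
    """ratings: list of (rating_id, truster, trustee) tuples that 'count'."""
    known = {trustee for (_, truster, trustee) in ratings
             if truster == agent}                       # r4
    pr, kr, sr = [], [], []
    for rid, truster, trustee in ratings:
        if truster == agent:                            # r5: direct experience
            pr.append(rid)
        elif truster in known:                          # r6: known agent's report
            kr.append(rid)
        else:                                           # r7: stranger's report
            sr.append(rid)
    return pr, kr, sr

ratings = [(1, "A", "K"), (2, "A", "X"), (3, "K", "X"), (4, "S", "X")]
print(categorize("A", ratings))  # ([1, 2], [3], [4])
```

Agent K is known to A (rating 1), so K's report about X lands in the kr bucket, while S's report stays in sr.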
19. HARM
Rule-based Decision Making Mechanism
- The final step is to decide whose experience will "count":
  direct, indirect (witness), or both.
- The decision for RAX is based on a relationship theory.
e.g. Theory #1: All categories count equally.

r8: participate(agent→?a, trustee→?x, rating→?id_ratingAX) :=
    count_pr(agent→?a, trustee→?x, rating→?id_ratingAX).
r9: participate(agent→?a, trustee→?x, rating→?id_ratingKX) :=
    count_kr(agent→?a, trustee→?x, rating→?id_ratingKX).
r10: participate(agent→?a, trustee→?x, rating→?id_ratingSX) :=
    count_sr(agent→?a, trustee→?x, rating→?id_ratingSX).
20. HARM
Theory #1: All categories count equally
{AX, KX, SX}
{AX, KX}  {AX, SX}  {KX, SX}
{AX}  {KX}  {SX}
21. HARM
Rule-based Decision Making Mechanism
e.g. Theory #2:
An agent relies on its own experience if it believes it is sufficient.
If not, it acquires the opinions of others.

r8: participate(agent→?a, trustee→?x, rating→?id_ratingAX) :=
    count_pr(agent→?a, trustee→?x, rating→?id_ratingAX).
r9: participate(agent→?a, trustee→?x, rating→?id_ratingKX) :=
    count_kr(agent→?a, trustee→?x, rating→?id_ratingKX).
r10: participate(agent→?a, trustee→?x, rating→?id_ratingSX) :=
    count_sr(agent→?a, trustee→?x, rating→?id_ratingSX).

r8 > r9 > r10
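Theory #2's priorities behave like an ordered fallback, sketched below (the function name and the example values are illustrative assumptions):

```python
# A sketch of Theory #2 (r8 > r9 > r10): use direct experience (AX) if any
# counted personal ratings exist; otherwise fall back to known agents (KX),
# then to strangers (SX).

def participating_ratings(pr, kr, sr):
    for category in (pr, kr, sr):   # priority order AX > KX > SX
        if category:
            return category
    return []

# no personal experience, so the known agents' ratings participate
print(participating_ratings([], [3.2, 4.1], [2.0]))  # [3.2, 4.1]
```

The same skeleton covers Theory #3 by changing the fallback: combine pr and kr when either exists, and use sr only when both are empty.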
22. HARM
Theory #2: Personal experience is preferred to
friends' opinion, which is preferred to strangers' opinion
{AX, KX, SX}
{AX, KX}  {AX, SX}  {KX, SX}
AX > KX > SX
23. HARM
Rule-based Decision Making Mechanism
e.g. Theory #3:
If direct experience (PRAX) is available, it is preferably
combined with ratings from known agents (KRAX).
If not, HARM acts as a pure witness system.

r8: participate(agent→?a, trustee→?x, rating→?id_ratingAX) :=
    count_pr(agent→?a, trustee→?x, rating→?id_ratingAX).
r9: participate(agent→?a, trustee→?x, rating→?id_ratingKX) :=
    count_kr(agent→?a, trustee→?x, rating→?id_ratingKX).
r10: participate(agent→?a, trustee→?x, rating→?id_ratingSX) :=
    count_sr(agent→?a, trustee→?x, rating→?id_ratingSX).

r8 > r10, r9 > r10
24. HARM
Theory #3: Personal experience and friends'
opinion are preferred to strangers' opinion
{AX, KX, SX}
{AX, KX}  {AX, SX}  {KX, SX}
AX, KX > SX
25. HARM
Temporal Defeasible Logic Extension
- Agents may change their objectives at any time
  - The evolution of trust over time should be taken into account
  - Only the latest ratings participate in the reputation estimation
- In the temporal extension of HARM:
  - each rating is a persistent temporal literal of TDL
  - each rule conclusion is an expiring temporal literal of TDL
- The truster's rating (r) is active after time_offset time
  instances have passed and is valid thereafter:

rating(id→value1, truster→value2, trustee→value3, validity→value4,
       completeness→value5, correctness→value6,
       response_time→value7, confidence→value8,
       transaction_value→value9)@time_offset.
26. HARM
Temporal Defeasible Logic Extension
- Rules are modified accordingly:
  - each rating is active after t time instances have passed ("@t")
  - each conclusion has a duration (":duration")
  - each rule has a delay, which models the delay between the cause and
    the effect.
e.g.

r1: count_rating(rating→?idx, truster→?a, trustee→?x):duration :=delay
    confidence_threshold(?conf),
    transaction_value_threshold(?tran),
    rating(id→?idx, confidence→?confx, transaction_value→?tranx)@t,
    ?confx >= ?conf, ?tranx >= ?tran.
27. HARM Evaluation
- We implemented the model in EMERALD
  - a framework for interoperating knowledge-based
    intelligent agents in the SW
  - built on the JADE multi-agent platform
- EMERALD uses Reasoners (agents offering reasoning services)
  - supports the DR-Device defeasible logic system
  - we used temporal predicates to simulate the temporal semantics,
    since no temporal defeasible logic reasoner is available
28. HARM - Evaluation
- All agents provide the same service
  - Performance is service-independent
- The consumer agent selects the provider with the highest
  reputation value

Interaction protocol (Consumer agent, HARMAgent, Provider agent_x, Provider agent_y):
1. The consumer agent requests the reputations of the provider agents from HARMAgent.
2. HARMAgent informs the consumer about the provider with the highest reputation.
3. The consumer agent sends a service request to the selected provider.
4. The provider provides the service.
5. The consumer agent reports a rating back to HARMAgent.
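The five steps can be sketched as one round of a toy protocol. The reputation values, names and the utility outcome below are placeholders, not the paper's testbed results:

```python
# A sketch of one round of the evaluation protocol: the consumer asks
# HARMAgent for reputations, picks the best provider, receives the
# service, and reports a rating back.

reputations = {"provider_x": 5.7, "provider_y": 2.4}   # held by HARMAgent
ratings_log = []                                       # HARMAgent's rating store

def best_provider(reps):
    """Steps 1-2: request reputations and learn the best provider."""
    return max(reps, key=reps.get)

def interact(provider):
    """Steps 3-4: request and receive the service; return the experience."""
    return {"provider": provider, "utility_gain": 7.0}  # assumed outcome

def report(rating):
    """Step 5: report the rating back to HARMAgent."""
    ratings_log.append(rating)

chosen = best_provider(reputations)
report(interact(chosen))
print(chosen, len(ratings_log))  # provider_x 1
```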
29. HARM - Evaluation
- Performance of providers (e.g. quality of service) is the utility
  that a consumer gains from each interaction
  - Utility Gain (UG), UG ∈ [-10, 10]
- Four models are used:
  - HARM (rule-based / temporal)
  - T-REX (temporal degradation)
  - SERM (uses all history)
  - NONE (no trust mechanism)
- From a previous study: CR, SPORAS (well-known distributed models
  from the literature)

Simulation setup:
  Number of simulations: 500
  Number of providers: 100
    Good providers: 10
    Ordinary providers: 40
    Intermittent providers: 5
    Bad providers: 45

Average UG per interaction:
  HARM    5.73
  T-REX   5.57
  SERM    2.41
  NONE    0.16
  CR      5.48
  SPORAS  4.65
30. Conclusions
- We proposed HARM, which combines:
  - the hybrid approach (interaction trust and witness reputation)
  - the benefits of temporal defeasible logic (a rule-based approach)
- It overcomes the difficulty of locating witness reports
  (centralized administration authority)
- It is the first reputation model that explicitly uses
  knowledge, in the form of defeasible logic, to predict
  an agent's future behavior
  - Easy to simulate human decision making
31. Future Work
- Fully implement HARM with temporal defeasible logic
- Compare HARM's performance with other centralized
  and decentralized models from the literature
- Combine HARM and T-REX
- Develop a distributed version of HARM
- Verify its performance in real-world e-commerce
  applications
- Combine it with Semantic Web metadata for trust
32. Thank you!
Any Questions?
Editor's Notes
..deter dishonest participation by providing the means through which reputation and ultimately trust can be quantified.
Hence, models based only on one or the other approach typically cannot guarantee stable and reliable estimations.
The testbed environment for evaluating HARM is a multi-agent system consisting of agents providing services and agents using these services in an on-line community. We assume that the performance of a provider (and effectively its trustworthiness) is independent from the service that is provided. In order to reduce the complexity of the testbed's environment, it is assumed that there is only one type of service in the testbed. The value of UG varies from -10 to 10 and depends on the level of performance of the provider in the interaction. Regarding the other models we use, we say nothing in the paper. You could mention, if you wish, that only HARM has temporal semantics, whereas T-REX takes time into account by treating the most recent ratings as more important, essentially using a clock (Timer), while SERM does not take time into account at all.