The document discusses trust and reputation in social networks. It proposes a new algorithm called PolarityTrust that computes trustworthiness rankings while addressing issues with existing approaches. PolarityTrust uses a graph-based propagation method that considers both positive and negative links differently. It was evaluated on generated networks with malicious users and attacks, and was shown to outperform baseline methods by being more resistant to such attacks.
PolarityTrust: Measuring Trust and Reputation in Social Networks
1. Escuela Técnica Superior de Ingeniería Informática
F. Javier Ortega
javierortega@us.es
José A. Troyano
troyano@us.es
Fermín L. Cruz
fcruz@us.es
Fernando Enríquez de Salamanca
fenros@us.es
Departamento de
Lenguajes y Sistemas Informáticos
4. Motivation
How can I make the most of these transactions?
Selling more products but cheaper?
Selling rare (and maybe expensive) articles?
Free shipping?
How can I choose the best seller?
The one with the highest number of sales?
The one with most positive opinions?
The cheapest one?
5. Motivation
♦ Δ Reputation => Δ Sales
♦ Gaining high reputation:
● Obtain (false) positive opinions from other accounts (not necessarily other users).
● Sell some bargains to obtain high reputation from the buyers.
● Give negative opinions to sellers that may be competitors.
6. Motivation
♦ Goals:
● Compute a ranking of users according to their trustworthiness
● Process a network with positive and negative links (opinions) between the nodes (users)
● Avoid the effects of the actions performed by malicious users in order to increase their reputation
8. Introduction
♦ Trust and Reputation Systems (TRS) manage
trustworthiness of users in social networks.
♦ Common mechanisms:
● Moderators (on-line forums)
● Votes from users to users (eBay)
● Karma (Slashdot, Meneame)
● Graph-based ranking algorithms (EigenTrust)
9. Introduction
♦ User feedback is needed!
♦ Problems:
● Positive bias
● Incentives for user feedback
● Cold-start problem
● Exit problem
● Duplicity of identities
10. Introduction
♦ Malicious users' strategies to gain high reputation:
♦ Orchestrated attacks: obtaining positive opinions from other accounts (not necessarily other users).
♦ Camouflage behind good behavior: selling some bargains to obtain high reputation from the buyers.
♦ Malicious spies: using an honest account to provide positive opinions to a malicious user.
♦ Camouflage behind judgments: giving negative opinions to sellers that may be competitors.
11. Introduction
♦ Malicious users' strategies to gain high reputation:
♦ Orchestrated attacks: obtaining positive opinions from other accounts (not necessarily other users).
[Diagram: example network (nodes 0-9) illustrating an orchestrated attack]
12. Introduction
♦ Malicious users' strategies to gain high reputation:
♦ Camouflage behind good behavior: selling some bargains to obtain high reputation from the buyers.
[Diagram: example network (nodes 0-9) illustrating camouflage behind good behavior]
13. Introduction
♦ Malicious users' strategies to gain high reputation:
♦ Malicious spies: using an honest account to provide positive opinions to a malicious user.
[Diagram: example network (nodes 0-9) illustrating malicious spies]
14. Introduction
♦ Malicious users' strategies to gain high reputation:
♦ Camouflage behind judgments: giving negative opinions to sellers that may be competitors.
[Diagram: example network (nodes 0-9) illustrating camouflage behind judgments]
15. PolarityTrust
♦ Graph-based ranking algorithm
♦ Two scores for each node: PT⁺ and PT⁻
♦ Propagation of trust and distrust over the network
♦ PT⁺ and PT⁻ influence each other depending on the
polarity of the links between a node and its
neighbours.
16. PolarityTrust
♦ Propagation mechanism:
● Given a set of trustworthy users
● Their PT⁺ and PT⁻ scores are propagated to their neighbours, and so on.
[Diagram: example network (nodes 0-9) showing PT⁺ and PT⁻ scores propagating from the trustworthy users to their neighbours]
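The propagation mechanism can be sketched in a few lines. This is a simplified illustration, not the paper's exact formulation: the damping factor, the uniform edge weighting and the function names are our own assumptions, and only positive links are considered at this point.

```python
def propagate_trust(edges, seeds, n_iter=20, damping=0.85):
    """Iteratively spread trust from a set of trustworthy seed users.

    edges: list of (src, dst) positive-opinion links; seeds: trusted users.
    """
    nodes = {u for edge in edges for u in edge}
    out = {u: [] for u in nodes}
    for src, dst in edges:
        out[src].append(dst)
    # Seed users start fully trusted; everyone else starts at zero.
    score = {u: 1.0 if u in seeds else 0.0 for u in nodes}
    for _ in range(n_iter):
        # Seeds keep a base amount of trust; the rest is received from others.
        new = {u: (1.0 - damping) if u in seeds else 0.0 for u in nodes}
        for src in nodes:
            if out[src] and score[src] > 0:
                # Each node shares its current score among its neighbours.
                share = damping * score[src] / len(out[src])
                for dst in out[src]:
                    new[dst] += share
        score = new
    return score
```

On a tiny graph where node 0 is the trusted seed, nodes reachable from it accumulate trust, and nodes endorsed by several trusted users accumulate more.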
17. PolarityTrust
♦ Propagation rules:
● Positive opinions => direct relation between scores
● Negative opinions => cross relation between scores
♦ Non-negative Propagation extension:
● Avoid the propagation of negative opinions from negative users
[Diagram: nodes a, b and c illustrating the propagation rules, with and without the Non-negative Propagation extension]
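A hedged sketch of these rules, assuming a simple synchronous update: positive links propagate scores directly (PT⁺ to PT⁺, PT⁻ to PT⁻), negative links cross them (PT⁺ to PT⁻, PT⁻ to PT⁺), and the Non-negative extension skips negative opinions issued by nodes that are themselves judged negative. The normalization and convergence details of the actual algorithm may differ; the function and parameter names are ours.

```python
def polarity_trust(edges, seeds, n_iter=30, non_negative=False):
    """edges: list of (src, dst, sign) opinions with sign = +1 or -1."""
    nodes = {u for s, d, _ in edges for u in (s, d)}
    out = {u: [] for u in nodes}
    for s, d, sign in edges:
        out[s].append((d, sign))
    pos = {u: 1.0 if u in seeds else 0.0 for u in nodes}  # PT+ scores
    neg = {u: 0.0 for u in nodes}                         # PT- scores
    for _ in range(n_iter):
        new_pos = {u: 1.0 if u in seeds else 0.0 for u in nodes}
        new_neg = {u: 0.0 for u in nodes}
        for s in nodes:
            if not out[s]:
                continue
            w = 1.0 / len(out[s])  # spread the score over all opinions
            for d, sign in out[s]:
                if sign > 0:
                    # Direct relation between scores.
                    new_pos[d] += w * pos[s]
                    new_neg[d] += w * neg[s]
                else:
                    # Non-negative extension: a node judged negative
                    # cannot spread distrust through its negative opinions.
                    if non_negative and neg[s] > pos[s]:
                        continue
                    # Cross relation between scores.
                    new_neg[d] += w * pos[s]
                    new_pos[d] += w * neg[s]
        pos, neg = new_pos, new_neg
    # Final ranking score: trust minus distrust.
    return {u: pos[u] - neg[u] for u in nodes}
```

With this sketch, a user who receives a negative opinion from a trusted user ends up with a negative score, while the Non-negative variant ignores negative opinions coming from already-distrusted accounts.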
18. Evaluation
♦ Baselines:
● EigenTrust
● Fans Minus Freaks
♦ Dataset:
● Randomly generated graphs (Barabási-Albert model)
● Malicious users added in order to perform common attacks
♦ Evaluation metrics:
● Number of inversions: bad users in good positions
● Incremental number of bad nodes
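The inversion metric can be sketched directly: given a ranking (most trusted first) and the set of known malicious users, count the pairs in which a bad user is ranked above a good one. Function and variable names here are ours, not the paper's.

```python
def count_inversions(ranking, bad_users):
    """Count (bad, good) pairs where the bad user is ranked higher."""
    inversions = 0
    bad_seen = 0
    for user in ranking:  # ranking is ordered best-first
        if user in bad_users:
            bad_seen += 1
        else:
            # Every bad user already seen outranks this good user.
            inversions += bad_seen
    return inversions
```

A perfect ranking (all good users above all bad ones) yields zero inversions, so lower values mean better resistance to the attacks.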
19. Evaluation
♦ Performance against common attacks:
Single attacks:
Model     ET   FmF   PT   PT+NN
A         50     0    0       0
B        197    36    0       0
C         63   207   94      94
D         86     9    9       9
E         74     4    0       0

Combined attacks:
Model      ET   FmF   PT   PT+NN
A          50     0    0       0
B         197    36    0       0
B+C       155   873   27      27
B+C+D     169   871   26      26
B+C+D+E   183   849   38      36

A: No attacks
B: Orchestrated attacks
C: Camouflage behind good behaviour
D: Malicious spies
E: Camouflage behind judgments
(ET = EigenTrust, FmF = Fans Minus Freaks, PT = PolarityTrust, PT+NN = PolarityTrust with Non-negative Propagation)