Global Computing: an Analysis of Trust and Wireless
Communications (Ph.D. Thesis)
Nicola Mezzetti
Technical Report UBLCS-2006-09
April 2006
Department of Computer Science
University of Bologna
Mura Anteo Zamboni 7
40127 Bologna (Italy)
The University of Bologna Department of Computer Science Research Technical Reports are available in
PDF and gzipped PostScript formats via anonymous FTP from the area ftp.cs.unibo.it:/pub/TR/UBLCS
or via WWW at URL http://www.cs.unibo.it/. Plain-text abstracts organized by year are available in
the directory ABSTRACTS.
Recent Titles from the UBLCS Technical Report Series
2005-9 A Reasoning Infrastructure to Support Cooperation of Intelligent Agents on the Semantic Grid, Dragoni,
N., Gaspari, M., Guidi, D., April 2005.
2005-10 Fault Tolerant Knowledge Level Communication in Open Asynchronous Multi-Agent Systems, Dragoni,
N., Gaspari, M., April 2005.
2005-11 The AEDSS Application Ontology: Enhanced Automatic Assessment of EDSS in Multiple Sclerosis, Gas-
pari, M., Saletti, N., Scandellari, C., Stecchi, S., April 2005.
2005-12 How to cheat BitTorrent and why nobody does, Hales, D., Patarin, S., May 2005.
2005-13 Choose Your Tribe! - Evolution at the Next Level in a Peer-to-Peer network, Hales, D., May 2005.
2005-14 Knowledge-Based Jobs and the Boundaries of Firms: Agent-based simulation of Firms Learning and Work-
force Skill Set Dynamics, Mollona, E., Hales, D., June 2005.
2005-15 Tag-Based Cooperation in Peer-to-Peer Networks with Newscast, Marcozzi, A., Hales, D., Jesi, G., Arte-
coni, S., Babaoglu, O., June 2005.
2005-16 Atomic Commit and Negotiation in Service Oriented Computing, Bocchi, L., Ciancarini, P., Lucchi, R.,
June 2005.
2005-17 Efficient and Robust Fully Distributed Power Method with an Application to Link Analysis, Canright, G.,
Engo-Monsen, K., Jelasity, M., September 2005.
2005-18 On Computing the Topological Entropy of One-sided Cellular Automata, Di Lena, P., September 2005.
2005-19 A model for imperfect XML data based on Dempster-Shafer’s theory of evidence, Magnani, M., Montesi,
D., September 2005.
2005-20 Friends for Free: Self-Organizing Artificial Social Networks for Trust and Cooperation, Hales, D., Arte-
coni, S., November 2005.
2005-21 Greedy Cheating Liars and the Fools Who Believe Them, Arteconi, S., Hales, D., December 2005.
2006-01 Lambda-Types on the Lambda-Calculus with Abbreviations: a Certified Specification, Guidi, F., January
2006.
2006-02 On the Quality-Based Evaluation and Selection of Grid Services (Ph.D. Thesis), Andreozzi, S., March
2006.
2006-03 Transactional Aspects in Coordination and Composition of Web Services (Ph.D. Thesis), Bocchi, L., March
2006.
2006-04 Semantic Frameworks for Implicit Computational Complexity (Ph.D. Thesis), Dal Lago, U., March 2006.
2006-05 Fault Tolerant Knowledge Level Inter-Agent Communication in Open Multi-Agent Systems (Ph.D. Thesis),
Dragoni, N., March 2006.
2006-06 Middleware Services for Dynamic Clustering of Application Servers (Ph.D. Thesis), Lodi, G., March 2006.
2006-07 Meta Model Management for (Semi) Structured and Uncertain Models (Ph.D. Thesis), Magnani, M.,
March 2006.
2006-08 Towards Abstractions for Web Services Composition (Ph.D. Thesis), Mazzara, M., March 2006.
Global Computing: an Analysis of Trust and Wireless
Communications (Ph.D. Thesis)
Nicola Mezzetti1
Technical Report UBLCS-2006-09
April 2006
1. Dept. of Computer Science, University of Bologna, 7, Mura Anteo Zamboni, 40127 Bologna, ITALY, e-mail:
nicola.mezzetti@cs.unibo.it, homepage: http://www.cs.unibo.it/~mezzettn
Abstract
In this thesis we analyze problems of trust modelling and implementation in the context of Global Computing, and problems of how implementations of trust protocols for wireless settings can be formally verified.
A framework for supporting dynamic trust relationships is important for the definition of trustworthy protocols for wireless computing; in fact, in such a context a dynamic set of possibly mutually distrustful entities (i.e., individuals, services) interact with each other, consuming or producing services, in order to pursue their own goals. In this context, traditional static trust techniques fail to support the trustworthy execution of interaction protocols among all the possible interacting entities. It is thus necessary to develop a decentralized dynamic-trust system that can be independently implemented by any possible entity in the global computing system.
The study and modelling of trust has its roots in sociology, which provides conceptual and qualitative specifications of the manner in which trust forms and evolves. However, current research on trust does not converge on a unique formalization of trust, nor on the requirements it has to meet. A number of dynamic trust models currently exist, and each of these models is provided with simulation-based validation techniques whose design is mainly motivated by common sense. Only once this gap is filled will rigorous techniques and tools for the validation of trust models become available.
The goal of this thesis is twofold: firstly, we aim at providing a computational model of trust that faithfully represents the socio-cognitive models available in the literature and at showing its flexibility of application in the global computing context; secondly, we aim at providing formal tools for the specification and verification of trust protocols for wireless systems.
The first point is addressed through the development of the Socially-Inspired Reputation computational model (SIR), which has been designed in order to monitor and evaluate entity behaviour in an adaptive, robust, and scalable manner. SIR is the first computational model of reputation that implements the socio-cognitive process that regulates the establishment of trust relationships in human societies. SIR has been employed in the design, verification, and implementation of a Trust-Aware extension for naming services (TAw), which proves effective in supporting the implementation of trustworthy interaction protocols in open systems. SIR has also been employed in the design and implementation of rawDonkey, a client for the eDonkey peer-to-peer file-sharing network that integrates an autonomous and decentralized monitoring system for reputation evaluation and formation, employed to penalize non-collaborative peers and to optimize the global quality of the file-sharing service.
The second part of this thesis examines wireless systems, whose communication model best represents global computing. We follow the process calculi approach and develop the Calculus of Wireless Systems (CWS). The principal contribution of this work is the development of a reduction semantics and a labelled transition semantics, and the proof of their equivalence. The novelty of this work lies in the fact that wireless systems implement a communication model which is substantially different from the ones implemented by the process calculi currently proposed in the literature. Hence, for the modelling of wireless communications, it has been necessary to develop a novel technique based on the observation of changes in the communication states of individual devices (i.e., transmission, reception) rather than on the observation of single transmission instances. This approach introduces original aspects into both of our semantics and, consequently, into the proof of their equivalence.
At present, the work on trust and the work on wireless systems have not yet been integrated, and the two parts of this thesis may appear to be separate works. However, they are bound together by the goals this thesis leaves open, which we list below. First, the integration of CWS with a set of tools supporting the specification and verification of trust-related properties, and the consequent formal specification of a reputation system based on SIR; this specification will then be employed to verify the properties of the reputation model. Second, we aim to provide a comprehensive framework for the specification and analysis of decentralized reputation systems supporting the implementation of trustworthy interaction protocols for wireless systems. Although this was the initial goal of this thesis, we discovered that such an ambitious goal requires an effort far beyond the scope of a Ph.D. thesis.
Acknowledgments
This progress report is in partial fulfillment of the Ph.D. program; I would like to thank Prof. Davide Sangiorgi, my Ph.D. advisor, and Prof. Fabio Panzieri, my supervisor within the FP5 European Project TAPAS (IST-2001-34069), for the support and advice with which they have helped me towards the achievement of my Ph.D. degree. Moreover, I would like to thank the European FP5 RTD project TAPAS (IST-2001-34069) and the project "WebMiNDS" under the FIRB program of the Italian Ministry of Education, University and Research for the partial funding of my Ph.D. program.
Contents
1 Introduction 8
2 Motivation 11
1 Discussion 11
2 The Socio-Cognitive Model of Trust 13
2.1 Considerations about the Socio-Cognitive Trust Model 15
3 A Conceptual Model for Computing Reputation 16
4 Related Contributions 17
4.1 The Web of Trust 19
3 SIR: Socially Inspired Reputation Model 21
1 Foundations of Trust 21
1.1 Trust Degrees 23
2 The Reputation Model 23
2.1 Modeling Trust Stability 25
2.2 Modeling Trust Balance 27
3 Augmenting Contexts with Attributes 27
4 Relations with the Socio-Cognitive model 28
4 Case Study: Trust-Aware Naming Service (TAw) 30
1 Motivation 30
2 The TAw Architecture 31
2.1 TAw Peer 31
2.2 Virtual Society Service 36
2.3 TAw Trust Propagation Protocol 39
2.4 Experimental Results 41
2.4.1 Adaptability Test 41
2.4.2 Robustness Test 43
5 Case Study: rawDonkey 46
1 Introduction 46
2 Related Work 47
3 rawDonkey 48
3.1 The eDonkey Protocol 48
3.2 The Architecture of rawDonkey 48
3.3 Trust Management System 49
3.3.1 Trust Evaluation Criteria 50
3.3.2 Gossiping 51
3.3.3 Trust-Aware Policies 52
6 Considerations 53
1 Considerations 53
2 Future Work 54
7 Motivation of the work 55
1 Discussion 55
2 Related Work 58
8 The Basic Language 59
1 Mobile Computing and Wireless Communications 59
1.1 Examples 60
2 The Language 62
3 Semantics 64
4 Reduction Semantics 65
4.1 Examples 71
5 Labelled Transition Semantics 74
5.1 Examples 77
6 The Harmony Theorem 79
6.1 The Harmony Theorem 86
9 The Extended Language 100
1 Language Extensions 100
2 Semantic Extensions 102
2.1 Extensions to the Reduction Semantics 102
2.2 Extensions to the Labelled Transition Semantics 105
3 Extensions to the Harmony Theorem 106
4 The Extended Harmony Theorem 119
5 Example: The Alternating Bit Protocol 140
5.1 Formal description 141
5.2 Execution Cases 142
5.2.1 Execution in Absence of Collisions 142
5.2.2 Execution in Presence of Collisions 142
10 Considerations 145
1 Considerations 145
2 Future Work 145
11 Conclusions 147
1 Future Works 148
List of Figures
1 Framework supporting interactions based on dynamical trust. 12
2 A Conceptual Model for Computing Reputation 16
1 UML class diagram for the TAwPeer. 32
2 Directory Information Tree implemented by the VSS service. 36
3 UML class diagram for the Virtual Society Service. 38
4 Reconfiguration test employing, respectively, the cognitive models (a) and arbitrary
trust stability and trust balance (b). 40
5 Robustness test employing the cognitive models. 42
6 Robustness test employing arbitrary trust stability and balance. 44
1 Architecture of rawDonkey. 49
2 Interaction between TMS and Core in free rider identification. 51
1 Example of a possible communication between node T and node R. 60
2 Example of coordination between transmitters. 60
3 Example of coordination between transmitters with different transmission
radius. 61
4 Example of collision on a receiver. 61
5 Possible ways in which transmissions in intersecting cells may collide. 62
6 A more complex example. 62
List of Tables
1 Structure of Φ augmented with jurisdiction. 22
2 Structure of Φ augmented with attributes. 28
1 Structure of a trust propagation protocol message. 39
2 Pseudocode describing the trust propagation protocol. 40
1 Language for the description of wireless networks 63
2 Language for the description of tagged networks 67
3 Rules for >^c_l. 68
4 Rules for >. 68
5 Structural Congruence. 68
6 Structural Congruence for tagged networks. 69
7 Rules for −→^c_{l,r}. 69
8 Rules for ^c_l. 70
9 Reduction rules for normalization with possible interferences. 70
10 Rules for . 70
11 Labelled transition rules for networks. 75
12 Labelled transition rules for processes. 76
1 Language for the description of wireless networks 101
2 Extension to the language for tagged networks 102
3 Additional rules for >^c_l. 103
4 Additional rules for >. 103
5 Rules for ≡. 103
6 Additional rules for event firing. 104
7 Rules for ≡_E. 104
8 Additional rules for ^c_l. 104
9 Additional rules for . 104
10 Additional rules for . 104
11 Reduction rules for internal actions. 105
12 Extensions to the labelled transition rules for networks. 105
13 Extensions to the labelled transition rules for processes. 106
Chapter 1
Introduction
In recent years, global computing has become an active field of research in distributed and
mobile computation supported by infrastructures available globally and able to provide uniform
services meeting variable requirements in system aspects (e.g., communication, co-operation and
mobility, resource usage, security policies and mechanisms), with particular regard to exploiting
their universal scale and the programmability of their services. Specifically, global computing
aims at enabling a ubiquitous support for heterogeneous open systems in which entities (i.e.,
individuals and resources) dynamically join and leave the system, and employ the facilities of the
system for consuming, for providing, or for collaborating in the provisioning of a set of services.
When speaking of such ubiquitous contexts, we cannot make assumptions about the resources with which any single computing platform may support the global computation; nor can we make assumptions about the specific networking technology interconnecting each single device. Because of these characteristics, mobile wireless devices are the technology most often taken as a reference in global computing research: they enable the setting up of open and ubiquitous computing infrastructures, and their hardware may vary arbitrarily from sensors the size of one cubic millimetre to laptop computers.
To address the goals of global computing, further research has to be accomplished in order
to fill the gap between the currently available theory on distributed systems and the knowledge
that is needed to enable cooperative activities and resource integration between mutually un-
trustworthy devices. During our preliminary research in global computing, we came across a
subject that we believe to be important: the establishment of trust relationships between com-
pletely unknown entities via wireless communication protocols.
Trust is mandatory to support the dependable implementation of distributed protocols and services; moreover, it becomes even more fundamental when services are implemented through the collaboration of mutually untrustworthy entities (i.e., entities that are not guaranteed to behave dependably). Standard applications for wired distributed systems build their dependability on the existence of static trust relationships (i.e., there are entities, called trusted third parties, which are trusted to always exhibit dependable behaviour). By contrast, in the context of global computing both the available resources and the entity behaviours are dynamic; for this reason, security in global computing scenarios cannot rely on statically defined trust relationships.
Due to the dynamic nature of global computing scenarios, supporting trust-aware interactions requires a completely decentralized reputation system that provides each entity with reputations without requiring access to remote information repositories. However, current trust models do not support the development of pervasive reputation systems.
In the first part of this thesis we thus present the Socially Inspired Reputation (SIR) model that we have been developing; SIR is the first model of reputation implementing the conceptual socio-cognitive model of trust [FC01, FC04]. SIR has been designed to enable the development of decentralized reputation management systems; thus, we have identified the following properties that we require SIR to meet.
Flexibility: the reputation model has to be general enough to be applied to any possible context – i.e., the logic for aggregating the single trust evaluations has to be independent of the application context;
Adaptability: reputations have to adapt according to the behaviour that the individuals exhibit within the system and to the passing of time;
Robustness: assuming the presence of a certain amount of noise, the reputation model has to identify it and prevent it from affecting trustworthy information;
Scalability: the reputation model must not require the maintenance of any global knowledge.
Furthermore, in the trust part of this thesis we present two practical applications of SIR which have been designed, evaluated and developed in order to show that SIR meets the above requirements. In particular, the first application is TAw, our Trust-Aware extension for naming services, and the second application is rawDonkey, a peer-to-peer file-sharing platform based on the eDonkey protocol that employs an autonomous reputation monitoring system in order to encourage peers to collaborate with each other.
Though these applications are important to show that SIR meets the requirements that guided
its development, it is also important for us to provide a more formal and rigorous verification of
its properties. Specifically, we are interested in showing that it can support trust-aware applica-
tions in global computing contexts. To this end, formal methods, and in particular process calculi,
can bring important contributions; given a target system, process calculi enable its rigorous specification and the application of a wide set of accepted techniques and tools for the analysis, the implementation, and the verification of that specification. Firstly, they enable compositionality and openness. Compositionality enables one to analyze the behaviour of a system
focusing on its components; this property makes it possible to both limit the analysis to a single
part of a system and to analyze the whole system by taking separately each of its components
and then composing the obtained results. Openness enables one to evaluate the behaviour of a system deployed in an open environment where other systems executing in parallel may possibly interact or interfere with that system. Moreover, a process calculus provided with a reduction semantics enables the development of a virtual machine capable of implementing that semantics and of interpreting a given specification model to perform tests or simulations, and a process calculus
provided with a labelled transition semantics enables the implementation of techniques by which
it is possible to verify whether certain properties (e.g., security, deadlock freeness) are satisfied
by a given specification model. For instance, when verifying properties such as security, where it is important to examine the possible interactions of the system with the environment, process calculi can be used to compare the model against infinitely many adversaries; simulation and other experimental techniques do not enable this kind of verification.
In the second part of this thesis we thus present our work on the development of a process calculus for describing wireless systems. We have been developing this calculus envisioning its application to the modelling and verification of our reputation model; however, we believe that the contributions currently available in formal methods are not adequate for our verification purposes.
Hence, we have developed a formal model of wireless communication: our calculus, called Calculus of Wireless Systems (CWS), is provided with a reduction semantics (RS) and a labelled transition semantics (LTS). Since these communication systems group together aspects of synchrony, asynchrony, and broadcast communication, in the design of our semantics we could not follow the approaches available in the literature; thus, for each of our semantics we devised a specific approach to model the behaviour of the target system. The main technical result of this second part is the equivalence between the two semantics; this result is important because, given that the RS is much more intuitive, it proves the correctness of the LTS and enables one to employ whichever of the two semantics is more appropriate for the task at hand.
While reading this thesis, the trust and wireless parts may appear to be two separate works. However, this is not so. The first part of this thesis has been developed in order to propose a model of
trust that could satisfy the requirements imposed by global computing applications. The second part of this work, instead, has been developed in order to provide us with formal tools enabling the specification and verification of trust-aware communication protocols in the technological setting that best represents global computing. Such an integration work is not obvious and is far beyond the purpose of this thesis. In future contributions we will move toward providing formal tools for employing CWS to reason about wireless systems and protocols; the goal will then be (i) to augment CWS with the specification of a reputation management system, and (ii) to employ this specification to verify the properties that guided us in the development of SIR. Future contributions will also present and discuss a thorough simulation analysis of the reputation system implemented within the rawDonkey distributed monitoring system.
This thesis is structured as follows. Chapters 2 to 6 present our work on trust and Chapters 7 to 10 present our work on wireless communications. Finally, Chap. 11 concludes this work and
defines future directions for the research on trusted communications in wireless networks. The
part on trust is structured as follows:
Chap. 2: This chapter motivates this work and presents the state of the art in the field;
Chap. 3: This chapter presents and discusses SIR, our Socially-Inspired Reputation model;
Chap. 4: This chapter presents our Trust-Awareness extension for naming services, the first case
study built on SIR;
Chap. 5: This chapter presents rawDonkey, a reputation-enhanced peer-to-peer file-sharing client, which has been the second case study for our reputation model;
Chap. 6: This chapter concludes the part on trust and draws the related considerations.
The part on wireless communications is structured as follows:
Chap. 7: This chapter presents and discusses the motivations that led our work on this topic;
Chap. 8: This chapter presents a basic language for describing wireless communications, the
associated semantics (i.e., reduction semantics and labelled transition semantics), and the
first contributions;
Chap. 9: This chapter presents some extensions to our language, the augmented semantics,
and the last contributions;
Chap. 10: This chapter concludes the part on wireless communications and draws the related considerations.
Chapter 2
Motivation
1 Discussion
In global computing scenarios, possibly unknown and distrustful entities (e.g., resources, individuals) consume, produce, or support services. In such scenarios, access to trust information is a key requirement for any entity; in fact, an entity which is aware of other entities' trustworthiness also has the means to decide whether, and under which conditions, to engage in interactions with them.
For the reader's clarity, it is worth defining the terms trust, trustworthiness and reputation. We define trust to be the measure of how much reliance a truster can justifiably place on the dependability of a trustee's behaviour within a specific context. We say that a trustee is trustworthy, within a given context, if it actually justifies reliance to be put on the dependability of its behaviour within that context. We define trustworthiness to be the degree of reliance, within a given context, that any entity is to be justifiably assigned. We assume it to be a private and secret property of each entity and, therefore, neither known to other trusters, nor provable.
In social environments trust is an important aspect that stands at the base of each possible interaction between entities: placing trust in some entity to assume a specific behaviour within a specific context means believing that, to some extent, under the conditions enforced by that specific context, that entity will assume the expected behaviour. Trust is important because of its concrete implications. Firstly, trust enables interaction between entities: without trust-awareness, it would not be possible for any entity to have information about the possible outcome of an interaction engaged in with another entity. Secondly, trust enables any entity to select the best entity with which to interact for accomplishing a given task: trust enables one to compare any two entities in order to estimate which one is best suited to perform a given task. It is worth noting that trust cannot be employed as a substitute for cryptography-based techniques for meeting security requirements; trust is only a means of enabling an individual to be aware of the possible behaviours that other individuals may exhibit when acting within a specific context.
Old trust models were based on the assumption of a number of Trusted Third Parties (TTPs) which were assumed to be always available, to always behave in a dependable manner, and to be possibly bound with each other by means of static trust relationships. Based on these static trust relationships, possibly by means of delegation (e.g., employment of credential chains) and abstraction (e.g., employment of roles), complex protocols were built for providing applications with interaction rules satisfying the security properties required by the specific application (e.g., [BFL96, BFK98, YMB02, Zim95]); for these properties to always be satisfied, the TTPs are required to remain trustworthy.
By contrast, since a generic global computing scenario is characterized by an arbitrarily large population in which each individual may dynamically leave and join and possibly exhibit dynamical behaviour (e.g., due to a limited amount of resources, to the occurrence of a failure, or just to the choice of a new behavioural strategy), it is clear that the maintenance of static trust relationships would not provide a solid infrastructure on which to implement trustworthy interactions.
Figure 1. Framework supporting interactions based on dynamical trust.
For this reason, trust has recently become an important research area, investigating the possibility of dynamically building trust relationships in a manner similar to how individuals in human societies build their own, that is, through evaluations and expectations. Hence, we associate global computing scenarios with the abstraction of a Virtual Society (VS), defined below.
A Virtual Society is a dynamic set of possibly federated and mutually distrustful entities (i.e.,
individuals and resources) which interact with each other, providing and consuming services,
in order to accomplish their respective tasks.
We thus define reputation to be a specific form of trust which is collaboratively maintained by individuals in a society through the sharing of a given, possibly partial, amount of information about the trustee's behaviour within a given context.
In a social system, each entity is autonomous and, by interacting with other individuals, collects and locally maintains trust information to be employed for trust computation purposes. In this setting, interactions can be of either one of the following kinds:
• Task performance interaction: in this case each party directly experiences the other's trustworthiness within the context;
• Exchange of trust information: in this case each party makes some of its trust information available to the other.
Therefore, so as to support interactions based on a dynamical form of trust, we designed the framework shown in Fig. 1. The implementation of interaction policies based on dynamic trust requires both trust criteria and evaluation criteria to be implemented. The former identify a decision semantics which enables an entity to decide whether to consider a second entity trusted or not, once an estimation of that entity's behaviour is provided. The latter enable an entity to evaluate the outcome of an interaction and produce the trust information that is needed to adapt the estimations of other entities' behaviours; in fact, since entities build their own expectations according to the opinions they form by interacting with and evaluating the other entities in the system, a dynamic-trust criterion implies the existence of an evaluation criterion. The trust management layer implements a trust model; taking the single evaluations produced by the evaluation criteria, it aggregates these values according to the trust model to obtain the expectations about entity behaviours. Such expectations are then available to the dynamic-trust criteria for deriving a trust opinion, which depends on each single entity's trust criteria; for instance, Alice may consider trustworthy an entity which is expected to satisfy a given request with a probability between 0.4 and 0.6, while Bob may require a probability between 0.5 and 0.7 in order to consider the same entity trustworthy.
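As an illustration only, the following Python sketch shows how the three layers of Fig. 1 could be wired together; the class names, the [0, 1] expectation scale, and the plain averaging used in the trust-management layer are our own assumptions and are not part of the framework specification.

```python
# Illustrative sketch of the framework of Fig. 1; names and formulas are assumptions.

class EvaluationCriterion:
    """Evaluation criteria: map the outcome of an interaction to a value in [0, 1]."""
    def evaluate(self, outcome_ok: bool) -> float:
        return 1.0 if outcome_ok else 0.0

class TrustManagement:
    """Trust management layer: aggregate single evaluations into expectations."""
    def __init__(self):
        self.history = {}  # entity -> list of evaluations

    def record(self, entity: str, evaluation: float) -> None:
        self.history.setdefault(entity, []).append(evaluation)

    def expectation(self, entity: str) -> float:
        evals = self.history.get(entity, [])
        return sum(evals) / len(evals) if evals else 0.5  # neutral prior

class TrustCriterion:
    """Dynamic-trust criteria: each truster applies its own acceptance interval."""
    def __init__(self, low: float, high: float):
        self.low, self.high = low, high

    def trusted(self, expectation: float) -> bool:
        return self.low <= expectation <= self.high

# Alice and Bob apply different trust criteria to the same expectation.
tm, ec = TrustManagement(), EvaluationCriterion()
tm.record("provider", ec.evaluate(True))
tm.record("provider", ec.evaluate(False))
alice, bob = TrustCriterion(0.4, 0.6), TrustCriterion(0.5, 0.7)
e = tm.expectation("provider")
print(e, alice.trusted(e), bob.trusted(e))  # 0.5 True True
```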
Our work focuses on trust models. In order for global computing systems to be provided with a trust model which satisfies the above requirements, we took inspiration from the social model; we consider it to be a good reference model because of its autonomy, its high decentralization, and its extensive validation. Specifically, based on the socio-cognitive model of trust [FC01, FC04] and on the above principles, we develop its first distributed implementation, in order to satisfy the above requirements and to enable the trustworthy implementation of Reputation Management Systems (RMSs) for global computing purposes. Moreover, we require that our dynamic trust model provide reputations meeting the following requirements:
Adaptability: reputations have to adapt according to the behaviour that the individuals exhibit within the system and to the passing of time;
Robustness: assuming the presence of a certain amount of noise, the reputation model has to identify it and prevent it from affecting trustworthy information;
Scalability: the reputation model must not require the maintenance of any global knowledge.
When reading the literature on the currently available trust models, it is clear that none of the available research contributions (see below in this chapter) satisfies the above requirements. In fact, examining the proposed trust and reputation models, it is possible to observe that some of them do not adequately capture trust's adaptive nature (e.g., [ARH00, AD01, AM04, BB04, Cap04, Jøs01]), some of them cannot isolate and ignore byzantine trust information, i.e., the noise injected within the RMS (e.g., [AD01, BB03, BB04, Cap04, CNS03]), and finally some of them enable implementations which may present scalability problems (e.g., [BBK94, CNS03, Car00, Mau96, YKB93]). Moreover, some of the available trust models require the global sharing of trust information through distributed data structures; even if this may increase the accuracy of trust computation, there are no guarantees on the trustworthy maintenance of the shared information.
The purpose of this work is then the development of a trust model which overcomes the limitations of existing trust models, enabling its employment in global computing settings. In the first part of this thesis, we report our experience in developing, validating, and employing the Socially Inspired Reputation (SIR) model, our flexible computational model of reputation which can be employed for enabling trust-awareness within global computing scenarios. SIR is the first formal implementation of the socio-cognitive model of trust [FC01, FC04], which describes the computational logic by which each single individual in a human society aggregates the collected information for computing reputations.
In order to validate the SIR reputation model, the Trust-Aware naming service (TAw) has been designed and extensively simulated. TAw provides service-oriented architectures with trust-awareness; that is, by employing TAw any entity is provided with trust information for deciding whether, and under which conditions, to engage in an interaction with any other entity. The performed simulations showed that TAw implements an adaptive and robust behaviour: specifically, we simulated a consumer/producer scenario in which TAw is deployed so that the consumer can select the provider to refer to in a trust-aware manner. The obtained results confirm our expectation that SIR enables TAw to scalably implement an adaptive and robust reputation management system within global computing scenarios.
Another application of the SIR model is represented by rawDonkey, an eDonkey client for the peer-to-peer file-sharing network of the same name. rawDonkey employs the SIR model to evaluate the users on the network with respect to possible misbehaviours (i.e., fakers and free-riders) and to favour collaborative users over non-collaborative ones. The reputations so obtained are employed to enforce a file distribution priority among the peers, in order to penalize non-collaborative users and to guarantee fair resource sharing among collaborative ones.
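As a rough illustration of such a policy, the sketch below orders pending upload requests by the requester's reputation and pushes suspected free-riders to the back of the queue; the threshold value, data structures and function names are hypothetical and do not reproduce the actual rawDonkey implementation discussed in Chapter 5.

```python
# Hypothetical sketch of a reputation-based upload priority policy.

def schedule_uploads(pending, reputation, free_rider_threshold=0.2):
    """Order upload requests by requester reputation; requesters whose
    reputation falls below the threshold are served last (penalized)."""
    def priority(request):
        rep = reputation.get(request["peer"], 0.5)  # neutral default for unknown peers
        penalized = rep < free_rider_threshold
        return (penalized, -rep)                    # penalized peers sort last
    return sorted(pending, key=priority)

pending = [{"peer": "p1", "file": "a"}, {"peer": "p2", "file": "b"}]
reputation = {"p1": 0.9, "p2": 0.1}
print(schedule_uploads(pending, reputation))  # p1 (0.9) is served before p2 (0.1)
```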
2 The Socio-Cognitive Model of Trust
In [FC01, FC04], Castelfranchi and Falcone define a socio-cognitive model of trust that is based on the mental process of trust formation in cognitive terms (e.g., beliefs, goals); although it does not describe the psychological aspects of trust, it formalizes its most rational and conscious aspects. For the sake of clarity, the socio-cognitive model of trust is based on the BDI (Belief-Desire-Intention) approach, which is inspired by Bratman's philosophical model [Bra87] and has been defined to model the mental processes of decision-making. According to this model, only a principal that is characterized by both goals and beliefs can trust another principal. It is worth noting two issues of trust: first, trust does not apply to unconcerned principals, which only have opinions and forecasts based on their knowledge; second, trust itself consists of beliefs, specifically
of evaluations and expectations. We now examine the three notions of trust below, taking into consideration the trust that binds a truster to a trustee within a given context. In this setting, we can associate with trust the following meanings:
• A mere mental attitude (prediction and evaluation) towards another principal, a simple
disposition;
• A decision to rely upon the other, i.e. an intention to delegate and trust, which makes the truster "vulnerable";
• A behaviour, i.e. the intentional act of trusting, and the consequent relation between the
truster and the trustee.
In each of the above concepts, different sets of cognitive actions are evaluated by the truster; all together, these actions describe the whole process of trusting: firstly, some information enables the truster to predict the most likely behaviour of each possible trustee, and to select the entity which appears to provide the truster with the best service; secondly, the truster has to decide whether the expected service is worth the risk of having produced a wrong expectation; finally, there is the actual relation between the truster and the trustee (i.e., the interaction). Hence, the socio-cognitive model of trust identifies three important concepts related to trust: trust disposition, decision to trust, and act of trusting. For the sake of clarity, since the trustee's competence within a specific context is useful to the truster (trust disposition), who decided to rely on it (decision to trust), the truster might delegate (act of trusting) to the trustee some task which contributes to his own goal.
Moreover, the model includes three basic beliefs:
Competence Belief: a sufficient evaluation of the trustee's ability is necessary; the truster should believe that the trustee is useful for his goal, that the trustee can provide the expected result, and that the trustee can play such a role on his behalf.
Willingness Belief: the truster should believe that the trustee not only can perform the requested service, but that he actually will perform it. This belief makes the trustee's behaviour predictable.
Dependence Belief: either the truster believes he depends on the trustee's competence (strong dependence), or he believes that there is an advantage in relying on the trustee instead of not relying on it (weak dependence). In other words, in a trust relationship the truster is in a strategic situation, in that he believes that the achievement of his goals depends, either partially or totally, on the actions of the trustee.
From the point of view of the dynamic studies of trust, it is worth noting how the above basic beliefs might change during the same interaction or during a sequence of interactions: for example, the competence (or ability) of the trustee, or the trustee's willingness, or the dependence relationship between the truster and the trustee might change. Another important characteristic of the socio-cognitive model of trust is the distinction between trust in a trustee that has to perform a task based on his internal characteristics (e.g., competence, ability), and trust in the environment, which can affect the achievement of the goals through external factors like opportunities and interferences.
According to this model, trust in a principal consists of two beliefs/evaluations that are identified as the basis for reliance: competence, which includes knowledge and self-confidence, and disposition, which in turn is based on aspects such as willingness and availability.
Like the above trust definitions, the socio-cognitive model asserts that trust implies, either implicitly or explicitly, the subjective probability of the successful performance of a given trustee within a specific context; it is on the basis of this subjective evaluation of risk that the relationship between a truster and a trustee is engaged. That probability itself depends on the truster's beliefs and evaluations. However, within a given environment, an evaluation of a specific trustee is not really an evaluation of the trustee alone; instead, the evaluation accounts for environmental aspects as well;
thus, the subjective probability of the trustee's successful performance should be decomposed into the expectation that the trustee itself behaves dependably (internal attribution) and the expectation of having the appropriate environmental conditions (external attribution) for the performance to succeed, such as the absence of interferences that could harm the performance.
The socio-cognitive trust model is based on the following formal constructs. Let Act = {α1, . . . , αn} be a finite set of actions, and Agt = {A, . . . , X, Y, . . .} a finite set of principals (called agents in the original work). Each principal is described by its own set of actions, its plan library, its set of resources, its set of goals, its beliefs and its motives.
In this model, the actual object of delegation is the action/goal pair τ = (α, g), which takes the name of task. Given a principal X and an environmental context Ω (a set of propositions describing the state of the world), it is possible to define the trustworthiness of principal X about τ in Ω (written trustworthiness(X, τ, Ω)) as the objective probability that X will dependably perform task τ in environment Ω; this objective probability is computed on the basis of elemental components such as:
Degree of Ability (DoA): it ranges between 0 and 1 and indicates the level of ability of a given
principal X about a specific task τ, i.e. the probability that X will correctly perform τ given
that he intends to do that;
Degree of Willingness (DoW): it ranges between 0 and 1 and indicates the level of intentionality of a given principal X about a specific task τ, i.e., it is the probability that X actually initiates the performance of a given task given that he declared himself willing to perform that task.
Thus, in this model the following dependence holds:
Trustworthiness(X, τ, Ω) = F(DoA_{X,τ,Ω}, DoW_{X,τ,Ω})     (1)
where F is a monotone function ranging in [0, 1], expressing the objective probability that principal X will behave dependably about task τ within the environment Ω.
According to this model, to correctly represent the trustworthiness of a principal through subjective probabilities, it is important to have a cognitive attribution process for interpreting the outcomes of a truster's reliance on a given trustee and that trustee's performance (i.e., failure or success); in particular, the effect that a given trustee Y's failure or success has on a given truster X's trust in Y depends on X's causal attribution [Wei61] of the event. According to that theory, any event (i.e., success or failure) can be ascribed either to factors internal to the subject or to environmental, external causes; moreover, these events may either be occasional or depend on stable properties of the principal or the environment. For example, a failure will impact the self-esteem of a subject only when attributed to internal and stable characteristics of the subject itself.
The socio-cognitive model defines a function, namely DoT, by which a principal A can evaluate its own trust (degree of trust) in principal B about the task τ (to be performed) in the environment Ω. In [FC01, FC04], f is a general function that preserves monotonicity; in particular, DoA_{A,B,τ} is B's degree of ability about task τ in A's opinion, DoW_{A,B,τ} is B's degree of intention about task τ in A's opinion, and e(Ω) takes into account the part of the task that is not directly performed by B and the hampering or facilitating conditions that are enforced by the specific environment Ω.
DoT_{A,B,τ,Ω} = f(DoA_{A,B,τ}, DoW_{A,B,τ}, e(Ω))     (2)
In the analysis of this function, one has to consider all the dependence relationships that may hold between the three sub-constituents (i.e., DoA_{A,B,τ}, DoW_{A,B,τ}, e(Ω)).
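To make the two functions concrete, the sketch below instantiates both F of equation (1) and f of equation (2) with simple products; the socio-cognitive model only requires monotone functions with values in [0, 1], so the product form, like the argument names, is merely an illustrative assumption.

```python
# Illustrative instantiation of equations (1) and (2); the product form is an
# assumption, the model only requires monotone functions ranging in [0, 1].

def trustworthiness(doa: float, dow: float) -> float:
    """Objective trustworthiness of X about task tau in environment Omega, eq. (1)."""
    return doa * dow

def degree_of_trust(doa_ab: float, dow_ab: float, e_omega: float) -> float:
    """A's degree of trust in B about tau in Omega, eq. (2); e_omega accounts for
    hampering or facilitating environmental conditions."""
    return doa_ab * dow_ab * e_omega

# Example: B is able (0.9) and willing (0.8); the environment slightly hampers (0.95).
print(trustworthiness(0.9, 0.8))        # 0.72
print(degree_of_trust(0.9, 0.8, 0.95))  # 0.684
```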
2.1 Considerations about the Socio-Cognitive Trust Model
The socio-cognitive model provides a very precise description of trust dynamics from a global perspective; however, it is worth noting that, in order for this trust model to be completely specified, one has to know how to separate the environment's responsibilities from the trustee's responsibilities.
Figure 2. A Conceptual Model for Computing Reputation
Thus, within a generic scenario in which the truster and the trustee belong to possibly different environments, it is clear that the truster has to be provided with information about how to separate these two components; on the other hand, unless a specific study is performed to evaluate the environment's effects on the different task contexts, it is not clear how to enable the truster to predict the environment's effects on a specific task request and to evaluate the objective trustworthiness of the trustee.
Within a global computing context, it is clearly not scalable for each principal to be aware of the effects that might affect a delegated task in each possible environment in which it may be performed. Moreover, in a virtual society scenario where the consumer/producer interaction paradigm is implemented, the truster does not really care how failure responsibility is to be divided when deciding whether and to whom to delegate a task; actually, he only cares about the success probability and wants to be able to evaluate it. In our opinion, the information that the truster should be provided with is the overall probability that the delegated task will succeed.
3 A Conceptual Model for Computing Reputation
Based on the socio-cognitive model of trust, we designed a conceptual model for computing reputations which is inspired by social dynamics; this model is represented in Fig. 2. In this model we encode our interpretation of the socio-cognitive process that builds reputations.
In the figure we see all the abstractions that take part in reputation forming and the relations between them. First, we can isolate the basic trust elements, namely the evaluations and the expectations. We define an evaluation to be a sampling of the behaviour of the trustee, obtained using a specific evaluation criterion. By contrast, we define an expectation to be a representation of trust which considers all the available trust information about the trustee and describes its present possible behaviour.
Before proceeding with the explanation of the model in the picture, it is worth making some considerations about the effect of time on trust information. Each single basic piece of trust information is determined by the behaviour exhibited by the trustee up to a certain time. Since we assume that our model is employed in systems in which principals can dynamically change their behaviours, we can legitimately assume each piece of trust information to lose relevance in reputation computation as time passes. For this reason, the picture shows that time is an element that influences the aggregation of the basic trust elements.
For computing reputations, each principal employs both expectations and evaluations. Expectations may be obtained from third parties (Third-Party Expectations), through the exchange of
recommendations, which are generally produced by the recommender through the aggregation of basic trust values, or self-constructed, through the composition of locally available trust information derived from the direct experience of the truster.
Hence, each principal aggregates the locally available basic trust information in order to obtain the Third-Party Opinion and the Self-Made Opinion; the former is obtained by aggregating the available Third-Party Expectations, and the latter by aggregating the available direct Evaluations.
Then, the opinions are aggregated to form the reputation. This is precisely the social mechanism of reputation forming: each of us maintains the recommendations received from other individuals and composes them with the expectation obtained from direct experience, so as to obtain a more complete evaluation which takes the name of reputation.
In the composition of these trust abstractions, there is also a dependence between some of the trust abstractions and the adaptiveness criteria.
First, each single trust evaluation has a role in its composition with the aggregated value; for instance, the aggregated value can vary depending on whether the evaluation is classified as stable or occasional, and on whether it is positive or negative with respect to the expected evaluation.
Second, the Self-Made Opinion has a role in the aggregation of Third-Party Expectations; for instance, these expectations can be weighted in the aggregation according to whether they provided useful information for predicting the result of subsequent evaluations. Moreover, the Self-Made Opinion is also important for its aggregation with the Third-Party Opinion; based on how well a principal was able to predict the result of interactions by employing only the self-obtained trust expectation, it can decide which weight to give to one opinion with respect to the other.
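A minimal sketch of this aggregation scheme is given below; the exponential decay of old trust information and the self-confidence weight that balances the Self-Made and the Third-Party Opinion are illustrative assumptions and do not reproduce the SIR equations presented in Chapter 3.

```python
# Minimal sketch of the conceptual model of Fig. 2; the formulas are assumptions, not SIR.
import math
import time

def decay(age_seconds: float, half_life: float = 3600.0) -> float:
    """Older trust information weighs less in the aggregation."""
    return math.exp(-age_seconds * math.log(2) / half_life)

def opinion(items, now):
    """Decay-weighted average of (value, timestamp) pairs, with values in [0, 1]."""
    weighted = [(value, decay(now - t)) for value, t in items]
    total = sum(w for _, w in weighted)
    return sum(v * w for v, w in weighted) / total if total else 0.5

def reputation(evaluations, expectations, self_confidence, now=None):
    """Compose the Self-Made Opinion (direct evaluations) with the Third-Party
    Opinion (recommendations) using a self-confidence weight in [0, 1]."""
    now = time.time() if now is None else now
    self_made = opinion(evaluations, now)
    third_party = opinion(expectations, now)
    return self_confidence * self_made + (1 - self_confidence) * third_party

now = time.time()
evaluations = [(1.0, now - 60), (0.0, now - 7200)]   # recent success, old failure
expectations = [(0.7, now - 300), (0.4, now - 600)]  # third-party expectations
print(reputation(evaluations, expectations, self_confidence=0.6, now=now))
```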
This conceptual model defines a class of computational models of reputation which is both compatible with the ideas expressed in the socio-cognitive model of trust and implementable in settings in which it is impossible to maintain centralized information. In the next chapter, we present our implementation of this model, namely the Socially Inspired Reputation model (SIR).
4 Related Contributions
In [ARH97, ARH00], Abdul-Rahman and Hailes employ a reputation-based infrastructure for information retrieval. First, they propose a discrete metric for expressing trust degrees. We believe that this design choice does not enable a faithful representation of social trust; in fact, the social intuition encoded in the aggregation of trust information to compute direct trust, indirect trust, and reputations is not clear. For instance, the way in which direct trust is encoded is meant to describe the whole interaction history between a given truster and a given trustee; however, that representation makes no distinction between evaluations that happened far in the past and ones that happened recently, therefore it is impossible for the truster to understand the behavioural pattern of the trustee and to get a picture of its current dependability.
In [AD01], Aberer and Despotovic introduce a complaint-based reputation system to be applied within peer-to-peer infrastructures for file sharing. In our opinion, a reputation model which keeps record only of complaints is not adequate to represent social reputation, in that it is not representative of the whole interaction history; a complaint could refer, for instance, to events that happened far in the past, after which a dependable behaviour has been exhibited for a long period. Therefore, since the model does not capture trust adaptiveness with time, it is impossible for a faulty principal which committed mistakes to rebuild a good reputation. In addition, the model does not provide any means to distinguish legitimate trust information from malicious information. Finally, their reputation management system requires a distributed data structure, namely P-Grid, to be deployed over the whole peer-to-peer architecture so as to maintain trust information.
In [AM04], Azzedin and Maheswaran develop a socially inspired reputation model that is
similar to the one proposed in this thesis. First of all, they define a criterion for evaluating the honesty of a recommender that is quite debatable, in that a recommender is evaluated as honest if, in the past, it returned recommendations describing a quite stable behaviour of a given trustee; however, for this criterion to be valid, the trustee must really exhibit a stable behaviour. This contradicts the dynamical nature of reputation and the asynchronous nature of failures, which may substantially affect the behaviour of a principal; a social reputation system would be useless if it were guaranteed that principals always exhibit the same behaviour. Second, in this model, after a trust evaluation has been performed, direct trust is updated without considering the time elapsed since the last update; thus, it does not consider that trust information becomes obsolete as time passes. Moreover, the direct-trust update semantics does not consider the behavioural history of the trustee. Similarly to what happens for direct trust, we think that the semantics which defines reputation in terms of both direct and indirect trust is not representative of human social behaviour.
In [BBK94, YKB93], Beth and Yahalom describe a formal trust model, founded on Bayesian probability theory and reliability analysis techniques, which is to be used for evaluating the trustworthiness of entities in open networks. In order for it to be applied, a truster has to know all the paths to the trustee in the trust network, within the specific context in which trust is to be evaluated; hence, as the trust-network complexity and the number of trust contexts grow, this model does not meet the scalability and efficiency requirements needed for its practical employment. Moreover, this model does not implement any technique for encoding the freshness of the available trust information (e.g., trust decay as time passes). Our dynamical and adaptive reputation model does not require a truster to handle any structured knowledge of the trust network; in addition, it is expected to meet the required flexibility and scalability requirements regardless of the complexity of the network topology.
In [BB03, BB04], Buchegger presents a reputation system which is based on the beta probability density function. This model does not implement dynamical trustworthiness of recommendations; instead, it identifies as malicious the recommendations which substantially differ from direct trust. This assumption might not hold and may lead to slowed adaptiveness: if a recommendation reports a recent behaviour change of the trustee which has not yet been identified by the truster, this trustworthy information will be identified as malicious by the truster and thus discarded. In addition, this model implements an aging technique to express the decay of trust with time which is rather unrealistic: firstly, the decay does not depend on time but on the position of the trust evaluation within the sequence, so each trust evaluation is assigned the same decay factor independently of the time elapsed between each pair of consecutive evaluations; secondly, when the decay is applied, the aging does not apply to the whole evaluation history but rather to either the sequence of successes or the sequence of failures, according to the last trust evaluation.
In [Cap04], Capra proposes a reputation model which is inspired by human behaviour; however, although a detailed trust metric is presented, its roots in human psychology are not clear. The author argues that, so as to weight recommendations according to their dependability, trust is evaluated in the provisioning of recommendations; further on in that document, a quality attribute, which does not depend on the recommender's reputation but on the degree of knowledge between the truster and the recommender, is introduced to weight the various recommendations. This may lead to problems, in that the truster can believe it has a high degree of knowledge about an undependable provider; in this case it will possibly weight its recommendations more than others which are more dependable. Moreover, the binding between human behaviour and the semantics which specifies the composition of the direct trust value with the one derived from the evaluation of the provider's recommendations is not clear.
In the context of the SECURE European Project, a reputation model specific to ubiquitous computing is being studied; specifically, in [CNS03] Carbone makes use of knowledge degrees to disambiguate between principals that present the same trust levels and to bootstrap the trust acquisition mechanism in an unknown environment. Although the trust model the author develops is very precise in its static definitions, it is not clear how the dynamical adaptation of trust and reputation degrees is developed. By means of decay functions that depend on time,
our model encodes a degree of knowledge (i.e., a degree of freshness of the encoded information)
within the reputation model as well.
In [JK98, Jøs99, Jøs01], Jøsang defines a trust model based on a triple of probability
values describing the behaviour that a specific principal will assume when an interaction with
him is engaged; these triples specify, respectively, the probability of engaging in a positive inter-
action, the probability of engaging in a negative one, and the uncertainty. Although the model is
well specified and developed, the chosen trust representation hides all the information about the
interaction history of the trustee and does not allow the truster to understand the behaviour that
the trustee could currently exhibit (as happens with Buchegger's model in [BB03]). In fact, given
any two trust evaluation sequences with the same number of success results and the same
number of failure results, the model returns the same reputation whatever permutation of these
evaluations is assumed; therefore, a trustee whose dependability was bad at the beginning but has
always exhibited a dependable behaviour since is considered as trustworthy as another which recently
lost its dependability. Trust adaptability with time has not been considered in the model.
In [Jøs02], Jøsang proposes a reputation model based on the beta probability density func-
tion. Though it is based on solid theoretical foundations, we think that it is not representative
enough of social reputation. In fact, the model does not distinguish between direct and indirect
forms of trust, and therefore it does not specify any semantics representing how human psychology
distinguishes between them. Moreover, although an aging mechanism is implemented for the
obtained recommendations, the semantics which implements the recommendation aging is not
clear; according to that semantics, it appears that the aging is not based on the time at which a
recommendation was produced, but rather on its position in the sequence of recom-
mendations that the truster collected.
In [JP04], Jøsang and Lo Presti define trust in a way that is semantically different from
the definition we gave in Chap. 1, and define a trust degree according to that definition; in our
opinion, their definition of trust merges reputation semantics with a trusting criterion (i.e., deciding
whether to trust or not). Although we believe that it is better to keep the decision
criterion separate from the reputation semantics, given the trustee's reputation this criterion can be
applied on top of our reputation model as well to implement decision making about whether to trust
or not.
Both Dragovic et al. [DHH+03] and the OpenPrivacy project [ope] implement reputation man-
agement systems; the former for trust-aware selection of application servers, and the latter for
disseminating and computing trust information within a specific distributed object middleware.
However, no specific information about the adopted reputation models was available to the
authors.
4.1 The Web of Trust
In the context of Public Key Infrastructures (PKI), trust has been widely studied within the "Web
of Trust" model [Car00, Mau96]. Introduced by Zimmermann in Pretty Good Privacy (PGP) [Zim95],
the web of trust can be defined as a PKI model where a global state about the principals and their
direct trust relationships is maintained; i.e., each principal can make publicly known whose keys
he trusts to be authentic. Moreover, as a member of the infrastructure, each principal can decide
whom, and to which degree, to trust as an introducer of new keys. In order for communication to
take place between any two principals, a trust link between them has to be enabled by the trust
relationships encoded in the web of trust. In [Mau96], Maurer formalizes the web of trust model
and describes a technique for assessing probabilistic trust between two principals in a web of
trust. However, such a technique requires all the possible trust paths between the two principals
to be known and a complex analysis technique to be applied to them; its complexity prevents
it from being applied in practice. In [Car00], Caronni introduces and evaluates heuristic
techniques for reducing the complexity of Maurer's technique; however, such techniques still
require a global state about certificates and direct trust relationships to be maintained. Human
societies do not rely on global knowledge of trust networks and, still, the social reputation
model succeeds in identifying untrustworthy individuals so as to support legitimate interactions
among population members. Our social reputation model enables one to implement a reputation
system without requiring principals to maintain a global state about the trust network, thus meeting
scalability and efficiency requirements even in arbitrarily complex scenarios. Moreover, our reputation model
implements the dynamic behaviour that is not captured by the earlier model implemented in
the web of trust.
Chapter 3
SIR: Socially Inspired Reputation Model
1 Foundations of Trust
In [Mez03], trust is defined as a ternary relation T (α, β, φ), where α and β are two principals and
φ is a context. The presence of the triple (Alice, Bob, c(i)) in T indicates that Alice trusts Bob for
consuming interface i.
A trust system is defined as the triple (P, Φ, T ), where P is the set of principals and Φ is the set
of contexts on which the trust relation T is defined. A trust relation T defined over P × P × Φ
may satisfy the following properties:
reflexivity: A trust relation T is reflexive in context φ if, for every principal α ∈ P, the triple (α, α, φ)
is in T .
symmetry: A trust relation T is symmetric in context φ if, for every pair of principals α, β ∈ P, if
(α, β, φ) ∈ T then (β, α, φ⁻¹) ∈ T (i.e., each has assessed the trustworthiness of the other).
Reflexivity is also called implicit trust. Basically, the reflexive property enables the existence
of self-confidence; that is, after two principals have engaged in an interaction with each other, each of
them has gained experience and is thus provided with more data to compute a more
precise "a priori" estimation of the other principal's behaviour.
For instance, let us say that Alice carries out an interaction with Bob because she wanted
interface i to be provided and Bob was able to satisfy her request. Hence, after the interaction
Alice has a new trust evaluation, which can be used to estimate the error that
Alice made by referring to the trust information she had a priori.
The symmetric property states that any consumer/producer interaction has to affect the trust in-
formation of both principals; i.e., both the consumer's trust information about the producer
and the producer's trust information about the consumer. In fact, after an interaction between any
two principals, each of them is provided with more information about the other and can compute
a better estimation of its behaviour in the context in which the interaction occurred.
Moreover, within each possible context, each single interaction is an instance of the client/ser-
ver interaction paradigm; thus, both the client and the server are bound to the same interaction
semantics. In the specific case of this work, the two contexts are both bound to the same interface.
When speaking of trust, it is worth mentioning the first context in which trust appeared, that
is, authentication. We define the context of authentication, represented by the notation auth, to be
a special context that is equal to its inverse.
Having the reflexive property satisfied within authentication expresses the trust that each
principal is believed to maintain the secrecy of its private key. More generally, trusting a princi-
pal within authentication means trusting the binding between his private key and his identity.
On the other hand, having the symmetric property satisfied within authentication indicates
that, for any two principals, it is possible for them to mutually authenticate and thus implement
security requirements such as non-repudiation of origin and integrity of origin, which are the basis
of any effective exchange of information. In fact, in the real world any two entities exchange
messages successfully only if there is some kind of authentication between them; otherwise,
any communication between these two principals has no effect (i.e., without any information
about the message origin, that message is to be ignored). Thus, proofs of message origin and
integrity have to be provided so that a principal can consider the content of a message as coming
from the expected sender.
Proposition 1 (communicability condition) Let α, β ∈ P be any two principals, then a communica-
tion between them can take place if and only if T{α,β}, the restriction of T over the set of principals {α, β},
is symmetrical in the context of authentication.
A third property, namely transitivity, is defined by introducing the jurisdiction subcontext; it
is used to represent trustworthiness in recommending principals acting within a specific context.
Given a context φ, the jurisdiction subcontext associated with φ is represented in mathemat-
ical notation by the symbol j(φ). A principal having jurisdiction over a context φ is trusted
for providing reliable trust information about trustees within context φ (i.e., whether a given
trustee can be trusted or not); such information is referred to as recommendations. For example,
(Alice, Bob, j(φ)) ∈ T means that Alice places trust in Bob for having jurisdiction over context φ
and is willing to inherit Bob’s trust relationships for context φ.
Transitivity is formally defined as follows:
transitivity: A trust relation T is transitive in context φ if for every three principals α, β, γ such that
(α, β, j(φ)) ∈ T and (β, γ, φ) ∈ T and the communicability condition holds for both {α, β} and
{β, γ}, then (α, γ, φ) ∈ T × T.
Table 1. Structure of Φ augmented with jurisdiction.
I ::= i | h | k
C ::= p(I) | c(I) | authentication
Φ ::= C | j(C)
In other words, given a trust relation T, transitive in context φ, and any three principals
α, β, γ, if T(α, γ, φ) is not defined in T, it can be indirectly derived when both T(α, β, j(φ)) and
T(β, γ, φ) are defined; thus, the trust between principals α and γ is obtained by composing
T(α, β, j(φ)) with T(β, γ, φ) in T{α,β,γ} × T{α,β,γ}.
Having the transitive property satisfied within authentication enables Certification Authori-
ties and credential chains: a principal, say Alice, trusting another one, say Bob, for having juris-
diction within authentication, will inherit all the trust relationships that hold between Bob and
any other principal within the context of authentication. In this case, for Alice, Bob plays the
role of Certification Authority. Hence, having a transitivity chain within T enables principals to
inherit trust relationships along the chain and thus, at an abstract level, enables certificate chains
to be implemented.
In Tab. 1 the set Φ of the possible contexts is defined in terms of the definitions given above;
here, I indicates the set of service interfaces and C the basic contexts.
Given the basic notions of trust, we define a trust system to be an environment in which trust
relationships can be established between that environment’s principals; a trust system is specified
by the triple (P, Φ, T ) where P is the set of principals, Φ is the above defined set of contexts, and
T is the trust relation.
1.1 Trust Degrees
So far, we have seen a trust relationship T as a relation: given a pair of entities and a context, T
indicates the existence of a trust relation without giving any other information about the strength
of this trust binding. In order to express this "strength", the trust relationship is changed into a
trust function that returns a real value belonging to the closed interval [0, 1], within which the value
1 indicates "full trust" and 0 indicates "absence of trust"; by "absence of trust" we do
not mean distrust (which is a rather different concept), but that the data which
are available to the principal do not lead him to expect a dependable behaviour. We name this
strength measure trust degree. While remaining consistent with the definition of trust given in
Sec. ??, this representation greatly improves its expressiveness.
T : P × P × Φ −→ [0, 1] (1)
For example, the notation T (Alice, VISA, p(ATM)) = 0.99 indicates that Alice expects from
VISA's automatic teller machine a dependable behaviour (which embodies properties such as
availability and exactly-once semantics) with a probability of 0.99, when used for withdraw-
ing cash; in turn, with a probability of 0.01 Alice expects the ATM service not to dependably
satisfy her request (for example, she might expect it to be unavailable).
Trust degrees are not only used by a principal to decide whether or not to interact with a
given principal; they may also be used to determine the security mechanisms and parameters
that have to be required for the interaction to take place. For instance, in electronic commerce
applications trust models are used to determine the frequency at which a principal has to au-
thenticate when performing micro-transaction payments [Man00].
Within transitive trust, the truster should be prevented from trusting a given trustee more
than both the trust he places in the recommender and the trust the recommender places in that
trustee. Hence, if T (Alice, Bob, j(φ)) and T (Bob, Cecilia, φ) are defined, then Alice should trust Cecilia in
the same context with degree
T (Alice, Cecilia, φ) ≤ min{T (Alice, Bob, j(φ)), T (Bob, Cecilia, φ)} (2)
where equality holds iff both T (Alice, Bob, j(φ)) and T (Bob, Cecilia, φ) assume either the value
0 or the value 1. Inequality (2) is satisfied by defining recommended trust as the product
of the trust degree that holds between the truster and the recom-
mender and the one that holds between the recommender and the trustee, within the appropriate
contexts (see below).
T (Alice, Cecilia, φ) = T (Alice, Bob, j(φ)) · T (Bob, Cecilia, φ) (3)
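As a minimal illustrative sketch (Python is used here for illustration only; the function name is hypothetical and not part of the thesis prototype), the composition in (3) and the bound in (2) can be rendered as follows.

def recommended_trust(t_jurisdiction: float, t_recommendation: float) -> float:
    """Recommended trust as in (3): the product of the trust the truster places
    in the recommender within j(phi) and the trust the recommender places in
    the trustee within phi. Both arguments and the result lie in [0, 1]."""
    return t_jurisdiction * t_recommendation

# Example: T(Alice, Bob, j(phi)) = 0.8 and T(Bob, Cecilia, phi) = 0.9
t = recommended_trust(0.8, 0.9)    # 0.72
assert t <= min(0.8, 0.9)          # the bound (2) is respected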
2 The Reputation Model
Before introducing the reputation model, we assume a trust system (P, Φ, T ) where j(φ) is de-
fined for each context φ ∈ Φ.
In Sec. 1, we defined trust as a function that returns the trust degree between a truster and a
trustee within a given context. However, trust is not static in time; both time and events (e.g., an
interaction that takes place between the truster and the trustee) can change a given trust degree.
Thus, we define the generic trust function as below, where τ indicates the set of numerical time
values.
T : P × P × Φ × τ −→ [0, 1] (4)
In [BB03, BB04, JK98, Jøs99, Jøs01], the authors model direct trust by using a beta probability
density function, which in statistics is used to represent the "a priori" estimation of an event's
behaviour. The basic assumption for this model to be applicable is that trust evaluations have
to be abstracted as stochastic events which follow a binomial probability distribution and may
only represent either the failure or the success of the trustee's dependability. Since we want to
improve the expressiveness of the dependability evaluation, in this section we propose a model
for direct trust which is based on the evaluation of trust degrees belonging to the continuous
interval [0, 1]; this enables the measurement of partial dependability, which allows a better
degree of precision in computing trustee reputations.
Thus, we define a generic trust function as in (5), where the parameters respectively represent
the truster, the trustee, the context, and the time at which trust has to be evaluated, and the return
value is named trust value and indicates the strength of the trust relationship between the truster
and the trustee; we assume the value 0 to represent “absence of trust” and the value 1 to represent
“full trust”.
T : P × P × Φ × τ −→ [0, 1] (5)
Consistently with our definition of trust, we assume all the trust values to be initialized to 0 since
there is no a priori data that justifies a different trust value to be assigned.
We assume that obsolete information does not enable accurate estimations of current behaviours.
Hence, we assume trust to decay as time passes; specifically, for each context we define a de-
cay function that models the speed with which trust information becomes obsolete within that
context.
δφ(t) : τ → [0, 1]
Several decay functions are needed because the validity of a trust degree depends on the nature
of the context within which it applies; e.g., the more critical (i.e., risky) the context, the more
rapidly the trust decreases with respect to time if no new trust information is available. For us to
choose a proper decay function, we have to consider that, if no fresh trust information is provided,
trust decay should not depend on the trust-update rate; i.e., if a given reputation is measured as ρ
at time t, then at time t'' it has to be measured as ρ · δφ(t'' − t), independently of how many times it
was updated in between. Thus, given t' such that t ≤ t' ≤ t'', a proper decay function is one that
satisfies the following equation:
ρ · δφ(t'' − t) = ρ · δφ(t' − t) · δφ(t'' − t') (6)
It is easily provable that (6) is satisfied by all functions which are homomorphisms from
(R⁺ ∪ {0}, +) to ([0, 1], ·), such as, for example, the exponential function in (7).
δφ(t) = ηφ^t (7)
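A minimal sketch of such a decay function and of property (6), assuming the exponential form (7) with a per-context base ηφ (names and values are illustrative only):

def make_decay(eta: float):
    """Exponential decay delta_phi(t) = eta_phi ** t from (7); eta_phi in (0, 1]
    is chosen per context (a riskier context uses a smaller eta, i.e. faster decay)."""
    return lambda dt: eta ** dt

delta = make_decay(0.95)
rho = 0.8
# Property (6): decaying a value over t'' - t gives the same result as decaying
# it over t' - t and then over t'' - t', for any intermediate update time t'.
assert abs(rho * delta(10) - rho * delta(4) * delta(6)) < 1e-12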
According to the nature of social trust, in this section we define three trust abstractions: rep-
utation, direct trust, and indirect trust. On the one hand, direct trust encodes trust based on the
truster’s own experience. On the other hand, indirect trust encodes trust based on the recom-
mendations the truster receives from the other principals.
Equation (8) models direct trust, which represents the perception of a principal's trustwor-
thiness that is uniquely determined by the trust evaluations in which the principal
computing the reputation took part in the role of truster. In order for direct trust to be computed,
a truster has to evaluate each action exhibited by the trustee within the trust assessment
context; in (8), tvt' ∈ [0, 1] models a trust value derived from an interaction which happened at
time t'. Specifically, if the set E of trust evaluations does not change, the direct trust at time t' is
computed by applying the decay function to the last direct trust value D(α, β, φ, t)E, computed
at time t; otherwise, let tvt' ∈ [0, 1] be the fresh trust evaluation obtained at time t', the new direct
trust value is computed by normalizing the linear combination of both tvt' and the past direct
trust value D(α, β, φ, t)E, that is, the last direct trust value computed at time t ≤ t'.
D(α, β, φ, t')E' =
  0                                                              if E' = ∅
  D(α, β, φ, t)E · δφ(t' − t)                                    if E' = E
  [D(α, β, φ, t)E · δφ(t' − t) + ω · tvt'] / [δφ(t' − t) + ω]    if E' = E ∪ {tvt'}
(8)
In the equation above, ω is named the trust stability factor and it represents the adjustment to
be made to the trust value according to the previous interactions; the cognitive model for trust
stability is described below in this section.
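The update rule (8) can be sketched as follows (an illustrative, non-normative rendering; function and parameter names are ours):

def update_direct_trust(d_prev, dt, delta, tv_new=None, omega=0.5):
    """One step of the direct-trust update (8).
    d_prev : D(alpha, beta, phi, t)_E, the last computed direct trust (0 when E is empty)
    dt     : elapsed time t' - t since that computation
    delta  : the context's decay function delta_phi
    tv_new : fresh trust evaluation tv_{t'} in [0, 1], or None when E' = E
    omega  : trust stability factor (see Sec. 2.1)
    """
    decayed = d_prev * delta(dt)
    if tv_new is None:                      # no fresh evaluation: pure decay
        return decayed
    # normalized linear combination of the decayed value and the fresh evaluation
    return (decayed + omega * tv_new) / (delta(dt) + omega)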
Equation (9) models indirect trust, which we define to be the average reputation that a set Γ of
known principals, namely recommenders, associate with the trustee (within a given context and at a
specific time). In that equation, it is worth noting the use of the notation R(γ, β, φ, t), which stands
for the reputation obtained from principal γ about principal β; for clarity, we call recommendation
each reputation that a principal obtains from other principals. The weight that is assigned to each
single recommendation is the current direct trust between α, the principal receiving the recommendations,
and the recommender which provided that recommendation, within the context of "provisioning of rec-
ommendations within context φ", which is represented by the notation j(φ). Whenever fresh
trust information is not available, the trust decay applies as in the case of direct trust.
I(α, β, φ, t')R' =
  0                                                                          if R' = ∅
  I(α, β, φ, t)R · δφ(t' − t)                                                if R' = R
  [Σγ∈Γ D(α, γ, j(φ), t')E · R(γ, β, φ, t')] / [Σγ∈Γ D(α, γ, j(φ), t')]      otherwise
(9)
Equation (10) defines reputation as the convex combination of direct and indirect trust; the
trust balancing factor, represented by the notation ψ, indicates the subjective weight that
a specific principal assigns to direct trust with respect to indirect trust; the cognitive model for
trust balance is described below in this section. Moreover, in this equation E and R indicate,
respectively, the set of trust evaluations directly performed by the truster and the set of recom-
mendations that he collected from a set Γ of recommenders. Similarly to what happens for both
direct and indirect trust, if no new trust evaluation is available, it is possible to compute the
current reputation from the previously computed value.
R(α, β, φ, t')E',R' =
  R(α, β, φ, t)E,R · δφ(t' − t)                              if E' = E ∧ R' = R
  ψ · D(α, β, φ, t')E' + (1 − ψ) · I(α, β, φ, t')R'          otherwise
(10)
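For illustration, the fresh-information cases of (9) and (10) can be sketched as follows (a minimal sketch; the handling of the decay-only cases follows the same pattern shown for (8) above):

def indirect_trust(recommendations, weights):
    """Indirect trust (9), fresh-information case: the average of the received
    recommendations R(gamma, beta, phi, t'), each weighted by the direct trust
    D(alpha, gamma, j(phi), t') placed in the corresponding recommender gamma."""
    total = sum(weights)
    if not recommendations or total == 0:
        return 0.0                                    # R = empty set
    return sum(w * r for w, r in zip(weights, recommendations)) / total

def reputation(direct, indirect, psi):
    """Reputation (10): the convex combination of direct and indirect trust,
    weighted by the trust balancing factor psi in [0, 1] (see Sec. 2.2)."""
    return psi * direct + (1 - psi) * indirect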
Let any two principals carry out an interaction; after that interaction, each of them can as-
sociate a trust value with the other principal, according to his behaviour during that interaction.
However, a new trust value does not only contribute to computing the direct trust between a truster
and a trustee; it is also used for computing the direct trust between the truster and the recom-
menders. Basically, the smaller the difference between a recommendation and the trustee's
direct trust (updated with the latest trust value), the better the reputation that the truster will associate with
that recommender. Thus, given a truster α, a recommender β, and a trustee γ,
the trust value tv that α associates with a recommendation from β regarding γ's trustworthiness
within a context φ is defined in (11), and the new direct trust degree between the truster and the recommender is
computed according to (8) and (11).
tv = 1 − |R(β, γ, φ, t') − D(α, γ, φ, t')| (11)
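A one-line sketch of (11) (illustrative only; the function name is hypothetical):

def recommendation_feedback(recommendation, own_direct_trust):
    """Trust value (11) assigned to a recommender: the closer its recommendation
    R(beta, gamma, phi, t') is to the truster's own direct trust D(alpha, gamma, phi, t')
    in the trustee, the higher the evaluation that is fed back, via (8), into the
    direct trust the truster places in that recommender."""
    return 1.0 - abs(recommendation - own_direct_trust)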
2.1 Modeling Trust Stability
According to [Wei61], to correctly represent the trustworthiness of a principal through subjec-
tive probabilities, it is important to have a cognitive attribution process with which to interpret the
outcomes of a truster's reliance on a given trustee and that trustee's performance (i.e., failure or
success); in particular, the effect that a given trustee's failure or success has on the trust that the
truster places in the trustee depends on the causal attribution that the truster associates with the
event. Hence, any event (i.e., success or failure) can be ascribed either
to factors internal to the subject, or to environmental, external causes; moreover, these
events may either be occasional or depend on stable properties of the principal or the environ-
ment. However, since we are not provided with internal information about the trustee and
our reputation model is intended to estimate the truster's subjective reliance on the trustee's behav-
iour, we decide not to distinguish between individual and environmental responsibilities but to
only distinguish between stable and occasional behaviours. In fact, the actual goal of the truster
is to assess the extent of overall trustworthiness that can be associated with the trustee within a
specific context.
Therefore, trust stability is modeled by (12); here, we want to identify whether an interac-
tion was typical or not and to adapt direct trust accordingly. We assume the
trust values that derive from the trust evaluations of a single principal, within the same
context, to have a Gaussian distribution with mean value tv̄ and standard deviation σ; thus,
given the set E of currently available trust evaluations about a specific principal within a given
context, we define his stable behaviours to be the ones whose trust degrees fall within the in-
terval [tv̄E − σE, tv̄E + σE], where tv̄E is the mean value computed on the set of evaluations E
and σE the respective standard deviation. In (12), the functions focc,pos(E, tv), fstab,pos(E, tv),
fstab,neg(E, tv), and focc,neg(E, tv) are to be instantiated according to the specific behaviour that
the principal wants to implement: focc,pos(E, tv) implements the behaviour corresponding to the
occurrence of occasional evaluations which are better than the stable ones, fstab,pos(E, tv) imple-
ments the behaviour corresponding to the occurrence of stable evaluations which are better than
the expected value, fstab,neg(E, tv) implements the behaviour corresponding to the occurrence
of stable evaluations which are worse than the expected value, and focc,neg(E, tv) implements the
behaviour corresponding to the occurrence of occasional evaluations which are worse than the
stable ones.
σ∅ = 0
σE∪{tv} = sqrt( Σ tvi∈E∪{tv} (tvi − tv̄E∪{tv})² / (|E| + 1) )

ω =
  focc,pos(E, tv)   if tv > tv̄E + σE
  fstab,pos(E, tv)  if tv̄E + σE ≥ tv > tv̄E
  fexp(E, tv)       if tv = tv̄E
  fstab,neg(E, tv)  if tv̄E > tv ≥ tv̄E − σE
  focc,neg(E, tv)   if tv < tv̄E − σE
(12)
It is worth noting that the proposed approach does not require each principal to maintain the
evaluations of every single interaction in which it took part; in order for a truster to maintain
up-to-date statistical information about the interactions engaged in with another principal, it is suffi-
cient for that truster to maintain three pieces of information: the number of engaged
interactions, the sum of the evaluations, and the sum of the squares of each single evaluation.
This information enables one to easily derive the statistical information (i.e., mean and variance),
and can be easily updated with evaluations of further interactions.
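The following sketch illustrates this constant-space bookkeeping (class and method names are ours, not the prototype's):

class EvaluationStats:
    """Constant-space summary of the evaluation set E: only the count, the sum,
    and the sum of squares are stored, which suffice to recover the mean and the
    standard deviation used in (12)."""
    def __init__(self):
        self.n = 0
        self.s = 0.0      # sum of trust values
        self.s2 = 0.0     # sum of squared trust values

    def add(self, tv: float) -> None:
        self.n += 1
        self.s += tv
        self.s2 += tv * tv

    def mean(self) -> float:
        return self.s / self.n if self.n else 0.0

    def std(self) -> float:
        if self.n == 0:
            return 0.0
        m = self.mean()
        return max(self.s2 / self.n - m * m, 0.0) ** 0.5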
In order for our model to be employed within some dependability context, we believe that a cau-
tious and responsible behaviour should be modeled by the trust stability. For instance, if the fresh
trust value indicates an unusually low-dependable behaviour, then the trust stability value has
to enforce a significant decrease in the associated direct trust; in turn, if it indicates an unusually
high-dependable behaviour, then the trust stability value has to enforce only a negligible increase in
the associated direct trust. The functions below encode this evaluation approach.
focc,pos(E, tv) = 0
fstab,pos(E, tv) = 1 − (tv − tv̄E + σE) / (2σE)
fexp(E, tv) = 1/2
fstab,neg(E, tv) = 1 − (tv − tv̄E + σE) / (2σE)
focc,neg(E, tv) = 1
(13)
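Combining (12) with the cautious instantiation (13), the stability factor can be sketched as a single function (an illustrative rendering; the handling of a degenerate band σE = 0 is our assumption):

def stability_factor(tv, mean, std):
    """Trust stability factor omega per (12)-(13): occasional positive surprises
    are ignored (omega = 0), occasional negative surprises receive full weight
    (omega = 1), and values inside the stable band [mean - std, mean + std]
    are weighted linearly in between (tv == mean gives 1/2)."""
    if std == 0.0:
        return 0.5                      # degenerate band: treat tv as expected
    if tv > mean + std:                 # occasional, better than the stable ones
        return 0.0
    if tv < mean - std:                 # occasional, worse than the stable ones
        return 1.0
    return 1.0 - (tv - mean + std) / (2.0 * std)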
2.2 Modeling Trust Balance
When deciding how much to take received recommendations into account with respect to his
direct experience, an individual relies upon how confident he believes himself to be within the specific
context. Hence, we believe a good model for trust balance to be the self-trust an individual places
in himself for providing recommendations within that context.
Equation (14) describes the model for trust balance; essentially, it models self-confidence and depends
on a principal α, a context φ, and a time t. Trust balance also depends on the size of the set Γ of
recommenders; in fact, the more recommendations are available, the less the direct trust will
be considered when computing the reputation. When there are no recommenders, only direct
trust is taken into account for computing the reputation.
ψα,φ,t,Γ = D(α, α, j(φ), t)E^log(|Γ|+1) (14)
Self-confidence is computed in a way that is similar to how direct trust within jurisdiction
is computed; it is the reflexive direct trust, updated according to the precision with which the cur-
rently available direct trust approximates the computed reputations. Thus, given an interaction
between principals α and β which happened within context φ at time t', the self-confidence is
updated according to the trust value tvφ defined in (15).
tvφ = 1 − |R(α, β, φ, t') − D(α, β, φ, t')| (15)
The trust balancing factor for context φ is updated each time the principal carries out an interaction
within that context. The less the fresh trust value differs from the expected interaction
outcome (i.e., the trustee's reputation), the more the self-confidence will increase. Similarly, the
more the fresh trust value differs from the expected interaction outcome, the more the
self-confidence will decrease.
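A minimal sketch of (14) (illustrative only; the exponent follows the formula above, with psi = 1 when there are no recommenders):

import math

def trust_balance(self_confidence, n_recommenders):
    """Trust balancing factor psi from (14): the principal's self-trust in
    providing recommendations for the context, raised to log(|Gamma| + 1).
    With no recommenders psi = 1, so only direct trust is used; the larger the
    recommender set, the smaller psi and the more weight indirect trust gets."""
    return self_confidence ** math.log(n_recommenders + 1)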
For instance, let us say that in our lifetime each of us statistically undergoes one major surgery;
in this case, it is impossible to assess the competence of a doctor from personal experience alone. By
maintaining the self-competence for each context, each of us who never had a major surgery
will base his estimation entirely on the experience of other individuals. Then,
given a patient, as the number of direct evaluations in that context increases and, consequently,
his personal experience adequately approximates the doctor's behaviour, his direct experience
will become more relevant in determining that estimation.
3 Augmenting Contexts with Attributes
So far, we have described how to evaluate a principal's reputation either in consuming or producing a
given interface or in providing recommendations within a specified context. Hence, if we know
that R(Alice, Nicola, p(smtp), t0) = 0.67, it is still unclear on which aspect the partial unrelia-
bility depends, e.g., whether Nicola's smtp server occasionally allows illegitimate access to emails or
simply exhibits a low availability of the service. When a truster finds a trustee's reputation to be
unsatisfactory, in order for the truster to interact with that trustee it is legitimate for him to be
aware of the origin of that partial unreliability; attributes are introduced to serve this purpose.
We define an attribute to be a property that is relevant within a given context, such as avail-
ability, confidentiality or authentication. Attributes are introduced into our model by defining
associated subcontexts, that are used to specialize the generic contexts.
In Tab. 2 the grammar describing the structure of Φ with attributes is formally described; here,
I indicates the set of interface names, A the set of attribute names, G the set of simple contexts
and C augments G with subcontexts.
Such a design approach enables us to simultaneously maintain both overall reputations and
specialized ones in an efficient manner; in fact, when a principal is assigned a new trust value
in a generic context φ ∈ Φ, if it corresponds to a specialized context, then the corresponding
reputation can be updated and the associated generic context can be retrieved in order for it
to be updated as well. Hence, the revised definition of Φ allows us to manage specialized
Table 2. Structure of Φ augmented with attributes.
I ::= i | h | k
A ::= a | b | d
G ::= p(I) | c(I)
C ::= a(G) | G | authentication
Φ ::= C | j(C)
reputations while preserving the validity of the formulas presented in the previous section. The inverse
is not defined for attributes.
4 Relations with the Socio-Cognitive model
The main difference between the SIR computational model and the socio-cognitive model is the
concept of reputation as the aggregation of direct experience and of the indirect experience encoded
within recommendations; in fact, the socio-cognitive model only specifies trust updates as a con-
sequence of new direct evaluations. In our opinion, recommendations play a central role in
reputation management since, especially when a small number of direct evaluations has been per-
formed, they enable the truster to rely on a larger amount of possibly trustworthy evaluations.
Similarly to the socio-cognitive model, through reputation the SIR model enables a prin-
cipal to approximate the trustworthiness of each possible principal. What in the socio-cognitive
model is represented by DoA and DoW is implementable in SIR by introducing two attributes,
respectively competence and willingness, into all the possible contexts; the first one should be
updated each time an interaction is carried out, with the evaluation of that interaction's outcome,
while the second one should always be updated according to whether the interaction was completed or
not. However, what SIR enables a truster to estimate is not the actual trustee's
trustworthiness in isolation; instead, that trustworthiness is aggregated with the influence that the global environment exerts
on the whole interaction. For example, let competence.p(i) and willingness.p(i) be the contexts
respectively associated with the competence and willingness in providing interface i; then both
(16) and (17) hold. Here, Eβ,p(i),t,S, Eβ,p(i),t,W, and Ep(i),t,Ω indicate, respectively, the event that
expresses the success of β in providing interface i at time t, the event that expresses the willingness of
β to provide interface i at time t, and the event that expresses the absence of failure enforced
by the environment on the production of interface i at time t (we assume the environment influ-
ence to be independent of both the objective competence and the objective willingness of the
trustee).
lim|I|→+∞ R(α, β, competence.p(i), t) = P(Eβ,p(i),t,S) · P(Ep(i),t,Ω) (16)
lim|I|→+∞ R(α, β, willingness.p(i), t) = P(Eβ,p(i),t,W) · P(Ep(i),t,Ω) (17)
That is, the more evaluations or recommendations are collected, the better the reputation com-
puted by SIR approximates the overall probability of success within the evaluated context. How-
ever, it is worth noting the decomposition below, which relates the probability of success with
which interface i will be provided by principal β at time t to both the competence and the will-
ingness of that trustee.
P(Eβ,p(i),t,S) = P(Eβ,p(i),t,S | Eβ,p(i),t,W) · P(Eβ,p(i),t,W)
  + P(Eβ,p(i),t,S | ¬Eβ,p(i),t,W) · P(¬Eβ,p(i),t,W)
However, since there is no probability of success if the trustee does not engage in the task, the overall
probability of success is expressed by (18).
P(Eβ,p(i),t,S) = P(Eβ,p(i),t,S | Eβ,p(i),t,W) · P(Eβ,p(i),t,W) (18)
Chapter 4
Case Study: Trust-Aware Naming Service (TAw)
1 Motivation
The naming service is a fundamental part of any component-oriented architecture. It implements
location transparency between components, providing each of them with references for invoking op-
erations implemented by remote components. Specifically, the naming service associates a specific
name with each component instance; a client component which requires the service of a
remote component instance queries the naming service with the name associated with the required
instance in order to obtain its remote reference. In other words, a name server
implements a one-to-one map between names and instances. Similarly, service-oriented architec-
tures implement location transparency by means of directory services. Although implementing a
one-to-many relation between names and references, directory services likewise maintain a map between
names and the associated service instances.
In a dynamic application context, such as a generic service-oriented application scenario, several
factors may affect service provisioning: for instance, component functionalities may be subject
to failures and recoveries, or service providers may dynamically change their service provisioning
strategies to better achieve their own goals. Within such contexts, each client component (or
service) may greatly benefit from information about which service
provider could better meet its needs. Among the currently available technologies, neither nam-
ing nor directory services provide the client component, or service, with information about the
trustworthiness of the other parties. Such information would be useful for allowing a service to
evaluate the dependability (i.e., availability, reliability) of available services, as well as to activate
security mechanisms when a number of entities engage in an interaction (i.e., based on the trust
degree each party places in the others, the security measures to be employed can be agreed upon
so as to avoid illegitimate behaviour and optimize system performance).
Hence, in this chapter we present our Trust-Aware Naming Service (TAw), a hybrid peer-to-
peer architecture which has specifically been designed to enable existing naming and direc-
tory services with the abstraction of trust. The TAw architecture implements a middle tier be-
tween the client application and the naming service. Within this tier, information about principal
behaviours is collected and aggregated to form the abstraction of reputation; this meta-
information then complements the results returned by the naming service, enabling the principals to
compare the possible trustees with each other in order to find the most trustworthy one. Because
of its application context, we require TAw to be both scalable and flexible: scalability entails the
ability of the system to scale as the number of entities within the system increases, and flexibil-
ity entails the compatibility of TAw with different naming service architectures, as required by
a multi-institutional environment. It is worth noting that TAw does not play any active role in
managing computer security; it only provides each entity in the system with information about
the behaviour of other entities, allowing them to possibly avoid engaging in interactions with
misbehaving entities.
2 The TAw Architecture
TAw is a hybrid peer-to-peer architecture in which each entity (i.e., service producer or con-
sumer) is associated with a TAw peer. TAw peers exchange trust information about each other;
locally, each peer maintains the available trust information, according to a specific trust model
(the current prototype implements the SIR reputation model), in order to approximate the trust-
worthiness of the entities with which it is engaging in interactions. The Virtual Society Service
(VSS) is responsible for enabling the peers to aggregate themselves into a virtual society; more-
over, it enforces an access control policy that is specific to the virtual society. Its purpose is both
to allow only legitimate entities to participate in the exchange of trust information, and to pre-
vent any entity from owning multiple instances of TAw peers simultaneously. Each TAw peer
both interfaces the entity with the naming service in a trust-aware manner and, simultaneously,
implements the social behaviour on behalf of that entity, i.e., it locally collects and maintains trust
information, computes reputations, and propagates them on the entity's behalf. A piece of trust informa-
tion held by a peer either originates from the owner entity as the result of concluded interactions,
or has been obtained via the Trust Propagation Protocol.
Owing to its completely decentralized design approach, TAw inherently meets scalability, effi-
ciency, and fault-tolerance requirements; specifically, it implements both a hybrid peer-to-peer interaction para-
digm and an epidemic-like information dissemination technique (see below). Moreover, in order
for our architecture to meet adaptiveness, robustness, and scalability, the SIR reputation model
has been implemented within TAw.
Within the framework described in the section "A Generic Trust Management Framework", TAw
implements the trust management system. Because of the possible diversity of application
requirements, it is up to the application developer to provide the application with trust evaluation
and dynamic trust criteria.
2.1 TAw Peer
The TAw peer is the abstraction of a principal within a virtual society. Each TAw peer (i) im-
plements the social behaviour on behalf of its entity, i.e., it locally collects and maintains trust
information, computes reputations, and propagates them on the entity's behalf, and (ii) employs such in-
formation to enable the associated entity to access the naming service in a trust-aware manner.
A piece of trust information held by a peer either originates from the owner entity as the result
of concluded interactions, or has been obtained via the Trust Propagation Protocol (TPP), which
we describe below.
Each peer embodies a data structure, namely the trust repository, that is used to maintain trust
information; basically, it is a collection of tuples (α, β, φ, t, p) where α ∈ P is the truster, β ∈ P
is the trustee, φ ∈ Φ is the context, t is the time to which the trust degree refers, and p ∈ [0, 1] is
the trust degree associated with R(α, β, φ, t). Each TAw peer autonomously disseminates trust
information towards a subset of its neighborhood through the trust propagation protocol.
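For illustration, one entry of such a trust repository could be represented as follows (a sketch only; field and class names are hypothetical and do not reflect the prototype's actual classes):

from dataclasses import dataclass

@dataclass
class TrustTuple:
    """One entry of a peer's trust repository: the tuple (alpha, beta, phi, t, p)
    described above."""
    truster: str       # alpha in P
    trustee: str       # beta in P
    context: str       # phi in Phi, e.g. "p(smtp)" or "j(p(smtp))"
    time: float        # t, the time the degree refers to
    degree: float      # p in [0, 1], the value of R(alpha, beta, phi, t)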
When speaking of interactions between application components deployed in multi-
institutional environments, interoperability is a key issue to address. For this reason, we require
TAw to be suitable for accommodating any possible naming service in a virtual enterprise sce-
nario. To meet this requirement we divided the TAw peer into two parts, namely the TAw Core
Peer and the TAw Peer. The former implements the trust model and metrics common to
all the TAw Peers, providing them with a common way to compute and understand trust infor-
mation. The latter is a wrapper which employs the TAw Core Peer to enable a specific naming
service with trust requirements.
TAw Core Peer: The TawCorePeer class is responsible for providing the TAw peer with facili-
ties to implement all the trust-related operations in a manner that is independent of the
specific naming technology. The TawCorePeer implements the TAwCoreInterface (see the
Detecting fraud in cellular telephone networks
 
Satellite hospitals
Satellite hospitalsSatellite hospitals
Satellite hospitals
 
Final thesis paper Digital Optical fiber link design
Final thesis paper Digital Optical fiber link designFinal thesis paper Digital Optical fiber link design
Final thesis paper Digital Optical fiber link design
 
Design horn-antenna using hfss
Design horn-antenna using hfssDesign horn-antenna using hfss
Design horn-antenna using hfss
 
IP PBX
IP PBXIP PBX
IP PBX
 
Wireless Accident Identification
Wireless Accident IdentificationWireless Accident Identification
Wireless Accident Identification
 
BITS MS- Dissertation Final Report
BITS MS- Dissertation Final ReportBITS MS- Dissertation Final Report
BITS MS- Dissertation Final Report
 
Radar ppt
Radar pptRadar ppt
Radar ppt
 
1G vs 2G vs 3G vs 4G vs 5G
1G vs 2G vs 3G vs 4G vs 5G1G vs 2G vs 3G vs 4G vs 5G
1G vs 2G vs 3G vs 4G vs 5G
 
Construction of telecommunication towers
Construction of telecommunication towersConstruction of telecommunication towers
Construction of telecommunication towers
 
analysis and design of telecommunication tower
analysis and design of telecommunication toweranalysis and design of telecommunication tower
analysis and design of telecommunication tower
 

Similar to Global Computing: an Analysis of Trust and Wireless Communications

A Survey on Trust Inference Network for Personalized Use from Online Data Rating
A Survey on Trust Inference Network for Personalized Use from Online Data RatingA Survey on Trust Inference Network for Personalized Use from Online Data Rating
A Survey on Trust Inference Network for Personalized Use from Online Data RatingIRJET Journal
 
Trust Based Content Distribution for Peer-ToPeer Overlay Networks
Trust Based Content Distribution for Peer-ToPeer Overlay NetworksTrust Based Content Distribution for Peer-ToPeer Overlay Networks
Trust Based Content Distribution for Peer-ToPeer Overlay NetworksIJNSA Journal
 
Location Privacy Protection Mechanisms using Order-Retrievable Encryption for...
Location Privacy Protection Mechanisms using Order-Retrievable Encryption for...Location Privacy Protection Mechanisms using Order-Retrievable Encryption for...
Location Privacy Protection Mechanisms using Order-Retrievable Encryption for...IRJET Journal
 
A STUDY ON PEER TO PEER NETWORK IN CURRENT NETWORKING
A STUDY ON PEER TO PEER NETWORK IN CURRENT NETWORKING A STUDY ON PEER TO PEER NETWORK IN CURRENT NETWORKING
A STUDY ON PEER TO PEER NETWORK IN CURRENT NETWORKING IAEME Publication
 
HI5030 Business Systems Analysis And Design.docx
HI5030 Business Systems Analysis And Design.docxHI5030 Business Systems Analysis And Design.docx
HI5030 Business Systems Analysis And Design.docxwrite4
 
Travel Recommendation Approach using Collaboration Filter in Social Networking
Travel Recommendation Approach using Collaboration Filter in Social NetworkingTravel Recommendation Approach using Collaboration Filter in Social Networking
Travel Recommendation Approach using Collaboration Filter in Social NetworkingIRJET Journal
 
The Revolution Of Cloud Computing
The Revolution Of Cloud ComputingThe Revolution Of Cloud Computing
The Revolution Of Cloud ComputingCarmen Sanborn
 
Unification Algorithm in Hefty Iterative Multi-tier Classifiers for Gigantic ...
Unification Algorithm in Hefty Iterative Multi-tier Classifiers for Gigantic ...Unification Algorithm in Hefty Iterative Multi-tier Classifiers for Gigantic ...
Unification Algorithm in Hefty Iterative Multi-tier Classifiers for Gigantic ...Editor IJAIEM
 
List of Journal after read the abstract.docx
List of Journal after read the abstract.docxList of Journal after read the abstract.docx
List of Journal after read the abstract.docxAdieYadie1
 
IRJET- Cross System User Modeling and Personalization on the Social Web
IRJET- Cross System User Modeling and Personalization on the Social WebIRJET- Cross System User Modeling and Personalization on the Social Web
IRJET- Cross System User Modeling and Personalization on the Social WebIRJET Journal
 
Framework for opinion as a service on review data of customer using semantics...
Framework for opinion as a service on review data of customer using semantics...Framework for opinion as a service on review data of customer using semantics...
Framework for opinion as a service on review data of customer using semantics...IJECEIAES
 
Discovering Influential User by Coupling Multiplex Heterogeneous OSN’S
Discovering Influential User by Coupling Multiplex Heterogeneous OSN’SDiscovering Influential User by Coupling Multiplex Heterogeneous OSN’S
Discovering Influential User by Coupling Multiplex Heterogeneous OSN’SIRJET Journal
 
Survey on Peer to Peer Car Sharing System Using Blockchain
Survey on Peer to Peer Car Sharing System Using BlockchainSurvey on Peer to Peer Car Sharing System Using Blockchain
Survey on Peer to Peer Car Sharing System Using BlockchainIRJET Journal
 
Data mining java titles adrit solutions
Data mining java titles adrit solutionsData mining java titles adrit solutions
Data mining java titles adrit solutionsAdrit Techno Solutions
 
Ncct Ieee Software Abstract Collection Volume 1 50+ Abst
Ncct   Ieee Software Abstract Collection Volume 1   50+ AbstNcct   Ieee Software Abstract Collection Volume 1   50+ Abst
Ncct Ieee Software Abstract Collection Volume 1 50+ Abstncct
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)IJERD Editor
 
A Design Theory For Digital Platforms Supporting Online Communities A Multip...
A Design Theory For Digital Platforms Supporting Online Communities  A Multip...A Design Theory For Digital Platforms Supporting Online Communities  A Multip...
A Design Theory For Digital Platforms Supporting Online Communities A Multip...Andrew Parish
 
IRJET- Predicting Social Network Communities Structure Changes and Detection ...
IRJET- Predicting Social Network Communities Structure Changes and Detection ...IRJET- Predicting Social Network Communities Structure Changes and Detection ...
IRJET- Predicting Social Network Communities Structure Changes and Detection ...IRJET Journal
 

Similar to Global Computing: an Analysis of Trust and Wireless Communications (20)

A Survey on Trust Inference Network for Personalized Use from Online Data Rating
A Survey on Trust Inference Network for Personalized Use from Online Data RatingA Survey on Trust Inference Network for Personalized Use from Online Data Rating
A Survey on Trust Inference Network for Personalized Use from Online Data Rating
 
Trust Based Content Distribution for Peer-ToPeer Overlay Networks
Trust Based Content Distribution for Peer-ToPeer Overlay NetworksTrust Based Content Distribution for Peer-ToPeer Overlay Networks
Trust Based Content Distribution for Peer-ToPeer Overlay Networks
 
Location Privacy Protection Mechanisms using Order-Retrievable Encryption for...
Location Privacy Protection Mechanisms using Order-Retrievable Encryption for...Location Privacy Protection Mechanisms using Order-Retrievable Encryption for...
Location Privacy Protection Mechanisms using Order-Retrievable Encryption for...
 
A STUDY ON PEER TO PEER NETWORK IN CURRENT NETWORKING
A STUDY ON PEER TO PEER NETWORK IN CURRENT NETWORKING A STUDY ON PEER TO PEER NETWORK IN CURRENT NETWORKING
A STUDY ON PEER TO PEER NETWORK IN CURRENT NETWORKING
 
Ijciet 08 02_003
Ijciet 08 02_003Ijciet 08 02_003
Ijciet 08 02_003
 
HI5030 Business Systems Analysis And Design.docx
HI5030 Business Systems Analysis And Design.docxHI5030 Business Systems Analysis And Design.docx
HI5030 Business Systems Analysis And Design.docx
 
Travel Recommendation Approach using Collaboration Filter in Social Networking
Travel Recommendation Approach using Collaboration Filter in Social NetworkingTravel Recommendation Approach using Collaboration Filter in Social Networking
Travel Recommendation Approach using Collaboration Filter in Social Networking
 
The Revolution Of Cloud Computing
The Revolution Of Cloud ComputingThe Revolution Of Cloud Computing
The Revolution Of Cloud Computing
 
Unification Algorithm in Hefty Iterative Multi-tier Classifiers for Gigantic ...
Unification Algorithm in Hefty Iterative Multi-tier Classifiers for Gigantic ...Unification Algorithm in Hefty Iterative Multi-tier Classifiers for Gigantic ...
Unification Algorithm in Hefty Iterative Multi-tier Classifiers for Gigantic ...
 
List of Journal after read the abstract.docx
List of Journal after read the abstract.docxList of Journal after read the abstract.docx
List of Journal after read the abstract.docx
 
IRJET- Cross System User Modeling and Personalization on the Social Web
IRJET- Cross System User Modeling and Personalization on the Social WebIRJET- Cross System User Modeling and Personalization on the Social Web
IRJET- Cross System User Modeling and Personalization on the Social Web
 
Framework for opinion as a service on review data of customer using semantics...
Framework for opinion as a service on review data of customer using semantics...Framework for opinion as a service on review data of customer using semantics...
Framework for opinion as a service on review data of customer using semantics...
 
Discovering Influential User by Coupling Multiplex Heterogeneous OSN’S
Discovering Influential User by Coupling Multiplex Heterogeneous OSN’SDiscovering Influential User by Coupling Multiplex Heterogeneous OSN’S
Discovering Influential User by Coupling Multiplex Heterogeneous OSN’S
 
Prototype Models of IoTs Interoperability
Prototype Models of IoTs InteroperabilityPrototype Models of IoTs Interoperability
Prototype Models of IoTs Interoperability
 
Survey on Peer to Peer Car Sharing System Using Blockchain
Survey on Peer to Peer Car Sharing System Using BlockchainSurvey on Peer to Peer Car Sharing System Using Blockchain
Survey on Peer to Peer Car Sharing System Using Blockchain
 
Data mining java titles adrit solutions
Data mining java titles adrit solutionsData mining java titles adrit solutions
Data mining java titles adrit solutions
 
Ncct Ieee Software Abstract Collection Volume 1 50+ Abst
Ncct   Ieee Software Abstract Collection Volume 1   50+ AbstNcct   Ieee Software Abstract Collection Volume 1   50+ Abst
Ncct Ieee Software Abstract Collection Volume 1 50+ Abst
 
International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)International Journal of Engineering Research and Development (IJERD)
International Journal of Engineering Research and Development (IJERD)
 
A Design Theory For Digital Platforms Supporting Online Communities A Multip...
A Design Theory For Digital Platforms Supporting Online Communities  A Multip...A Design Theory For Digital Platforms Supporting Online Communities  A Multip...
A Design Theory For Digital Platforms Supporting Online Communities A Multip...
 
IRJET- Predicting Social Network Communities Structure Changes and Detection ...
IRJET- Predicting Social Network Communities Structure Changes and Detection ...IRJET- Predicting Social Network Communities Structure Changes and Detection ...
IRJET- Predicting Social Network Communities Structure Changes and Detection ...
 

More from Nicola Mezzetti

La Trasformazione Digitale al Servizio della Ripresa
La Trasformazione Digitale al Servizio della RipresaLa Trasformazione Digitale al Servizio della Ripresa
La Trasformazione Digitale al Servizio della RipresaNicola Mezzetti
 
Formazione aziendale: focus sui fabbisogni
Formazione aziendale: focus sui fabbisogniFormazione aziendale: focus sui fabbisogni
Formazione aziendale: focus sui fabbisogniNicola Mezzetti
 
United for Change - Intervento Camera Penale Pistoia 11 02 2020
United for Change - Intervento Camera Penale Pistoia 11 02 2020United for Change - Intervento Camera Penale Pistoia 11 02 2020
United for Change - Intervento Camera Penale Pistoia 11 02 2020Nicola Mezzetti
 
La Formazione, Leva del Cambiamento Organizzativo
La Formazione, Leva del Cambiamento OrganizzativoLa Formazione, Leva del Cambiamento Organizzativo
La Formazione, Leva del Cambiamento OrganizzativoNicola Mezzetti
 
La Trasformazione Digitale al servizio della Cooperazione
La Trasformazione Digitale al servizio della CooperazioneLa Trasformazione Digitale al servizio della Cooperazione
La Trasformazione Digitale al servizio della CooperazioneNicola Mezzetti
 
Sviluppare un percorso di Trasformazione Digitale
Sviluppare un percorso di Trasformazione DigitaleSviluppare un percorso di Trasformazione Digitale
Sviluppare un percorso di Trasformazione DigitaleNicola Mezzetti
 
Intervento United for Change al Open Day UCPI 2019
Intervento United for Change al Open Day UCPI 2019Intervento United for Change al Open Day UCPI 2019
Intervento United for Change al Open Day UCPI 2019Nicola Mezzetti
 
United For Change - conferenza stampa 24/01/2019
United For Change - conferenza stampa 24/01/2019United For Change - conferenza stampa 24/01/2019
United For Change - conferenza stampa 24/01/2019Nicola Mezzetti
 
La trasformazione digitale delle organizzazioni
La trasformazione digitale delle organizzazioniLa trasformazione digitale delle organizzazioni
La trasformazione digitale delle organizzazioniNicola Mezzetti
 
Articolo qualita 3_2018_nicola_mezzetti
Articolo qualita 3_2018_nicola_mezzettiArticolo qualita 3_2018_nicola_mezzetti
Articolo qualita 3_2018_nicola_mezzettiNicola Mezzetti
 
Innovazione nella quarta rivoluzione industriale
Innovazione nella quarta rivoluzione industrialeInnovazione nella quarta rivoluzione industriale
Innovazione nella quarta rivoluzione industrialeNicola Mezzetti
 
The process approach (and business process management)
The process approach (and business process management)The process approach (and business process management)
The process approach (and business process management)Nicola Mezzetti
 
Tools and Techniques of Quality Planning
Tools and Techniques of Quality PlanningTools and Techniques of Quality Planning
Tools and Techniques of Quality PlanningNicola Mezzetti
 
2017 SMAU Bologna Industria 4.0
2017 SMAU Bologna Industria 4.02017 SMAU Bologna Industria 4.0
2017 SMAU Bologna Industria 4.0Nicola Mezzetti
 
Progettazione di un sistema di gestione qualità per una startup
Progettazione di un sistema di gestione qualità per una startupProgettazione di un sistema di gestione qualità per una startup
Progettazione di un sistema di gestione qualità per una startupNicola Mezzetti
 
Nicola Mezzetti short CV EN
Nicola Mezzetti short CV ENNicola Mezzetti short CV EN
Nicola Mezzetti short CV ENNicola Mezzetti
 
Nicola Mezzetti short CV ITA
Nicola Mezzetti short CV ITANicola Mezzetti short CV ITA
Nicola Mezzetti short CV ITANicola Mezzetti
 

More from Nicola Mezzetti (20)

La Trasformazione Digitale al Servizio della Ripresa
La Trasformazione Digitale al Servizio della RipresaLa Trasformazione Digitale al Servizio della Ripresa
La Trasformazione Digitale al Servizio della Ripresa
 
Formazione aziendale: focus sui fabbisogni
Formazione aziendale: focus sui fabbisogniFormazione aziendale: focus sui fabbisogni
Formazione aziendale: focus sui fabbisogni
 
United for Change - Intervento Camera Penale Pistoia 11 02 2020
United for Change - Intervento Camera Penale Pistoia 11 02 2020United for Change - Intervento Camera Penale Pistoia 11 02 2020
United for Change - Intervento Camera Penale Pistoia 11 02 2020
 
La Formazione, Leva del Cambiamento Organizzativo
La Formazione, Leva del Cambiamento OrganizzativoLa Formazione, Leva del Cambiamento Organizzativo
La Formazione, Leva del Cambiamento Organizzativo
 
La Trasformazione Digitale al servizio della Cooperazione
La Trasformazione Digitale al servizio della CooperazioneLa Trasformazione Digitale al servizio della Cooperazione
La Trasformazione Digitale al servizio della Cooperazione
 
Sviluppare un percorso di Trasformazione Digitale
Sviluppare un percorso di Trasformazione DigitaleSviluppare un percorso di Trasformazione Digitale
Sviluppare un percorso di Trasformazione Digitale
 
Intervento United for Change al Open Day UCPI 2019
Intervento United for Change al Open Day UCPI 2019Intervento United for Change al Open Day UCPI 2019
Intervento United for Change al Open Day UCPI 2019
 
United For Change - conferenza stampa 24/01/2019
United For Change - conferenza stampa 24/01/2019United For Change - conferenza stampa 24/01/2019
United For Change - conferenza stampa 24/01/2019
 
La trasformazione digitale delle organizzazioni
La trasformazione digitale delle organizzazioniLa trasformazione digitale delle organizzazioni
La trasformazione digitale delle organizzazioni
 
Articolo qualita 3_2018_nicola_mezzetti
Articolo qualita 3_2018_nicola_mezzettiArticolo qualita 3_2018_nicola_mezzetti
Articolo qualita 3_2018_nicola_mezzetti
 
Innovazione nella quarta rivoluzione industriale
Innovazione nella quarta rivoluzione industrialeInnovazione nella quarta rivoluzione industriale
Innovazione nella quarta rivoluzione industriale
 
The process approach (and business process management)
The process approach (and business process management)The process approach (and business process management)
The process approach (and business process management)
 
Tools and Techniques of Quality Planning
Tools and Techniques of Quality PlanningTools and Techniques of Quality Planning
Tools and Techniques of Quality Planning
 
2017 SMAU Bologna Industria 4.0
2017 SMAU Bologna Industria 4.02017 SMAU Bologna Industria 4.0
2017 SMAU Bologna Industria 4.0
 
Procurement Management
Procurement ManagementProcurement Management
Procurement Management
 
Progettazione di un sistema di gestione qualità per una startup
Progettazione di un sistema di gestione qualità per una startupProgettazione di un sistema di gestione qualità per una startup
Progettazione di un sistema di gestione qualità per una startup
 
Innovazione etica
Innovazione eticaInnovazione etica
Innovazione etica
 
Nicola Mezzetti short CV EN
Nicola Mezzetti short CV ENNicola Mezzetti short CV EN
Nicola Mezzetti short CV EN
 
Nicola Mezzetti short CV ITA
Nicola Mezzetti short CV ITANicola Mezzetti short CV ITA
Nicola Mezzetti short CV ITA
 
Teaching innovation
Teaching innovationTeaching innovation
Teaching innovation
 

Recently uploaded

Bioteknologi kelas 10 kumer smapsa .pptx
Bioteknologi kelas 10 kumer smapsa .pptxBioteknologi kelas 10 kumer smapsa .pptx
Bioteknologi kelas 10 kumer smapsa .pptx023NiWayanAnggiSriWa
 
Harmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms PresentationHarmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms Presentationtahreemzahra82
 
Base editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editingBase editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editingNetHelix
 
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In DubaiDubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubaikojalkojal131
 
Carbon Dioxide Capture and Storage (CSS)
Carbon Dioxide Capture and Storage (CSS)Carbon Dioxide Capture and Storage (CSS)
Carbon Dioxide Capture and Storage (CSS)Tamer Koksalan, PhD
 
Pests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdfPests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdfPirithiRaju
 
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxSTOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxMurugaveni B
 
User Guide: Magellan MX™ Weather Station
User Guide: Magellan MX™ Weather StationUser Guide: Magellan MX™ Weather Station
User Guide: Magellan MX™ Weather StationColumbia Weather Systems
 
FREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naFREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naJASISJULIANOELYNV
 
Microteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical EngineeringMicroteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical EngineeringPrajakta Shinde
 
OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024innovationoecd
 
《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》rnrncn29
 
Forensic limnology of diatoms by Sanjai.pptx
Forensic limnology of diatoms by Sanjai.pptxForensic limnology of diatoms by Sanjai.pptx
Forensic limnology of diatoms by Sanjai.pptxkumarsanjai28051
 
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxMicrophone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxpriyankatabhane
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024AyushiRastogi48
 
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)Columbia Weather Systems
 
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxmalonesandreagweneth
 
preservation, maintanence and improvement of industrial organism.pptx
preservation, maintanence and improvement of industrial organism.pptxpreservation, maintanence and improvement of industrial organism.pptx
preservation, maintanence and improvement of industrial organism.pptxnoordubaliya2003
 
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.PraveenaKalaiselvan1
 

Recently uploaded (20)

Bioteknologi kelas 10 kumer smapsa .pptx
Bioteknologi kelas 10 kumer smapsa .pptxBioteknologi kelas 10 kumer smapsa .pptx
Bioteknologi kelas 10 kumer smapsa .pptx
 
Harmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms PresentationHarmful and Useful Microorganisms Presentation
Harmful and Useful Microorganisms Presentation
 
Base editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editingBase editing, prime editing, Cas13 & RNA editing and organelle base editing
Base editing, prime editing, Cas13 & RNA editing and organelle base editing
 
Volatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -IVolatile Oils Pharmacognosy And Phytochemistry -I
Volatile Oils Pharmacognosy And Phytochemistry -I
 
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In DubaiDubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
Dubai Calls Girl Lisa O525547819 Lexi Call Girls In Dubai
 
Carbon Dioxide Capture and Storage (CSS)
Carbon Dioxide Capture and Storage (CSS)Carbon Dioxide Capture and Storage (CSS)
Carbon Dioxide Capture and Storage (CSS)
 
Pests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdfPests of Bengal gram_Identification_Dr.UPR.pdf
Pests of Bengal gram_Identification_Dr.UPR.pdf
 
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptxSTOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
STOPPED FLOW METHOD & APPLICATION MURUGAVENI B.pptx
 
User Guide: Magellan MX™ Weather Station
User Guide: Magellan MX™ Weather StationUser Guide: Magellan MX™ Weather Station
User Guide: Magellan MX™ Weather Station
 
FREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by naFREE NURSING BUNDLE FOR NURSES.PDF by na
FREE NURSING BUNDLE FOR NURSES.PDF by na
 
Microteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical EngineeringMicroteaching on terms used in filtration .Pharmaceutical Engineering
Microteaching on terms used in filtration .Pharmaceutical Engineering
 
OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024OECD bibliometric indicators: Selected highlights, April 2024
OECD bibliometric indicators: Selected highlights, April 2024
 
《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》《Queensland毕业文凭-昆士兰大学毕业证成绩单》
《Queensland毕业文凭-昆士兰大学毕业证成绩单》
 
Forensic limnology of diatoms by Sanjai.pptx
Forensic limnology of diatoms by Sanjai.pptxForensic limnology of diatoms by Sanjai.pptx
Forensic limnology of diatoms by Sanjai.pptx
 
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptxMicrophone- characteristics,carbon microphone, dynamic microphone.pptx
Microphone- characteristics,carbon microphone, dynamic microphone.pptx
 
Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024Vision and reflection on Mining Software Repositories research in 2024
Vision and reflection on Mining Software Repositories research in 2024
 
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
User Guide: Pulsar™ Weather Station (Columbia Weather Systems)
 
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptxLIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
LIGHT-PHENOMENA-BY-CABUALDIONALDOPANOGANCADIENTE-CONDEZA (1).pptx
 
preservation, maintanence and improvement of industrial organism.pptx
preservation, maintanence and improvement of industrial organism.pptxpreservation, maintanence and improvement of industrial organism.pptx
preservation, maintanence and improvement of industrial organism.pptx
 
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
BIOETHICS IN RECOMBINANT DNA TECHNOLOGY.
 

Global Computing: an Analysis of Trust and Wireless Communications

Nicola Mezzetti, Technical Report UBLCS-2006-09, April 2006
Dept. of Computer Science, University of Bologna, 7, Mura Anteo Zamboni, 40127 Bologna, ITALY; e-mail: nicola.mezzetti@cs.unibo.it, homepage: http://www.cs.unibo.it/~mezzettn

Abstract

In this thesis we analyze problems of trust modelling and implementation in the context of Global Computing, and the problem of how implementations of trust protocols for wireless settings can be formally verified. A framework for supporting dynamic trust relationships is important for the definition of trustworthy protocols for wireless computing; in such a context, a dynamic set of possibly mutually distrustful entities (i.e., individuals, services) interact with each other, consuming or producing services, in order to pursue their own goals. In this context, traditional static trust techniques fail to support the trustworthy execution of interaction protocols among all the possible interacting entities. It is thus necessary to develop a decentralized dynamic-trust system that can be independently implemented by any entity in the global computing system.

The study and modelling of trust has its roots in sociology, which provides conceptual and qualitative specifications of the manner in which trust forms and evolves. However, current research on trust does not agree on a unique formalization of trust, nor on the requirements it has to meet. A number of dynamic trust models currently exist, and each of them is provided with simulation-based validation techniques whose design is mainly motivated by good sense. Only when this lack is addressed will rigorous techniques and tools for the validation of trust models become available.

The goal of this thesis is twofold: first, we aim at providing a computational model of trust that faithfully represents the socio-cognitive models available in the literature, and at showing its application flexibility in the global computing context; second, we aim at providing formal tools for the specification and verification of trust protocols for wireless systems.

The first point is addressed through the development of the Socially-Inspired Reputation computational model (SIR), which has been designed to monitor and evaluate entity behaviour in an adaptive, robust, and scalable manner. SIR is the first computational model of reputation that implements the socio-cognitive process regulating the establishment of trust relationships in human societies. SIR has been employed in the design, verification, and implementation of a Trust-Aware extension for naming services (TAw), which proves effective in supporting the implementation of trustworthy interaction protocols in open systems. SIR has also been employed in the design and implementation of rawDonkey, a client for the peer-to-peer file-sharing eDonkey network that integrates an autonomous and decentralized monitoring system for reputation evaluation and formation, employed to penalize non-collaborative peers and to optimize the global quality of the file-sharing service.

The second part of this thesis examines wireless systems, whose communication model best represents global computing. We follow the process calculi approach and develop the Calculus of Wireless Systems (CWS). The principal contribution of this work is the development of a reduction semantics and a labelled transition semantics, and the proof of their equivalence.

The novelty of this work lies in the fact that wireless systems implement a communication model which is substantially different from the ones implemented by the process calculi currently proposed in the literature. Hence, for the modelling of wireless communications, it has been necessary to develop a novel technique based on the observation of changes in the communication states of the single devices (i.e., transmission, reception) rather than on the observation of single transmission instances. This approach introduces original aspects into both of our semantics and, consequently, into the demonstration of their equivalence.

At present, the work on trust and the work on wireless systems have not yet been integrated, and the two parts of this thesis may appear to be separate works. However, the link between them lies in the goals that this thesis leaves open, listed below. First, the integration of CWS with a set of tools supporting the specification and verification of trust-related properties, and the consequent formal specification of a reputation system based on SIR; this specification will then be employed in the verification of the properties of the reputation model. Second, our aim is to provide a comprehensive framework for the specification and analysis of decentralized reputation systems supporting the implementation of trustworthy interaction protocols for wireless systems. Even though this was the initial goal of this thesis, we discovered that such an ambitious goal requires an effort far beyond the requirements of a Ph.D. thesis.

Acknowledgments

This progress report is in partial fulfillment of the Ph.D. program. I would like to thank Prof. Davide Sangiorgi, my Ph.D. advisor, and Prof. Fabio Panzieri, my supervisor within the FP5 European Project TAPAS (IST-2001-34069), for the support and advice with which they are helping me in the achievement of my Ph.D. title. Moreover, I would like to thank the European FP5 RTD project TAPAS (IST-2001-34069) and the base project "WebMiNDS" under the FIRB program of the Italian Ministry of Education, University and Research for the partial funding of my Ph.D. program.

Contents

1 Introduction 8
2 Motivation 11
  1 Discussion 11
  2 The Socio-Cognitive Model of Trust 13
    2.1 Considerations about the Socio-Cognitive Trust Model 15
  3 A Conceptual Model for Computing Reputation 16
  4 Related Contributions 17
    4.1 The Web of Trust 19
3 SIR: Socially Inspired Reputation Model 21
  1 Foundations of Trust 21
    1.1 Trust Degrees 23
  2 The Reputation Model 23
    2.1 Modeling Trust Stability 25
    2.2 Modeling Trust Balance 27
  3 Augmenting Contexts with Attributes 27
  4 Relations with the Socio-Cognitive model 28
4 Case Study: Trust-Aware Naming Service (TAw) 30
  1 Motivation 30
  2 The TAw Architecture 31
    2.1 TAw Peer 31
    2.2 Virtual Society Service 36
    2.3 TAw Trust Propagation Protocol 39
    2.4 Experimental Results 41
      2.4.1 Adaptability Test 41
      2.4.2 Robustness Test 43
5 Case Study: rawDonkey 46
  1 Introduction 46
  2 Related Work 47
  3 rawDonkey 48
    3.1 The eDonkey Protocol 48
    3.2 The Architecture of rawDonkey 48
    3.3 Trust Management System 49
      3.3.1 Trust Evaluation Criteria 50
      3.3.2 Gossiping 51
      3.3.3 Trust-Aware Policies 52
6 Considerations 53
  1 Considerations 53
  2 Future Work 54
7 Motivation of the work 55
  1 Discussion 55
  2 Related Work 58
8 The Basic Language 59
  1 Mobile Computing and Wireless Communications 59
    1.1 Examples 60
  2 The Language 62
  3 Semantics 64
  4 Reduction Semantics 65
    4.1 Examples 71
  5 Labelled Transition Semantics 74
    5.1 Examples 77
  6 The Harmony Theorem 79
    6.1 The Harmony Theorem 86
9 The Extended Language 100
  1 Language Extensions 100
  2 Semantic Extensions 102
    2.1 Extensions to the Reduction Semantics 102
    2.2 Extensions to the Labelled Transition Semantics 105
  3 Extensions to the Harmony Theorem 106
  4 The Extended Harmony Theorem 119
  5 Example: The Alternating Bit Protocol 140
    5.1 Formal description 141
    5.2 Execution Cases 142
      5.2.1 Execution in Absence of Collisions 142
      5.2.2 Execution in Presence of Collisions 142
10 Considerations 145
  1 Considerations 145
  2 Future Work 145
11 Conclusions 147
  1 Future Works 148

List of Figures

1 Framework supporting interactions based on dynamical trust. 12
2 A Conceptual Model for Computing Reputation 16
1 UML class diagram for the TAwPeer. 32
2 Directory Information Tree implemented by the VSS service. 36
3 UML class diagram for the Virtual Society Service. 38
4 Reconfiguration test employing, respectively, the cognitive models (a) and arbitrary trust stability and trust balance (b). 40
5 Robustness test employing the cognitive models. 42
6 Robustness test employing arbitrary trust stability and balance. 44
1 Architecture of rawDonkey. 49
2 Interaction between TMS and Core in free rider identification. 51
1 Example of a possible communication between node T and node R. 60
2 Example of coordination between transmitters. 60
3 Example of coordination between transmitters with different transmission radius. 61
4 Example of collision on a receiver. 61
5 Possible ways in which transmissions in intersecting cells may collide. 62
6 A more complex example. 62

List of Tables

1 Structure of Φ augmented with jurisdiction. 22
2 Structure of Φ augmented with attributes. 28
1 Structure of a trust propagation protocol message. 39
2 Pseudocode describing the trust propagation protocol. 40
1 Language for the description of wireless networks 63
2 Language for the description of tagged networks 67
3 Rules for >^c_l. 68
4 Rules for >. 68
5 Structural Congruence. 68
6 Structural Congruence for tagged networks. 69
7 Rules for −→^c_{l,r}. 69
8 Rules for c l. 70
9 Reduction rules for normalization with possible interferences. 70
10 Rules for . 70
11 Labelled transition rules for networks. 75
12 Labelled transition rules for processes. 76
1 Language for the description of wireless networks 101
2 Extension to the language for tagged networks 102
3 Additional rules for >^c_l. 103
4 Additional rules for >. 103
5 Rules for ≡. 103
6 Additional rules for event firing. 104
7 Rules for ≡_E. 104
8 Additional rules for c l. 104
9 Additional rules for . 104
10 Additional rules for . 104
11 Reduction rules for internal actions. 105
12 Extensions to the labelled transition rules for networks. 105
13 Extensions to the labelled transition rules for processes. 106

Chapter 1 Introduction

In recent years, global computing has become an active field of research in distributed and mobile computation supported by infrastructures that are available globally and able to provide uniform services meeting variable requirements in system aspects (e.g., communication, co-operation and mobility, resource usage, security policies and mechanisms), with particular regard to exploiting their universal scale and the programmability of their services. Specifically, global computing aims at enabling ubiquitous support for heterogeneous open systems in which entities (i.e., individuals and resources) dynamically join and leave the system, and employ the facilities of the system for consuming, providing, or collaborating in the provisioning of a set of services. In such ubiquitous contexts, we cannot make assumptions about the resources with which any single computing platform may support the global computation, nor can we make assumptions about the specific networking technology interconnecting each single device. Because of these characteristics, mobile wireless devices are the technology most often taken as a reference in global computing research: they enable the setting up of open and ubiquitous computing infrastructures, and their hardware may vary arbitrarily between sensors the size of one cubic millimeter and laptop computers.

To address the goals of global computing, further research has to be accomplished in order to fill the gap between the currently available theory on distributed systems and the knowledge that is needed to enable cooperative activities and resource integration between mutually untrustworthy devices. During our preliminary research in global computing, we came across a subject that we believe to be important: the establishment of trust relationships between completely unknown entities via wireless communication protocols.

Trust is mandatory to support the dependable implementation of distributed protocols and services; it is even more fundamental when services are implemented through the collaboration of mutually untrustworthy entities (i.e., entities which are not guaranteed to behave dependably). Standard applications for wired distributed systems build their dependability on the existence of static trust relationships (i.e., there are entities, called trusted third parties, which are trusted to always exhibit a dependable behaviour). By contrast, in the context of global computing both the available resources and the entity behaviours are dynamic; for this reason, security in global computing scenarios cannot rely on statically defined trust relationships.

Due to the dynamic nature of global computing scenarios, supporting trust-aware interactions requires a completely decentralized reputation system that provides each entity with reputations without requiring access to remote information repositories. However, current trust models do not support the development of pervasive reputation systems. In the first part of this thesis we thus present the Socially Inspired Reputation (SIR) model that we have been developing; SIR is the first model of reputation implementing the conceptual socio-cognitive model of trust [FC01, FC04]. SIR has been developed to enable the development of decentralized reputation management systems; thus, we have identified the properties below that we require SIR to meet:

Flexibility: the reputation model has to be general enough to be applied to any possible context, i.e., the logic for aggregating the single trust evaluations has to be independent of the application context;

Adaptability: reputations have to adapt according to the behaviour that individuals exhibit within the system and to the passing of time;

Robustness: assuming the presence of a certain amount of noise, the reputation model has to identify it and prevent it from affecting trustworthy information;

Scalability: the reputation model must not require the maintenance of any global knowledge.

Furthermore, in the rest of this thesis we present two practical applications of SIR which have been designed, evaluated and developed in order to show that SIR meets the above requirements. The first application is TAw, our Trust-Aware naming service; the second is rawDonkey, a peer-to-peer file-sharing platform based on the eDonkey protocol that employs an autonomous reputation monitoring system in order to encourage peers to collaborate with each other.

Though these applications are important to show that SIR meets the requirements that guided its development, it is also important for us to provide a more formal and rigorous verification of its properties. Specifically, we are interested in showing that it can support trust-aware applications in global computing contexts. To this end, formal methods, and in particular process calculi, can bring important contributions; given a target system, process calculi enable its rigorous specification and the application of a wide set of accepted techniques and tools for the analysis, implementation, and verification of that specification. Firstly, they enable compositionality and openness. Compositionality enables one to analyze the behaviour of a system by focusing on its components; this property makes it possible both to limit the analysis to a single part of a system and to analyze the whole system by taking each of its components separately and then composing the obtained results. Openness enables one to evaluate the behaviour of a system deployed in an open environment where other systems executing in parallel may interact or interfere with it. Moreover, a process calculus provided with a reduction semantics enables the development of a virtual machine capable of implementing that semantics and interpreting a given specification model to perform tests or simulations, and a process calculus provided with a labelled transition semantics enables the implementation of techniques by which one can verify whether certain properties (e.g., security, deadlock freedom) are satisfied by a given specification model. For instance, when verifying properties such as security, where it is important to examine the possible interactions of the system with the environment, process calculi can be developed to compare the model with infinitely many adversaries; simulation and other experimental techniques do not enable this kind of verification.

In the second part of this thesis we thus present our work on the development of a process calculus for describing wireless systems. We have been developing this calculus envisioning its application in the modelling and verification of our reputation model. However, we believe that the contributions currently available in formal methods are not adequate for our verification purposes.
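
As a purely illustrative aside, the sense in which a reduction semantics yields an executable virtual machine can be sketched with a toy rewriting loop. This is not CWS: the two-term calculus and every name below are hypothetical, chosen only to show the mechanism of "keep applying a reduction rule until none applies".

```python
# Toy reduction-semantics interpreter (illustrative only, not CWS).
# A network is a list of processes. A process is either the inert term "nil",
# a sender ("send", value, continuation) or a receiver ("recv", continuation_fn).
# The single reduction rule synchronises one sender with one receiver.

def step(network):
    """Apply one reduction if possible; return (new_network, reduced_flag)."""
    for i, proc in enumerate(network):
        if isinstance(proc, tuple) and proc[0] == "send":
            _, value, cont = proc
            for j, other in enumerate(network):
                if i != j and isinstance(other, tuple) and other[0] == "recv":
                    _, recv_cont = other
                    reduced = list(network)
                    reduced[i] = cont                # the sender continues
                    reduced[j] = recv_cont(value)    # the receiver consumes the value
                    return reduced, True
    return network, False

def run(network):
    """The 'virtual machine': keep reducing until no rule applies."""
    progressed = True
    while progressed:
        network, progressed = step(network)
    return network

# A sender transmitting 42 in parallel with a receiver that simply terminates.
print(run([("send", 42, "nil"), ("recv", lambda v: "nil")]))   # ['nil', 'nil']
```

A labelled transition semantics would, in addition, describe the interactions such a network offers to an arbitrary environment, which is the kind of information needed when checking a model against unknown adversaries.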

Hence, in the second part of this thesis we present our work on the development of a formal model for describing wireless communication. Our calculus, called Calculus of Wireless Systems (CWS), is provided with a reduction semantics (RS) and a labelled transition semantics (LTS). Since this communication model groups together aspects of synchrony, asynchrony, and broadcast communication, in the design of our semantics we could not follow the approaches available in the literature; thus, for each of our semantics we devised a specific approach to model the behaviour of the target system. The main technical result of this second part is the equivalence between the two semantics; this result is important because, given that the RS is much more intuitive, it proves the correctness of the LTS and enables one to employ whichever of the two semantics is more appropriate for the task at hand.

While reading this thesis, the trust and wireless parts may appear to be two separate works. However, it is not so.
The first part of this thesis has been developed in order to propose a model of trust that could satisfy the requirements imposed by global computing applications. The second part, in turn, has been developed in order to provide us with formal tools that enable the specification and verification of trust-aware communication protocols in the technological setting that best represents global computing. Such an integration is not obvious and is far beyond the purpose of this thesis. It will be in further contributions that we move toward providing formal tools for employing CWS to reason about wireless systems and protocols; the goal will then be (i) to augment CWS with the specification of a reputation management system, and (ii) to employ this specification to perform the verification of the properties that guided us in the development of SIR. Further contributions will also present and discuss an in-depth simulation analysis of the reputation system implemented within the rawDonkey distributed monitoring system.

This thesis is structured as follows. Chapters 2 to 6 present our work on trust and chapters 7 to 10 present our work on wireless communications. Finally, Chap. 11 concludes this work and defines future directions for the research on trusted communications in wireless networks.

The part on trust is structured as follows:

Chap. 2: This chapter motivates this work and presents the state of the art in the field;
Chap. 3: This chapter presents and discusses SIR, our Socially-Inspired Reputation model;
Chap. 4: This chapter presents our Trust-Awareness extension for naming services, the first case study built on SIR;
Chap. 5: This chapter presents rawDonkey, a reputation-enhanced peer-to-peer file-sharing client which has been the second case study for our reputation model;
Chap. 6: This chapter concludes the part on trust and draws the related considerations.

The part on wireless communications is structured as follows:

Chap. 7: This chapter presents and discusses the motivations that led our work on this topic;
Chap. 8: This chapter presents a basic language for describing wireless communications, the associated semantics (i.e., reduction semantics and labelled transition semantics), and the first contributions;
Chap. 9: This chapter presents some extensions to our language, the augmented semantics, and the last contributions;
Chap. 10: This chapter concludes the part on wireless communications and draws the related considerations.
  • 13. Chapter 2 Motivation 1 Discussion In global computing scenarios possibly unknown and distrustful entities (e.g., resources, indi- viduals) consume, produce, or support, services. In such scenarios being enabled with trust information is a key requirement for any entity; in fact, an entity which is aware of other entity trustworthiness is also enabled with means to decide whether and under which conditions to engage in interactions with other entities. For the reader clarity, it is worth defining terms such as trust, trustworthiness and reputa- tion. We define trust to be the measure of how much reliance a truster can justifiably place on the dependability of a trustee behaviour within a specific context. We say that a trustee is trust- worthy, within a given context, if it actually justifies reliance to be put on the dependability of its behaviour within that context. We say that a trustee is trustworthy, within a given context, if it actually justifies reliance to be put on the dependability of its behaviour within that context; We define trustworthiness to be the degree of reliance, within a given context, that any entity is to be justifiably assigned. We assume it to be a private and secret property of each entity and, therefore, neither known to other trusters, nor provable. In social environments trust is an important aspect that stands at the base of each possible in- teraction between entities: placing trust on some entity for assuming a specific behaviour within a specific context means believing that, at some extent, under the conditions enforced by that specific context that person will assume the expected behaviour. Trust is important because of its concrete implications. Firstly, trust enables interaction between entities: without trust-awareness it would not be possible for any entity to have information about the possible outcome of an interaction engaged in with another entity. Secondly, trust enables any entity to select the best entity with which to interact for accomplishing a given task: trust enables one to compare any two entities for estimating which one is the best for performing a giving task. It is worth not- ing that trust cannot be employed for substituting cryptography-based techniques for meeting security requirements; trust is only a mean for enabling an individual to be aware of possible behaviours that other individuals my exhibit when acting within a specific context. Old trust models were based on the assumption of a number of Trusted Third Parties (TTPs) which were assumed to be always available, to always behave in a dependable manner, and to be possibly bound with each other by means of static trust relationships. Based on these static trust relationships, possibly by means of delegation (e.g., employment of credential chains) and abstraction (e.g., employment of roles), complex protocols were built for enabling applications with interaction rules to satisfy the security properties required by the specific application (e.g., [BFL96, BFK98, YMB02, Zim95]); for these properties to be always satisfied, the TTPs are required to remain trustworthy. By contrast, since a generic global computing scenario is characterized by an arbitrary large population in which each individual may dynamically leave and join and possibly exhibit dy- namical behaviour (e.g. due to a limited resource amount, to a failure occurrence, or just to the 11
  • 14. 1 Discussion Interaction Rules Dynamic Trust Criteria Evaluation Criteria Trust Management Figure 1. Framework supporting interactions based on dynamical trust. choice of a new behavioural strategy), it is clear that the maintenance of static trust relationship would not provide a solid infrastructure on which to implement trustworthy interactions. For this reason, trust has recently become an important research area which investigates the possibil- ity of dynamically building trust relationships in a manner which is similar to how individuals in human societies build their own, that is through evaluations and expectations. Hence, we associate global computing scenarios to the abstraction of Virtual Society (VS), defined below. A Virtual Society is a dynamic set of possibly federated and mutually distrustful entities (i.e., individuals and resources) which interact with each other, providing and consuming services, in order to accomplish their respective tasks. We thus define reputation to be a specific form of trust which is collaboratively maintained by individuals in a society through the sharing of a given, possible partial, amount of information about the trustee behaviour within a given context. In a social system, each entity is autonomous and, by interacting with other individuals, col- lects and locally maintains trust information to be employed for trust computation purposes. In this setting, interactions can be either one of the following: • Task performance interaction: In this case each one directly experiences the other trustwor- thiness within the context; • Exchange of trust information: In this case each one makes some of its trust information available to the other. Therefore, so as to support interactions based on a dynamical form of trust, we designed the framework that is shown in Fig. 1. The implementation of interaction policies based on dynamic trust requires both trust criteria and evaluation criteria to be implemented. The first criteria iden- tify a decision semantics which enables an entity to decide whether to consider a second entity trusted or not, once an estimation about that entity behaviour is provided. The second criteria en- ables an entity to evaluate the outcome of an interaction and produce the trust information that is needed to adapt the estimations about other entities baheviours; in fact, since entities build their own expectations according to the opinions that they make interacting with and evaluating the other entities in the system, a dynamic-trust criteria implies the existence of an evaluation criteria. The trust management layer implements a trust model; by taking the single evalua- tions produced by the evaluation criteria, it aggregates these values according to the trust model to obtain the expectations about entity behaviours. Such expectations are then avalable to the dynamic-trust criteria for deriving a trust opinion which depend on each single entity’s trust cri- teria; for instance, Alice may consider trustworthy an entity which is expected to satisfy a given request with a probability between 0.4 and 0.6 while Bob may require a probability between 0.5 and 0.7 so as to consider the same entity trustworthy. Our work focuses on trust models. In order for global computing systems to be provided with a trust model which satisfies the above requirements, we took inspiration by the social model; we consider it to be a good reference model because of its autonomy, its high decentralization, and its extensive validation. 
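To make the layered framework of Fig. 1 concrete, the sketch below shows how an interaction policy could stack a per-principal dynamic-trust criterion on top of a trust-management layer that aggregates the values produced by an evaluation criterion. It is only an illustration of the decomposition just described: the class and function names, the plain average used as the trust model, and the threshold values are our own assumptions, not part of the framework.

```python
# Minimal sketch of the framework of Fig. 1 (names and aggregation are illustrative assumptions).
from collections import defaultdict

class TrustManagement:
    """Trust-management layer: aggregates raw evaluations into an expectation
    per (entity, context) pair."""
    def __init__(self):
        self.evaluations = defaultdict(list)   # (entity, context) -> list of values in [0, 1]

    def record(self, entity, context, score):
        self.evaluations[(entity, context)].append(score)

    def expectation(self, entity, context):
        scores = self.evaluations[(entity, context)]
        return sum(scores) / len(scores) if scores else 0.0

def evaluation_criterion(outcome):
    """Evaluation criterion: map an interaction outcome to a trust value in [0, 1]."""
    return 1.0 if outcome["completed"] else 0.0

def trust_criterion(expectation, threshold):
    """Dynamic-trust criterion: each principal applies its own decision semantics,
    here a simple threshold (e.g., Alice may accept 0.4 while Bob requires 0.5)."""
    return expectation >= threshold

tm = TrustManagement()
tm.record("provider-1", "p(smtp)", evaluation_criterion({"completed": True}))
tm.record("provider-1", "p(smtp)", evaluation_criterion({"completed": False}))
print(trust_criterion(tm.expectation("provider-1", "p(smtp)"), threshold=0.4))  # Alice's rule -> True
```

In this sketch the trust-management layer is a plain average; a reputation model such as SIR (Chap. 3) would replace exactly this aggregation step.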
Specifically, based on the socio-cognitive model of trust [FC01, FC04] and on the above principles, we develop its first distributed implementation, designed to satisfy the above requirements and to enable the trustworthy implementation of Reputation Management Systems (RMSs) for global computing purposes. Moreover, we require our dynamic trust model to provide reputations that satisfy the following requirements: UBLCS-2006-09 12
  • 15. 2 The Socio-Cognitive Model of Trust Adaptability: reputations have to adapt according to the behaviour that the individuals exhibit within the system and to the time passing; Robustness: assuming the presence of a certain amount of noise, the reputation model has to identify it and prevent it to affect trustworthy information; Scalability: the reputation model has not to require the maintenance of any global knowledge. When reading the literature about the currently available trust models, it is clear that neither of the available research contributions (see below in this chapter) satisfy the above requirements. In fact, examining the proposed trust and reputation models, it is possible to observe that some of them do not adequately capture trust’s adaptive nature (e.g., [ARH00, AD01, AM04, BB04, Cap04, Jøs01], some of them cannot isolate and ignore byzantine trust information the noise injected within the RMS (e.g., [AD01, BB03, BB04, Cap04, CNS03]), and finally some of them enable implementations which may present scalability problems (e.g., [BBK94, CNS03, Car00, Mau96, YKB93]). Moreover, some of the available trust models require the global sharing of trust information through distributed data structures; even if this may increase the accuracy in trust computation, there are no guarantees on the trustworthy maintenance of shared information. The purpose of this work is then the development of a trust model which overcomes the limitations of existing trust models enabling its employment in global computing settings. In the first part of this thesis, we report our experience in developing, validating, and employing the Socially Inspired Reputation (SIR) model, our flexible computational model of reputation which can employed for enabling trust-awareness within global computing scenarios. SIR is the first formal implementation of the socio-cognitive model of trust [FC01, FC04], which describes the computational logic by which each single individual in a human society aggregates the collected information for computing reputations. In order for us to validate the SIR reputation model, the Trust-Aware naming service (TAw) has been designed and extensively simulated. TAw enables service oriented architectures with trust-awareness; that is, by employing TAw any entity is enabled with trust information for de- ciding whether, and at which conditions, to engage an interaction with any other entity. The performed simulations showed that TAw implements an adaptive and robust behaviour: specifi- cally, we have been simulating a consumer/producer scenario in which TAw is deployed in order for the consumer to select the provider to refer to in a trust-aware manner. The obtained results confirm our expectation that SIR enables TAw to scalably implement an adaptive and robust reputation management system within global computing scenarios. Another application of the SIR model is represented by rawDonkey, which is an eDonkey client for the homonym peer-to-peer file sharing network. rawDonkey employs the SIR model to evaluate the users on the network according to the possible misbehaviours (i.e., faker and free- rider) and to advantage the collaborative ones with respect to the non collaborative ones. The so obtained reputations are employed to enforce a file distribution priority among the peers in order to penalize non-collaborative users and to guarantee a fair resource sharing among collaborative ones. 
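As an illustration of how such reputations can drive the two case studies just mentioned, the following sketch shows a consumer selecting the provider with the highest reputation (the TAw consumer/producer scenario) and a peer ordering pending requests by the requester's reputation (the rawDonkey-style priority). The function names and data layout are illustrative assumptions, not code from either system.

```python
# Illustrative sketch (not TAw/rawDonkey code): reputation-driven selection and priority.
def select_provider(reputations):
    """Consumer-side, trust-aware selection: pick the provider with the highest
    reputation for the requested context."""
    return max(reputations, key=reputations.get)

def upload_priority(peers):
    """rawDonkey-style idea: serve collaborative peers first by sorting pending
    requests by the requester's reputation, in descending order."""
    return sorted(peers, key=lambda p: p["reputation"], reverse=True)

providers = {"p1": 0.82, "p2": 0.47, "p3": 0.91}
print(select_provider(providers))                       # -> 'p3'
print(upload_priority([{"id": "a", "reputation": 0.2},
                       {"id": "b", "reputation": 0.9}]))  # 'b' is served before 'a'
```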
2 The Socio-Cognitive Model of Trust
In [FC01, FC04], Castelfranchi and Falcone define a socio-cognitive model of trust that is based on the mental process of trust formation expressed in cognitive terms (e.g., beliefs, goals); although it does not describe the psychological aspects of trust, it formalizes the most rational and conscious ones. For the sake of clarity, the socio-cognitive model of trust is based on the BDI (Belief-Desire-Intention) approach, which is inspired by Bratman's philosophical model [Bra87] and has been defined to model the mental processes of decision-making. According to this model, only a principal that is characterized by both goals and beliefs can trust another principal. It is worth noting two aspects of trust: first, trust does not apply to unconcerned principals, which only have opinions and forecasts based on their knowledge; second, trust itself consists of beliefs, specifically UBLCS-2006-09 13
  • 16. 2 The Socio-Cognitive Model of Trust of evaluations and expectations. In this context, we are going to examine the three notions of trust below, taking in consideration the trust that binds a truster with a trustee within a given context. In this context, we can associate with trust the following meanings: • A mere mental attitude (prediction and evaluation) towards another principal, a simple disposition; • A decision to rely upon the other, i.e. an intention to delegate and trust, which makes the truster ”vulnerable”; • A behaviour, i.e. the intentional act of trusting, and the consequent relation between the truster and the trustee. In each of the above concepts, different sets of cognitive actions are evaluated by the truster; all together, these actions describe the whole process of trusting: firstly, some information enables the truster to predict for each possible trustee the most likely behaviour, and to select the entity which appear to provide the truster with the best service; secondly, the truster has to decide whether the expected service is worth taking the risk of having produced a wrong expectation; finally, there is the actual relation between the truster and the trustee (i.e., the interaction). Hence, the socio-cognitive model of trust identifies three important concepts related with trust: trust disposition, decision to trust, and act of trusting. For the sake of clarity, since the trustee competence within a specific context is useful to the truster (trust disposition) which decided to rely on it (decision to trust), this means that the truster might delegate (act of trusting) to the trustee some task which composes his own goal. Moreover, the model includes three basic beliefs: Competence Belief: a sufficient evaluation of the trustee’s ability is necessary, the truster should believe that the trustee is useful for his goal, that the trustee can provide the expected result, and that the trustee can play such a role on behalf of him. Willingness Belief: the truster should believe that the trustee not only can perform the requested service, but actually he will perform it. This belief makes the trustee’s behaviour pre- dictable. Dependence Belief: either the truster believes to depend on the trustee’s competence (strong dependence), or it believes that there is an advantage in relying on the trustee instead of not relying on it (weak dependence). In other words, in a trust relationship the truster is in a strategic situation in that he believes that the achievement of his goals depends, either partially or totally, on the actions of the trustee. From the point of view of the dynamic studies of trust, it is worth noting how the above basic beliefs might change during the same interaction or during a sequence of interactions: for ex- ample, the competence (or ability) of the trustee, or the trustee’s willingness, or the dependence relationship between the truster and the trustee might change. Another important characteris- tic of the socio-cognitive model of trust is the distinction between trust in a trustee that has to perform a task based on his internal charactestics (e.g., competence, ability), and the trust in the environment which can affect the achievement of the goals by external factors like opportunities and interferences. 
According to this model, the trust in a principal consists in two beliefs/evaluations that are identified as the basis for reliance: competence, which includes knowledge and self-confidence, and disposition, which in turn is based on aspects such as willingness and availability. Like the above trust definitions, the socio-cognitive model asserts that trust implies, either implicitly or explicitly, the subjective probability of the successful performance of a given trustee within a specific context; it is on the basis of this subjective evaluation of risk that the relationship between a truster and a trustee is engaged. That probability itself depends on the truster beliefs and evaluations. However, within a given environment, an evaluation of a specific trustee is not really an evaluation about it, instead the evaluation accounts the environment aspects as well; UBLCS-2006-09 14
  • 17. 2 The Socio-Cognitive Model of Trust
thus, the subjective probability of the trustee's successful performance should be decomposed into the expectation that the trustee itself behaves dependably (internal attribution) and the expectation of having the appropriate environmental conditions for the performance to succeed (external attribution), such as the absence of interferences that could harm the performance.
The socio-cognitive trust model is based on the following formal constructs. Let Act = {α1, . . . , αn} be a finite set of actions, and Agt = {A, . . . , X, Y, . . .} a finite set of principals (which in the original work are called agents). Each principal is described by its own set of actions, its plan library, its set of resources, its set of goals, its beliefs and its motives. In this model, the actual object of delegation is the action/goal pair, τ = (α, g), which takes the name of task. Given a principal X and an environmental context Ω (a set of propositions describing the state of the world), it is possible to define the trustworthiness of principal X about τ in Ω (indicated by the notation trustworthiness(X, τ, Ω)) as the objective probability that X will dependably perform task τ in environment Ω; this objective probability is computed on the basis of elemental components such as:
Degree of Ability (DoA): it ranges between 0 and 1 and indicates the level of ability of a given principal X about a specific task τ, i.e. the probability that X will correctly perform τ given that he intends to do so;
Degree of Willingness (DoW): it ranges between 0 and 1 and indicates the level of intentionality of a given principal X about a specific task τ, i.e., the probability that X actually initiates the performance of a given task given that he declared himself willing to perform it.
Thus, in this model the following dependence holds.
Trustworthiness(X, τ, Ω) = F(DoA_{X,τ,Ω}, DoW_{X,τ,Ω})   (1)
where F is a function which preserves monotonicity and ranges in [0, 1], expressing the objective probability that principal X will behave dependably about task τ within the environment Ω.
According to this model, to correctly represent the trustworthiness of a principal through subjective probabilities, it is important to be provided with a cognitive attribution process to interpret the outcomes of a truster's reliance on a given trustee and that trustee's performance (i.e., failure or success); in particular, the effect that a given trustee Y's failure or success has on the trust a given truster X places in Y depends on X's causal attribution [Wei61] of the event. According to that theory, any event (i.e., success or failure) can either be ascribed to factors internal to the subject, or to environmental, external causes; moreover, these events may either be occasional or depend on stable properties of the principal or the environment. For example, a failure will impact on the self-esteem of a subject only when attributed to internal and stable characteristics of the subject itself.
The socio-cognitive model defines a function, namely DoT, by which a principal A can evaluate its own trust (degree of trust) in principal B about the task τ (to be performed) in the environment Ω.
In [FC01, FC04], f is a general function that preserves monotonicity; in particular, DoA_{A,B,τ} is B's degree of ability about task τ in A's opinion, DoW_{A,B,τ} is B's degree of intention about task τ in A's opinion, and e(Ω) takes into account the part of the task that is not directly performed by B and the hampering or facilitating conditions that are enforced by the specific environment Ω.
DoT_{A,B,τ,Ω} = f(DoA_{A,B,τ}, DoW_{A,B,τ}, e(Ω))   (2)
In the analysis of this function, one has to consider all the dependence relationships that may hold between the three sub-constituents (i.e., DoA_{A,B,τ}, DoW_{A,B,τ}, e(Ω)).
2.1 Considerations about the Socio-Cognitive Trust Model
The socio-cognitive model provides a very precise description of trust dynamics from a global perspective; however it is worth noting that, in order for this trust model to be completely specified, one has to be aware of how to separate the environment's responsibilities from the trustee's UBLCS-2006-09 15
  • 18. 3 A Conceptual Model for Computing Reputation Reputation Computation Adaptation + + Third−Party Opinion Self−Made OpinionEvaluation Third−Party Expectation Third−Party Expectation Third−Party Expectation Third−Party Expectation Time + Figure 2. A Conceptual Model for Computing Reputation responsibilities. Thus, within a generic scenario in which the truster and the trustee belong to possibly different environments, it is clear that the truster has to be provided with informations about how to separate these two components; on the other hand, unless a specific study be per- formed so as to evaluate the environment effects on the different task’s contexts, it is not clear how to enable the truster to predict environment effects on a specific task request and to evaluate the objective trustworthiness of the trustee. Within a global computing context, it is clear that it is not scalable to enable each principal to be aware of the effect that might affect a delegated task in each possible environment in which it may be performed. Moreover, in a virtual society scenario where the consumer/producer interaction paradigm is implemented, the truster does not really cares how failure responsibility has to be divided for deciding whether and whom to delegate a task; actually, he only cares about success probability and wants to be able to evaluate it. In our opinion, the information that is important for the truster to be provided with is the overall probability for the delegated task to succeed. 3 A Conceptual Model for Computing Reputation Based on the socio-cognitive model of trust, we designed a conceptual model for computing reputations which is inspired on the social dynamics; this model is represented in Fig. 2. In this model we encode our interpretation of the socio-cognitive process that implements reputations. In the figure we see all the abstraction that take part to the reputation forming relation be- tween them. First, we can isolate the basic trust elements, namely the evaluations and the expecta- tions. We define an evaluation to be a sampling of the behaviour of the trustee, decided using a specific evaluation criteria. By contrast, we define an expectation to be a representation of trust which considers all the available trust information about the trustee and presently describes its possible behaviour. Before proceeding in the explanation of the model in the picture, it is worth making some consideration about the effect of time on trust information. Each single basic piece of trust in- formation is determined by the behaviour exhibited by the trustee until a certain time. Since we assume to employ our model in systems in which principals can dynamically change their behaviours, we can legitimately assume each piece of trust information to lose relevance in repu- tation computation as time passes. For this reason, the picture shows that time is an element that influences the aggregation of the basic trust elements. For computing reputations, each principal employs both expectations and evaluations. Ex- pectations may be obtained by third parties (Third-Party Expectations), through the exchange of UBLCS-2006-09 16
  • 19. 4 Related Contributions recommendations, which are generally produced by the recommender through the aggregation of basic trust values, or self-constructed, through the composition of locally-available trust infor- mation derived by the direct experience of the truster. Hence, each principal aggregates the locally-available basic trust information in order to ob- tain the Third-Party Opinion and the Self-Made Opinion; the former is obtained aggregating the available Third-Party Expectations, and the latter is obtained aggregating the available direct Evaluations. Then, the opinions are aggregated to form reputation. This is nevertheless the social mecha- nism of reputation forming: each of us maintains information about the recommendations that he gets from other individuals and employs them in the composition with the expectation ob- tained from its direct experience for obtaining a more complete evaluation which takes the name of reputation. In the composition of these trust abstractions, there is also dependence between the some trust abstractions and the adaptiveness criteria. First, each single trust evaluation has a role in its composition with the aggregated value; for instance, the aggregated value can vary depending on whether the evaluation is classified as stable or occasional, and on whether it is positive or negative with respect to the expected evaluation. Second, the self-made opinion have a role in the aggregation of Third-Party Expectations; for instance, these expectations can considered in the aggregation according on whether they provided useful information for guessing the result of the successive evaluations. Moreover, Self-Made Opinion is also important for aggregating itself with the Third-Party Opinion; based on how much a principal found himself able to guess the result of interactions by employing the only self-obtained trust expectation, it can decide which weight to give to the one expectation with respect to the other. This conceptual model defines a class of computational models of reputation which is both compatible with the ideals expressed in socio-cognitive model of trust and that is implementable in settings in which it is impossible to maintain centralized information. In the next chapter, we present our implementation of this model, namely the Socially Inspired Reputation model (SIR). 4 Related Contributions In [ARH97, ARH00], Abdul-Rahman and Hailes employs a reputation-based infrastructure for information retrieval. First, they propose a discrete metric for expressing trust degrees. We be- lieve that this design choice does not adequately enable a faithful representation of the social trust; in fact, it is not clear the social intuition which is encoded in the aggregation trust informa- tions to compute direct trust, indirect trust, and reputations. For instance, the way in which direct trust is encoded wants to describe the whole interaction history between a given truster and a given trustee; however, that representation makes no distinction about which evaluations hap- pened far in the past and which ones happened recently, therefore it impossible for the truster to understand the behavioural pattern of the trustee and to get a picture of its current dependability. In [AD01], Aberer and Despotovic introduce a complaint-based reputation system to be ap- plied within peer-to-peer infrastructures for file sharing. 
In our opinion, a reputation model which keeps record only of complaints is not adequate to represent social reputation, in that it is not representative of the whole interaction history; a complaint could refer, for instance, to events that happened far in the past, after which a dependable behaviour has been exhibited for a long period. Moreover, since the model does not capture trust adaptiveness over time, it is impossible for a faulty principal which committed mistakes to rebuild a good reputation. In addition, the model does not provide any means to distinguish legitimate trust information from malicious information. Finally, their reputation management system requires a distributed data structure, namely P-Grid, to be deployed over the whole peer-to-peer architecture so as to maintain trust information. In [AM04], Azzedin and Maherswaran develop a socially inspired reputation model that is UBLCS-2006-09 17
  • 20. 4 Related Contributions similar to the one proposed in this paper. First of all, they define a criteria for evaluating the hon- esty of a recommender that is quite debatable, in that a recommender is evaluated as honest if in the past history it returned recommendations that describe a quite stable behaviour of a given trustee; however, in order for this criteria to be valid it requires the trustee to really exhibit a sta- ble behaviour. This contradicts the dynamical nature of reputation and the asynchronous nature of failures which may substantially affect the behaviour of a principal; a social reputation system should be useless if it was guaranteed that principals always exhibit the same behaviour. Sec- ond, in this model, after an trust evaluation has been performed, direct trust is updated without considering the time elapsed since the last update; thus, it does not consider trust information to become obsolete as time passes. Moreover, the direct-trust update semantics does not consider the behavioural history of the trustee. Similarly to what happens for direct trust, we think that the semantics which defines reputation in terms of both direct and indirect trust is not representative of the human social behaviour. In [BBK94, YKB93], Beth and Yahalom describe a formal trust model, founded on bayesian probability theory and reliability analysis techniques, which is to be used for evaluating the trust- worthiness of entities in open networks. In order for it to be applied, a truster has to know all the paths to the trustee in the trust network, within the specific context in which trust is to be evaluated; hence, as the trust-network complexity and the trust contexts grow, this model does not meet scalability and efficiency requirements to enable its practical employment. Moreover, this model does not implement any technique for encoding the freshness of the available trust- information (e.g., trust decay as time passes). Our dynamical and adaptive reputation model does not require a truster to handle any structured knowledge of the trust network; in addition, it is expected to meet the required flexibility and scalability requirements despite the network topology complexity. In [BB03, BB04], Buchegger presents a reputation which is based on the beta probability den- sity function; This model does not implement dynamical trustworthiness on recommendations; instead, identifies as malicious recommendations the ones which substantially differ from the direct trust. This assumption might not be true and lead to slowed adaptiveness in that, if a recommendation reports in the trustee a behaviour change which recently happened and which has not yet been identified by the truster, this trustworthy information would be identified as malicious by the truster and thus discarded. In addition, this model also implements an aging technique to express the decay of trust with time which is rather not realistic: firstly, the decay does not depend on time but on the position of the trust evaluation within the sequence, the each trust evaluation is applied the same decay factor independently from the time interleaving each pair of consecutive evaluations; second, when the decay is applied, the aging does not applies on the whole evaluation history but rather on either the sequence of successes of the sequence of failures, according to the last trust evaluation. 
In [Cap04], Capra proposes a reputation model which is inspired to the human behaviour; however, though detailed a trust metric is presented, it is not clear its root within the human psychology. The author argues that, so as to weight recommendations according to their depend- ability, trust is evaluated within the provisioning of recommendations; further in that document, a quality attribute, which does not depend on the recommender reputation but on the degree of knowledge between the truster and the recommendor, is introduced to weight the various rec- ommendations. This may lead to problems in that the truster can believe to have a high degree of knowledge about an undependable provider; in this case it will possibly weight its recommenda- tions more than ones which are more dependable. Moreover, it is not clear the binding between the semantics which specifies the composition of the direct trust value and the one derived by the evaluation of the provider recommendations and the human behaviour. In the context of the SECURE European Project a reputation model specific for ubiquitous computing is being studyied; specifically, in [CNS03] Carbone makes use of knowledge degrees to disambiguate between principals that present the same trust levels and to bootstrap the trust acquisition mechanism in an unknown environment. Although the trust model the author de- velops is very precise about statical definitions, it is not clear how they develop the dynamical adaptation of trust and reputation degrees. By means of the decay functions that depend on time, UBLCS-2006-09 18
  • 21. 4 Related Contributions our model encodes a degree of knowledge (i.e., a degree of the encoded information freshness) within the reputation model as well. In [JK98, Jøs99, Jøs01], Jøsang defines a trust model which is based on a triple of probability values describing the behaviour that a specific principal will assume when an interaction with him is engaged; these triples specify, respectively, the probability of engaging a positive inter- action, the probability of engaging a negative one, and the uncertainity. Although the model is well specified and developed, the chosen trust representation hides all the information about the interaction history of the trustee and does not allow the truster to understand the behaviour that the trustee could currently exhibit (as happens for Buchegger’s model in [BB03]). In fact, given any two trust evaluation sequences with both the same number of success results and the same number of failure results, the model returns the same reputation whatever permutation of these evaluation is assumed; therefore, a trustee whose dependability was bad at the beginning then always exhibited a dependable behaviour is considered as trustworthy as another which recently lost its dependability. Trust adaptability with time has not been considered in the model. In [Jøs02], Jøsang proposes a reputation model based on the beta probability density func- tion. Though it is based on solid theoretical foundations, we think that it is not representative enough of social reputation. In fact, the model does not distinguish between direct and indirect forms of trust, therefore it does not specify any semantics to represent the human psychology in distinguishing among them. Moreover, although an aging mechanism is implemented for the obtained recommendations, the semantics which implements the recommendation aging is not clear; according to that semantics, it appears that the aging is not based on the time in which this recommendation was produced, rather it is based on its position in the sequence of the recom- mendations that the truster collected. In [JP04], Jøsang and Lo Presti define trust in a way that is is semantically different from the definition we gave in Chap. 1 and define a trust degree according to that definition; to our opinion, their definition of trust melts reputation semantics with a trusting criteria (i.e., deciding whether to trust or not). Although we believe that it would be better to maintain the decision criteria separate from the reputation semantics, given the trustee reputation, this criteria can be applied on our reputation model as well to implement decision making about whether to trust or not. Both Dragovic et al. [DHH+ 03] and the OpenPrivacy project [ope] implement reputation man- agement systems; the former for trust-aware selection of application servers, and the latter for disseminating and computing trust information within a specific distributed object middleware. However, no specific information about the adopted reputation models were available to the authors. 4.1 The Web of Trust In the context of Public Key Infrastructures (PKI), trust has been widely studied within the ”Web of Trust” model [Car00, Mau96]. Introduced by Zimmermann in Pretty Good Privacy (PGP)[Zim95], the web of trust can be defined as PKI model where a global state about the principals and their direct trust relationships is maintained; i.e., each principal can make publicly known whose keys he trusts to be authentic. 
Moreover, as a member of the infrastructure, each principal can decide whom and to which degree to trust as an introducer of new keys. In order for communication to take place between any two principals, a trust link between them has to be enabled by the trust relationships encoded in the web of trust. In [Mau96], Maurer formalizes the web of trust model and describes a technique for assessing probabilistic trust between two principals in a web of trust. However, such a technique require all the possible trust paths between the two principals to be known and a complex analysis technique to be applied on them; its complexity prevents such a technique to be applied in practice. In [Car00], Caronni introduces and evaluates heuristics techniques for reducing the complexity of the Maurer’s technique; however, such techniques still require a global state about certificates and direct trust relationships to be maintained. Human Societies do not rely on having global knowledge of trust networks and, still, the social reputation model succeeds in identifying untrustworthy individuals so as to support legitimate interactions among population members. Our social reputation model enables one to implement a reputation UBLCS-2006-09 19
  • 22. 4 Related Contributions system without requiring principals to maintain a global state about the trust network, meeting scalability and efficiency even in arbitrarily complex scenarios. Moreover, our reputation model implements the dynamical behaviour that is not captured by the earlier model implemented in the web of trust. UBLCS-2006-09 20
  • 23. Chapter 3 SIR: Socially Inspired Reputation Model 1 Foundations of Trust In [Mez03], trust is defined as a ternary relation T (α, β, φ), where α and β are two principals and φ is a context. The presence of the triple (Alice, Bob, c(i)) in T indicates that Alice trusts Bob for consuming interface i. A trust system is defined as the triple (P, Φ, T ), where P is the set of principals and Φ is the set of contexts on which the trust relation T is defined. A trust relation T defined over P × P × Φ may satisfy the following properties: reflexivity: A trust relation T is reflexive in context φ if, for every principal α ∈ P, the triple (α, α, φ) is in T . symmetry: A trust relation T is symmetric in context φ if, for every pair of principals α, β ∈ P, if (α, β, φ) ∈ T then (β, α, φ−1 ) ∈ T (i.e., each has assessed the trustworthiness of the other). Reflexivity is also called implicit trust. Basically, the reflexive property enables the existence of self confidence; that is, after two principals engaged an interaction with each other, each of them is enabled with much experience and, thus, is enabled with more data to compute a more precise “a-priori” estimation of other principal behaviours. For instance, let us say that Alice carries out an interaction with Bob because she wanted interface i to be provided and Bob was able to satisfy her request. Hence, after the interaction Alice has a new trust evaluation which can be used to provide an estimation about the error that Alice made by referring to the trust information she had a priori. Symmetric property states that any consumer/producer interaction has to affect the trust in- formation of both the principals; i.e., both the consumer trust information about the producer and the producer trust information about the consumer. In fact, after an interaction between any two principals, each of them is enabled with more information about the other and can compute a better estimation of its behaviour in the context in which the interaction occurred. Moreover, within each possible context, each single interaction is an instance of the client/ser- ver interaction paradigm; thus, both the client and the server are bound to the same interaction semantics. in this specific case of this work, the two context are both bound to the same interface. When speaking of trust, it is worth mentioning the first context in which trust appeared, that is authentication. We define the context of authentication, represented by the notation auth, to be a special context that is equal to its inverse. Having the reflexive property satisfied within authentication expresses the trust that each principal is believed to maintain the secrecy of its private key. More in general, trusting a princi- pal within authentication means trusting the binding between his private key and his identity. On the other hand, having the symmetric property satisfied within authentication indicates 21
  • 24. 1 Foundations of Trust that, for any two principals, it is possible for them to mutually authenticate and, thus, implement security requirements such as non-repudiation of origin and integrity of origin which are the base of each effective exchange of information. In fact, in the real world any two entities exchanging messages are successfully only if there is some kind of authentication between them, otherwise any communication between these two principals has no effect (i.e., without any information about the message origin, that message is to be ignored). Thus, proof of message origin and integrity have to be provided so as for principal to consider the content of the message as coming from the expected sender. Proposition 1 (communicability condition) Let α, β ∈ P be any two principals, then a communica- tion between them can take place if and only if T{α,β}, the restriction of T over the set of principals {α, β}, is symmetrical in the context of authentication. A third property, namely transitivity, is defined by introducing the jurisdiction subcontext; it is used to represent trustworthiness in recommending principals acting within a specific context. Given a context φ, the jurisdiction subcontext associated with φ is represented in mathemat- ical notation by the symbol j(φ). A principal having jurisdiction over a context φ is trusted for providing reliable trust information about trustees within context φ (i.e., whether a given trustee can be trusted or not); such information is referred to as recommendations. For example, (Alice, Bob, j(φ)) ∈ T means that Alice places trust in Bob for having jurisdiction over context φ and is willing to inherit Bob’s trust relationships for context φ. Transitivity is formally defined as follows: transitivity: A trust relation T is transitive in context φ if for every three principals α, β, γ such that (α, β, j(φ)) ∈ T and (β, γ, φ) ∈ T and the communicability condition holds for both {α, β} and {β, γ}, then (α, γ, φ) ∈ T × T. Table 1. Structure of Φ augmented with jurisdiction. I ::= i | h | k C ::= p(I) | c(I) | authentication Φ ::= C | j(C) In other words, given a trust relation T , transitive in context φ, and any three principals α, β, γ, if T (α, γ, φ) is not defined in T , it can be indirectly computed if both T (α, β, j(φ)) and T (β, γ, φ) are defined; thus, the trust between principals α and γ is defined as T (α, γ, φ) = T (α, β, j(φ)) T (β, γ, φ) in T{α,β,γ} × T{α,β,γ}. Having the transitive property satisfied within authentication enables Certification Authori- ties and credential chains, a principal, say Alice, trusting another one, say Bob, for having juris- diction within authentication, will inherit all the trust relationships that hold between Bob and any other principal, within the context of authentication. In this case, for Alice, Bob plays the role of Certification Authority. Hence, having a transitivity chain within T enables principals to inherit trust relationships along the chain and thus, at an abstract level, certificate chains to be implemented. In Tab. 1 the set Φ of the possible contexts is defined in terms of the definitions given above; here, I indicates the set of service interfaces and C the basic contexts. 
Given the basic notions of trust, we define a trust system to be an environment in which trust relationships can be established between that environment’s principals; a trust system is specified by the triple (P, Φ, T ) where P is the set of principals, Φ is the above defined set of contexts, and T is the trust relation. UBLCS-2006-09 22
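The following sketch gives one possible encoding of these notions: the trust relation T as a set of (truster, trustee, context) triples, the communicability condition as symmetry in the authentication context, and transitive trust derived through jurisdiction contexts. The string encoding of contexts (e.g., "auth", "j(p(i))") is our own shorthand and not part of the formal definitions above.

```python
# Illustrative encoding of the trust relation T (context strings are our own shorthand).
T = {
    ("Alice", "Bob", "auth"), ("Bob", "Alice", "auth"),      # symmetry in authentication
    ("Bob", "Cecilia", "auth"), ("Cecilia", "Bob", "auth"),
    ("Alice", "Bob", "j(p(i))"),                             # Alice trusts Bob's recommendations for p(i)
    ("Bob", "Cecilia", "p(i)"),                              # Bob trusts Cecilia as a provider of i
}

def can_communicate(T, a, b):
    """Communicability condition: the restriction of T to {a, b} is symmetric
    in the context of authentication."""
    return (a, b, "auth") in T and (b, a, "auth") in T

def derive_transitive(T):
    """Transitivity: from (a, b, j(phi)) and (b, c, phi), with the communicability
    condition holding for both pairs, derive (a, c, phi)."""
    derived = set()
    for (a, b, ctx) in T:
        if ctx.startswith("j(") and ctx.endswith(")"):
            phi = ctx[2:-1]
            for (b2, c, ctx2) in T:
                if b2 == b and ctx2 == phi \
                        and can_communicate(T, a, b) and can_communicate(T, b, c):
                    derived.add((a, c, phi))
    return derived

print(derive_transitive(T))   # {('Alice', 'Cecilia', 'p(i)')}
```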
  • 25. 2 The Reputation Model 1.1 Trust Degrees So far, we have seen a trust relationship T as a relation: given a couple of entities and a context, T indicates the existence of a trust relation without giving any other information about the strength of this trust binding. In order to express this “strength”, the trust relationship is changed into a trust function that returns a real value belonging to the closed set [0, 1] within which the value 1 indicates “full trust” and 0 indicates “absence of trust”; where, by “absence of trust” we are not meaning distrust (which is a rather different concept), but we mean that the data which are available to the principal enable him not to expect a dependable behaviour. We name this strength measure trust degree. While remaining consistent with the definition of trust given in Sec. ??, this representation greatly improves its expressiveness. T : P × P × Φ −→ [0, 1] (1) For example, the notation T (Alice, VISA, p(ATM)) = 0.99 indicates that Alice expects from VISA’s automatic teller machine a dependable behaviour (wich embodies properties such as availability, and exactly-once semantics) with a probability of 0.99, when used for withdraw- ing cash; in turn, with a probability of 0.01 Alice expects the ATM service not to dependably satisfy her request (for example she might expect it to be unavailable). Trust degrees are not only used by a principal to decide whether or not to interact with a given principal; they might also be used to determine the security mechanisms and parameters that have to be required for the interaction to take place. For instance, in electronic commerce applications trust models are used to determine the frequency at which a principal has to au- thenticate performing micro-transactions payments [Man00]. Within transitive trust, the truster should be prevented from trusting a given trustee more than both the trust he places on the recommender and the trust the recommender places on that trustee. Hence, if T (Alice, Bob, j(φ)) and T (Bob, Cecilia, φ), then Alice would trust Cecilia in the same context with degree T (Alice, Cecilia, φ) ≤ min{T (Alice, Bob, j(φ)), T (Bob, Cecilia, φ)} (2) where equality holds iff both T (Alice, Bob, j(φ)) and T (Bob, Cecilia, φ) assume either the value 0 or the value 1. The minority relation (2) is satisfied by defining recommended trust as the arith- metic multiplication between the trust degree that holds between the truster and the recom- mender and the one that holds between the recommender and the trustee, within the appropriate contexts (see below). T (Alice, Cecilia, φ) = T (Alice, Bob, j(φ)) · T (Bob, Cecilia, φ) (3) 2 The Reputation Model Before introducing the reputation model, we assume a trust system (P, Φ, T ) where j(φ) is de- fined for each context φ ∈ Φ. In Sec. 1, we defined trust as a function that returns the trust degree between a truster and a trustee within a given context. However, trust is not static in time; both time and events (e.g., an interaction that take place between the truster and the trustee) can change a given trust degree. Thus, we define the generic trust function as below, where τ indicates the set of numerical time values. T : P × P × Φ × τ −→ [0, 1] (4) In [BB03, BB04, JK98, Jøs99, Jøs01], the authors model direct trust by using a beta probability density function, which in statistics is used to represent the “a priori” estimation about an event behaviour. 
The basic assumption for this model to be applied is that trust evaluations are abstracted as stochastic events that follow a binomial probability distribution and may only represent either the failure or the success of the trustee's dependability. Since we want to improve the expressiveness of the dependability evaluation, in this section we propose a model UBLCS-2006-09 23
  • 26. 2 The Reputation Model
for direct trust which is based on the evaluation of trust degrees belonging to the continuous interval [0, 1]; this makes it possible to measure partial dependability and thus to compute trustee reputations with a finer degree of precision. Thus, we define a generic trust function as in (5), where the parameters respectively represent the truster, the trustee, the context, and the time at which trust has to be evaluated, and the return value is named trust value and indicates the strength of the trust relationship between the truster and the trustee; we assume the value 0 to represent “absence of trust” and the value 1 to represent “full trust”.
T : P × P × Φ × τ −→ [0, 1]   (5)
Consistently with our definition of trust, we assume all trust values to be initialized to 0, since there is no a priori data that justifies a different trust value to be assigned.
We assume obsolete information not to enable accurate estimations of current behaviours. Hence, we assume trust to decay as time passes; specifically, for each context we define a decay function that models the speed with which trust information becomes obsolete within that context.
δ_φ(t) : τ → [0, 1]
Several decay functions are needed because the validity of a trust degree depends on the nature of the context within which it applies; e.g., the more critical (i.e., risky) the context, the more rapidly the trust decreases with respect to time if no new trust information is available. To choose a proper decay function we have to consider that, if no fresh trust information is provided, trust decay should not depend on the trust-update rate; i.e., if a given reputation is measured as ρ at time t, then its value at a later time t'' has to be the same independently of how many times it was updated in between. Thus, given t' such that t ≤ t' ≤ t'', the proper decay function which satisfies this property is one that solves the following equation:
ρ · δ_φ(t'' − t) = ρ · δ_φ(t' − t) · δ_φ(t'' − t')   (6)
It is easily provable that (6) is satisfied by all the functions which are homomorphisms from (R+ ∪ {0}, +) to ([0, 1], ·), such as, for example, the exponential function in (7).
δ_φ(t) = η_φ^t   (7)
According to the nature of social trust, in this section we define three trust abstractions: reputation, direct trust, and indirect trust. On the one hand, direct trust encodes trust based on the truster's own experience. On the other hand, indirect trust encodes trust based on the recommendations the truster receives from the other principals.
Equation (8) models direct trust, which represents the perception of a principal's trustworthiness that is uniquely determined by the trust evaluations in which the principal computing the reputation took part in the role of truster. In order for direct trust to be computed, a truster has to evaluate each action exhibited by the trustee within the trust assessment context; in (8), tv_t ∈ [0, 1] models a trust value derived from an interaction which happened at time t. Specifically, if the set E of trust evaluations does not change, the direct trust at time t' is computed by applying the decay function to the last direct trust value D(α, β, φ, t)_E, computed at time t; otherwise, let tv_{t'} ∈ [0, 1] be the fresh trust evaluation obtained at time t'; the new direct trust value is then computed by normalizing the linear combination of tv_{t'} and the past direct trust value D(α, β, φ, t)_E, that is, the last direct trust value computed at time t ≤ t'.
D(α, β, φ, t')_{E'} =
    0                                                                      if E' = ∅
    D(α, β, φ, t)_E · δ_φ(t' − t)                                          if E' = E
    [D(α, β, φ, t)_E · δ_φ(t' − t) + ω · tv_{t'}] / [δ_φ(t' − t) + ω]      if E' = E ∪ {tv_{t'}}   (8)
In the equation above, ω is named the trust stability factor and represents the adjustment to be made to the trust value according to the previous interactions; the cognitive model for trust stability is described below in this section. UBLCS-2006-09 24
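A minimal sketch of the decay of (7) and the direct-trust update of (8) follows. The stability factor ω is taken here as a plain parameter, since its computation ((12)–(13)) is introduced later in this section; the function names and the chosen η are illustrative assumptions.

```python
import math

def decay(eta, dt):
    """Exponential decay of Eq. (7): delta_phi(t) = eta_phi ** t, with eta_phi in (0, 1]."""
    return eta ** dt

def update_direct_trust(prev, t_prev, t_now, eta, tv=None, omega=1.0):
    """Direct-trust update of Eq. (8): pure decay when no fresh evaluation is
    available (E' = E), otherwise the normalized combination of the decayed old
    value and the fresh evaluation tv, weighted by the stability factor omega."""
    d = decay(eta, t_now - t_prev)
    if tv is None:
        return prev * d
    return (prev * d + omega * tv) / (d + omega)

# The decay composes over time, as required by Eq. (6).
assert math.isclose(decay(0.9, 5), decay(0.9, 2) * decay(0.9, 3))

trust = 0.0                                                  # trust values start at 0
trust = update_direct_trust(trust, 0, 1, eta=0.9, tv=0.8)    # fresh evaluation at time 1
trust = update_direct_trust(trust, 1, 4, eta=0.9)            # only decay until time 4
print(round(trust, 3))
```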
  • 27. 2 The Reputation Model
Equation (9) models indirect trust, which we define to be the average reputation that a set Γ of known principals, namely recommenders, associate with the trustee (within a given context and at a specific time). In that equation, it is worth noting the use of the notation R(γ, β, φ, t), which stands for the reputation obtained from principal γ about principal β; for clarity, we call recommendation each reputation that a principal obtains from other principals. The weight assigned to each single recommendation is the current direct trust between α, the principal receiving the recommendations, and the recommender which provided that recommendation, within the context of “provisioning of recommendations within context φ”, which is represented by the notation j(φ). Whenever fresh trust information is not available, trust decay applies as in the case of direct trust.
I(α, β, φ, t')_{R'} =
    0                                                                                if R' = ∅
    I(α, β, φ, t)_R · δ_φ(t' − t)                                                    if R' = R
    [Σ_{γ∈Γ} D(α, γ, j(φ), t')_E · R(γ, β, φ, t')] / [Σ_{γ∈Γ} D(α, γ, j(φ), t')]     otherwise   (9)
Equation (10) defines reputation as the convex combination of direct and indirect trust; the trust balancing factor, represented by the notation ψ, indicates the subjective weight a specific principal assigns to direct trust with respect to indirect trust; the cognitive model for trust balance is described below in this section. Moreover, in this equation E' and R' indicate, respectively, the set of trust evaluations directly performed by the truster and the set of recommendations that he collected from a set Γ of recommenders. Similarly to what happens for both direct and indirect trust, if no new trust evaluation is available, it is possible to compute the current reputation from the previously computed value.
R(α, β, φ, t')_{E',R'} =
    R(α, β, φ, t)_{E,R} · δ_φ(t' − t)                                     if E' = E ∧ R' = R
    ψ · D(α, β, φ, t')_{E'} + (1 − ψ) · I(α, β, φ, t')_{R'}               otherwise   (10)
Let any two principals carry out an interaction; after that interaction, each of them can associate a trust value with the other principal, according to its behaviour during that interaction. However, a new trust value does not only contribute to the direct trust between a truster and a trustee; it is also used for computing the direct trust between the truster and the recommenders. Basically, the smaller the difference between the recommendation and the trustee's direct trust (updated with the latest trust value), the better the reputation that the truster will associate with that recommender. Thus, given a truster α, a recommender β, and a trustee γ, let tv be the trust value that α associates with the recommendation from β regarding γ's trustworthiness within a context φ; the new direct trust degree between the truster and the recommender is then computed according to (8) and (11).
tv = 1 − |R(β, γ, φ, t') − D(α, γ, φ, t')|   (11)
2.1 Modeling Trust Stability
According to [Wei61], to correctly represent the trustworthiness of a principal through subjective probabilities, it is important to be provided with a cognitive attribution process to interpret the outcomes of a truster's reliance on a given trustee and that trustee's performance (i.e., failure or success); in particular, the effect that a given trustee's failure or success has on the trust that the truster places in that trustee depends on the causal attribution that the truster associates with the event. Hence, any event (i.e., success or failure) can either be ascribed to factors internal to the subject, or to environmental, external causes; moreover, these events may either be occasional or depend on stable properties of the principal or the environment. However, since we are not provided with detailed information about the trustee, and our reputation model is intended to estimate the truster's subjective reliance on the trustee's behaviour, we decide not to distinguish between individual and environmental responsibilities but to UBLCS-2006-09 25
  • 28. 2 The Reputation Model
only distinguish between stable and occasional behaviours. In fact, the actual goal of the truster is to assess the extent of overall trustworthiness that can be associated with the trustee within a specific context.
Therefore, trust stability is modeled by (12); here, we want to identify whether an interaction was typical or not and to adapt direct trust accordingly. We assume the trust values deriving from the trust evaluations of a single principal, within the same context, to have a gaussian distribution with mean value \bar{tv} and standard deviation σ; thus, given the set E of currently available trust evaluations about a specific principal within a given context, we define its stable behaviours to be the ones whose trust degrees fall within the interval [\bar{tv}_E − σ_E, \bar{tv}_E + σ_E], where \bar{tv}_E is the mean value computed on the set of evaluations E and σ_E the respective standard deviation. In (12), the functions f_{occ,pos}(E, tv), f_{stab,pos}(E, tv), f_{stab,neg}(E, tv), and f_{occ,neg}(E, tv) are to be instantiated according to the specific behaviour that the principal wants to implement: f_{occ,pos}(E, tv) implements the behaviour corresponding to the occurrence of occasional evaluations which are better than the stable ones, f_{stab,pos}(E, tv) implements the behaviour corresponding to the occurrence of stable evaluations which are better than the expected value, f_{stab,neg}(E, tv) implements the behaviour corresponding to the occurrence of stable evaluations which are worse than the expected value, and f_{occ,neg}(E, tv) implements the behaviour corresponding to the occurrence of occasional evaluations which are worse than the stable ones.
σ_∅ = 0
σ_{E∪{tv}} = sqrt( Σ_{tv_i ∈ E∪{tv}} (tv_i − \bar{tv}_{E∪{tv}})² / (|E| + 1) )
ω =
    f_{occ,pos}(E, tv)     if tv > \bar{tv}_E + σ_E
    f_{stab,pos}(E, tv)    if \bar{tv}_E + σ_E ≥ tv > \bar{tv}_E
    f_{exp}(E, tv)         if tv = \bar{tv}_E
    f_{stab,neg}(E, tv)    if \bar{tv}_E > tv ≥ \bar{tv}_E − σ_E
    f_{occ,neg}(E, tv)     if tv < \bar{tv}_E − σ_E   (12)
It is worth noting that the proposed approach does not require each principal to maintain the evaluations about each single interaction in which it took part; in order for a truster to maintain updated statistical information about the interactions engaged with another principal, it is sufficient for that truster to maintain three pieces of information: the number of engaged interactions, the sum of the evaluations, and the sum of the squares of each single evaluation. This information enables one to easily derive the required statistics (i.e., mean and variance), and can be easily updated with evaluations about further interactions.
In order for our model to be employed within some dependability context, we believe a cautious and responsible behaviour should be modeled by the trust stability. For instance, if the fresh trust value denotes an unusually low-dependable behaviour, then the trust stability value has to enforce a significant decrease in the associated direct trust; in turn, if it denotes an unusually high-dependable behaviour, then the trust stability value has to enforce only a negligible increase in the associated direct trust. For instance, the functions below encode this evaluation approach.
f_{occ,pos}(E, tv) = 0
f_{stab,pos}(E, tv) = 1 − (tv − \bar{tv}_E + σ_E) / (2σ_E)
f_{exp}(E, tv) = 1/2
f_{stab,neg}(E, tv) = 1 − (tv − \bar{tv}_E + σ_E) / (2σ_E)
f_{occ,neg}(E, tv) = 1   (13)
UBLCS-2006-09 26
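The sketch below instantiates the stability factor of (12) with the cautious functions of (13), keeping only the three running quantities mentioned above (number of evaluations, their sum, and the sum of their squares). Class and function names are illustrative; the population standard deviation is used, matching the |E|+1 divisor of (12), and the zero-spread case is a simplifying assumption of ours.

```python
import math

class RunningStats:
    """Keeps only count, sum and sum of squares of past evaluations,
    which is enough to recover their mean and standard deviation."""
    def __init__(self):
        self.n, self.s, self.sq = 0, 0.0, 0.0

    def add(self, tv):
        self.n += 1
        self.s += tv
        self.sq += tv * tv

    @property
    def mean(self):
        return self.s / self.n if self.n else 0.0

    @property
    def stddev(self):
        if self.n == 0:
            return 0.0
        return math.sqrt(max(self.sq / self.n - self.mean ** 2, 0.0))

def stability_factor(stats, tv):
    """Stability factor of Eq. (12) with the functions of Eq. (13): unusually good
    evaluations get weight 0, unusually bad ones weight 1, and evaluations in the
    stable band a linear ramp around the mean (value 1/2 at the mean itself)."""
    m, s = stats.mean, stats.stddev
    if s == 0.0:                 # no spread yet: treat the evaluation as expected (our assumption)
        return 0.5
    if tv > m + s:               # occasional, better than usual
        return 0.0
    if tv < m - s:               # occasional, worse than usual
        return 1.0
    return 1.0 - (tv - m + s) / (2 * s)

stats = RunningStats()
for v in (0.7, 0.8, 0.75):
    stats.add(v)
print(stability_factor(stats, 0.2))   # an unusually bad outcome is weighted fully -> 1.0
```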
  • 29. 3 Augmenting Contexts with Attributes
2.2 Modeling Trust Balance
When deciding how much to take received recommendations into account with respect to its direct experience, an individual relies upon how confident he believes to be within the specific context. Hence, we believe a good model for trust balance to be the self-trust an individual places upon itself for providing recommendations within that context. Equation (14) describes the model for trust balance; essentially, it is a form of self-confidence and depends on a principal α, a context φ, and time t. Trust balance also depends on the size of the set Γ of recommenders; in fact, the more recommendations are available, the less the direct trust will be considered when computing the reputation. When there are no recommenders, only direct trust is taken into account for computing reputation.
ψ_{α,φ,t,Γ} = D(α, α, j(φ), t)_E^{log(|Γ|+1)}   (14)
Self-confidence is computed in a way that is similar to how direct trust within jurisdiction is computed; it is the reflexive direct trust updated according to the precision with which the currently available direct trust approximates the computed reputations. Thus, given an interaction between principals α and β which happened within context φ at time t', the self-confidence is updated according to the trust value tv_φ defined in (15).
tv_φ = 1 − |R(α, β, φ, t') − D(α, β, φ, t')|   (15)
Trust balancing for context φ is updated each time the principal carries out an interaction within that context. The less the fresh trust value differs from the expected interaction outcome (i.e., the trustee's reputation), the more the self-confidence will increase; conversely, the more the fresh trust value differs from the expected interaction outcome, the more the self-confidence will decrease. For instance, let us say that in our lifetime each of us statistically undergoes one major surgery; in this case, it is impossible to assess the competence of a doctor from personal experience. By maintaining self-confidence on each context, anyone who never had a major surgery will base his estimation entirely on the experience of other individuals. Then, as the number of direct evaluations in that context increases and, consequently, his personal experience adequately approximates the doctor's behaviour, his direct experience will become more relevant in determining that estimation.
3 Augmenting Contexts with Attributes
So far, we described how to evaluate a principal's reputation either in consuming or producing a given interface or in providing recommendations within a specified context. Hence, if we know that R(Alice, Nicola, p(smtp), t0) = 0.67, it is still unclear on which aspect the partial unreliability depends, e.g., whether Nicola's smtp server rarely allows illegitimate access to emails or whether it simply has a low availability. When a truster finds a trustee's reputation to be unsatisfactory, in order for the truster to interact with that trustee it is legitimate for him to be aware of the origin of that partial unreliability; attributes are introduced to serve this purpose.
We define an attribute to be a property that is relevant within a given context, such as availability, confidentiality or authentication. Attributes are introduced into our model by defining associated subcontexts, that are used to specialize the generic contexts. In Tab.
The grammar describing the structure of Φ augmented with attributes is formally given in Table 2; here, I indicates the set of interface names, A the set of attribute names, G the set of simple contexts, and C augments G with subcontexts.

Table 2. Structure of Φ augmented with attributes.

    I ::= i | h | k
    A ::= a | b | d
    G ::= p(I) | c(I)
    C ::= A.G | G
    Φ ::= C | j(C)

Such a design approach enables us to simultaneously maintain both overall and specialized reputations in an efficient manner; in fact, when a principal is assigned a new trust value in a context φ ∈ Φ, if φ corresponds to a specialized context, then the corresponding reputation can be updated and the associated generic context can be retrieved in order for it to be updated as well. Hence, the redefined definition of Φ allows us to manage specialized reputations while preserving the validity of the formulas presented in the previous section. The inverse is not defined for attributes. A sketch of this double update is given below.
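In the following sketch, the flat string encoding of contexts (e.g., "availability.p(smtp)") and the running mean used as a stand-in for the SIR update rule are illustrative simplifications; all names are hypothetical.

// Hypothetical sketch of the double update described above: a trust value assigned in a
// specialized context (attribute.genericContext) also updates the generic context, while a
// value assigned directly in a generic context updates that context only.
import java.util.HashMap;
import java.util.Map;

public final class ContextualReputations {

    /** Minimal per-context accumulator; count and sum suffice for a running mean. */
    private static final class Accumulator {
        long count;
        double sum;
        void record(double tv) { count++; sum += tv; }
        double mean() { return count == 0 ? 0.0 : sum / count; }
    }

    // Keyed by context string, e.g. "p(smtp)" or "availability.p(smtp)".
    private final Map<String, Accumulator> byContext = new HashMap<>();

    public void assignTrustValue(String context, double tv) {
        byContext.computeIfAbsent(context, c -> new Accumulator()).record(tv);
        int dot = context.indexOf('.');
        if (dot >= 0) {
            // Specialized context A.G: retrieve the associated generic context G and update it too.
            String generic = context.substring(dot + 1);
            byContext.computeIfAbsent(generic, c -> new Accumulator()).record(tv);
        }
        // No inverse: a value assigned to a generic context is not propagated to its specializations.
    }

    public double reputationOf(String context) {
        Accumulator a = byContext.get(context);
        return a == null ? 0.0 : a.mean();
    }
}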
4 Relations with the Socio-Cognitive model

The main difference between the SIR computational model and the socio-cognitive model lies in the concept of reputation as the aggregation of direct experience and of the indirect experience encoded within recommendations; in fact, the socio-cognitive model only specifies trust update as a consequence of new direct evaluations. In our opinion, recommendations play a central role in reputation management since, especially when only a small number of direct evaluations has been performed, they enable the truster to rely on a larger amount of possibly trustworthy evaluations. Similarly to the socio-cognitive model, through reputation the SIR model enables a principal to approximate the trustworthiness of each possible counterpart. What in the socio-cognitive model is represented by DoA and DoW is implementable in SIR by introducing two attributes, respectively competence and willingness, for all the possible contexts; the first one should be updated, each time an interaction is carried out, with the evaluation of that interaction's outcome, while the second one should always be updated according to whether the interaction was completed or not (a sketch of this update policy is given at the end of this section). However, what SIR enables a truster to approximate is not the actual trustee's trustworthiness in isolation; instead, it is that trustworthiness aggregated with the influence that the global environment enforces on the whole interaction. For example, let competence.p(i) and willingness.p(i) be the contexts respectively associated with the competence and the willingness in providing interface i; then both (16) and (17) hold. Here, E_{β,p(i),t,S}, E_{β,p(i),t,W}, and E_{p(i),t,Ω} indicate, respectively, the event that β succeeds in providing interface i at time t, the event that β is willing to provide interface i at time t, and the event that the environment enforces no failure on the production of interface i at time t (we assume the environment influence to be independent of both the objective competence and the objective willingness of the trustee).

\[
\lim_{|I| \to +\infty} R(\alpha, \beta, \text{competence}.p(i), t) = P\left(E_{\beta,p(i),t,S}\right) \cdot P\left(E_{p(i),t,\Omega}\right)
\tag{16}
\]
\[
\lim_{|I| \to +\infty} R(\alpha, \beta, \text{willingness}.p(i), t) = P\left(E_{\beta,p(i),t,W}\right) \cdot P\left(E_{p(i),t,\Omega}\right)
\tag{17}
\]

That is, the more evaluations or recommendations are collected, the better the reputation computed by SIR approximates the overall probability of success within the evaluated context. However, it is worth noting the decomposition below, which relates the probability of success with which interface i will be provided by principal β at time t to both the competence and the willingness of that trustee.

\[
P\left(E_{\beta,p(i),t,S}\right)
= P\left(E_{\beta,p(i),t,S} \mid E_{\beta,p(i),t,W}\right) \cdot P\left(E_{\beta,p(i),t,W}\right)
+ P\left(E_{\beta,p(i),t,S} \mid \overline{E_{\beta,p(i),t,W}}\right) \cdot P\left(\overline{E_{\beta,p(i),t,W}}\right)
\]
But, since there is no probability of success if the trustee does not engage in the task, the overall probability of success reduces to (18).

\[
P\left(E_{\beta,p(i),t,S}\right) = P\left(E_{\beta,p(i),t,S} \mid E_{\beta,p(i),t,W}\right) \cdot P\left(E_{\beta,p(i),t,W}\right)
\tag{18}
\]
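The sketch below illustrates the update policy described above for the competence and willingness subcontexts; it reuses the hypothetical ContextualReputations accumulator introduced in the previous section, and all names are again illustrative rather than part of the SIR specification.

// Hypothetical sketch of the competence/willingness update policy.
public final class SocioCognitiveAttributes {

    private final ContextualReputations reputations = new ContextualReputations();

    /**
     * Records one interaction with a trustee for interface i.
     *
     * @param completed whether the trustee actually engaged in and completed the interaction
     * @param outcome   evaluation of the interaction outcome in [0, 1], meaningful only if completed
     */
    public void recordInteraction(String interfaceName, boolean completed, double outcome) {
        String producing = "p(" + interfaceName + ")";
        // Willingness is always updated: 1 if the interaction was completed, 0 otherwise.
        reputations.assignTrustValue("willingness." + producing, completed ? 1.0 : 0.0);
        // Competence is updated only when the interaction was actually carried out.
        if (completed) {
            reputations.assignTrustValue("competence." + producing, outcome);
        }
    }
}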
Chapter 4

Case Study: Trust-Aware Naming Service (TAw)

1 Motivation

The naming service is a fundamental part of any component-oriented architecture. It implements location transparency between components, providing each of them with the references needed to invoke operations implemented by remote components. Specifically, the naming service associates a name with each component instance, and a client component that requires the service of a remote component instance queries the naming service with the name associated with that instance in order to obtain its remote reference; in other words, a name server implements a one-to-one map between names and instances. Similarly, service-oriented architectures implement location transparency by means of directory services. Although they implement a one-to-many relation between names and references, directory services likewise maintain a map between names and the associated service instances.

In a dynamic application context, such as a generic service-oriented application scenario, several factors may affect service provisioning: for instance, component functionalities may be subject to failures and recoveries, or service providers may dynamically change their provisioning strategies to better achieve their own goals. Within such contexts, each client component (or service) may greatly benefit from information about which service provider could better meet its needs. Among the currently available technologies, neither naming nor directory services provide the client component, or service, with information about the trustworthiness of the possible counterparts. Such information would be useful for evaluating the dependability (i.e., availability, reliability) of the available services, as well as for activating security mechanisms when a number of entities engage in an interaction (i.e., based on the trust degree each party places in the others, the security measures to be employed can be agreed upon so as to avoid illegitimate behaviour and optimize system performance).

Hence, in this chapter we present our Trust-Aware Naming Service (TAw), a hybrid peer-to-peer architecture that has specifically been designed to enable existing naming and directory services with the abstraction of trust. The TAw architecture implements a middle tier between the client application and the naming service. Within this tier, information about principal behaviours is collected and aggregated to form the abstraction of reputation; this meta-information then complements the results returned by the naming service, enabling the principals to compare the possible trustees with each other in order to find the most trustworthy one. Because of its application context, we require TAw to be both scalable and flexible: scalability entails the ability of the system to scale as the number of entities within the system increases, and flexibility entails the compatibility of TAw with different naming service architectures, as required by a multi-institutional environment. It is worth noting that TAw does not play any active role in managing computer security; it only provides each entity in the system with information about the behaviour of other entities, allowing them to possibly avoid engaging in interactions with
misbehaving entities.

2 The TAw Architecture

TAw is a hybrid peer-to-peer architecture in which each entity (i.e., service producer or consumer) is associated with a TAw peer. TAw peers exchange trust information about each other; locally, each peer maintains the available trust information, according to a specific trust model (the current prototype implements the SIR reputation model), in order to approximate the trustworthiness of the entities with which it is engaging in interactions. The Virtual Society Service (VSS) is responsible for enabling the peers to aggregate themselves into a virtual society; moreover, it enforces an access control policy that is specific to the virtual society. Its purpose is both to allow only legitimate entities to participate in the exchange of trust information and to prevent any entity from owning multiple instances of TAw peers simultaneously. Each TAw peer both interfaces the entity with the naming service in a trust-aware manner and, simultaneously, implements the social behaviour on behalf of that entity, i.e., it locally collects and maintains trust information, computes reputations, and propagates them on its behalf. A piece of trust information held by a peer either originates from the owner entity as the result of concluded interactions, or has been obtained via the Trust Propagation Protocol.

Owing to its completely decentralized design approach, TAw inherently meets scalability, efficiency, and fault-tolerance requirements; specifically, it implements both a hybrid peer-to-peer interaction paradigm and an epidemic-like information dissemination technique (see below). Moreover, in order for our architecture to meet adaptiveness, robustness, and scalability, the SIR reputation model has been implemented within TAw. Within the framework described in the section A Generic Trust Management Framework, TAw implements the trust management system. Because of the possible diversity of application requirements, it is up to the application developer to provide the application with trust evaluation and dynamic trust criteria.

2.1 TAw Peer

The TAw peer is the abstraction of a principal within a virtual society. Each TAw peer (i) implements the social behaviour on behalf of the associated entity, i.e., it locally collects and maintains trust information, computes reputations, and propagates them on its behalf, and (ii) employs such information to enable the associated entity to access the naming service in a trust-aware manner. A piece of trust information held by a peer either originates from the owner entity as the result of concluded interactions, or has been obtained via the Trust Propagation Protocol (TPP), which we describe below. Each peer embodies a data structure, namely the trust repository, that is used to maintain trust information; basically, it is a collection of tuples (α, β, φ, t, p), where α ∈ P is the truster, β ∈ P is the trustee, φ ∈ Φ is the context, t is the time to which the trust degree refers, and p ∈ [0, 1] is the trust degree associated with R(α, β, φ, t); a sketch of such an entry is given below. Each TAw peer autonomously disseminates trust information towards a subset of its neighborhood through the trust propagation protocol.
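For concreteness, a trust repository entry can be represented as follows; the sketch uses hypothetical names and is not taken from the TAw prototype.

// Hypothetical sketch of a trust repository entry (alpha, beta, phi, t, p).
import java.time.Instant;

/** One tuple of the trust repository: truster, trustee, context, time, trust degree. */
public record TrustTuple(String truster,      // alpha, a principal in P
                         String trustee,      // beta, a principal in P
                         String context,      // phi, e.g. "availability.p(smtp)"
                         Instant time,        // t, the time the trust degree refers to
                         double trustDegree)  // p in [0, 1], the value of R(alpha, beta, phi, t)
{
    public TrustTuple {
        if (trustDegree < 0.0 || trustDegree > 1.0) {
            throw new IllegalArgumentException("trust degree must lie in [0, 1]");
        }
    }
}

A trust repository is then simply a collection of such tuples, e.g., indexed by trustee and context, which the TPP exchanges among peers.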
When speaking of interactions between application components deployed in multi-institutional environments, interoperability is a key issue to address. For this reason we require TAw to be suitable for accommodating any possible naming service in a virtual enterprise scenario. To meet this requirement we divided the TAw peer into two parts, namely the TAw Core Peer and the TAw Peer. The former implements the trust model and metrics that are common to all the TAw Peers, providing them with a common way to compute and understand trust information. The latter is a wrapper which employs the TAw Core Peer to enable a specific naming service with trust requirements.

TAw Core Peer: The TawCorePeer class is responsible for providing the TAw peer with facilities to implement all the trust-related operations in a manner that is independent from the specific naming technology. The TawCorePeer implements the TAwCoreInterface (see the