Preliminary Contributions Towards
Auto-Resilience
Vincenzo De Florio
PATS/University of Antwerp and PATS/iMinds research institute
Middelheimlaan 1, 2020 Antwerp, Belgium vincenzo.deflorio@ua.ac.be

Abstract. The variability in the conditions of deployment environments
introduces new challenges for the resilience of our computer systems.
As a response to said challenges, novel approaches must be devised so
that identity robustness be guaranteed autonomously and with minimal
overhead. This paper provides the elements of one such approach. First,
building on top of previous results, we formulate a metric framework to
compare specific aspects of the resilience of systems and environments.
Such a framework is then put to use by sketching the elements of a handshake mechanism between systems declaring their resilience figures and
environments stating their minimal resilience requirements. Despite its
simple formulation it is shown how said mechanism enables scenarios in
which resilience can be autonomously enhanced, e.g., through forms of
social collaboration. This paves the way to future “auto-resilient” systems, namely systems able to reason and revise their own architectures
and organisations so as to optimally guarantee identity persistence.

1  Introduction

Self-adaptive systems are able to mutate their structure and function in order
to match “changing circumstances” [1]. When relevant changes in their deployment environment are perceived—due, for instance, to application mobility or
ambient adaptations—self-adaptive systems typically perform some form of reasoning and introspection so as to conceive a new structure best matching the
new circumstances at hand. This new structure may indeed allow the adapting
system to tolerate or even profit from the new conditions; at the same time, it
is possible that the mutation affected the identity of that system, that is, the
functional and non-functional aspects and properties characterising the expected
behaviour of that system. A relevant problem then becomes robust feature persistence, namely a system’s capability to retain certain characteristics of interest throughout changes and adaptations affecting, e.g., its constituent modules,
topology, and the environment.
The term commonly used to refer to robust feature persistence is resilience—a concept discussed as early as in Aristotle’s Physics and Psychology [2]. Aristotle calls resilience entelechy, which he defines as the ability to pursue completion (that is, one’s optimal behaviour) by continuously re-adjusting oneself. Sachs’s translation of entelechy is particularly intriguing and pertinent here: “being-at-work-staying-the-same” [3]. So complex and central is this idea within Aristotle’s corpus that Sachs refers to it in the cited reference as a “three-ring circus of a word”. In fact resilience still escapes a clear and widely agreed understanding: different domain-specific definitions exist, each capturing but a few aspects of the whole [4]. In previous contributions [5–7] we
conjectured that some insight on this complex concept may be gained by realising its nature as a multi-attribute property, “defined and measured by a set of
different indicators” [8]. As a matter of fact, breaking down a complex property
into a set of constituent attributes proved to be beneficial with another most
elusive property—dependability, which was characterised into six constituent
properties by Laprie [9, 10]. Encouraged by this lesson, in [5–7] we set out to apply
the same method to try and capture some aspects of the resilience of adaptive
systems.
Building on top of the above-mentioned preliminary results, this paper’s first
contribution is the definition of a number of system classes and partial orders to
enable a qualitative evaluation of system-environment fits—in other words, how
a system’s resilience features match with the resilience requirements called for
by that system’s deployment environments. This is done in Sect. 2.
A second contribution is presented in Sect. 3 through the high-level description of a handshake mechanism between systems declaring their resilience figures
and environments stating their minimal resilience requirements. Said mechanism
is exemplified through an ambient intelligence case study. In particular it is
shown how putting the resilience characteristics of systems and environments in
the foreground enables scenarios in which resilience can be enhanced through
simple forms of social collaboration.
Finally, in Sect. 4, we enunciate a conjecture: resilience-oriented handshake
mechanisms such as the one presented in this paper pave the way to future
auto-resilient systems—entities, that is, that are able to reason about their own
architectures and organisations and to optimally revise them, autonomously, in
order to match the variability of conditions in their deployment environments.

2  Perception, Apperception, and Entelechism

In previous work we identified three main constituent properties for system and
organisational resilience [5–7]. Here we recall and extend said properties and
discuss the major threats associated with their failure. We also introduce system
classes and partial orders to facilitate the assessment of how a system’s resilience
architecture matches its mission and deployment environment.
2.1  Perception

What we cannot perceive, we cannot react to—and hence cannot adapt to. As a
consequence a necessary constituent attribute of resilience is given by perception,
namely a system’s ability to become timely aware of some portion of the context.
In what follows we shall represent perception through the collection of context
figures—originating within and without the system boundaries—whose changes
we can be alerted of within a reasonable amount of time. From this definition
we observe how perception may be interpreted as a measure of how “open-world”
a system is—be it biological, societal, or computer-based. Perception is carried
out through several mechanisms. We distinguish three sub-functions to perception, which we call sensors, qualia, and memory. Sensors represent a system’s
primary interface with the physical world. The sensors’ main function is to reflect
a given subset of the world’s “raw facts” into internal representations that are
then stored in some form within the system’s processing and control units—its
“brains”. Qualia [6] is the name used in the literature to refer to such representations.
Qualia are then persisted—to some extent—in the system memory.
Sensors, qualia, and memory are very important towards the emergence of
resilience: the quality of reactive control strictly depends on the quality of service of the sensory system as well as that of the system components responsible for the reliable production, storage, persistence, and retrieval of trustworthy
qualia [6]. Important aspects of such quality of service include what we call the
qualia manifestation latency (namely the time between the physical appearance of a raw fact and the production of the corresponding quale), the reflective
throughput (that is, the largest amount of raw facts that may be reliably encoded as qualia per time unit), and the qualia access time (how quickly the
control layers may access the qualia). An example of a software system using
application-level qualia to operate control is described in [11, 12].
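By way of illustration, the following minimal Python sketch shows how these three quality-of-service figures might be tracked; all names and numbers are hypothetical and not part of the cited works.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class QualiaEvent:
    """One raw fact and its internal reflection (all timestamps in seconds)."""
    raw_fact_time: float     # instant the physical raw fact appeared
    production_time: float   # instant the corresponding quale was produced
    access_time: float       # instant the control layer first read the quale

@dataclass
class PerceptionQoS:
    events: List[QualiaEvent] = field(default_factory=list)
    window: float = 1.0      # length of the observation window

    def manifestation_latency(self) -> float:
        # worst-case time from raw fact to quale production
        return max(e.production_time - e.raw_fact_time for e in self.events)

    def reflective_throughput(self) -> float:
        # raw facts reliably encoded as qualia per time unit
        return len(self.events) / self.window

    def qualia_access_time(self) -> float:
        # worst-case time for the control layers to access a quale
        return max(e.access_time - e.production_time for e in self.events)

qos = PerceptionQoS([QualiaEvent(0.00, 0.02, 0.03), QualiaEvent(0.10, 0.11, 0.15)])
print(qos.manifestation_latency(), qos.reflective_throughput(), qos.qualia_access_time())
```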
As mentioned already, be it computer-based or organic, any system is characterised—and limited—in its resilience by the characteristics of its perception
sub-system. In particular the amount and quality of its sensors and the quality
of its qualia production, storage, and persistence services define what the system
is going to timely and reliably perceive; and consequently what it may effectively
react upon.
This concept matches well with what Leibniz referred to as a system’s “clear
representation”, as opposed to an “obscure representation” resulting from, e.g.,
sensor shortage or insufficient quality of service in the qualia layers. We refer
to this region of clear representation as to a system’s perception spectrum. A
hypothetical system of all clear representation and no obscure representation is
called by Leibniz a monad. At the other end of the spectrum we have closed-world systems—systems, that is, that operate in their “virtual world” completely unaware of any physical-world “raw fact”. The term we use to refer to such context-agnostic systems is ataraxies (from “ataraxy”, namely the attitude of taking actions without considering any external event or condition; from a-, not, and tarassein, to disturb). Ataraxies may operate as reliably and efficiently as monads, but they are not designed to withstand change—they are what Americans refer to as “sitting ducks”. As long as their system assumptions hold, they constitute our unquestioning avatars diligently performing their appointed tasks; yet they fail miserably when facing the slightest perturbation in their design hypotheses¹ [15]. Likewise, monads, though
characterised by perfect perception, may be unable to make use of this quality to
achieve awareness and ultimately guarantee their resilience or other design goals
of interest. In what follows we shall refer to a system’s quality of perception as
to its “power of representation”—a term introduced by Leibniz [16].
In [6] we presented a simple algebraic model for perception by considering perception spectra as subsets of one and the same “perfect” perception spectrum (corresponding to the “all-seeing eye” of the fabled monad, which “could see reflected
in it all the rest of creation” [16]). Figure 1(a) depicts this by considering the
perception spectra of two systems, a and b, respectively represented as set A
and set B. Little can be said in this case about the power of representation of a
with respect to that of b: here in fact the spectra are not comparable with one
another, because it is not true that
(A ⊂ B) ∨ (B ⊂ A).
On the other hand, when for instance A ⊆ B then we shall say that b has “greater
perception” (that is, a greater power of representation) than a:
a ≺P b if and only if A ⊆ B.    (1)

This is exemplified in Fig. 1(b), in which A ⊆ B ⊆ M, the latter being the whole
context (that is, the perception spectrum of monad m). This means that a, b,
and m are endowed with a larger and larger set of perception capabilities—a
greater and greater power of representation. The expression a ≺P b ≺P m states such property.
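A minimal sketch of the partial order of Eq. (1), under the simplifying assumption—made above—that perception spectra can be treated as plain sets of context-figure labels (the labels themselves are invented for illustration):

```python
def less_perception(A: set, B: set) -> bool:
    """a ≺P b if and only if A ⊆ B (Eq. 1)."""
    return A <= B  # set inclusion

M = {"temperature", "position", "CO", "CH4", "CO2"}  # the monad's spectrum
B = {"temperature", "position"}
A = {"temperature"}

assert less_perception(A, B) and less_perception(B, M)  # a ≺P b ≺P m

# Spectra as in Fig. 1(a): neither inclusion holds, so a and b are incomparable.
C = {"position", "humidity"}
print(less_perception(A, C), less_perception(C, A))  # False False
```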
We deem it important to highlight how perception spectra such as set A and
B should actually be represented as functions of time, of the mission characteristics, and of the current context. In other words, perception should not be taken as an
absolute and immutable feature but rather as the result of several dynamic
processes, e.g., the current state of the sensory subsystem, the current quality
of their services, as well as how the resulting times, throughputs, failures, and
latencies match with the current mission requirements. For the sake of simplicity
we shall nevertheless refer to perception spectra simply as sets.
Perception spectra / powers of representation may also be used to evaluate
the environmental fit of a given system with respect to a given deployment
environment—that is, to gain insight in the match between that system and its
intended execution environment. As an example, Fig. 1(a) may be interpreted
also as the perception spectrum of system a and the power of representation
called for by deployment environment b. The fact that B \ A is non-empty tells us that a will not be sufficiently aware of the context changes occurring in b. Likewise A \ B ≠ ∅ tells us that a is designed so as to be aware of figures that will not be subjected to change while a is in b. The corresponding extra design complexity is (in this case) a waste of resources, in that it does not contribute to any improvement in resilience. The case study introduced in Sect. 3 makes use of perception spectra to evaluate a system-environment fit.

¹ As discussed in [13], another problem with closed-world systems is that they are in a sense systems “frozen in time”: verifications for any such system implicitly refer to scenarios that may differ from the current one. We use the term frozen ducks to refer to ataraxies with stale certifications. A typical case of frozen ducks is efficaciously reported by engineer Bill Strauss: “A plane is designed to the right specs, but nobody goes back and checks if it is still robust” [14].

Fig. 1. Exemplification of perception spectra and regions of clear representation. (a) Regions of clear representation of systems a and b with respect to that of the hypothetical perfect system m; the intersection region represents the portion of the spectrum in common between a and b. (b) The region of clear representation A is fully included in B, in turn fully included in M; in this case the power of representation of system a is inferior to that of b, which in turn is less than that of m.
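The same set model suggests how a system-environment fit might be computed in practice; the following sketch (hypothetical figures, anticipating the case study of Sect. 3) reports both the perception failure B \ A and the wasted perception A \ B:

```python
def environment_fit(system_spectrum: set, env_spectrum: set):
    """Return the figures the system misses (perception failure) and the
    figures it tracks although the environment never changes them (waste)."""
    missed = env_spectrum - system_spectrum   # B \ A: undetected context changes
    wasted = system_spectrum - env_spectrum   # A \ B: superfluous design complexity
    return missed, wasted

miner = {"light", "sound"}
mine = {"light", "sound", "CO", "CH4", "CO2"}
missed, wasted = environment_fit(miner, mine)
print("undetected figures:", missed)  # {'CO', 'CH4', 'CO2'}
print("wasted figures:", wasted)      # set()
```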
As a final remark, perception spectra may be used to compare environments
with one another. This may be useful especially in ambient intelligence scenarios
in which some control may be exercised on the properties of the deployment
environment(s).
Estimating shortcoming or excess in a system’s perception capabilities provides useful information to the “upper functions” responsible for driving the
evolution of that system. Such functions may then make use of said information
to perform design trade-offs among the resilience layers. As an example, the system may reduce its perception spectrum and use the resulting complexity budget
to widen its apperception capabilities—that is, the subject of the next section.
2.2  Apperception

As the perception spectrum defines the basic facts that are going to trigger
awareness and ultimately reaction and control, likewise apperception defines how
the reflected qualia are accrued, put in relation with past perception, and used to
create dynamic models of the “self” and of the “world” [17]. In turn this ability
enables higher level functions of system evolution—in particular, the planning of
reactions (e.g., parametric adaptations or system reconfigurations). Also in the
case of apperception we can introduce a ranking of sorts stating different powers of apperception. Several such rankings and classifications were introduced in the past; the first and foremost example may be found in Aristotle’s De Anima².
Leibniz also compiled a hierarchy of “substances”—as he referred to systems
and beings [16]. More recently Lycan suggested [18] that there might be at least
eight classes of apperception. An important contribution in the matter is due
to Rosenblueth, Wiener, and Bigelow, who proposed in [19] a classification of
systems according to their behaviour and purpose. In particular in their cited
work they composed a hierarchy consisting of the following behavioural classes:
1. Systems characterised by passive behaviour: no source of “output energy”
may be identified in any activity of the system.
2. Systems with active, but non-purposeful behaviour—systems, that is, that
do not have a “specific final condition toward which they strive” [19].
3. Systems with purposeful, but non-teleological (i.e., feedback-free) behaviour:
systems, that is, in which “there are no signals from the goal which modify
the activity of the object” (viz., the system) “in the course of the behaviour.”
4. Systems with teleological, but non-extrapolative behaviour: systems that are
purposeful but unable to construct models and predictions of a future state
to base their reactions upon.
5. First-order predictive systems, able to extrapolate along a single perception
dimension—i.e., a single quale.
6. Higher-order predictive systems, or in other words systems that are able to
base their reactions on the correlation of two or more qualia dimensions,
possibly of different nature—temporal and spatial coordinates for instance.
The behaviours of systems in classes 4–6 exhibit increasing powers of apperception.
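Since the hierarchy is totally ordered by increasing apperception, it can be rendered directly as an integer-valued enumeration; the following sketch merely encodes the six classes of [19] (the class names are our paraphrases):

```python
from enum import IntEnum

class Behaviour(IntEnum):
    """Rosenblueth-Wiener-Bigelow behavioural hierarchy [19]; larger
    values correspond to greater powers of apperception."""
    PASSIVE = 1
    ACTIVE_NON_PURPOSEFUL = 2
    PURPOSEFUL_NON_TELEOLOGICAL = 3
    TELEOLOGICAL_NON_EXTRAPOLATIVE = 4
    FIRST_ORDER_PREDICTIVE = 5
    HIGHER_ORDER_PREDICTIVE = 6

print(Behaviour.FIRST_ORDER_PREDICTIVE < Behaviour.HIGHER_ORDER_PREDICTIVE)  # True
```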
This seminal work was then continued by Boulding in his classic paper on General Systems Theory [20]. In said paper the author introduced
nine classes structured after a system’s perception and apperception capabilities.
More specifically, Boulding’s classes refer to the following system types:
1. Ataraxies, subdivided into so-called Frameworks and Clockworks.
2. Simple control mechanisms, e.g., thermostats, that are able to track a single
context figure.
3. Self-maintaining structures, e.g., biological cells, which are able to track multiple context features. Both thermostats and cells correspond to the systems
with purposeful, though non-teleological, behaviour of [19].
4. Simple stationary systems comprising several specialised sub-systems, like
plants, characterised by very simple forms of predictive behaviour and apperception.
5. Complex mobile systems with extensive power of representation and simple forms of apperception (especially self-awareness). Boulding refers to this
class as “animals”. A classic example of this is a cat moving towards its prey’s extrapolated future position [19]. These systems may be characterised by “precooked apperception”, i.e., innate behaviour commonly known
as instinct. This corresponds to systems initialised with domain-specific predefined and immutable apperception capabilities and adaptation plans.
6. Complex mobile systems endowed with extensive apperception capability,
e.g., self-awareness, self-consciousness, and high order extrapolative capability. “Human beings” is the term used by Boulding for this class.
7. Collective adaptive systems, e.g. digital ecosystems, cyber-physical societies,
multi-agent systems, or social organisations [21]. Boulding refers to this class
as “a set of roles tied together with channels of communication”.
8. Totally open-world systems, namely the equivalent of Leibniz’s monads.
Transcendental systems is the name that Boulding gives to this class.

² As cleverly expressed in [2], Aristotle finds that “living things all take their place in a cosmic hierarchy according to their abilities in the fields of nutrition, perception, thought and purposive action.”
Again classes 4–6 represent (non-transcendental, non-collective) systems with
increasing powers of apperception. It is then possible to define a projection map π
returning for any such system s the class that system belongs to (or, alternatively,
the behaviour class characterising s) represented as an integer in {1, …, 6}.
Function π then defines a second partial order among systems—for any two
systems p and q with apperception capability we shall say that p has less power
of apperception than q when the following condition holds:
p ≺A q if and only if π(p) < π(q).    (2)

As we have done with perception, also in this case we remark how the above
partial order may apply to environments as well as to systems. As such the above
partial order may be used to detect mismatches between a system’s apperception
characteristics and those expected by a given environment. One such mismatch
is detected in the scenario discussed in Sect. 3.
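A sketch of the order of Eq. (2); the projection map π is represented here as a lookup table whose entries (system names and class assignments) are purely illustrative:

```python
# pi maps a system to its behavioural/Boulding class, an integer in {1, ..., 6}.
pi = {
    "thermostat": 2,  # simple control mechanism
    "plant": 4,       # simple stationary system, rudimentary prediction
    "cat": 5,         # complex mobile system, "precooked" apperception
    "human": 6,       # extensive apperception capability
}

def less_apperception(p: str, q: str) -> bool:
    """p ≺A q if and only if pi(p) < pi(q) (Eq. 2)."""
    return pi[p] < pi[q]

assert less_apperception("thermostat", "cat")
assert less_apperception("cat", "human")
```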

2.3  Entelechism

Once trustworthy models of the endogenous conditions and exogenous scenarios
are built through perception and apperception, resilient systems typically make
use of the accrued knowledge to plan some form of reactive control. The aim
of this reactive control is to guarantee the persistence of a system’s functional
and non-functional “identity”—namely what that system is supposed to do and
under which conditions and terms. As mentioned in Sect. 1, already Aristotle
identified this quality, which he called entelechy and solely attributed to human
beings. Entelechy is in fact the driving force—the movement, or “energy”—
that makes active-behaviour systems strive towards resilience. By analogy, in
what follows we refer to a system’s entelechy as to the quality of the mechanisms responsible for planning and controlling the robust emergence of that system’s peculiar characteristics while changes and system adaptations take place.
Such characteristics may include, e.g., timeliness, determinism, security, safety,
or functional behaviours as prescribed in the system specifications.
In [7] we called evolution engine of system s the portion of s responsible for
controlling its adaptation. In what follows we shall refer to the evolution engine as EE(s)—or simply EE when s can be omitted without ambiguity.
We now propose a tentative classification of systems according to their entelechism—namely, according to the properties and characteristics of their EE.
Also in this case we found it convenient to isolate a number of ancillary constituent components in order to tackle separately different aspects of this “three-ring circus” [3] of a concept.
Meta-apperception When considered as a separate entity, system EE(s) may
be subjected to a classification such as Boulding’s or Rosenblueth’s, intended
to highlight the characteristics of the resilience logics of system s. Said characteristics may differ considerably from those of s. As an example, the adaptively
redundant data structures introduced in [22] may be regarded as a whole as a
first-order predictive behaviour mechanism [5]. On the other hand that system’s
EE(s) is remarkably simpler and only capable of purposeful active behaviours. In
fact, a system’s EE may or may not be endowed with apperception capabilities,
and it may or may not be a resilient system altogether. This feature represents
a first coordinate to assess the entelechism of evolving systems. Making use of
the partial order defined in Sect. 2.2 we shall say that, for any two systems p
and q, q is endowed with greater meta-apperception than p (written as p ≺µA q) if and only if the following condition holds:

p ≺µA q if and only if EE(p) ≺A EE(q), that is, π(EE(p)) < π(EE(q)).    (3)
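A sketch of Eq. (3) under the same lookup-table representation of π; the example mirrors the adaptively redundant data structures mentioned above, whose overall behaviour is first-order predictive (class 5) while their EE is merely purposeful (class 3). All entries are illustrative.

```python
# pi for both systems and their evolution engines, plus the EE(s) mapping.
pi = {
    "redundant_data_structure": 5,  # first-order predictive as a whole [5]
    "rds_EE": 3,                    # purposeful, non-teleological EE
    "planner": 5,
    "planner_EE": 5,
}
EE = {"redundant_data_structure": "rds_EE", "planner": "planner_EE"}

def less_meta_apperception(p: str, q: str) -> bool:
    """p ≺µA q if and only if pi(EE(p)) < pi(EE(q)) (Eq. 3)."""
    return pi[EE[p]] < pi[EE[q]]

print(less_meta_apperception("redundant_data_structure", "planner"))  # True
```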

Multiplicity and organisation of the planning entities In what follows
we propose to identify classes of resilient systems also by taking into account the
individual or social organisation of the processes that constitute their evolution
engines. Three are the aspects that—we deem—play an important role in this
context:
– The presence of a single or multiple concurrent evolution engines.
– The individual vs. social nature of the interactions between neighbouring
systems. This may range from “weak” forms of interactions [23]—e.g., as in
the individual-context middleware of [24]—up to high level forms of structured social organisation (multi-level coupling of the individual to the environment). The latter case corresponds to the social-context middleware
systems of [24].
– (When multiple concurrent EE’s contribute to the emergence of the global
system behaviour:) The organisation of control amongst the EE’s.
Table 1 provides a classification of systems according to the just enunciated
criteria. The first class is given by systems with a single EE and only capable
of individual-context planning. This means that decisions are taken in isolation
and without considering the decisions taken by neighbouring systems [24]. GPS navigators planning their route only by means of digital maps of the territory are examples of said systems.

Table 1. A tentative classification of evolving systems according to the number and the complexity of their EE’s: 1) Single-logic individual-context systems; 2) Single-logic social-context systems; 3) Collective-logic social-context hierarchies; 4) Collective-logic social-context heterarchies; 5) Bionic, holarchic, or fractal organisations.

The second class comprises again systems with a single
EE but this time planning is executed while taking into account the behaviour
of neighbouring systems [24]. A collision avoidance system in a smart car belongs to this class. Classes 3 to 5 all consist of systems capable of collective
planning. Class 3 includes systems where planning is centralised or hierarchical:
one or multiple decision layers exist and on each layer multiple planners submit or publish their plans to a next-layer planner. Air traffic control systems
and the ACCADA middleware [25] provide us with two examples of this type of
systems. Class 4 refers to decentralised societies with peer-to-peer planning and
management. The term used to refer to such systems is heterarchy [26]. Heterarchies are flat (i.e., layer-less) organisations characterised by multiple concurrent
systems-of-values and -goals. They introduce redundant control logics, whereby a system’s expected service may be delivered across a diversity of routes and providers. Such diversity provides a “mutating factor” of sorts, useful to avoid
local minima—what Stark refers to as “lock-ins” [26]. The absence of layers
removes the typical flaws of hierarchical organisations (propagation and control
delays and failures). The distributed decision making introduces new criticalities
though, e.g., deterministic and timely behaviours are more difficult to guarantee.
“Different branches of government that have checks and balances through separation and overlap of power” [27] constitute an example of heterarchy. The fifth
and last class includes systems characterised by distributed hierarchical organisation: bionic organisations, holarchies, and fractal organisations. Said systems
are a hierarchical composition of autonomous planners—called respectively modelons, holons, and fractals—characterised by spontaneous behaviour and local
interaction. Said planners autonomously establish cooperative relationships with
one another, which ultimately produce the emerging functional and adaptive behaviours of the system. “Simultaneously a part and a whole, a container and a
contained, a controller and a controlled” [28], these organisations result in systems able to avoid the flaws of both hierarchical and heterarchical systems. The
emergence of stability, flexibility, and efficient use of the available resources has been observed in systems belonging to this class [29–31].
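Since no order is implied among these classes, a plain enumeration suffices to encode Table 1; the annotations recall the examples given above:

```python
from enum import Enum

class EEOrganisation(Enum):
    """Table 1: number and complexity of a system's evolution engines.
    Unlike perception and apperception, these labels are not ordered."""
    SINGLE_LOGIC_INDIVIDUAL_CONTEXT = 1  # e.g., GPS routing on static maps
    SINGLE_LOGIC_SOCIAL_CONTEXT = 2      # e.g., smart-car collision avoidance
    COLLECTIVE_LOGIC_HIERARCHY = 3       # e.g., air traffic control, ACCADA [25]
    COLLECTIVE_LOGIC_HETERARCHY = 4      # peer-to-peer, layer-less planning [26]
    BIONIC_HOLARCHIC_FRACTAL = 5         # distributed hierarchical organisation

print(EEOrganisation.COLLECTIVE_LOGIC_HETERARCHY.name)
```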
In this case the above classes cannot be used to define a partial order—as was the case for perception, apperception, and meta-apperception—but rather
to identify general characteristics exhibited by systems or expected by a hosting
environment. As an example, a digital ecosystem may have an admittance policy
granting deployment only to systems characterised by social-context capabilities.
This may be done, e.g., so as to prevent the diffusion of greedy individualistic
behaviours potentially jeopardising the whole ecosystem.
Complexity of the planned adaptive behaviours A third aspect that—
we conjecture—plays an important role in an entity’s reactive control processes
is given by the magnitude and complexity of the adaptation behaviours. We
distinguish three major cases:
1. Parametric adaptation. In this case s retains its structure and organisation whatever the adaptation processes instructed by EE(s). Adaptation
is achieved by switching among structurally equivalent configurations that
depend on one or more internal “knobs” or tunable parameters—e.g., the
number of replicas in the redundant data structures in [7]. The adaptive behaviours of parametrically adaptive systems are therefore simple³. As done
by Rosenblueth et al. for their classification of behaviours we shall classify
here parametrically adaptive systems by considering their order, namely the
number of involved knobs. As an example, the above mentioned redundant
data structures are a first-order parametrically adaptive system.
2. Structural adaptation. In this case the adaptation processes of EE(s) bring
s to mutate its structure and/or organisation by reconfiguring the topology,
the role, and the number of its constituents. Note how said constituents may
also be part of EE(s). Clearly the adaptive behaviours of this class of systems are more complex and thus less stable. An example of such systems is given
by Transformer, a framework for self-adaptive component-based applications
described in [32, 33].
3. Hybrid adaptation—systems, that is, whose adaptation plans comprise both
structural and parametric adaptation. An example of this class of systems is
given by the family of adaptive distributed gossipping algorithms described
in [34], for which the choice of a combinatorial parameter also induces a
restructuring of the roles of the involved agents.

³ Of course this does not mean that the effect that said adaptations are going to have on s will also be simple. In general this will depend on the sensitivity of the parameters and on the extent of their correlation.
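To make case 1 concrete, here is a minimal sketch of a first-order parametrically adaptive system in the spirit of the redundant data structures of [7]; the single knob is the number of replicas, and the thresholds and policy are invented for illustration.

```python
class RedundantCell:
    """A structure whose only adaptation is turning one knob: replicas."""
    def __init__(self, replicas: int = 3):
        self.replicas = replicas  # the single tunable parameter (order 1)

    def adapt(self, observed_fault_rate: float) -> None:
        # A purely parametric plan: raise redundancy when faults are
        # frequent, lower it again when the environment is quiet.
        if observed_fault_rate > 0.10 and self.replicas < 7:
            self.replicas += 2
        elif observed_fault_rate < 0.01 and self.replicas > 3:
            self.replicas -= 2

cell = RedundantCell()
for rate in (0.02, 0.15, 0.15, 0.005):
    cell.adapt(rate)
print(cell.replicas)  # 5: the knob went 3 -> 5 -> 7 -> 5
```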

3  Resilience Handshake Mechanisms

As is well known, any system—be it made by man or by nature—is the result of
organisational and design choices in turn produced by the mechanisms of biological or machine-driven evolution [35]. Resilience is a key property emerging
from the match between these choices and a deployment environment. Regrettably, neither man nor nature has complete freedom in their design choices, as enhancing one design aspect in most cases reduces the degree of freedom on
other design aspects. Isolating the constituent attributes of resilience helps gain insight into this problem and paves the way to approaches where perception, apperception, and entelechism can be dynamically refined so as to optimally
match with corresponding figures expected by the target environments. In what
follows we propose a strategy to achieve said “auto-resilient” behaviours. The
main idea is to set up admission control mechanisms constraining the deployment
of a system in a target environment. This allows a system’s resilience figures to
be matched with the expected minimal resilience requirements of a deployment
environment. This is similar to defining an “adaptation contract” to be matched
with an “environment policy”—in the sense discussed, e.g., in [24].
Figure 2 exemplifies our idea through an ambient intelligence scenario. In
this case the ambient is a coal mine. Said environments are known to occasionally experience high concentrations of toxic gases—e.g., carbon monoxide and
dioxide as well as methane—that are lethal to both animals and human beings.
Regrettably human beings are not endowed with perception capabilities able to
provide early warning against the increasing presence of toxic gases. In other
words, miners are subjected to dangerous perception failures when working in
coal mines. A common way to address said problem is to make use of so-called
sentinel species [36], namely systems or animals able to compensate for another
system’s lack in perception. The English vernacular “being like a canary in a
coal mine” refers to the traditional use of canaries as sentinel species for miners.
Our scenario is inspired by the above expedient. We envision the presence of
two types of environmental agents: a Feature Register (FR) and one or more
Ambient Agents (AA).
FR is the manager of a dynamically growing associative array. It stores associations of the form
s → {Ps, As, Es},    (4)
stating the perception, apperception, and entelechy characteristics of system s. As an example, if s is a miner, then Ps is a representation of the perception spectrum of said agent, As is his apperception class, and Es is a triple representing the entelechism of a miner. We shall refer to the triplets {Ps, As, Es} as the “R-features” of s.
AA is an entity representing the R-features of a certain ecoregion, e.g., a “mine”.
Indicator species is the term used in the literature to refer to entities representative of an ecoregion [37].
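The following sketch renders FR’s associative array and the admittance check in Python; method and agent names (dcl_client, evaluate, the R-feature encoding) are hypothetical renderings of the protocol described here, not an actual implementation.

```python
feature_register = {}  # the FR's dynamically growing associative array

def dcl_client(name, perception: set, apperception: int, entelechy: tuple):
    """Store an association s -> {P_s, A_s, E_s} as per Eq. (4)."""
    feature_register[name] = {"P": perception, "A": apperception, "E": entelechy}

def evaluate(system: str, ambient: str) -> str:
    """Compare a system's R-features against an ambient's requirements."""
    s, e = feature_register[system], feature_register[ambient]
    if not e["P"] <= s["P"]:
        return "PerceptionFailure"    # system ≺P environment
    if s["A"] < e["A"]:
        return "ApperceptionFailure"  # system ≺A environment
    return "Admitted"

dcl_client("Mine Ambient", {"CO", "CH4", "CO2"}, apperception=4, entelechy=("any",))
dcl_client("Miner Agent", {"light", "sound"}, apperception=6, entelechy=("hybrid",))
print(evaluate("Miner Agent", "Mine Ambient"))  # PerceptionFailure
```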
In the scenario depicted in Fig. 2 we have a single AA called Mine Ambient.
We assume that every deployment in a target environment e (in this case, a
“mine”) must be authorised through a handshake with the local FR. This means
that, before processing any admittance requests, the FR first expects the AA
of e to declare their R-features. This is done in Fig. 2(a) by calling method
DclClient.
For the sake of simplicity we assume that said R-features are constant. When
that is not the case, the AA is responsible for updating its R-features with new DclClient calls.
The scenario continues with a system, a Miner Agent, requesting access to e.
This is done in Fig. 2(b) through another call to DclClient. Once the FR receives the corresponding R-features, a record is added to the FR associative array and the request is evaluated.

Fig. 2. Resilience handshake scenario. A Mine Ambient declares its resilience requirements (in particular, perception of carbon monoxide, methane, or carbon dioxide). A Miner Agent and a Canary Agent are both not qualified enough to enter. The Feature Register detects that collaboration between them may solve the problem. As a result a new collective system, Miner+Canary, is created, which passes the test and is allowed into the Mine Ambient.

By comparing the perception spectra of e and the Miner Agent, the FR is able to detect a perception failure: Miner Agent ≺P e, or in
other words some of the events in e would go undetected by the Miner Agent
when deployed in e. As a consequence, a call to method PerceptionFailure
notifies the Miner Agent that the resilience handshake failed (Fig. 2(c)). Despite
this, the entry describing the R-features of the Miner Agent is not purged from
the associative array in FR.
After some time a second system, called Canary Agent, requests deployment
in the mine e by submitting its R-features. This is shown in Fig. 2(d). The
Canary Agent is comparably simpler than the Miner Agent in terms of both
apperception and entelechism, and in particular the apperception class of the
Canary Agent is insufficient with respect to the apperception expected by e: Canary Agent ≺A e. As a consequence, a failure is declared (see Fig. 2(e)) by
calling method ApperceptionFailure. Despite said failure, a new record stating
the R-features of Canary Agent is added to the associative array of FR.
By some strategy, e.g., a brute-force analysis of every possible union of all
stored associations, the FR realises that the union of the perception spectrum of
the Miner Agent and that of the Canary Agent optimally fulfils the admittance
requirements of e and therefore does not result in a perception failure. Both
Miner and Canary agents are then notified of this symbiotic opportunity by
means of a call to method JoinPerceptionSpectra (Fig. 2(f)). This is followed
by the creation of a simple form of social organisation: the Miner Agent monitors
the state of the Canary Agent in order to detect the presence of toxic gases. If this
monitoring process is not faulty—that is, if the Miner Agent does not fail to check
regularly and frequently enough for the state of the Canary Agent—this results
in an effective method to augment artificially one’s perception spectrum. The
resulting collective system, Miner+Canary Agent, is created in Fig. 2(g). Finally,
Fig. 2(h) and (i) show how the newly created system fulfils the admittance
requirements and is allowed in the Mine ambient.
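A brute-force rendering of the FR’s symbiosis search is sketched below: every pair of registered systems is tried until the union of their perception spectra covers the ambient’s requirements (figures again hypothetical).

```python
from itertools import combinations

def find_symbionts(register: dict, ambient: str):
    """Search the stored associations for a pair whose joint perception
    spectrum fulfils the ambient's requirements."""
    required = register[ambient]["P"]
    candidates = [s for s in register if s != ambient]
    for p, q in combinations(candidates, 2):
        if required <= register[p]["P"] | register[q]["P"]:
            return p, q  # here FR would call JoinPerceptionSpectra on both
    return None

register = {"Mine Ambient": {"P": {"CO", "CH4", "CO2"}},
            "Miner Agent":  {"P": {"light", "sound"}},
            "Canary Agent": {"P": {"CO", "CH4", "CO2"}}}
print(find_symbionts(register, "Mine Ambient"))  # ('Miner Agent', 'Canary Agent')
```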

4  Conclusions

Continuing our work reported in [5–7] we have introduced here a classification
of resilience based on several attributes. We have shown how breaking down resilience into simpler constituents makes it possible to conceive handshake mechanisms between systems declaring their resilience figures and environments stating
their minimal resilience requirements. One such mechanism has been exemplified through an ambient intelligence scenario. We have shown in particular how
identifying shortcoming and excess in resilience may be used to enhance the
system-environment fit through simple forms of social collaboration.
We observe how decomposing resilience into a set of constituent attributes
allows a set of sub-systems to be orthogonally associated with the management of
said attributes. This paves the way to strategies that
1. assess the resilience requirements called for by the current environmental
conditions; and
2. reconfigure the resilience sub-systems by optimally redistributing the available resource budgets, e.g., in terms of complexity and energy.
Fine-tuning the resilience architectures and organisations after the current environmental conditions may be used to design auto-resilient systems—systems, that is, whose evolution engines are able to self-guarantee identity persistence
while systematically adapting their perception, apperception, and entelechism
sub-systems. We conjecture that this in turn may help meet the challenges
introduced by the high variability in current deployment environments.
We envisage the study and the application of auto-resilience to constitute a
significant portion of our future research activity.

References
1. Jen, E.: Stable or robust? What’s the difference? In Jen, E., ed.: Robust Design:
a repertoire of biological, ecological, and engineering case studies. SFI Studies in
the Sciences of Complexity. Oxford Univ. Press (2004) 7–20
2. Aristotle, Lawson-Tancred, H.: De Anima (On the Soul). Penguin (1986)
3. Sachs, J.: Aristotle’s Physics: A Guided Study. Rutgers (1995)
4. Meyer, J.F.: Defining and evaluating resilience: A performability perspective. In:
Proc. Int.l Work. on Performability Modeling of Comp. & Comm. Sys. (2009)
5. De Florio, V.: On the constituent attributes of software and organizational resilience. Interdisciplinary Science Reviews 38(2) (2013)
6. De Florio, V.: On the role of perception and apperception in ubiquitous and pervasive environments. In: Proc. of the 3rd Work. on Service Discovery & Composition
in Ubiquitous & Pervasive Environments (SUPE’12). (2012)
7. De Florio, V.: Robust-and-evolvable resilient software systems: Open problems
and lessons learned. In: Proc. of the 8th workshop on Assurances for Self-Adaptive
Systems (ASAS’11), Szeged, Hungary, ACM (2011) 10–17
8. Costa, P., Rus, I.: Characterizing software dependability from multiple stakeholders perspective. Journal of Software Technology 6(2) (2003)
9. Laprie, J.C.: Dependable computing and fault tolerance: Concepts and terminology. In: Proc. of the 15th Int. Symp. on Fault-Tolerant Computing (FTCS-15),
Ann Arbor, Mich., IEEE Comp. Soc. Press (1985) 2–11
10. Laprie, J.C.: Dependability—its attributes, impairments and means. In Randell,
B. et al., eds.: Predictably Dependable Comp. Systems. Springer, Berlin (1995)
3–18
11. De Florio, V., Blondia, C.: Reflective and refractive variables: A model for effective
and maintainable adaptive-and-dependable software. In: Proc. of the 33rd Conf.
on Software Eng. & Adv. Appl. (SEAA 2007), Lübeck, Germany (2007)
12. De Florio, V., Blondia, C.: System Structure for Dependable Software Systems.
In: Proc. of the 11th Int.l Conf. on Computational Science and its Applications
(ICCSA 2011), Santander, Spain (2011)
13. De Florio, V.: Cost-effective software reliability through autonomic tuning of system resources (2011) http://mediasite.imec.be/mediasite/SilverlightPlayer/Default.aspx?peid=a66bb1768e184e86b5965b13ad24b7dd.
14. Charette, R.: Electronic devices, airplanes and interference: Significant danger or not? (2011) IEEE Spectrum blog “Risk Factor”, http://spectrum.ieee.org/riskfactor/aerospace/aviation/electronicdevices-airplanes-and-interference-significant-danger-or-not.
15. De Florio, V.: Software assumptions failure tolerance: Role, strategies, and visions.
In Casimiro, A., de Lemos, R., Gacek, C., eds.: Architecting Dependable Sys. VII.
Vol. 6420 of LNCS. Springer (2010) 249–272
16. Leibniz, G., Strickland, L.: The shorter Leibniz texts. Continuum (2006)
17. Runes, D.D., ed.: Dictionary of Philosophy. Philosophical Library (1962)
18. Lycan, W.: Consciousness and experience. Bradford Books. MIT Press (1996)
19. Rosenblueth, A., Wiener, N., Bigelow, J.: Behavior, purpose and teleology. Philosophy of Science 10(1) (1943) 18–24
20. Boulding, K.: General systems theory—the skeleton of science. Management Science 2(3) (1956)
21. De Florio, V., Blondia, C.: Service-oriented communities: Visions and contributions
towards social organizations. In Meersman, R. et al., eds.: OTM 2010 Workshops.
Vol. 6428 of LNCS. Springer (2010) 319–328
22. De Florio, V., Blondia, C.: On the requirements of new software development. Int.l
Journal of Business Intelligence and Data Mining 3(3) (2008)
23. Pavard, B., et al.: Design of robust socio-technical systems. In: Proc. of the 2nd
Int.l Symp. on Resilience Eng., Cannes, France (2006)
24. Eugster, P.T., Garbinato, B., Holzer, A.: Middleware support for context aware
applications. In Garbinato, B., Miranda, H., Rodrigues, L., eds.: Middleware for
Network Eccentric and Mobile Appl. Springer (2009) 305–322
25. Gui, N., et al.: ACCADA: A framework for continuous context-aware deployment
and adaptation. In Proc. of the 11th Int.l Symp. on Stabilization, Safety, and
Security of Distr. Sys., (SSS 2009). Vol. 5873 of LNCS, Springer (2009) 325–340
26. Stark, D.C.: Heterarchy: Distributing Authority and Organizing Diversity. In:
The Biology of Business. Jossey-Bass (1999) 153–179
27. Anonymous: Heterarchy. Technical report, P2P Foundation (2010)
28. Sousa, P., Silva, N., Heikkila, T., Kallingbaum, M., Valcknears, P.: Aspects of
co-operation in distributed manufacturing systems. Studies in Informatics and
Control Journal 9(2) (2000) 89–110
29. Ryu, K.: Fractal-based Reference Model for Self-reconfigurable Manufacturing
Systems. PhD thesis, Pohang Univ. of Science and Technology, Korea (2003)
30. Tharumarajah, A., Wells, A.J., Nemes, L.: Comparison of emerging manufacturing
concepts. In: Systems, Man, and Cybernetics. 1998 IEEE Int.l Conf. on. Vol. 1.
(1998) 325–331
31. Warnecke, H., Hüser, M.: The fractal company. Springer (1993)
32. Gui, N., De Florio, V.: Towards meta-adaptation support with reusable and composable adaptation components. In: Proc. of the sixth IEEE Int.l Conf. on Self-Adaptive and Self-Organizing Systems (SASO 2012), IEEE (2012)
33. Gui, N., De Florio, V., Holvoet, T.: Transformer: an adaptation framework with
contextual adaptation behavior composition support. Software Pract. Exper.
(2012)
34. De Florio, V., Blondia, C.: Robust and tuneable family of gossiping algorithms.
In: Proc. of the 20th Euromicro Int.l Conf. on Parallel, Distr., and Network-Based
Processing (PDP 2012), Garching, Germany, IEEE Comp. Soc. (2012) 154–161
35. Nilsson, T.: How neural branching solved an information bottleneck opening the
way to smart life. In: Proc. of the 10th Int.l Conf. on Cognitive and Neural Systems,
Boston Univ., MA (2008)
36. van der Schalie, W.H., et al.: Animals as sentinels of human health hazards of
environmental chemicals. Environ. Health Persp. 107(4) (1999)
37. Farr, D.: Indicator Species. In: Encycl. of Environmetrics. Wiley (2002)

Contenu connexe

Similaire à Autoresilience

Architecture for Intelligent Agents Logic-Based Architecture Logic-based arc...
Architecture for Intelligent Agents Logic-Based Architecture  Logic-based arc...Architecture for Intelligent Agents Logic-Based Architecture  Logic-based arc...
Architecture for Intelligent Agents Logic-Based Architecture Logic-based arc...
kathavera906
 
Modified System Usability Scale Please answer the fo
Modified System Usability Scale   Please answer the foModified System Usability Scale   Please answer the fo
Modified System Usability Scale Please answer the fo
IlonaThornburg83
 
Coates p: 1999 agent based modelling
Coates p: 1999 agent based modellingCoates p: 1999 agent based modelling
Coates p: 1999 agent based modelling
ArchiLab 7
 
NAFEMS_Complexity_CAE
NAFEMS_Complexity_CAENAFEMS_Complexity_CAE
NAFEMS_Complexity_CAE
Jacek Marczyk
 
Sad 1 chapter 1- additional material
Sad 1 chapter 1- additional materialSad 1 chapter 1- additional material
Sad 1 chapter 1- additional material
Birhan Atnafu
 
Efficient reasoning
Efficient reasoningEfficient reasoning
Efficient reasoning
unyil96
 

Similaire à Autoresilience (20)

On the Role of Perception and Apperception in Ubiquitous and Pervasive Enviro...
On the Role of Perception and Apperception in Ubiquitous and Pervasive Enviro...On the Role of Perception and Apperception in Ubiquitous and Pervasive Enviro...
On the Role of Perception and Apperception in Ubiquitous and Pervasive Enviro...
 
Biology of Language, Humberto Maturana, 1978
Biology of Language, Humberto Maturana, 1978Biology of Language, Humberto Maturana, 1978
Biology of Language, Humberto Maturana, 1978
 
Asse2001
Asse2001Asse2001
Asse2001
 
Fractal Organizations Part I – Complexity
Fractal Organizations Part I – ComplexityFractal Organizations Part I – Complexity
Fractal Organizations Part I – Complexity
 
Architecture for Intelligent Agents Logic-Based Architecture Logic-based arc...
Architecture for Intelligent Agents Logic-Based Architecture  Logic-based arc...Architecture for Intelligent Agents Logic-Based Architecture  Logic-based arc...
Architecture for Intelligent Agents Logic-Based Architecture Logic-based arc...
 
Complexity 101 by Cynthia Cavalli
Complexity 101 by Cynthia CavalliComplexity 101 by Cynthia Cavalli
Complexity 101 by Cynthia Cavalli
 
Fundamental Characteristics of a Complex System
Fundamental Characteristics of a Complex SystemFundamental Characteristics of a Complex System
Fundamental Characteristics of a Complex System
 
Modified System Usability Scale Please answer the fo
Modified System Usability Scale   Please answer the foModified System Usability Scale   Please answer the fo
Modified System Usability Scale Please answer the fo
 
TOWARD ORGANIC COMPUTING APPROACH FOR CYBERNETIC RESPONSIVE ENVIRONMENT
TOWARD ORGANIC COMPUTING APPROACH FOR CYBERNETIC RESPONSIVE ENVIRONMENTTOWARD ORGANIC COMPUTING APPROACH FOR CYBERNETIC RESPONSIVE ENVIRONMENT
TOWARD ORGANIC COMPUTING APPROACH FOR CYBERNETIC RESPONSIVE ENVIRONMENT
 
Asse2001
Asse2001Asse2001
Asse2001
 
General theory: the Problems of Construction
General theory: the Problems of ConstructionGeneral theory: the Problems of Construction
General theory: the Problems of Construction
 
UNIT2.ppt
UNIT2.pptUNIT2.ppt
UNIT2.ppt
 
Improving Tools in Artificial Intelligence
Improving Tools in Artificial IntelligenceImproving Tools in Artificial Intelligence
Improving Tools in Artificial Intelligence
 
Coates p: 1999 agent based modelling
Coates p: 1999 agent based modellingCoates p: 1999 agent based modelling
Coates p: 1999 agent based modelling
 
System Approach in Healthcare Management
System Approach in Healthcare ManagementSystem Approach in Healthcare Management
System Approach in Healthcare Management
 
Mis prsntatn.ppt
Mis prsntatn.pptMis prsntatn.ppt
Mis prsntatn.ppt
 
NAFEMS_Complexity_CAE
NAFEMS_Complexity_CAENAFEMS_Complexity_CAE
NAFEMS_Complexity_CAE
 
Sad 1 chapter 1- additional material
Sad 1 chapter 1- additional materialSad 1 chapter 1- additional material
Sad 1 chapter 1- additional material
 
State of the art of agile governance a systematic review
State of the art of agile governance a systematic reviewState of the art of agile governance a systematic review
State of the art of agile governance a systematic review
 
Efficient reasoning
Efficient reasoningEfficient reasoning
Efficient reasoning
 

Plus de Vincenzo De Florio

Considerations and ideas after reading a presentation by Ali Anani
Considerations and ideas after reading a presentation by Ali AnaniConsiderations and ideas after reading a presentation by Ali Anani
Considerations and ideas after reading a presentation by Ali Anani
Vincenzo De Florio
 
A FAULT-TOLERANCE LINGUISTIC STRUCTURE FOR DISTRIBUTED APPLICATIONS
A FAULT-TOLERANCE LINGUISTIC STRUCTURE FOR DISTRIBUTED APPLICATIONSA FAULT-TOLERANCE LINGUISTIC STRUCTURE FOR DISTRIBUTED APPLICATIONS
A FAULT-TOLERANCE LINGUISTIC STRUCTURE FOR DISTRIBUTED APPLICATIONS
Vincenzo De Florio
 

Plus de Vincenzo De Florio (20)

My little grundgestalten
My little grundgestaltenMy little grundgestalten
My little grundgestalten
 
Models and Concepts for Socio-technical Complex Systems: Towards Fractal Soci...
Models and Concepts for Socio-technical Complex Systems: Towards Fractal Soci...Models and Concepts for Socio-technical Complex Systems: Towards Fractal Soci...
Models and Concepts for Socio-technical Complex Systems: Towards Fractal Soci...
 
Service-oriented Communities: A Novel Organizational Architecture for Smarter...
Service-oriented Communities: A Novel Organizational Architecture for Smarter...Service-oriented Communities: A Novel Organizational Architecture for Smarter...
Service-oriented Communities: A Novel Organizational Architecture for Smarter...
 
On codes, machines, and environments: reflections and experiences
On codes, machines, and environments: reflections and experiencesOn codes, machines, and environments: reflections and experiences
On codes, machines, and environments: reflections and experiences
 
Tapping Into the Wells of Social Energy: A Case Study Based on Falls Identifi...
Tapping Into the Wells of Social Energy: A Case Study Based on Falls Identifi...Tapping Into the Wells of Social Energy: A Case Study Based on Falls Identifi...
Tapping Into the Wells of Social Energy: A Case Study Based on Falls Identifi...
 
How Resilient Are Our Societies? Analyses, Models, Preliminary Results
How Resilient Are Our Societies?Analyses, Models, Preliminary ResultsHow Resilient Are Our Societies?Analyses, Models, Preliminary Results
How Resilient Are Our Societies? Analyses, Models, Preliminary Results
 
Advanced C Language for Engineering
Advanced C Language for EngineeringAdvanced C Language for Engineering
Advanced C Language for Engineering
 
A framework for trustworthiness assessment based on fidelity in cyber and phy...
A framework for trustworthiness assessment based on fidelity in cyber and phy...A framework for trustworthiness assessment based on fidelity in cyber and phy...
A framework for trustworthiness assessment based on fidelity in cyber and phy...
 
Fractally-organized Connectionist Networks - Keynote speech @PEWET 2015
Fractally-organized Connectionist Networks - Keynote speech @PEWET 2015Fractally-organized Connectionist Networks - Keynote speech @PEWET 2015
Fractally-organized Connectionist Networks - Keynote speech @PEWET 2015
 
A behavioural model for the discussion of resilience, elasticity, and antifra...
A behavioural model for the discussion of resilience, elasticity, and antifra...A behavioural model for the discussion of resilience, elasticity, and antifra...
A behavioural model for the discussion of resilience, elasticity, and antifra...
 
Considerations and ideas after reading a presentation by Ali Anani
Considerations and ideas after reading a presentation by Ali AnaniConsiderations and ideas after reading a presentation by Ali Anani
Considerations and ideas after reading a presentation by Ali Anani
 
A Behavioral Interpretation of Resilience and Antifragility
A Behavioral Interpretation of Resilience and AntifragilityA Behavioral Interpretation of Resilience and Antifragility
A Behavioral Interpretation of Resilience and Antifragility
 
Community Resilience: Challenges, Requirements, and Organizational Models
Community Resilience: Challenges, Requirements, and Organizational ModelsCommunity Resilience: Challenges, Requirements, and Organizational Models
Community Resilience: Challenges, Requirements, and Organizational Models
 
On the Behavioral Interpretation of System-Environment Fit and Auto-Resilience
On the Behavioral Interpretation of System-Environment Fit and Auto-ResilienceOn the Behavioral Interpretation of System-Environment Fit and Auto-Resilience
On the Behavioral Interpretation of System-Environment Fit and Auto-Resilience
 
Antifragility = Elasticity + Resilience + Machine Learning. Models and Algori...
Antifragility = Elasticity + Resilience + Machine Learning. Models and Algori...Antifragility = Elasticity + Resilience + Machine Learning. Models and Algori...
Antifragility = Elasticity + Resilience + Machine Learning. Models and Algori...
 
Service-oriented Communities and Fractal Social Organizations - Models and co...
Service-oriented Communities and Fractal Social Organizations - Models and co...Service-oriented Communities and Fractal Social Organizations - Models and co...
Service-oriented Communities and Fractal Social Organizations - Models and co...
 
Seminarie Computernetwerken 2012-2013: Lecture I, 26-02-2013
Seminarie Computernetwerken 2012-2013: Lecture I, 26-02-2013Seminarie Computernetwerken 2012-2013: Lecture I, 26-02-2013
Seminarie Computernetwerken 2012-2013: Lecture I, 26-02-2013
 
TOWARDS PARSIMONIOUS RESOURCE ALLOCATION IN CONTEXT-AWARE N-VERSION PROGRAMMING
TOWARDS PARSIMONIOUS RESOURCE ALLOCATION IN CONTEXT-AWARE N-VERSION PROGRAMMINGTOWARDS PARSIMONIOUS RESOURCE ALLOCATION IN CONTEXT-AWARE N-VERSION PROGRAMMING
TOWARDS PARSIMONIOUS RESOURCE ALLOCATION IN CONTEXT-AWARE N-VERSION PROGRAMMING
 
A Formal Model and an Algorithm for Generating the Permutations of a Multiset
A Formal Model and an Algorithm for Generating the Permutations of a MultisetA Formal Model and an Algorithm for Generating the Permutations of a Multiset
A Formal Model and an Algorithm for Generating the Permutations of a Multiset
 
A FAULT-TOLERANCE LINGUISTIC STRUCTURE FOR DISTRIBUTED APPLICATIONS
the same method to try and capture some aspects of the resilience of adaptive systems.

Building on top of the above-mentioned preliminary results, this paper's first contribution is the definition of a number of system classes and partial orders that enable a qualitative evaluation of system-environment fits—in other words, of how well a system's resilience features match the resilience requirements called for by that system's deployment environments. This is done in Sect. 2. A second contribution is presented in Sect. 3 through the high-level description of a handshake mechanism between systems declaring their resilience figures and environments stating their minimal resilience requirements. Said mechanism is exemplified through an ambient intelligence case study. In particular it is shown how putting the resilience characteristics of systems and environments in the foreground enables scenarios in which resilience can be enhanced through simple forms of social collaboration. Finally, in Sect. 4, we enunciate a conjecture: resilience-oriented handshake mechanisms such as the one presented in this paper pave the way to future auto-resilient systems—entities, that is, that are able to reason about their own architectures and organisations and to optimally revise them, autonomously, in order to match the variability of conditions in their deployment environments.

2 Perception, Apperception, and Entelechism

In previous work we identified three main constituent properties for system and organisational resilience [5–7]. Here we recall and extend said properties and discuss the major threats associated with their failure. We also introduce system classes and partial orders to facilitate the assessment of how well a system's resilience architecture matches its mission and deployment environment.

2.1 Perception

What we cannot perceive, we cannot react to—hence we cannot adapt to. As a consequence, a necessary constituent attribute of resilience is perception, namely a system's ability to become timely aware of some portion of the context. In what follows we shall represent perception through the collection of context
figures—originating within and without the system boundaries—whose changes we can be alerted to within a reasonable amount of time. From this definition we observe how perception may be interpreted as a measure of how "open-world" a system is—be it biological, societal, or computer-based.

Perception is carried out through several mechanisms. We distinguish three sub-functions of perception, which we call sensors, qualia, and memory. Sensors represent a system's primary interface with the physical world. Their main function is to reflect a given subset of the world's "raw facts" into internal representations that are then stored in some form within the system's processing and control units—its "brains". Qualia [6] is the name used in the literature to refer to such representations. Qualia are then persisted—to some extent—in the system memory.

Sensors, qualia, and memory are very important towards the emergence of resilience: the quality of reactive control strictly depends on the quality of service of the sensory system as well as on that of the system components responsible for the reliable production, storage, persistence, and retrieval of trustworthy qualia [6]. Important aspects of such quality of service include what we call the qualia manifestation latency (namely the time between the physical appearance of a raw fact and the production of the corresponding qualia), the reflective throughput (that is, the largest amount of raw facts that may be reliably encoded as qualia per time unit), and the qualia access time (how quickly the control layers may access the qualia). An example of a software system using application-level qualia to operate control is described in [11, 12].
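The following minimal sketch—ours, and merely illustrative, not part of the cited designs—renders the first two of these quality-of-service figures as measurable quantities. The record type and field names are assumptions.

    from dataclasses import dataclass

    @dataclass
    class Quale:
        raw_fact_time: float   # instant the physical "raw fact" appeared
        encoded_time: float    # instant its internal representation became available
        value: object = None

    def manifestation_latency(q: Quale) -> float:
        """Time between a raw fact and the production of its qualia."""
        return q.encoded_time - q.raw_fact_time

    def reflective_throughput(qualia: list[Quale], window: float) -> float:
        """Raw facts reliably encoded as qualia per time unit, over a window."""
        return len(qualia) / window

    # Example: a sensor event at t = 10.00 s reflected into a quale at t = 10.02 s
    q = Quale(raw_fact_time=10.00, encoded_time=10.02)
    print(manifestation_latency(q))           # ~0.02 s
    print(reflective_throughput([q], 1.0))    # 1 quale per second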
As mentioned already, be it computer-based or organic, any system is characterised—and limited—in its resilience by the characteristics of its perception sub-system. In particular, the amount and quality of its sensors and the quality of its qualia production, storage, and persistence services define what the system is going to timely and reliably perceive—and consequently what it may effectively react upon. This concept matches well with what Leibniz referred to as a system's "clear representation", as opposed to an "obscure representation" resulting from, e.g., sensor shortage or insufficient quality of service in the qualia layers. We refer to this region of clear representation as a system's perception spectrum. A hypothetical system of all clear representation and no obscure representation is called by Leibniz a monad. At the other end of the spectrum we have closed-world systems—systems, that is, that operate in their "virtual world" completely unaware of any physical-world "raw fact". The term we use for such context-agnostic systems is ataraxies (from "ataraxy", namely the attitude of taking actions without considering any external event or condition; from a-, not, and tarassein, to disturb). Ataraxies may operate as reliably and efficiently as monads, but they are not designed to withstand changes—they are what Americans refer to as "sitting ducks" in the face of change. As long as their system assumptions hold, they constitute our unquestioning avatars, diligently performing their appointed tasks; yet they fail miserably when facing the slightest perturbation in their design hypotheses¹ [15]. Likewise monads, though characterised by perfect perception, may be unable to make use of this quality to achieve awareness and ultimately guarantee their resilience or other design goals of interest. In what follows we shall refer to a system's quality of perception as its "power of representation"—a term introduced by Leibniz [16].

¹ As discussed in [13], another problem with closed-world systems is that they are in a sense systems "frozen in time": verifications for any such system implicitly refer to scenarios that may differ from the current one. We use the term frozen ducks to refer to ataraxies with stale certifications. A typical case of frozen ducks is efficaciously reported by engineer Bill Strauss: "A plane is designed to the right specs, but nobody goes back and checks if it is still robust" [14].

In [6] we presented a simple algebraic model for perception by considering perception spectra as subsets of a same "perfect" perception spectrum (corresponding to the "all-seeing eye" of the fabled monad, which "could see reflected in it all the rest of creation" [16]). Figure 1(a) depicts this by considering the perception spectra of two systems, a and b, respectively represented as sets A and B. Little can be said in this case about the power of representation of a with respect to that of b: the spectra are not comparable with one another, because it is not true that (A ⊂ B) ∨ (B ⊂ A). On the other hand, when for instance A ⊆ B, then we shall say that b has "greater perception" (that is, a greater power of representation) than a:

    a ⪯P b if and only if A ⊆ B.    (1)

This is exemplified in Fig. 1(b), in which A ⊆ B ⊆ M, the latter being the whole context (that is, the perception spectrum of monad m). This means that a, b, and m are endowed with a larger and larger set of perception capabilities—a greater and greater power of representation. The expression a ⪯P b ⪯P m states such a property.

[Fig. 1. Exemplification of perception spectra and regions of clear representation. (a) Regions of clear representation of systems a and b with respect to that of the hypothetical perfect system m; the intersection represents the portion of the spectrum common to a and b. (b) The region of clear representation A is fully included in B, in turn fully included in M; in this case the power of representation of system a is inferior to b's, which in turn is inferior to m's.]

We deem it important to highlight how perception spectra such as sets A and B should actually be represented as functions of time, of the mission characteristics, and of the current context. In other words, perception should not be taken as an absolute and immutable feature but rather as the result of several dynamic processes—e.g., the current state of the sensory subsystem, the current quality of its services, and how the resulting times, throughputs, failures, and latencies match the current mission requirements. For the sake of simplicity we shall nevertheless refer to perception spectra simply as sets.
Perception spectra and powers of representation may also be used to evaluate the environmental fit of a given system with respect to a given deployment environment—that is, to gain insight into the match between that system and its intended execution environment. As an example, Fig. 1(a) may be interpreted also as the perception spectrum of system a and the power of representation called for by deployment environment b. The fact that B \ A is non-empty tells us that a will not be sufficiently aware of the context changes occurring in b. Likewise, A \ B ≠ ∅ tells us that a is designed so as to be aware of figures that will not be subjected to change while a is in b; the corresponding extra design complexity is, in this case, a waste of resources in that it does not contribute to any improvement in resilience. The case study introduced in Sect. 3 makes use of perception spectra to evaluate a system-environment fit.
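Under the set-based simplification adopted above, both the partial order (1) and the two fit diagnoses just discussed reduce to subset tests and set differences. The following sketch—ours, and merely illustrative—makes this concrete; the context figures used are assumptions that anticipate the scenario of Sect. 3.

    def less_perceptive(A: frozenset, B: frozenset) -> bool:
        """a ⪯P b if and only if A ⊆ B — expression (1)."""
        return A <= B

    def fit_report(A: frozenset, E: frozenset) -> dict:
        """Diagnose the fit of system spectrum A against the spectrum E required by an environment."""
        return {
            "perception_failure": E - A,  # E \ A: figures the system will not be aware of
            "wasted_complexity": A - E,   # A \ E: figures perceived but never changing in E
        }

    miner = frozenset({"light", "sound"})
    mine = frozenset({"light", "sound", "CO", "CO2", "CH4"})
    print(less_perceptive(miner, mine))   # True: the mine calls for a strictly larger spectrum
    print(fit_report(miner, mine))        # the three toxic gases constitute a perception failure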
As a final remark, perception spectra may be used to compare environments with one another. This may be useful especially in ambient intelligence scenarios in which some control may be exercised over the properties of the deployment environment(s). Estimating shortcoming or excess in a system's perception capabilities provides useful information to the "upper functions" responsible for driving the evolution of that system. Such functions may then make use of said information to perform design trade-offs among the resilience layers. As an example, the system may reduce its perception spectrum and use the resulting complexity budget to widen its apperception capabilities—the subject of the next section.

2.2 Apperception

As the perception spectrum defines the basic facts that are going to trigger awareness and ultimately reaction and control, likewise apperception defines how the reflected qualia are accrued, put in relation with past perception, and used to create dynamic models of the "self" and of the "world" [17]. In turn this ability enables higher-level functions of system evolution—in particular, the planning of reactions (e.g., parametric adaptations or system reconfigurations). Also in the case of apperception we can introduce a ranking of sorts stating different powers of apperception. Several such rankings and classifications were introduced in the past; the first and foremost example may be found in Aristotle's De Anima². Leibniz also compiled a hierarchy of "substances"—as he referred to systems and beings [16]. More recently Lycan suggested [18] that there might be at least eight classes of apperception. An important contribution in the matter is due to Rosenblueth, Wiener, and Bigelow, who proposed in [19] a classification of systems according to their behaviour and purpose. In particular, in their cited work they composed a hierarchy consisting of the following behavioural classes:

² As cleverly expressed in [2], Aristotle finds that "living things all take their place in a cosmic hierarchy according to their abilities in the fields of nutrition, perception, thought and purposive action."

1. Systems characterised by passive behaviour: no source of "output energy" may be identified in any activity of the system.
2. Systems with active, but non-purposeful behaviour—systems, that is, that do not have a "specific final condition toward which they strive" [19].
3. Systems with purposeful, but non-teleological (i.e., feedback-free) behaviour: systems in which "there are no signals from the goal which modify the activity of the object" (viz., the system) "in the course of the behaviour."
4. Systems with teleological, but non-extrapolative behaviour: systems that are purposeful but unable to construct models and predictions of a future state to base their reactions upon.
5. First-order predictive systems, able to extrapolate along a single perception dimension—i.e., a single qualia.
6. Higher-order predictive systems—in other words, systems that are able to base their reactions on the correlation of two or more qualia dimensions, possibly of different nature (temporal and spatial coordinates, for instance).

The behaviours of systems in classes 4–6 exhibit increasing powers of apperception. The just-discussed seminal work was then continued by Boulding in his classic paper on General Systems Theory [20], in which the author introduced nine classes structured after a system's perception and apperception capabilities. More specifically, Boulding's classes refer to the following system types:

1. Ataraxies, subdivided into so-called Frameworks and Clockworks.
2. Simple control mechanisms, e.g., thermostats, that are able to track a single context figure.
3. Self-maintaining structures, e.g., biological cells, which are able to track multiple context features. Both thermostats and cells correspond to the systems with purposeful, though non-teleological, behaviour of [19].
4. Simple stationary systems comprising several specialised sub-systems, like plants, characterised by very simple forms of predictive behaviour and apperception.
5. Complex mobile systems with extensive power of representation and simple forms of apperception (especially self-awareness). Boulding refers to this class as "animals". A classic example is a cat moving towards
its prey's extrapolated future position [19]. These systems may be characterised by "precooked apperception", i.e., innate behaviour commonly known as instinct. This corresponds to systems initialised with domain-specific, predefined, and immutable apperception capabilities and adaptation plans.
6. Complex mobile systems endowed with extensive apperception capability, e.g., self-awareness, self-consciousness, and high-order extrapolative capability. "Human beings" is the term used by Boulding for this class.
7. Collective adaptive systems, e.g., digital ecosystems, cyber-physical societies, multi-agent systems, or social organisations [21]. Boulding refers to this class as "a set of roles tied together with channels of communication".
8. Totally open-world systems, namely the equivalent of Leibniz's monads. Transcendental systems is the name that Boulding gives to this class.

Again, classes 4–6 represent (non-transcendental, non-collective) systems with increasing powers of apperception. It is then possible to define a projection map π returning, for any such system s, the class that system belongs to (or, alternatively, the behaviour class characterising s), represented as an integer in {1, . . . , 6}. Function π then defines a second partial order among systems: for any two systems p and q with apperception capability, we shall say that p has less power of apperception than q when the following condition holds:

    p ≺A q if and only if π(p) < π(q).    (2)

As we have done with perception, also in this case we remark how the above partial order may apply to environments as well as to systems. As such it may be used to detect mismatches between a system's apperception characteristics and those expected by a given environment. One such mismatch is detected in the scenario discussed in Sect. 3.
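A sketch of π and of relation (2) follows, with the six behavioural classes of Rosenblueth et al. encoded as integers. Rendering them as a Python IntEnum, as well as the two sample assignments (a thermostat, a cat), are our illustrative assumptions.

    from enum import IntEnum

    class BehaviouralClass(IntEnum):
        # π(s): the behavioural classes of Rosenblueth, Wiener, and Bigelow [19]
        PASSIVE = 1
        ACTIVE_NON_PURPOSEFUL = 2
        PURPOSEFUL_NON_TELEOLOGICAL = 3
        TELEOLOGICAL_NON_EXTRAPOLATIVE = 4
        FIRST_ORDER_PREDICTIVE = 5
        HIGHER_ORDER_PREDICTIVE = 6

    def less_apperceptive(pi_p: BehaviouralClass, pi_q: BehaviouralClass) -> bool:
        """p ≺A q if and only if π(p) < π(q) — expression (2)."""
        return pi_p < pi_q

    thermostat = BehaviouralClass.PURPOSEFUL_NON_TELEOLOGICAL  # tracks a single figure
    cat = BehaviouralClass.HIGHER_ORDER_PREDICTIVE             # extrapolates its prey's position
    print(less_apperceptive(thermostat, cat))                  # True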
2.3 Entelechism

Once trustworthy models of the endogenous conditions and exogenous scenarios are built through perception and apperception, resilient systems typically make use of the accrued knowledge to plan some form of reactive control. The aim of this reactive control is to guarantee the persistence of a system's functional and non-functional "identity"—namely, what that system is supposed to do and under which conditions and terms. As mentioned in Sect. 1, already Aristotle identified this quality, which he called entelechy and attributed solely to human beings. Entelechy is in fact the driving force—the movement, or "energy"—that makes active-behaviour systems strive towards resilience. By analogy, in what follows we refer to a system's entelechy as the quality of the mechanisms responsible for planning and controlling the robust emergence of that system's peculiar characteristics while changes and system adaptations take place. Such characteristics may include, e.g., timeliness, determinism, security, safety, or functional behaviours as prescribed in the system specifications.

In [7] we called evolution engine of system s the portion of s responsible for controlling its adaptation. In what follows we shall refer to the evolution engine as EE(s)—or simply EE when s can be omitted without ambiguity. We now propose a tentative classification of systems according to their entelechism—namely, according to the properties and characteristics of their EE. Also in this case we found it convenient to isolate a number of ancillary constituent components in order to tackle separately different aspects of this "three-ring circus" [3] of a concept.

Meta-apperception. When considered as a separate entity, system EE(s) may be subjected to a classification such as Boulding's or Rosenblueth's, intended to highlight the characteristics of the resilience logics of system s. Said characteristics may differ considerably from those of s. As an example, the adaptively redundant data structures introduced in [22] may be regarded, as a whole, as a first-order predictive mechanism [5]; that system's EE, on the other hand, is remarkably simpler, being only capable of purposeful active behaviour. In fact, a system's EE may or may not be endowed with apperception capabilities, and it may or may not be a resilient system altogether. This feature represents a first coordinate to assess the entelechism of evolving systems. Making use of the partial order defined in Sect. 2.2, we shall say that, for any two systems p and q, q is endowed with greater meta-apperception than p (written as p ≺µA q) if and only if the following condition holds:

    p ≺µA q if and only if EE(p) ≺A EE(q).    (3)
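Relation (3) simply lifts (2) from systems to their evolution engines. A minimal sketch, assuming π(EE) is already known as an integer class; the two sample values are illustrative, not assessments taken from the cited works.

    def less_meta_apperceptive(pi_ee_p: int, pi_ee_q: int) -> bool:
        """p ≺µA q if and only if EE(p) ≺A EE(q) — expression (3)."""
        return pi_ee_p < pi_ee_q

    # A system steered by a merely purposeful EE (class 3) has less
    # meta-apperception than one steered by a first-order predictive EE (class 5).
    print(less_meta_apperceptive(3, 5))   # True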
Multiplicity and organisation of the planning entities. In what follows we propose to identify classes of resilient systems also by taking into account the individual or social organisation of the processes that constitute their evolution engines. Three aspects, we deem, play an important role in this context:

– The presence of a single or of multiple concurrent evolution engines.
– The individual vs. social nature of the interactions between neighbouring systems. This may range from "weak" forms of interaction [23]—e.g., as in the individual-context middleware of [24]—up to high-level forms of structured social organisation (multi-level coupling of the individual to the environment). The latter case corresponds to the social-context middleware systems of [24].
– When multiple concurrent EE's contribute to the emergence of the global system behaviour: the organisation of control amongst the EE's.

Table 1 provides a classification of systems according to the just-enunciated criteria:

Table 1. A tentative classification of evolving systems according to the number and the complexity of their EE's.
  1) Single-logic individual-context systems
  2) Single-logic social-context systems
  3) Collective-logic social-context hierarchies
  4) Collective-logic social-context heterarchies
  5) Bionic, holarchic, or fractal organisations

The first class is given by systems with a single EE that are only capable of individual-context planning. This means that decisions are taken in isolation, without considering the decisions taken by neighbouring systems [24]. GPS navigators planning their route only by means of digital maps of the territory are examples of such systems. The second class comprises again systems with a single EE, but this time planning is executed while taking into account the behaviour of neighbouring systems [24]. A collision avoidance system in a smart car belongs to this class.

Classes 3 to 5 all consist of systems capable of collective planning. Class 3 includes systems where planning is centralised or hierarchical: one or multiple decision layers exist, and on each layer multiple planners submit or publish their plans to a next-layer planner. Air traffic control systems and the ACCADA middleware [25] provide two examples of this type of system. Class 4 refers to decentralised societies with peer-to-peer planning and management. The term used to refer to such systems is heterarchy [26]. Heterarchies are flat (i.e., layer-less) organisations characterised by multiple concurrent systems-of-values and -goals. They introduce redundant control logics whereby a system's expected service may be distributed across a diversity of routes and providers. Such diversity provides a "mutating factor" of sorts, useful to avoid local minima—what Stark refers to as "lock-ins" [26]. The absence of layers removes the typical flaws of hierarchical organisations (propagation and control delays and failures), though the distributed decision making introduces new criticalities—e.g., deterministic and timely behaviours are more difficult to guarantee. "Different branches of government that have checks and balances through separation and overlap of power" [27] constitute an example of heterarchy. The fifth and last class includes systems characterised by distributed hierarchical organisation: bionic organisations, holarchies, and fractal organisations. Said systems are a hierarchical composition of autonomous planners—called respectively modelons, holons, and fractals—characterised by spontaneous behaviour and local interaction. Said planners autonomously establish cooperative relationships with one another, which ultimately produce the emerging functional and adaptive behaviours of the system. "Simultaneously a part and a whole, a container and a contained, a controller and a controlled" [28], these organisations result in systems able to avoid the flaws of both hierarchical and heterarchical systems. The emergence of stability, flexibility, and efficient use of the available resources has been experienced in systems belonging to this class [29–31].

In this case the above classes cannot be used to define a partial order—as was the case for perception, apperception, and meta-apperception—but rather to identify general characteristics exhibited by systems or expected by a hosting environment. As an example, a digital ecosystem may have an admittance policy granting deployment only to systems characterised by social-context capabilities.
This may be done, e.g., so as to prevent the diffusion of greedy individualistic behaviours potentially jeopardising the whole ecosystem.

Complexity of the planned adaptive behaviours. A third aspect that—we conjecture—plays an important role in an entity's reactive control processes is the magnitude and complexity of its adaptation behaviours. We distinguish three major cases:

1. Parametric adaptation. In this case s retains its structure and organisation whatever the adaptation processes instructed by EE(s). Adaptation is achieved by switching among structurally equivalent configurations that depend on one or more internal "knobs" or tunable parameters—e.g., the number of replicas in the redundant data structures of [7]. The adaptive behaviours of parametrically adaptive systems are therefore simple³. As done by Rosenblueth et al. for their classification of behaviours, we shall classify parametrically adaptive systems by considering their order, namely the number of involved knobs. As an example, the above-mentioned redundant data structures constitute a first-order parametrically adaptive system (see the sketch below).
2. Structural adaptation. In this case the adaptation processes of EE(s) bring s to mutate its structure and/or organisation by reconfiguring the topology, the role, and the number of its constituents. Note how said constituents may also be part of EE(s). Clearly the adaptive behaviour of this class of systems is more complex and thus less stable. An example of such systems is given by Transformer, a framework for self-adaptive component-based applications described in [32, 33].
3. Hybrid adaptation—systems, that is, whose adaptation plans comprise both structural and parametric adaptation. An example of this class of systems is given by the family of adaptive distributed gossiping algorithms described in [34], for which the choice of a combinatorial parameter also induces a restructuring of the roles of the involved agents.

³ Of course this does not mean that the effect that said adaptations are going to have on s will also be simple. In general this will depend on the sensitivity of the parameters and on the extent of their correlation.
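The sketch below renders case 1 concrete: a component in the spirit of the adaptively redundant data structures cited above, whose single knob—the number of replicas—is tuned while structure and organisation stay fixed. The class and its tuning policy are our illustrative assumptions, not the cited design.

    class RedundantCell:
        def __init__(self, replicas: int = 3):
            self.replicas = replicas          # the one knob: a first-order system

        def tune(self, observed_fault_rate: float) -> None:
            """Parametric adaptation: more faults -> more replicas, and vice versa."""
            if observed_fault_rate > 0.1 and self.replicas < 7:
                self.replicas += 2            # widen the majority-voting pool
            elif observed_fault_rate < 0.01 and self.replicas > 3:
                self.replicas -= 2            # reclaim redundancy when conditions ease

    cell = RedundantCell()
    cell.tune(observed_fault_rate=0.2)
    print(cell.replicas)   # 5: same structure, different configuration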
3 Resilience Handshake Mechanisms

As is well known, any system—be it made by man or by nature—is the result of organisational and design choices, in turn produced by the mechanisms of biological or machine-driven evolution [35]. Resilience is a key property emerging from the match between these choices and a deployment environment. Regrettably, neither man nor nature has complete freedom in its design choices, as enhancing one design aspect in most cases reduces the degrees of freedom on other design aspects. Isolating the constituent attributes of resilience helps gain insight into this problem and paves the way to approaches where perception, apperception, and entelechism can be dynamically refined so as to optimally match the corresponding figures expected by the target environments. In what follows we propose a strategy to achieve said "auto-resilient" behaviours.

The main idea is to set up admission control mechanisms constraining the deployment of a system in a target environment. This allows a system's resilience figures to be matched with the expected minimal resilience requirements of a deployment environment. This is similar to defining an "adaptation contract" to be matched with an "environment policy"—in the sense discussed, e.g., in [24].

Figure 2 exemplifies our idea through an ambient intelligence scenario. In this case the ambient is a coal mine. Said environments are known to occasionally experience high concentrations of toxic gases—e.g., carbon monoxide and dioxide as well as methane—that are lethal to both animals and human beings. Regrettably, human beings are not endowed with perception capabilities able to provide early warning against the increasing presence of toxic gases. In other words, miners are subjected to dangerous perception failures when working in coal mines. A common way to address said problem is to make use of so-called sentinel species [36], namely systems or animals able to compensate for another system's lack in perception. The English vernacular "being like a canary in a coal mine" refers to the traditional use of canaries as sentinel species for miners.

Our scenario is inspired by the above expedient. We envision the presence of two types of environmental agents: a Feature Register (FR) and one or more Ambient Agents (AA). The FR is the manager of a dynamically growing associative array. It stores associations of the form

    s → {P_s, A_s, E_s},    (4)

stating the perception, apperception, and entelechy characteristics of system s. As an example, if s is a miner, then P_s is a representation of the perception spectrum of said agent, A_s is his apperception class, and E_s is a triple representing the entelechism of a miner. We shall refer to the triplets {P_s, A_s, E_s} as the "R-features" of s. An AA is an entity representing the R-features of a certain ecoregion, e.g., a "mine". Indicator species is the term used in the literature to refer to entities representative of an ecoregion [37]. In the scenario depicted in Fig. 2 we have a single AA, called Mine Ambient.

We assume that every deployment in a target environment e (in this case, a "mine") must be authorised through a handshake with the local FR. This means that, before processing any admittance requests, the FR first expects the AA of e to declare its R-features. This is done in Fig. 2(a) by calling method DclClient. For the sake of simplicity we assume that said R-features are constant; when that is not the case, the AA is responsible for updating its R-features through new DclClient calls.
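A minimal sketch of the R-features triplet and of association (4) follows; the concrete field types are our assumptions, as the text leaves their representation open.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class RFeatures:
        perception: frozenset    # P_s: the perception spectrum (Sect. 2.1)
        apperception: int        # A_s: the behavioural class (Sect. 2.2)
        entelechism: tuple       # E_s: (meta-apperception, organisation, adaptation)

    # Association (4), s -> {P_s, A_s, E_s}, for a hypothetical miner agent:
    registry = {
        "Miner Agent": RFeatures(frozenset({"light", "sound"}), 6, (3, "individual", "parametric")),
    }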
[Fig. 2. Resilience handshake scenario. A Mine Ambient declares its resilience requirements (in particular, perception of carbon monoxide, methane, or carbon dioxide). Neither a Miner Agent nor a Canary Agent is, on its own, qualified to enter. The Feature Register detects that collaboration between them may solve the problem; as a result a new collective system, Miner+Canary, is created, which passes the test and is allowed into the Mine Ambient.]

The scenario continues with a system, a Miner Agent, requesting access to e. This is done in Fig. 2(b) through another call to DclClient. Once the FR receives the corresponding R-features, a record is added to the FR associative array and the request is evaluated.
By comparing the perception spectra of e and the Miner Agent, the FR is able to detect a perception failure: Miner Agent ≺P e (the strict form of relation (1))—in other words, some of the events in e would go undetected by the Miner Agent when deployed in e. As a consequence, a call to method PerceptionFailure notifies the Miner Agent that the resilience handshake failed (Fig. 2(c)). Despite this, the entry describing the R-features of the Miner Agent is not purged from the associative array of the FR.

After some time a second system, called Canary Agent, requests deployment in the mine e by submitting its R-features. This is shown in Fig. 2(d). The Canary Agent is considerably simpler than the Miner Agent in terms of both apperception and entelechism; in particular, the apperception class of the Canary Agent is insufficient with respect to the apperception expected by e: Canary Agent ≺A e. As a consequence, a failure is declared (see Fig. 2(e)) by calling method ApperceptionFailure. Despite said failure, a new record stating the R-features of the Canary Agent is added to the associative array of the FR.

By some strategy—e.g., a brute-force analysis of every possible union of the stored associations—the FR realises that the union of the perception spectrum of the Miner Agent and that of the Canary Agent optimally fulfils the admittance requirements of e, and therefore does not result in a perception failure. Both Miner and Canary agents are then notified of this symbiotic opportunity by means of a call to method JoinPerceptionSpectra (Fig. 2(f)). This is followed by the creation of a simple form of social organisation: the Miner Agent monitors the state of the Canary Agent in order to detect the presence of toxic gases. If this monitoring process is not faulty—that is, if the Miner Agent does not fail to check regularly and frequently enough the state of the Canary Agent—this results in an effective method to artificially augment one's perception spectrum. The resulting collective system, Miner+Canary Agent, is created in Fig. 2(g). Finally, Fig. 2(h) and (i) show how the newly created system fulfils the admittance requirements and is allowed into the Mine Ambient.
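The following sketch—ours, and merely illustrative—puts the handshake together under the same assumptions as the previous fragments: declarations are evaluated against the requirements declared by the Ambient Agent, failed entries are retained, and a brute-force search over unions of stored spectra proposes symbiotic coalitions. Method names render those of Fig. 2; the class values and the max() rule for a coalition's apperception are simplifying assumptions (in the scenario it is the monitoring Miner Agent that supplies the apperception).

    from itertools import combinations

    class FeatureRegister:
        def __init__(self, required_perception, required_apperception):
            self.req_p = frozenset(required_perception)  # declared by the Ambient Agent
            self.req_a = required_apperception
            self.table = {}  # the associative array: name -> (spectrum, apperception)

        def dcl_client(self, name, spectrum, apperception):
            """Evaluate an admittance request; keep the record even on failure."""
            self.table[name] = (frozenset(spectrum), apperception)
            missing = self.req_p - frozenset(spectrum)
            if missing:
                return f"PerceptionFailure({name}): {sorted(missing)} undetected"
            if apperception < self.req_a:
                return f"ApperceptionFailure({name})"
            return f"Admitted({name})"

        def join_perception_spectra(self):
            """Brute-force search over unions of stored spectra."""
            for a, b in combinations(self.table, 2):
                union = self.table[a][0] | self.table[b][0]
                joint_apperception = max(self.table[a][1], self.table[b][1])
                if self.req_p <= union and joint_apperception >= self.req_a:
                    return f"JoinPerceptionSpectra({a}, {b})"
            return None

    fr = FeatureRegister({"CO", "CO2", "CH4"}, required_apperception=5)
    print(fr.dcl_client("Miner Agent", {"light", "sound"}, 6))     # perception failure
    print(fr.dcl_client("Canary Agent", {"CO", "CO2", "CH4"}, 3))  # apperception failure
    print(fr.join_perception_spectra())  # the Miner+Canary coalition passes the test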
4 Conclusions

Continuing our work reported in [5–7], we have introduced here a classification of resilience based on several attributes. We have shown how breaking down resilience into simpler constituents makes it possible to conceive handshake mechanisms between systems declaring their resilience figures and environments stating their minimal resilience requirements. One such mechanism has been exemplified through an ambient intelligence scenario. We have shown in particular how identifying shortcomings and excesses in resilience may be used to enhance the system-environment fit through simple forms of social collaboration.

We observe how decomposing resilience into a set of constituent attributes allows a set of sub-systems to be orthogonally associated with the management of said attributes. This paves the way to strategies that

1. assess the resilience requirements called for by the current environmental conditions; and
2. reconfigure the resilience sub-systems by optimally redistributing the available resource budgets, e.g., in terms of complexity and energy.

Fine-tuning the resilience architectures and organisations to the current environmental conditions may be used to design auto-resilient systems—systems, that is, whose evolution engines are able to self-guarantee identity persistence while systematically adapting their perception, apperception, and entelechism sub-systems. We conjecture that this in turn may help match the challenges introduced by the high variability of current deployment environments. We envisage the study and the application of auto-resilience to constitute a significant portion of our future research activity.

References

1. Jen, E.: Stable or robust? What's the difference? In Jen, E., ed.: Robust Design: A Repertoire of Biological, Ecological, and Engineering Case Studies. SFI Studies in the Sciences of Complexity. Oxford Univ. Press (2004) 7–20
2. Aristotle, Lawson-Tancred, H.: De Anima (On the Soul). Penguin (1986)
3. Sachs, J.: Aristotle's Physics: A Guided Study. Rutgers (1995)
4. Meyer, J.F.: Defining and evaluating resilience: A performability perspective. In: Proc. Int.l Workshop on Performability Modeling of Comp. & Comm. Sys. (2009)
5. De Florio, V.: On the constituent attributes of software and organizational resilience. Interdisciplinary Science Reviews 38(2) (2013)
6. De Florio, V.: On the role of perception and apperception in ubiquitous and pervasive environments. In: Proc. of the 3rd Workshop on Service Discovery & Composition in Ubiquitous & Pervasive Environments (SUPE'12) (2012)
7. De Florio, V.: Robust-and-evolvable resilient software systems: Open problems and lessons learned. In: Proc. of the 8th Workshop on Assurances for Self-Adaptive Systems (ASAS'11), Szeged, Hungary, ACM (2011) 10–17
8. Costa, P., Rus, I.: Characterizing software dependability from multiple stakeholders perspective. Journal of Software Technology 6(2) (2003)
9. Laprie, J.C.: Dependable computing and fault tolerance: Concepts and terminology. In: Proc. of the 15th Int.l Symp. on Fault-Tolerant Computing (FTCS-15), Ann Arbor, Mich., IEEE Comp. Soc. Press (1985) 2–11
10. Laprie, J.C.: Dependability—its attributes, impairments and means. In Randell, B., et al., eds.: Predictably Dependable Computing Systems. Springer, Berlin (1995) 3–18
11. De Florio, V., Blondia, C.: Reflective and refractive variables: A model for effective and maintainable adaptive-and-dependable software. In: Proc. of the 33rd Conf. on Software Engineering and Advanced Applications (SEAA 2007), Lübeck, Germany (2007)
12. De Florio, V., Blondia, C.: System structure for dependable software systems. In: Proc. of the 11th Int.l Conf. on Computational Science and its Applications (ICCSA 2011), Santander, Spain (2011)
13. De Florio, V.: Cost-effective software reliability through autonomic tuning of system resources (2011). http://mediasite.imec.be/mediasite/SilverlightPlayer/Default.aspx?peid=a66bb1768e184e86b5965b13ad24b7dd
14. Charette, R.: Electronic devices, airplanes and interference: Significant danger or not? (2011). IEEE Spectrum blog "Risk Factor", http://spectrum.ieee.org/riskfactor/aerospace/aviation/electronicdevices-airplanes-and-interference-significant-danger-or-not
15. De Florio, V.: Software assumptions failure tolerance: Role, strategies, and visions. In Casimiro, A., de Lemos, R., Gacek, C., eds.: Architecting Dependable Systems VII. Vol. 6420 of LNCS. Springer (2010) 249–272
16. Leibniz, G., Strickland, L.: The Shorter Leibniz Texts. Continuum (2006)
17. Runes, D.D., ed.: Dictionary of Philosophy. Philosophical Library (1962)
18. Lycan, W.: Consciousness and Experience. Bradford Books. MIT Press (1996)
19. Rosenblueth, A., Wiener, N., Bigelow, J.: Behavior, purpose and teleology. Philosophy of Science 10(1) (1943) 18–24
20. Boulding, K.: General systems theory—the skeleton of science. Management Science 2(3) (1956)
21. De Florio, V., Blondia, C.: Service-oriented communities: Visions and contributions towards social organizations. In Meersman, R., et al., eds.: OTM 2010 Workshops. Vol. 6428 of LNCS. Springer (2010) 319–328
22. De Florio, V., Blondia, C.: On the requirements of new software development. Int.l Journal of Business Intelligence and Data Mining 3(3) (2008)
23. Pavard, B., et al.: Design of robust socio-technical systems. In: Proc. of the 2nd Int.l Symp. on Resilience Engineering, Cannes, France (2006)
24. Eugster, P.T., Garbinato, B., Holzer, A.: Middleware support for context-aware applications. In Garbinato, B., Miranda, H., Rodrigues, L., eds.: Middleware for Network Eccentric and Mobile Applications. Springer (2009) 305–322
25. Gui, N., et al.: ACCADA: A framework for continuous context-aware deployment and adaptation. In: Proc. of the 11th Int.l Symp. on Stabilization, Safety, and Security of Distributed Systems (SSS 2009). Vol. 5873 of LNCS. Springer (2009) 325–340
26. Stark, D.C.: Heterarchy: Distributing authority and organizing diversity. In: The Biology of Business. Jossey-Bass (1999) 153–179
27. Anonymous: Heterarchy. Technical report, P2P Foundation (2010)
28. Sousa, P., Silva, N., Heikkila, T., Kallingbaum, M., Valcknears, P.: Aspects of co-operation in distributed manufacturing systems. Studies in Informatics and Control Journal 9(2) (2000) 89–110
29. Ryu, K.: Fractal-based Reference Model for Self-reconfigurable Manufacturing Systems. PhD thesis, Pohang Univ. of Science and Technology, Korea (2003)
30. Tharumarajah, A., Wells, A.J., Nemes, L.: Comparison of emerging manufacturing concepts. In: Systems, Man, and Cybernetics, 1998 IEEE Int.l Conf. on. Vol. 1 (1998) 325–331
31. Warnecke, H., Hüser, M.: The Fractal Company. Springer (1993)
32. Gui, N., De Florio, V.: Towards meta-adaptation support with reusable and composable adaptation components. In: Proc. of the Sixth IEEE Int.l Conf. on Self-Adaptive and Self-Organizing Systems (SASO 2012), IEEE (2012)
33. Gui, N., De Florio, V., Holvoet, T.: Transformer: An adaptation framework with contextual adaptation behavior composition support. Software Pract. Exper. (2012)
34. De Florio, V., Blondia, C.: Robust and tuneable family of gossiping algorithms. In: Proc. of the 20th Euromicro Int.l Conf. on Parallel, Distributed, and Network-Based Processing (PDP 2012), Garching, Germany, IEEE Comp. Soc. (2012) 154–161
35. Nilsson, T.: How neural branching solved an information bottleneck opening the way to smart life. In: Proc. of the 10th Int.l Conf. on Cognitive and Neural Systems, Boston Univ., MA (2008)
36. van der Schalie, W.H., et al.: Animals as sentinels of human health hazards of environmental chemicals. Environmental Health Perspectives 107(4) (1999)
37. Farr, D.: Indicator species. In: Encyclopedia of Environmetrics. Wiley (2002)