Initially proposed to interconnect computers worldwide, the Internet has evolved significantly over two decades to become a key element in almost all of our activities. This (r)evolution mainly relies on the progress achieved in the computation and communication fields, which has led to the well-known and widely adopted Cloud Computing paradigm.
With the emergence of the Internet of Things (IoT), stakeholders expect a new revolution that will push, once again, the limits of the Internet, in particular by favouring the convergence between physical and virtual worlds. This convergence is about to be made possible thanks to the development of minimalist sensors as well as complex industrial physical machines that can be connected to the Internet through edge computing infrastructures.
Among the obstacles to this new generation of Internet services is the development of a convenient and powerful framework that should allow operators and DevOps teams to manage the life-cycle of both the digital infrastructures and the applications deployed on top of them, across the Cloud-to-IoT continuum.
In this keynote, Frédéric Desprez and his colleague Adrien Lebre present research issues and provide preliminary answers to determine whether the challenges brought by this new paradigm constitute an evolution or a revolution for our community.
1. (R)evolution of the computing
continuum
A few challenges…
International Symposium on Stabilization, Safety, and Security of Distributed Systems
F. Desprez (INRIA), A. Lebre (IMT Atlantique)
2. Agenda
• Introduction, context, and research issues
• Some recent challenges/scientific issues addressed by the Stack team
1. How to operate a geo-distributed infrastructure
2. Services placement
3. Decentralized indexation
• Experimental infrastructures
• Conclusions
3. Why do we need a computing continuum? (Mahadev Satyanarayanan)
4. Introduction and context
• Huge increase in generated data (2.5 exabytes of new data generated each day)
• More than 50 billion connected devices around the world
• Moving the data from IoT devices to the cloud is an issue
• New applications (time-sensitive, location-aware) with ultra-low latency requirements
• Privacy issues
• Solution: a computing paradigm closer to where the data is generated and used (moving everything to the cloud is impossible!)
6. Several ‘flavors’ of distributed computing
• Cloud computing
• Ubiquitous, on-demand access to shared computing resources. Virtualization. Elasticity. IaaS, PaaS, SaaS.
• Fog computing
• « Horizontal system-level architecture that distributes computing, storage, control, and networking closer to the
users along a cloud-to-thing continuum » (OpenFog consortium).
• Mobile computing
• Mobile devices, resource-constrained devices, connected through Bluetooth, Wi-Fi, ZigBee, …
• Mobile cloud computing (MCC)
• An infrastructure where both the data storage and data processing occur outside of the mobile device, bringing
mobile computing applications to not just smartphone users but a much broader range of mobile subscribers.
• Mobile and ad hoc cloud computing
• Mobile devices in an ad hoc mobile network form a highly dynamic topology; the network must accommodate devices that continuously join or leave.
All one needs to know about fog computing and related edge computing paradigms: A complete survey, A. Yousefpour et al., Journal of Syst. Arch., Vol 98, Sep. 2019
7. Several ‘flavors’ of distributed computing, contd
• Edge computing
• « Computation done at the edge of the network through small data centers that are close to users »
(OpenEdge Computing).
• Multi-access Edge Computing (MEC)
• « A platform that provides IT and cloud-computing capabilities within the Radio Access Network (RAN) in
4G and 5G, in close proximity to mobile subscribers » (ETSI).
• Cloudlet computing
• Trusted resource-rich computer or a cluster of computers with strong connection to the Internet that is
utilized by nearby mobile devices (Carnegie Mellon University)
• Mist computing
• Dispersed computing at the extreme edge (the IoT devices themselves).
8. Some common characteristics
• Low Latency
• Nodes are closer to the end users and can offer a faster analysis and response to the data generated and
requested by the users
• Geographic Distribution
• Geo-distributed deployment and management,
• Heterogeneity
• Collection and processing of information obtained from different sources and collected by several means of
network communication,
• Interoperability and Federation
• Resources must be able to interoperate with each other and services and applications must be federated
across domains,
• Real-Time Interactions
• Services and applications involve real-time interaction, not just batch processing,
• Scalability
• Fast detection of variation in workload’s response time and of changes in network and device conditions,
supporting elasticity of resources.
Orchestration in Fog Computing: A Comprehensive Survey, B. Costa et al., ACM Computing Surveys, Vol. 55, No. 2, Jan. 2022.
9. Some research issues
Application lifecycle management (initial deployment, configuration, reconfiguration,
maintenance)
• Abstracting the description of the whole application structure, globally optimize the resources used with respect to multi-criteria
objectives (price, deadline, performance, energy, etc.), models and associated languages to describe applications, their
objective functions, placement and scheduling algorithms supporting system and application-level criteria, ...
Infrastructure management
• Virtualization (hyper-converged 2.0 architecture, complexity, heterogeneity, dynamicity, scaling and locality), storage (trade-offs between moving computation vs. data; files, BLOBs, key/value systems, geo-distributed graph databases, …), and administration (intelligent orchestrators, geo-distributed scale, automatic adaptation to users' needs, ...)
Hardware
• Trusted hardware solutions, architectural support for high level features, energy reduction solutions, new accelerators, …
Security
• Vulnerabilities in VMs, hypervisors and orchestrators, virtual network technologies (SDN, NFV), programming or access
interfaces, adapting security policies to a more complex environment, ...
Energy
• End-to-end energy analysis and management of large-scale hierarchical Cloud/Edge/Fog infrastructures on processing, network
and storage aspects, trade-offs between energy efficiency and other performance metrics in virtualized infrastructures, Eco-
design of digital applications and services, ...
• …
10. Cloudlet/Fog/Edge/Cloud-to-IoT Continuum Computing
[Figure: the Cloud-to-IoT continuum. Cloud Computing sits behind a cloud latency > 100 ms; micro/nano DCs at the edge exhibit intra-DC latencies < 10 ms and inter-micro-DC latencies of 50–100 ms; beyond the edge and extreme-edge frontiers lie domestic and enterprise networks, connected through wired, wireless, and hybrid links.]
11. CHALLENGE 1: HOW TO GEO-DISTRIBUTE
CLOUD APPLICATIONS TO THE EDGE
De facto open-source standard to administrate/virtualize/use the resources of one DC
Scalability?
Latency/throughput impact?
Network partitioning issues?
…
From LAN to WAN? ⇒
Bring Cloud applications to the Edge
INITIATING THE DEBATE WITH OPENSTACK (2016-2021)
13. [Figure: the same continuum diagram as before, annotated with the questions "WAN-wide? Collaborative?" — can a single OpenStack span micro/nano DCs across the WAN?]
Bring Cloud applications to the Edge
INITIATING THE DEBATE WITH OPENSTACK (2016-2021)
14. 13 million LOCs, 186 subservices
Designed for a single location
OPENSTACK (THE DEVIL IS IN THE DETAILS)
16. Collaboration code is required in every service
A broker per service must be implemented
DB values might be location-dependent
Bring Cloud applications to the Edge
COLLABORATION: ADDITIONAL PIECES OF CODE ARE REQUIRED
18. The SCOPE lang: Andy defines the scope of the request in the CLI.
The scope specifies where the request applies.
Bring Cloud applications to the Edge
A SERVICE DEDICATED TO ON DEMAND COLLABORATIONS
19. openstack server create my-vm --flavor m1.tiny --image cirros.uec --scope {compute: Nantes, image: Paris}
OpenStack Summit Berlin - Nov 2018
Hacking the Edge hosted by Open Telekom Cloud
• A complete model to enhance the scope description with site compositions (e.g., AND, OR)
• List VMs on Nantes and Paris
openstack server list --scope {compute:Nantes&Paris}
Bring Cloud applications to the Edge
A SERVICE DEDICATED TO ON DEMAND COLLABORATIONS
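The scope compositions above (a service qualifier plus sites combined with AND/OR) can be sketched with a toy evaluator. The grammar, `parse_scope`, and `sites_to_contact` below are illustrative assumptions, not the actual OpenStack/Cheops implementation:

```python
# Toy evaluator for scope expressions such as "compute:Nantes&Paris"
# or "image:Paris|Rennes". Syntax and semantics are assumptions made
# for illustration, not the real scope language.

def parse_scope(expr):
    """Parse 'service:SiteA&SiteB' into (service, operator, sites)."""
    service, _, sites = expr.partition(":")
    if "&" in sites:
        return service, "AND", sites.split("&")
    if "|" in sites:
        return service, "OR", sites.split("|")
    return service, "SINGLE", [sites]

def sites_to_contact(expr, available):
    """Return the sites a request must reach, given the reachable sites."""
    service, op, sites = parse_scope(expr)
    reachable = [s for s in sites if s in available]
    if op == "AND" and len(reachable) != len(sites):
        # An AND scope needs every listed site to be reachable.
        raise RuntimeError(f"{service}: AND scope needs all of {sites}")
    if op == "OR":
        return reachable[:1]  # any single reachable site satisfies an OR scope
    return reachable
```

For example, `sites_to_contact("compute:Nantes&Paris", {"Nantes", "Paris"})` targets both sites, while an OR scope degrades gracefully to whichever site is still reachable.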
21. • Expose consistency policies at the user level (extend the scope syntax)
• Manage the dependencies between resources
• Notion of replication set: manage a fixed pool of resources with an automatic control loop
(implemented in a geo-distributed way at the Cheops level).
Replication overview/challenge
Bring Cloud applications to the Edge
CHEOPS AS A BUILDING BLOCK TO DEAL WITH GEO-DISTRIBUTION
22. Manage partition issues using appropriate replication/aggregation policies
Cross overview/challenge
Bring Cloud applications to the Edge
CHEOPS AS A BUILDING BLOCK TO DEAL WITH GEO-DISTRIBUTION
23. A bit more complicated than it looks…
Delavergne, Marie; Antony, Geo Johns; Lebre, Adrien. Cheops, a service to blow away Cloud applications to the Edge. To appear in ICSOC 2022
Bring Cloud applications to the Edge
TOWARD A GENERALISATION OF THE SERVICE (OpenStack/Kubernetes/…)
25. Service placement problems
How to assign IoT applications to computing nodes (Fog nodes) distributed in a Fog environment?
• Different kinds of applications
• Monolithic service, data pipeline, set of inter-dependent components, Directed Acyclic Graphs (DAGs)
• Several constraints
• Computing and network resources are heterogeneous and dynamic, and not always available; services cannot be processed everywhere
• Different approaches
• Centralized or distributed approaches
• Online or offline placement
• Static or dynamic
• Mobility support
• Different performance criteria
• Execution time, quality of service, latency, energy consumption
• Problem formulations
• Linear programming: Integer Linear Programming (ILP), Integer Nonlinear Programming (INLP), Mixed Integer Linear Programming
(MILP), Mixed-integer non-linear programming (MINLP), Mixed Integer Quadratic Programming (MIQP)
• Constraint programming, Markov decision process, stochastic optimization, potential games, …
An overview of service placement problems in Fog and Edge Computing. F. Ait-Salaht, F. Desprez, and A. Lebre. ACM Computing Surveys, Vol. 53, Issue 3, May 2021
26. Service Placement Problem using Constraint programming
and Choco solver
• Goals
• Elaborate a generic and easy to upgrade model
• Define a new formulation of the placement problem considering a general definition of service and
infrastructure network through graphs using constraint programming
Service Placement in Fog Computing Using Constraint Programming. F. Ait-Salaht, F. Desprez, A. Lebre, C. Prud’homme and M. Abderrahim. IEEE
27. System model and problem formulation
• Infrastructure
• A directed graph G = <V,E> represents the network
• V: set of vertices or nodes (servers)
• E: set of edges or arcs (connections)
• Each node defines CPU and RAM capacities
• Each arc defines a latency and a bandwidth capacity
• Application
• An application is an ordered set of components
• A component requires CPU/RAM to work
• A component can send data (bandwidth, latency)
• Some components are fixed (e.g., cameras)
28. System model and problem formulation
Placement (mapping): assign services (each component and each edge) to the network infrastructure (nodes and links) such that:
• The CPU capacity of each node is respected
• The same goes for RAM capacity
• Bandwidth capacity is respected on arcs too
• Latencies are satisfied
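The mapping above can be made concrete with a brute-force placement sketch. The data structures and values below are illustrative (and bandwidth is omitted for brevity); this is not the Choco model from the paper:

```python
from itertools import product

# Infrastructure: nodes with CPU/RAM capacities, links with latencies (ms).
nodes = {"fog1": {"cpu": 4, "ram": 8}, "fog2": {"cpu": 2, "ram": 4}}
latency = {("fog1", "fog2"): 10, ("fog2", "fog1"): 10,
           ("fog1", "fog1"): 0, ("fog2", "fog2"): 0}

# Application: components with CPU/RAM demands, edges with a latency bound.
components = {"camera": {"cpu": 1, "ram": 1}, "analytics": {"cpu": 2, "ram": 4}}
app_edges = {("camera", "analytics"): 15}  # max tolerated latency (ms)
fixed = {"camera": "fog2"}                 # some components are pinned

def feasible(mapping):
    # CPU and RAM capacities must be respected by co-located components.
    for n, caps in nodes.items():
        placed = [c for c, host in mapping.items() if host == n]
        if sum(components[c]["cpu"] for c in placed) > caps["cpu"]:
            return False
        if sum(components[c]["ram"] for c in placed) > caps["ram"]:
            return False
    # Latency bounds must hold on every application edge.
    for (a, b), bound in app_edges.items():
        if latency[(mapping[a], mapping[b])] > bound:
            return False
    # Pinned components stay where they are.
    return all(mapping[c] == n for c, n in fixed.items())

names = list(components)
solutions = [dict(zip(names, hosts))
             for hosts in product(nodes, repeat=len(names))
             if feasible(dict(zip(names, hosts)))]
```

On this tiny instance the only feasible mapping pins the camera on fog2 and places analytics on fog1; a CP solver explores the same space far more efficiently through constraint filtering.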
29. Constraint Programming model (CP)
What is CP ?
• CP stands for Constraint Programming
• CP is a general purpose implementation of Mathematical Programming
• MP theoretically studies optimization problems and resolution techniques
• It aims at describing real combinatorial problems in the form of Constraint
Satisfaction Problems and solving them with Constraint Programming techniques
• The problem is solved by alternating constraint filtering algorithms with a search
mechanism
• Modeling steps (3)
• Declare variables and their domain
• Find relation between them
• Declare an objective function, if any
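The three modeling steps can be illustrated on a hypothetical mini-problem, solved here by exhaustive enumeration (a real CP solver such as Choco alternates constraint filtering with search instead):

```python
# 1) Declare variables and their domains: place two services x, y on nodes 0..2.
domains = {"x": [0, 1, 2], "y": [0, 1, 2]}

# 2) Find relations (constraints) between them: distinct nodes, y "after" x.
constraints = [lambda a: a["x"] != a["y"],
               lambda a: a["x"] < a["y"]]

# 3) Declare an objective function, if any: minimise the placement spread.
def objective(a):
    return a["y"] - a["x"]

# Exhaustive enumeration stands in for filtering + search.
assignments = [{"x": vx, "y": vy}
               for vx in domains["x"] for vy in domains["y"]]
feasible = [a for a in assignments if all(c(a) for c in constraints)]
best = min(feasible, key=objective)
```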
39. Experiment 1
Infrastructure: 91 fog nodes, 86 sensors; smart bell application.
• Requirements
‣ Resources: CPU, RAM, DISK
‣ Networking: latency and bandwidth
‣ Locality
• Objective
‣ Minimize average latency
Implementation of the model on the Choco solver (free open-source Java library dedicated to Constraint Programming)
44. Where is the content I’m looking for?
Locating the closest replica of a specific content requires indexing every live replica along with
its location
Existing solutions
• Remote services (centralized index, DHT)
In contradiction with the objectives of Edge infrastructures:
The indexing information might be stored in a node that is far away
(or even unreachable) while the replica could be in the vicinity
• Broadcast
• Maintaining such an index at every node would prove overly costly in terms of memory
and traffic (it does not confine the traffic)
• Epidemic propagation
46. Challenges
How to maintain such a logical partitioning in a dynamic environment
where…
• Nodes can ADD or DELETE content any time (no synchronization)
• Nodes can join or leave the system at any time (without any warning)
…while limiting the scope of transferred information as much as possible
48. Lock Down the Traffic of Decentralized Content Indexing at the Edge, B. Nedelec et al., ICA3PP 2022
A preliminary step toward a complete solution
• Definitions of the properties that guarantee
decentralized consistent partitioning in dynamic
infrastructures.
• Demonstration that concurrent creation and
removal of partitions may impair the propagation
of control information
• Proposal of a first algorithm solving this dynamic
partitioning problem (and its evaluation by
simulations)
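The confinement idea (a replica advertisement stops propagating once it reaches nodes that already know a closer replica) can be sketched on a toy line network; the topology and distance metric below are assumptions for illustration, not the paper's algorithm:

```python
from collections import deque

# Toy line network: node i is linked to i-1 and i+1.
N = 10
neighbors = {i: [j for j in (i - 1, i + 1) if 0 <= j < N] for i in range(N)}

# partition[i] = (replica_host, hop_distance) of the closest known replica.
partition = {}

def advertise(replica_host):
    """Epidemic creation notice: propagation stops at nodes that already
    know a replica at least as close, which confines the traffic."""
    queue = deque([(replica_host, 0)])
    while queue:
        node, dist = queue.popleft()
        known = partition.get(node)
        if known is not None and known[1] <= dist:
            continue  # a replica at least as close is known: stop here
        partition[node] = (replica_host, dist)
        for nb in neighbors[node]:
            queue.append((nb, dist + 1))

advertise(0)  # node 0 creates the first replica: everyone joins its partition
advertise(7)  # node 7 creates a second one: only nearby nodes switch
```

After the second advertisement, nodes 0–3 still point to the replica at node 0 while nodes 4–9 have joined the partition of node 7, without the second notice flooding the whole network.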
50. Experimental infrastructures
SILECS/SLICES: Super Infrastructure for Large-Scale Experimental Computer Science
• The Discipline of Computing: An Experimental Science
• Studied objects are more and more complex (Hardware, Systems, Networks, Programs, Protocols, Data,
Algorithms, …)
• A good experiment should fulfill the following properties
• Reproducibility: must give the same result with the same input
• Extensibility: must target possible comparisons with other works and extensions (more/other processors,
larger data sets, different architectures)
• Applicability: must define realistic parameters and must allow for an easy calibration
• “Revisability”: when an implementation does not perform as expected, must help to identify the reasons
• ACM Artifact Review and Badging
51. SILECS/Grid’5000
• Testbed for research on distributed systems
• Born in 2003 from the observation that we needed a better and larger testbed
• HPC, Grids, P2P, and now Cloud computing, and BigData systems
• A complete access to the nodes’ hardware in an exclusive mode (from one node to
the whole infrastructure)
• Dedicated network (RENATER)
• Reconfigurable: nodes with Kadeploy and network with KaVLAN
• Current status
• 8 sites, 36 clusters, 838 nodes, 15116 cores
• Memory: ~100 TiB RAM + 6.0 TiB PMEM, Storage: 1.42 PB (1515 SSDs and 953
HDDs on nodes), 617.0 TFLOPS (excluding GPUs)
• Diverse technologies/resources (Intel, AMD, Myrinet, Infiniband, two GPU clusters,
energy probes)
• Some Experiments examples
• In Situ analytics, Big Data Management,
• HPC Programming approaches, Batch scheduler optimization
• Network modeling and simulation
• Energy consumption evaluation
• Large virtual machines deployments
52. SILECS/FIT
Providing Internet players access to a
variety of fixed and mobile
technologies and services, thus
accelerating the design of advanced
technologies for the Future Internet
53. Experiments
• Discovering resources from their description
• Reconfiguring the testbed to meet experimental
needs
• Monitoring experiments, extracting and
analyzing data
• Controlling experiments: API
59. Conclusion
Disconnection is the norm
• High latency, unreliable connections,
• Logical partitioning (Edge areas/zones)
A (r)evolution of distributed systems and networks?
• Algorithms and (distributed) system building blocks should be revised to satisfy geo-distributed constraints
• Decentralized vs collaborative (e.g. DHT, network ASes)
60. Questions / THANKS
Post-scriptum
• We are looking for students, PhD candidates, postdocs, engineers, researchers, associate professors (AI/infrastructure experts, this is trendy ;-)), use-cases, funding, collaborations…
• We propose … a lot of fun and work!
http://stack.inria.fr
Editor's notes
CDF = cumulative Distribution Function of responses times (3 runs)
E2E = Eye to Eye
VR < 20 ms
Todo: comparative orders of magnitude
A simplified version of the edge, but can we already operate such an infrastructure?
First application: OpenStack
Can we operate an edge infrastructure with a single instance (aka. a single controlplane) of OpenStack?
A simplified version of the edge, but can we already operate such an infrastructure?
Good results, but in OpenStack it is impossible to provision VMs during a network disconnection
One version of OpenStack on every site
Study a biological system (one that evolves drastically every 6 months)
Nova is the OpenStack project that provides a way to provision compute instances (aka virtual servers). Glance: OpenStack Image service
Last scenario when Andy wants to launch a VM instance which is only available on site 2
Having a solution without changing the code
Study a biological system (one that evolves drastically every 6 months)
Cheops: new dedicated service acting as a proxy
Study a biological system (one that evolves drastically every 6 months)
Study a biological system (one that evolves drastically every 6 months)
K8S = Kubernetes
ILP = CPLEX, First fit with backtrack, Genetic alg., Xia et al., Choco
If we look to the state of the art, when a client wants to access a specific content, it has to request a remote node to provide at least one node identity to retrieve this content from. After retrieving the content, the client can create another replica to improve the performance of future accesses, but it must then recontact the indexing service to notify of the creation of this new replica.
This approach has two drawbacks.
- First, accessing a remote node to request content location(s) raises hot-spot and availability issues. But most importantly, it results in additional delays [3,12] that occur even before the actual download starts.
- Second, the client gets a list of content locations at the discretion of the content indexing service. Without information about these locations, it often ends up downloading from multiple hosts, yet keeping only the fastest answer. In turn, clients either waste network resources or face slower response times.
A naive approach would be that every node indexes and ranks every live replica along with its location information. When creating or destroying a replica, a node would notify all other nodes by broadcasting its operation
This flooding approach is inefficient, as a node may acknowledge the existence of replicas at the other side of the network even though a replica already exists next to it.
A promising approach would be to use epidemic propagation by limiting the propagation only to a subset of relevant nodes. To better understand this idea, let’s discuss a concrete example.
In this example, we consider a Node R that creates a new content and that efficiently advertises its content by epidemic propagation. At the end of the epidemic phase, every node can request R to get the content if needed
Let’s consider that Node G gets the content and creates a second replica, splitting the red set in two (now we have a set of nodes in red that should request R and a set in green that should request G in order to reach the closest replica host).
In this example, we consider the geographical distance but the notion of distance can be defined in a more advanced way considering latency, throughput, robustness etc.).
Now, let’s consider that Node B creates another replica. Node B needs to notify only a small subset of nodes (resulting in 3 sets: red, green, and blue).
Finally, let’s consider G destroys its replica. Nodes that belonged to its partition must find the new closest partition, resulting in the end in two sets (red and blue).
While it makes sense for Node G to broadcast its removal, Node B and Node R cannot afford to continuously advertise their replica to fill the gap left open by Node G. A better approach would consist in triggering notifications at bordering nodes of red and blue partitions once again. In other words, the indexing problem can be seen as a distributed and dynamic partitioning problem.
This dynamic partitioning raises additional challenges related to concurrent operations where removed partitions could block the propagation of other partitions.
So the problem that should be tackled is: how can we maintain such a logical partitioning in a dynamic environment where
nodes can add….
Node can join…
While limiting the scope of the messages between nodes as much as possible (network confinement)
Just to give you an idea of the consistency issue, let’s consider the following example.
In the first part (a)): nodes a and c create a new replica of the same content.
The colors illustrate which partition each node belongs to (black: no partition).
So node a belongs to the blue partition and node c to the green one.
Both nodes send a creation message to their neighbour (here b).
b) Let’s consider node b receives the notification from node a, so it joins the blue partition and forwards the notification towards c (alphaA,3, since the distance equals AB+BC = 2+1 = 3).
Meanwhile, node a and node c delete the replica and each sends a new removal notification to b (deltaA and deltaC respectively). Once again, nodes evolve independently from the broadcasted messages.
c) The creation notification from c to b (alpha c1) is finally received on b and so b will join the green partition since the distance is better and forward this message to A (alpha c3).
The removal message from node a is received at b. Since node b now belongs to the green partition, it does not consider the notification related to the removal of the replica sent by node a (deltaA is discarded). Remember that the goal is to mitigate the network traffic as much as possible.
Meanwhile, node c receives the initial creation of node a. Since it does not have the replica anymore, it joins the blue partition (here we have a first inconsistency, since b believes its closest replica is on node C while node C believes it is on node a going through node b, which is obviously not possible). Anyway let’s continue the scenario.
d) node A receives the initial notification from node c; since it no longer has the replica, it joins the green partition (although the content has already been deleted at C, there is no way for node A to be aware of that). Node b receives the removal notification from node c, so it leaves the green partition and forwards the notification to node A.
e) node A receives the removal notification and leaves in its turn the green partition.
f) at the end, node C belongs to a partition that no longer exists, and if C has children, they would stay in the wrong partition too.
Without diving into the details of the algorithm, the idea is to echo creation and removal notifications: if a node receives a notification that it has already processed and that it knows is deprecated, it echoes the previous message (in the previous case, the removal notification that was discarded at node B is triggered once again towards node C). For further details, please refer to the article.
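The echo idea described in the note above can be sketched with a toy notification handler; using a per-content version number to decide which message is "deprecated" is a hypothetical simplification of the paper's algorithm:

```python
# Toy echo mechanism: each node remembers the latest notification it has
# processed per content; when it receives one it knows is deprecated, it
# re-emits ("echoes") the superseding message instead of silently dropping
# it, so the stale branch of the propagation gets corrected.

class Node:
    def __init__(self, name):
        self.name = name
        self.seen = {}    # content_id -> latest known version number
        self.outbox = []  # messages this node would forward or echo

    def receive(self, content_id, version):
        latest = self.seen.get(content_id, -1)
        if version > latest:
            # Fresh information: record it and forward as usual.
            self.seen[content_id] = version
            self.outbox.append(("forward", content_id, version))
        elif version < latest:
            # Deprecated notification: echo the newer message back.
            self.outbox.append(("echo", content_id, latest))
```

In the scenario of the note, node b would have already processed the removal (a newer version) when the stale creation notice arrives, so it echoes the removal towards node c rather than discarding the exchange entirely.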