Rina IRATI GLIF Singapore 2013

  1. Early RINA prototyping and deployment in the IRATI project, and future research in the PRISTINE and IRINA projects Sergi Figuerola, Innovation and Technology Director (sergi.Figuerola@i2cat.net) 13th annual Global LambdaGrid Workshop October 4th, 2013 13th Annual Global LambdaGrid Workshop © Fundació i2CAT 2013
  2. Agenda • What is RINA • Why research RINA • Flow of research and development activities • EC-funded RINA research – IRATI – PRISTINE (in negotiations) – IRINA (in negotiations) 13th Annual Global LambdaGrid Workshop 2 © Fundació i2CAT 2013
  3. RINA is an innovative approach to computer networking using inter-process communication (IPC), a set of techniques for the exchange of data among multiple threads in processes running on one or more computers connected to a network. Ref.: J. Day, “Patterns in Network Architecture: A Return to Fundamentals”, Prentice Hall, 2008. The RINA principle: networking is not a layered set of different functions but rather a single layer (DIF) of distributed IPC that repeats over different scopes. 13th Annual Global LambdaGrid Workshop 3 © Fundació i2CAT 2013
  4. RINA Architecture • A structure of recursive layers that provide IPC (Inter-Process Communication) services to applications on top • Separation of mechanism from policy • All layers have the same functions, with different scope and range – Not all instances of a layer may need all functions, but they don’t need more • There is a single type of layer that repeats as many times as required by the network designer • A layer is a distributed application that performs and manages IPC (a Distributed IPC Facility – DIF) • This yields a theory and an architecture that scales indefinitely – i.e. any bounds imposed are not a property of the architecture itself © John Day, All Rights Reserved, 2011 13th Annual Global LambdaGrid Workshop 4 © Fundació i2CAT 2013
  5. Naming and addressing in RINA • All application processes (including IPC processes) have a name that uniquely identifies them within the application process namespace. • In order to facilitate its operation within a DIF, each IPC process within a DIF gets a synonym that may be structured to facilitate its use within the DIF (i.e. an address).  The scope of an address is the DIF; addresses are not visible outside of the DIF.  The Flow Allocator function of the DIF finds the DIF IPC process through which a destination application process can be accessed.  Because the architecture is recursive, applications, nodes and PoAs are relative.  For a given DIF of rank N, the IPC process is a node, the process at layer N+1 is an application and the process at layer N-1 is a Point of Attachment. [Diagram: example with DIFs A to F; the numbers shown are addresses, meaningful only within their own DIF.] 13th Annual Global LambdaGrid Workshop 5 © Fundació i2CAT 2013
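To make the flow-allocation bullet concrete, here is a minimal C++ sketch (type and function names are hypothetical, not taken from the RINA specifications): application names live in a DIF-independent namespace, an address is only a DIF-local synonym for an IPC process, and a directory inside the Flow Allocator answers the question "through which IPC process of this DIF can the destination application be reached?".

```cpp
// Illustrative sketch only (names and types are hypothetical, not from the
// RINA specifications). Application names are DIF-independent; an address is
// just a synonym for an IPC process that is meaningful only inside one DIF.
#include <cstdint>
#include <map>
#include <optional>
#include <string>

using ApplicationName = std::string;   // unique within the application namespace
using Address         = std::uint32_t; // meaningful only inside one DIF

class FlowAllocatorDirectory {
public:
    // Record that 'app' registered through the IPC process at 'ipcProcessAddr'.
    void registerApplication(const ApplicationName& app, Address ipcProcessAddr) {
        directory_[app] = ipcProcessAddr;
    }

    // Resolve the DIF-local address of the IPC process that can reach 'app';
    // outside this DIF the returned address has no meaning.
    std::optional<Address> resolve(const ApplicationName& app) const {
        auto it = directory_.find(app);
        if (it == directory_.end()) return std::nullopt;
        return it->second;
    }

private:
    std::map<ApplicationName, Address> directory_;
};

int main() {
    FlowAllocatorDirectory dif;                 // one directory per DIF
    dif.registerApplication("video-server", 12);
    auto addr = dif.resolve("video-server");    // -> 12, valid only within this DIF
    return addr ? 0 : 1;
}
```

Because the architecture is recursive, the same structure would repeat unchanged at every rank; only the scope of the directory changes.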
  6. Architectural model [Diagram: hosts and a router, each running application processes, IPC processes and a management agent; IPC processes are stacked over a DIF and over shim DIFs (shim DIF over TCP/UDP, shim DIF over Ethernet), all accessed through the IPC API. Within an IPC process, functions are ordered by increasing timescale (performed less often) and complexity: SDU delimiting, data transfer (state vector), data transfer control (transmission control, retransmission control, flow control), relaying and multiplexing, SDU protection, and layer management (CACEP, authentication, enrollment, flow allocation, resource allocation, forwarding table generator, RIB and RIB daemon, CDAP parser/generator).] 13th Annual Global LambdaGrid Workshop 6 © Fundació i2CAT 2013
  7. Agenda • What is RINA • Why research RINA • Flow of research and development activities • EC-funded RINA research – IRATI – PRISTINE (in negotiations) – IRINA (in negotiations) 13th Annual Global LambdaGrid Workshop 7 © Fundació i2CAT 2013
  8. Why research RINA (I) • Architecture: – Today: 5 layers, layers “2.5”, layer violations, “overlays”, “virtual networks”, “middleboxes” (NATs, firewalls, application-layer gateways). Getting complex! – RINA: Repeating structure, the DIF (one type of layer, repeated as needed) • Naming, addressing and routing: – Today: No independent application names, no node names, just PoA names; routing on PoAs (multi-homing and mobility are hard to support) – RINA: Complete naming & addressing, routing on the node; support for multi-homing and mobility without special protocols. No need for a global address space. • Congestion control: – Today: Put in TCP, not the best place it could be, since it maximizes the delay and variance of the control loop (makes the system chaotic: self-similar traffic) – RINA: Each layer can perform congestion control, confining the effects of congestion to that layer. The delay and variance of control loops can be bounded. 13th Annual Global LambdaGrid Workshop 8 © Fundació i2CAT 2013
  9. Why research RINA (II) • Scalability: – Today: Limited due to the fixed number of layers in the architecture – RINA: Recursion provides a divide-and-conquer approach, the way to scalability • Security: – Today: No systematic approach to security; secure each protocol or add boxes in between to improve security (firewalls) – RINA: Strong design dictates where security functions go in the architecture (encryption, authentication, access control). DIFs are securable containers. • Quality of Service: – Today: Best effort is the dogma; applications cannot express desired outcomes – RINA: Each DIF is free to provide different QoS classes, using different policies for resource allocation, routing and data transfer. Applications can request the desired characteristics for a flow (delay, loss, ordering, etc.) • Management: – Today: Complex, reflecting the complexity of the architecture and the high number of protocols – RINA: The commonality of the structure simplifies management by orders of magnitude 13th Annual Global LambdaGrid Workshop 9 © Fundació i2CAT 2013
  10. Agenda • What is RINA • Why research RINA • Flow of research and development activities • EC-funded RINA research – IRATI – PRISTINE (in negotiations) – IRINA (in negotiations) 13th Annual Global LambdaGrid Workshop 10 © Fundació i2CAT 2013
  11. Flow of RINA R&D activities (feedback between activities not shown for clarity) [Diagram: research on the RINA reference model produces the core RINA specs; research on policies for different areas (data transfer, DIF creation, multiplexing, application discovery, enrollment, security, management, routing, resource allocation) produces policy specs; design and development of simulators produces simulators; study of different use cases and deployment options produces use case analyses; prototyping on different platforms (Java VM, Linux OS, Android OS, NetFPGA) produces prototypes; experimentation and validation, coexisting with different technologies (TCP/UDP/IP, VLANs, WiFi, MPLS, LTE), produces data and conclusions.] 13th Annual Global LambdaGrid Workshop 11 © Fundació i2CAT 2013
  12. Agenda • What is RINA • Why research RINA • Flow of research and development activities • EC-funded RINA research – IRATI – PRISTINE (in negotiations) – IRINA (in negotiations) 13th Annual Global LambdaGrid Workshop 12 © Fundació i2CAT 2013
  13. IRATI @ a Glance • http://irati.eu What? Main goals – To advance the state of the art of RINA towards an architecture reference model and specifications that are closer to enabling implementations deployable in production scenarios. – The design and implementation of a RINA prototype on top of Ethernet will enable the experimentation and evaluation of RINA in comparison to TCP/IP. Who? 4 partners 5 activities:  WP1: Project management  WP2: Arch., Use cases and Req.  WP3: SW Design and Implementation  WP4: Deployment into OFELIA  WP5: Dissemination, Standardisation and Exploitation Budget Total Cost 1.126.660 € EC Contribution 870.000 € Duration 2 years Start Date 1st January 2013 External Advisory Board Juniper Networks, ATOS, Cisco Systems, Telecom Italia, BU 13th Annual Global LambdaGrid Workshop 13 © Fundació i2CAT 2013
  14. IRATI contributions to RINA roadmap • Reference model and core specifications – Detect inconsistencies and errors • Research on policies for different areas – Routing (link-state), Shim DIF over Ethernet VLANs (802.1q) • Use cases – Corporate VPNs and cloud networking • Prototyping – Initial implementation for Linux OS (user-space and kernel) – Porting of RINA implementation to Juniper platforms • Experimentation – First experimental analysis of RINA against TCP/IP under similar conditions (focusing on LAN environments) 13th Annual Global LambdaGrid Workshop 14 © Fundació i2CAT 2013
  15. Cloud/Network provider use case (Introduction) • RINA applied to a hybrid cloud/network provider – Mixed offering of connectivity (Ethernet VPN, MPLS IP VPN, Ethernet Private Line, Internet Access) + computing (Virtual Data Center) [Diagram: datacenter design, access network and wide area network] 13th Annual Global LambdaGrid Workshop 15 © Fundació i2CAT 2013
  16. Cloud/Network provider use case (Modeling) [Diagram: customer sites (Customer 1 sites A, B, C; Customer 2 sites A, B, C) connect through CE and PE routers to an MPLS backbone, with an Internet gateway towards the public Internet and end users; two data centers, each with top-of-rack (TOR) switches and hypervisors (HV) hosting virtual machines (VM).] 13th Annual Global LambdaGrid Workshop 16 © Fundació i2CAT 2013
  17. Cloud/Network provider use case (Applying RINA) [Diagram: Scenario 1 (Inter-DC): a Customer A DIF rides on an inter-datacenter DIF, supported by the datacenter-wide DIFs of Datacenter 1 and Datacenter 2 and by the provider top-level DIF; the top-level DIF is in turn supported by the backbone DIF of the network service provider core and by the provider access-network DIFs, down to the VM/HV, TOR, CE, PE and P elements. Scenario 2 (DC-Customer): a Customer A DIF rides on the datacenter-wide DIF of Datacenter 1 and on the provider top-level DIF, supported by the backbone DIF (Interoute core network) and the access networks, reaching the Customer 1 site A network.] 13th Annual Global LambdaGrid Workshop 17 © Fundació i2CAT 2013
  18. Shim DIF over Ethernet – General requirements • The task of a shim DIF is to put as small as possible a veneer over a legacy protocol to allow a RINA DIF to use it unchanged – We are not trying to make legacy protocols provide full support for RINA: a shim DIF is not a RINA-conformant application – Anything more should be provided by the first full DIF • The shim DIF should provide no more service or capability than the legacy protocol provides [Diagram: hosts and a router running application processes, IPC processes and management agents, with a shim DIF over TCP/UDP and a shim DIF over Ethernet underneath the regular DIF.] 13th Annual Global LambdaGrid Workshop 19 © Fundació i2CAT 2013
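As a rough sketch of the "small veneer" idea (an illustration under assumed names, not the IRATI shim-DIF-over-802.1q specification), a shim flow over Ethernet can be little more than a (VLAN, local MAC, remote MAC) tuple, and writing on it offers exactly what Ethernet offers: best-effort frames that may be lost or reordered.

```cpp
// Illustrative sketch only, not the IRATI shim-DIF-over-802.1q specification:
// the shim exposes the normal IPC API but maps a flow onto nothing more than
// an Ethernet/VLAN frame exchange, so it can offer no more than Ethernet does.
#include <array>
#include <cstdint>
#include <vector>

using MacAddress = std::array<std::uint8_t, 6>;

struct ShimEthernetFlow {
    std::uint16_t vlanId;   // one VLAN per shim DIF is a plausible mapping
    MacAddress    localMac;
    MacAddress    remoteMac;
};

// Stub standing in for the device-driver hand-off: a real shim IPC process
// would build an 802.1q-tagged frame here and queue it on the NIC.
bool sendFrame(const ShimEthernetFlow& flow, const std::vector<std::uint8_t>& sdu) {
    (void)flow;              // VLAN id and MACs would go into the frame header
    return !sdu.empty();     // pretend the frame was queued; no delivery guarantee
}

int main() {
    ShimEthernetFlow flow{100, {0x02, 0, 0, 0, 0, 1}, {0x02, 0, 0, 0, 0, 2}};
    std::vector<std::uint8_t> sdu{0x68, 0x69};   // "hi"
    return sendFrame(flow, sdu) ? 0 : 1;         // best-effort only, like Ethernet
}
```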
  19. High-level software architecture – General requirements and choices • Linux has been chosen as the target platform for IRATI because – It is widely used in different contexts – It is an open-source OS with a great community and documentation • However, the implementation aims to be as reusable as possible in similar environments – other UNIX-based operating systems • The implementation targets both user space and kernel space, since – Low performance penalties have to be achieved for highly frequent tasks (such as reading and writing data) -> some components must be placed in the kernel – There is a need to access device-driver functionality in order to overlay RINA on top of Ethernet (or other networking technologies in the future) -> some components must be placed in the kernel 13th Annual Global LambdaGrid Workshop 20 © Fundació i2CAT 2013
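The split described above can be pictured with the following minimal sketch; the device node and the call sequence are invented for illustration and are not the IRATI ABI. The point is simply that the per-SDU fast path (read/write) lands in kernel components, while flow allocation and other layer management would be handled by user-space machinery beforehand.

```cpp
// Minimal sketch of the user/kernel boundary motivated on this slide. The
// device node and call sequence are invented for illustration; they are not
// the IRATI ABI. Per-SDU writes and reads cross into the kernel, while flow
// allocation would have been negotiated earlier by a user-space daemon that
// configures the kernel-side components.
#include <fcntl.h>
#include <unistd.h>

int main() {
    // Hypothetical character device exposed by the kernel-space components.
    int fd = open("/dev/rina-io", O_RDWR);
    if (fd < 0) return 1;  // no such module on this machine; it is only a sketch

    // In a real stack the port-id returned by flow allocation would select the
    // EFCP instance to use; here the fd is assumed to be bound to one flow.
    const char sdu[] = "SDU over a pre-allocated flow";
    ssize_t n = write(fd, sdu, sizeof sdu);

    close(fd);
    return n < 0 ? 1 : 0;
}
```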
  20. Agenda • What is RINA • Why research RINA • Flow of research and development activities • EC-funded RINA research – IRATI – PRISTINE (in negotiations) – IRINA (in negotiations) 13th Annual Global LambdaGrid Workshop 23 © Fundació i2CAT 2013
  21. PRISTINE @ a Glance • What? Main goals – To design and develop an SDK for the IRATI RINA prototype, to unleash the programmability provided by RINA. – To use the SDK to design, implement and trial a set of policies to create optimized DIFs for each of the project use cases: distributed cloud, datacenter networking and network service provider. – To design and implement the first RINA multi-layer management system. Who? 15 partners: WIT-TSSG, i2CAT, TID, Ericsson, NXW, Thales, Nexedi, Atos, BISDN, Juniper, Telecom SudParis, U Brno, UiO, CREATE-NET, iMinds. 7 activities:  WP1: Project management  WP2: Use cases, req. analysis and programmable reference architecture  WP3: Programmable performance-enhancing functions and protocols  WP4: Innovative security and reliability enablers  WP5: Multi-layer management plane  WP6: System-level integration, validation, trials and assessment  WP7: Dissemination, standardisation and exploitation Budget Total Cost 5.034.961 € EC Contribution 3.337.000 € Duration 2.5 years Start Date 1st January 2014 External Advisory Board Cisco Systems, Telecom Italia, Deutsche Telekom, Colt Telecom, BU, Interoute 13th Annual Global LambdaGrid Workshop 24 © Fundació i2CAT 2013
  22. PRISTINE contributions to RINA roadmap • Reference model and core specifications – Detect inconsistencies and errors • Research on policies for different areas – Congestion control, distributed resource allocation, addressing, routing, authentication, access control, encryption, DIF management • Use cases – Decentralized cloud, datacentre networking, network service provider • Prototyping – Build on IRATI implementation for Linux OS. Develop SDK to allow easier customization, develop sophisticated policies with SDK. Prototype first DIF Management System • Experimentation – More realistic experimentation, with more complex deployments, coexisting with several technologies at once (IPv4, IPv6, Ethernet), usage of business applications 13th Annual Global LambdaGrid Workshop 25 © Fundació i2CAT 2013
  23. Use cases • Distributed cloud – Decentralized cloud technology; customer’s applications run in datacenters but also in servers from offices and home users. – Infrastructure interconnected through multiple ISPs, overall connectivity provided through overlay on top -> Use RINA to provide this overlay • Datacentre networking – Evaluate RINA as a technology that allows more dynamicity and tighter integration with applications (dynamic instantiation of application-optimized VPNs) • Network Service Provider – Investigate benefits of RINA for NSP: better network design, simpler management, DIFs that support different levels of QoS with stronger flow isolation, better security, programmability, etc. 13th Annual Global LambdaGrid Workshop 26 © Fundació i2CAT 2013
  24. PRISTINE infrastructure for trials [Map: Virtual Wall, EXPERIMENTA and Trentino Testbed] 13th Annual Global LambdaGrid Workshop 27 © Fundació i2CAT 2013
  25. Agenda • What is RINA • Why research RINA • Flow of research and development activities • EC-funded RINA research – IRATI – PRISTINE (in negotiations) – IRINA (in negotiations) 13th Annual Global LambdaGrid Workshop 28 © Fundació i2CAT 2013
  26. IRINA @ a glance • What? Main goals – To make a study of RINA against the current networking state of the art and the most relevant clean-slate architectures under research. – To perform a use-case study of how RINA could best be used in the NREN scenario, and showcase a lab trial of the use case – To involve the NREN and GEANT community in the different steps of the project, in order to get valuable feedback Who? 4 partners 5 activities:  WP1: Technical coordination and interaction with GEANT3+  WP2: Comparative analysis of network architectures  WP3: Use case study and lab trials  WP4: Dissemination and workshop organization Budget Total Cost 199.940 € EC Contribution 149.955 € Duration 18 months Start Date 1st November 2013 13th Annual Global LambdaGrid Workshop 29 © Fundació i2CAT 2013
  27. IRINA contributions to RINA roadmap • Reference model and core specifications – Compare with other clean-slate architectures • Use cases – Research network operators (NRENs and GEANT environment) • Prototyping – Minor adaptations to the IRATI prototype (Linux OS), to be able to trial the use case in the lab • Experimentation – Focus on the requirements of NRENs 13th Annual Global LambdaGrid Workshop 30 © Fundació i2CAT 2013
  28. GEANT and NRENs use case 13th Annual Global LambdaGrid Workshop 31 © Fundació i2CAT 2013
  29. Many Thanks ! Moltes gràcies ! Sergi Figuerola, Innovation and Technology Director (sergi.Figuerola@i2cat.net) Eduard Grasa, RINA research line leader (eduard.grasa@i2cat.net) 13th annual Global LambdaGrid Workshop October 4th, 2013 13th Annual Global LambdaGrid Workshop http://www.i2cat.cat http://dana.i2cat.net http://irati.eu © Fundació i2CAT 2013

Editor's notes

  1. In general: * Remember that the point of RINA is that it provides a building block (the DIF) that can be adapted to different requirements (through different policies). This is the fundamental tool, which can be used as many times as necessary, building up structures of DIFs. The building block serves to separate different scopes (for example, the networks of different providers, different regions of the network within one provider (metro, regional, backbone), various user VPNs, etc.). * A DIF must provide service to the applications above it, relying on the characteristics of the DIFs below it (like a human tower, each level standing on the one beneath).
  2. * The numbers are the addresses; A1, A2, B1, B2, etc. are application names. * PoA -> Point of Attachment (the point at which a process is connected to the network). Looking at the figure, the PoA of A1 is B1, the PoAs of A3 are C3 and D1, etc. Application name spaces are not tied to any layer or DIF, recognizing that they may all be members of other DIFs.
  3. IPC Process Components
Data transfer service API: This is the only externally visible API for application processes using the IPC Process services. It allows applications to make themselves available through a DIF and to request and use IPC services to other applications. The abstract API has six operations (implementations may have more operations for convenience of use and to adapt to the specifics of each operating system, while still logically providing the same operations); a code sketch of this interface appears at the end of this note. portId _allocateFlow(destAppName, List<qosParams>): enables an application to allocate a flow to a destination application (identified by destAppName), specifying a list of desired QoS parameters. The operation returns a handle to the flow, the portId, used in other operations to read/write SDUs (Service Data Units, the user data) to the flow. void _write(portId, sdu): sends an SDU through the flow identified by portId. SDUs are buffers of user data with a certain length. SDUs are delivered to the destination application as they were written by the source application. sdu _read(portId): reads an SDU from the flow identified by portId. void _registerApplication(appName, List<DIFName>): registers the application identified by appName with the DIFs identified in the list of difNames. This operation advertises the application within a DIF, so that flows can be allocated to it (it is always up to the application to take the final decision, refusing or accepting them). void _unregisterApplication(appName, List<DIFName>): unregisters an application from a set of DIFs, or from all DIFs if the second argument is not present. More information about the data transfer service API is available in the “Data Transfer Service Definition” specification, pages 179-192 of the “RINA specification handbook”.
SDU Delimiting: The first step in this processing path is to delimit the SDUs posted by the application, since the data transfer protocol may implement concatenation and/or fragmentation of the SDUs in order to achieve better data transport efficiency and/or to better adapt to the DIF characteristics. More information about the SDU Delimiting component is available in the “Specification template for a DIF delimiting module” specification, pages 193-194 of the “RINA specification handbook”.
Error and Flow Control Protocol (EFCP): EFCP is split into two parts, the Data Transfer Protocol (DTP) and the Data Transfer Control Protocol (DTCP), loosely coupled through the use of a state vector. DTP performs the mechanisms that are tightly coupled to the transported SDU, such as fragmentation, reassembly, sequencing, addressing, concatenation and separation. DTCP performs the mechanisms that are loosely coupled to the transported SDU, such as transmission control, retransmission control and flow control. When a flow is allocated, an instance of DTP and its associated state vector are created. Flows that require flow control, transmission control or retransmission control will have a companion DTCP instance allocated. The string of octets exchanged between two protocol machines is referred to as a Protocol Data Unit (PDU). PDUs comprise two parts, Protocol Control Information (PCI) and user data. The PCI is the part understood by the DIF, while the user data is incomprehensible to the DIF and is passed to its user. The PDUs generated by EFCP are passed to the relaying and multiplexing components. RINA’s EFCP is based on delta-t, designed by Richard Watson in 1981 [9].
Watson proved that the necessary and sufficient condition for reliable synchronization is to bound three timers: Maximum Packet Lifetime (MPL), maximum time to acknowledge, and maximum time to keep retransmitting. In other words, SYNs and FINs in TCP are unnecessary, allowing for a simpler and more secure data transfer protocol. More information about the EFCP component is available in the “Error and Flow Control Protocol Specification: Data Transfer + Data Transfer Control” specification, pages 195-232 of the “RINA specification handbook”.
Relaying and Multiplexing Task (RMT): The role of the Relaying task is to forward the PDUs passing through the IPC Process to the destination EFCP Protocol Machine (PM) by checking the destination address in the PCI. The forwarding decision is based on the routing information and the agreed Quality of Service. The Multiplexing task multiplexes PDUs from different EFCP instances onto the points of attachment of lower-ranking (N-1) DIFs. Several policies decide when and where the PDUs are forwarded (management of queues, scheduling, length of queues); these policies affect the delivered Quality of Service. More information about the RMT component is available in the “Relaying and Multiplexing Task” specification, pages 233-240 of the “RINA specification handbook”.
SDU Protection: SDU Protection includes all the checks necessary to determine whether or not a PDU should be processed further (for incoming PDUs) or to protect the contents of the PDU while in transit to another IPC Process that is a member of the DIF (for outgoing PDUs). It may include, but is not limited to, checksums, CRCs, encryption and Hop Count/Time To Live mechanisms. The SDU Protection mechanisms to be applied may change hop by hop, since they depend on the characteristics of the underlying DIFs. In RINA, Deep Packet Inspection is unnecessary and often impossible. More information about the SDU Protection component is available in the “Specification Template for a DIF SDU Protection module” specification, pages 241-244 of the “RINA specification handbook”.
The Resource Information Base (RIB) and the RIB Daemon: The RIB is the logical representation of the objects that capture the information defining an application's state. For the IPC Process, this means objects that represent information about mappings of addresses, resource allocation, connectivity, available applications, security credentials, established flows, forwarding and routing tables, and so on. The RIB Daemon is the task that controls access to the RIB and also optimizes the operations on the RIB performed by other components of the IPC Process. More information about the RIB and RIB Daemon components is available in the “Specification of Managed Objects for the Demo DIF” specification, pages 281-289 of the “RINA specification handbook”.
The Common Distributed Application Protocol (CDAP) and the Common Application Connection Establishment Phase (CACEP): CDAP is the canonical application protocol, similar to an assembly language, that can be used to build all distributed applications. CDAP provides six primitives to operate on remote objects: create, delete, read, write, start and stop. IPC Processes use CDAP to modify the RIBs of other IPC Processes, which triggers changes in the behaviour of those IPC Processes. CDAP is modelled after OSI's CMIP, the Common Management Information Protocol.
Any existing application protocol can use the DIF (i.e. can be transported by a flow); however, only CDAP is used inside the DIF, to test the theory that there is only one application protocol. More information about CACEP and CDAP is available in the “Common Application Establishment Phase” and “CDAP - Common Distributed Application Protocol” specifications, pages 106-118 and pages 119-160 of the “RINA specification handbook”, respectively.
The Enrollment Task: All communication goes through three phases: Enrollment, Allocation (Establishment) and Data Transfer; RINA is no exception. Enrollment is the procedure by which an IPC Process joins an existing DIF and obtains enough information to start operating as a member of that DIF. Enrollment starts when the joining IPC Process establishes an application connection with another IPC Process that is already a member of the DIF. During application connection establishment, the IPC Process that is a DIF member may want to authenticate the joining process, depending on the DIF security requirements. The CACE component (Common Application Connection Establishment) is in charge of establishing and releasing application connections; several authentication modules can be plugged into CACE to implement different authentication policies. Once the application connection has been established, the joining IPC Process needs to acquire the DIF's static information: which QoS classes are supported and what their characteristics are, which policies the DIF supports, and other parameters such as the DIF's MPL or maximum PDU size. More information about the Enrollment task component is available in the “Basic Enrollment” specification, pages 251-256 of the “RINA specification handbook”.
The Flow Allocator (FA): The Flow Allocator is the component responsible for managing a flow's lifecycle: allocation, monitoring and deallocation. Unlike in TCP, in RINA port allocation and data transfer are separate functions, meaning that a single flow can be supported by one or more data transport connections (in TCP a port number is mapped to one and only one TCP connection; the port numbers identify the TCP connection). The FA handles flow allocation/deallocation requests. Among its tasks it has to: i) find the IPC Process through which the destination application is accessible; ii) map the requested QoS to policies that will be associated with the flow; iii) negotiate the flow allocation with the destination IPC Process FA (access control permissions, policies associated with the flow); iv) create one or more DTP and optionally DTCP instances to support the flow; v) monitor the DTP/DTCP instances to ensure the requested QoS is maintained during the flow's lifetime, and take specific actions to correct any misbehaviour; and vi) deallocate the resources associated with the flow once the flow is terminated. More information about the FA component is available in the “Flow Allocator” specification, pages 257-268 of the “RINA specification handbook”.
The Forwarding Table Generator (Routing): The Forwarding Table Generator (or Routing) is the IPC Process component that exchanges connectivity information with other IPC Processes of the DIF and applies an algorithm to generate the forwarding table used by the Relaying and Multiplexing Task (connectivity as well as QoS and resource allocation information is used to generate the forwarding table).
Multiple algorithms and kinds of information may be required to generate the forwarding table, depending on the QoS classes supported by the DIF. More information about the routing component is available as one of the specifications proposed by the IRATI consortium; it can be found in section 6 of this document.
The Resource Allocator (RA): The Resource Allocator is the component that decides how the resources in the IPC Process are allocated (dimensioning of queues, creation/suspension/deletion of queues, creation/deletion of N-1 flows, and others). More information about the RA component is available in the “RINA Reference model part 3: Distributed InterProcess Communication” document, pages 79-80 of the “RINA specification handbook”.
Shim IPC Process over TCP/UDP: This IPC Process wraps a TCP/UDP layer and presents it through the IPC API, allowing "normal" IPC Processes to be overlaid on IP layers. More information about the shim DIF over TCP/UDP component is available in the “Specification for shim IPC Processes over IP layers” document, pages 273-280 of the “RINA specification handbook”.
Shim IPC Process over 802.1q: This IPC Process wraps an Ethernet layer and presents it through the IPC API, allowing "normal" IPC Processes to be overlaid on 802.1q layers (VLANs). More information about the shim IPC Process over 802.1q component is available as one of the specifications proposed by the IRATI consortium; it can be found in section 6 of this document.
The Management Agent: The Management Agent is used by the DIF Management System (DMS) to monitor the state of the DIF and to make configuration changes, including policy changes relating to QoS and security.
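For readers who prefer an interface definition, the abstract data transfer service API described in this note can be transcribed into C++ roughly as follows. This is only a sketch: the naming style, the byte-vector SDU type and the explicit flow-deallocation operation (the natural counterpart of allocateFlow, whose behaviour is described under the Flow Allocator) are choices made here, not text from the specification.

```cpp
// Sketch of the abstract data transfer service API from the note above,
// expressed as a C++ interface. Names and types are illustrative only.
#include <cstdint>
#include <string>
#include <vector>

using Sdu = std::vector<std::uint8_t>;  // a buffer of user data

struct QosParam {
    std::string name;   // e.g. "delay", "loss", "in-order-delivery"
    std::string value;
};

class IpcService {
public:
    virtual ~IpcService() = default;

    // Allocate a flow to destAppName with the requested QoS; returns the
    // portId handle used by read/write.
    virtual int allocateFlow(const std::string& destAppName,
                             const std::vector<QosParam>& qos) = 0;

    // Release the flow and the resources associated with it (assumed here as
    // the counterpart of allocateFlow; see the Flow Allocator description).
    virtual void deallocateFlow(int portId) = 0;

    // Send one SDU through the flow; SDUs are delivered as they were written.
    virtual void write(int portId, const Sdu& sdu) = 0;

    // Read the next SDU from the flow.
    virtual Sdu read(int portId) = 0;

    // Advertise the application in the given DIFs so flows can be allocated
    // to it; the application still accepts or refuses each incoming flow.
    virtual void registerApplication(const std::string& appName,
                                     const std::vector<std::string>& difNames) = 0;

    // Withdraw the application from the given DIFs (or all DIFs, if empty).
    virtual void unregisterApplication(const std::string& appName,
                                       const std::vector<std::string>& difNames) = 0;
};
```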
  4. Layer violations -> layers that look at information from other layers to do their job (e.g. the TCP pseudo-header). Overlays / "virtual networks" -> layers that sit above the transport layer (TCP/UDP), for example tunneling protocols such as VXLAN, STT, NVGRE, ...
Naming, addressing and routing: IP only assigns names to interfaces, not to nodes (so a node with two or more interfaces looks like two or more nodes to the network) -> multi-homing and mobility problems. Application names today are mapped to an IP address and a TCP or UDP port through DNS, which is a system external to the network (the network only understands IP addresses) -> this complicates mobility.
Congestion control: two problems: * Implicit detection (congestion is assumed because packets are lost, but one cannot be sure that there really is congestion). * Control and detection are done in TCP, which is as far from the problem as possible (instead of the congestion being detected and fixed in the part of the network where it occurs). RINA fixes both: * Explicit detection (in each DIF). * Each DIF controls the congestion within its own DIF, not in other DIFs.
  5. With feedback between all the different activities
  6. Shim DIF is =
  7. Explain the island a bit: 5 NEC switches connected to servers and to the other OFELIA islands (for IRATI, the other relevant island is iMinds).
  8. Orange: smaller spaces
  9. DISTRIBUTED CLOUD: SlapOS is a decentralized cloud technology used to build a physically distributed cloud [90]. Customers' applications are run in traditional datacenters, but also on servers in offices and homes. SlapOS manages the overall cloud from a logically centralized location, the SlapOS master (a distributed approach is currently under development); the SlapOS master controls the different computers running SlapOS slaves. In terms of networking, the master and the nodes at different locations are interconnected through multiple IPv6 providers. In order to guarantee high reliability (99.999%), SlapOS uses an overlay called re6st, which creates a mesh network of OpenVPN tunnels on top of several IPv6 providers and uses the Babel protocol [21] to choose the best routes between nodes. PRISTINE will provide an alternative to the re6st overlay by using RINA on top of IPv6. The advantages and added value of using RINA instead of re6st are detailed in the description of task T2.1.
DATACENTRE NETWORKING: The datacenter space is one of the areas that has seen the most virtual networking innovation during the last few years, fuelled by the flexibility requirements of cloud computing. A myriad of SDN-based virtual network solutions, usually providing L2 over L3 or L4 tunnels and a control plane, are available on the market (VXLAN [72], NVGRE [73], STT [11], etc.). PRISTINE will investigate and trial the use of RINA-based solutions for intra- as well as inter-datacenter networking. Important issues to be addressed in a datacenter environment are the mobility of virtual machines, to allow efficient utilization of datacenter resources as well as high reliability; multi-homing support; guaranteeing the level of service in inter-datacenter communications; and flexible allocation of flows supporting compute and storage resources. RINA provides an excellent framework to tackle these issues, and the PRISTINE project will exploit it as explained in task T2.1.
NETWORK SERVICE PROVIDER: The goals of this scenario are to investigate and trial the efficiencies and benefits of a Network Service Provider (NSP) using the RINA technology, and to analyze RINA as a materialization of the Network Functions Virtualization concept within an operator network. The NSP will internally use several DIFs over Ethernet in order to transport the traffic of the services provided to customers: IPv4 and IPv6 Internet access, VoIP, Ethernet, etc., but also native RINA traffic. With PRISTINE solutions the NSP will benefit from DIFs that provide different levels of service, being able to consolidate separate infrastructures. It will also have the tools to better manage congestion within its networks and provide stronger flow isolation, achieve higher reliability, and other benefits detailed in T2.1.
  10. Figure 4 illustrates an example of how RINA could be applied to the NRENs and GEANT scenario. The picture doesn't show all the DIFs that would be utilized in a real scenario; it has been simplified for the sake of clarity (for example, NRENs or GEANT would be composed of more than one DIF in reality). The underlying GEANT DIF is the backbone of the system and supports the interconnection of the different NREN DIFs. NRENs would be interconnected by jointly operating one or more peering DIFs that would directly interconnect their border routers through the establishment of one or more flows over the GEANT backbone DIF, as shown in Figure 5. Note that there can be multiple peering DIFs, representing different NREN federations supported by GEANT (for example, one peering DIF supported by all the NRENs, but also different peering DIFs between subsets of NRENs). Each peering DIF can have different policies in terms of addressing, routing, security, resource allocation, data transfer, etc. Peering DIFs support DIFs that are customized for different types of applications: for example, a Public Internet DIF that gives access to the Internet and provides a best-effort, low-security type of service. But there can be many other "application-specific DIFs", such as DIFs tailored for radio astronomers or high-energy physics, DIFs that provide access to scientific clouds, research-project-specific VPNs, etc. Again, the characteristics of each of these DIFs can be optimized for the application (or applications) they are designed to support.
  11. IRATI logo, IRATI website, DANA blog