RECAP at ETSI Experiential Network Intelligence (ENI) Meeting

  1. Reliable Capacity Provisioning and Enhanced Remediation for Distributed Cloud Applications http://recap-project.eu recap2020 THIS PROJECT HAS RECEIVED FUNDING FROM THE EUROPEAN UNION’S HORIZON 2020 RESEARCH AND INNOVATION PROGRAMME UNDER GRANT AGREEMENT NUMBER 732667 RECAP Presentation for ETSI ENI Warsaw, April 12th 2019
  2. Introduction of the Presenters 2 Johan Forsman, Tieto Johan Forsman is a business developer, product manager and principal solution architect at Tieto Product Development Services (www.tieto.com/pds). Johan has over 20 years of experience in the development of mobile telecommunication systems and is currently involved in business opportunities introducing NFV, 5G and IoT technologies. Dr. Jörg Domaschka, Ulm University Dr. Jörg Domaschka is a senior researcher leading UULM's research group on cloud computing, large-scale architectures, and adaptive middleware platforms. Jörg has been involved in EC-funded research projects since 2006 and is the project coordinator of RECAP. Dr. Paolo Casari, IMDEA Networks Dr. Paolo Casari is a Research Assistant Professor at IMDEA Networks Institute, where he leads the Ubiquitous Wireless Networking group. He holds a PhD in Information Engineering from the University of Padova, Italy (2008). He is the Scientific Coordinator of the RECAP Project.
  3. Agenda • Introduction • Overview of RECAP ⁃ Introduction to RECAP ⁃ RECAP consortium • RECAP solution and lessons learned ⁃ Model-centric approach ⁃ A repeatable methodology for generating the models ⁃ Data veracity (telemetry quality) ⁃ Simulation in a closed control loop ⁃ Separation of concerns • NFV management use case ⁃ Objectives and challenges with the use case ⁃ Utilizing testbed and simulations to build trust ⁃ Traffic generation to train, validate and explore ⁃ Instrumentation and control for the RECAP closed loop • Summary: lessons learned 3
  4. Overview of RECAP 4
  5. Reliable Capacity Provisioning and Enhanced Remediation for Distributed Cloud Applications Next generation of agile and optimized cloud computing systems • Services are elastically instantiated and provisioned close to the users that actually need them, via self-configurable cloud computing systems • Machine learning and simulation techniques for the provisioning of cloud services • Applied to the following use cases: • Telco system for wireless & wireline • Smart city • Big data analytics 2017 - 2019 5 THIS PROJECT HAS RECEIVED FUNDING FROM THE EUROPEAN UNION’S HORIZON 2020 RESEARCH AND INNOVATION PROGRAMME UNDER GRANT AGREEMENT NUMBER 732667 https://recap-project.eu/
  6. RECAP Consortium 6 Industry • Intel Labs (Ireland) • BT (UK) • Tieto (Sweden) • Satec (Spain) • Linknovate (Spain) Academic • Ulm University (Germany) • Umeå University (Sweden) • Dublin City University (Ireland) • IMDEA Networks Institute (Spain) • CERTH (Greece) Use Case Providers [map graphic: partner locations, incl. CERTH in Thessaloniki, Greece]
  7. RECAP Solution and Lessons Learned A reference design 7
  8. RECAP: a model-centric approach 8 [Diagram: interlinked models for User, Workload, Application, Load Translation, Infrastructure and QoS]
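To make the model-centric picture above concrete, the following is a minimal sketch of how a user/workload model is translated into load on an application and demand on the infrastructure, which in turn bounds the achievable QoS. The types and numbers are hypothetical illustrations, not the actual RECAP model schemas:

```python
# Minimal, hypothetical sketch (not the actual RECAP schemas) of the
# model-centric view: user demand flows through a workload model, is
# translated into application load, and maps to infrastructure demand,
# which in turn determines QoS.
from dataclasses import dataclass

@dataclass
class WorkloadModel:
    requests_per_user: float      # request rate each user generates (req/s)

@dataclass
class ApplicationModel:
    cpu_per_request: float        # vCPU-seconds consumed per request

@dataclass
class InfrastructureModel:
    vcpus_available: int          # capacity at the candidate site

def load_translation(users: int, wl: WorkloadModel, app: ApplicationModel) -> float:
    """Translate user demand into infrastructure load (vCPUs needed)."""
    return users * wl.requests_per_user * app.cpu_per_request

demand = load_translation(10_000, WorkloadModel(0.2), ApplicationModel(0.05))
infra = InfrastructureModel(vcpus_available=150)
# Crude QoS check: demand beyond capacity would degrade user QoS.
print(f"vCPUs needed: {demand}, within capacity: {demand <= infra.vcpus_available}")
```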
  9. The RECAP Operational Modes 9 • The run-time operational mode (online), consisting of ⁃ operating applications in RECAP ⁃ the RECAP optimisation loops • The simulation and planning mode (offline), which employs offline simulation loops that are fed with monitoring data, application and workload models, and optimisation results. • The data analytics mode (offline), which employs the offline analysis of monitoring data, and the training of machine learning models.
  10. A repeatable methodology for model generation: Application view 11
  11. A repeatable methodology for model generation: Infrastructure view 12
  12. RECAP Data Collection, Data Analytics and Modelling Goal: Data Veracity ⁃ Understanding resource consumption Challenges: ⁃ Ensuring high-quality data (no errors, no data gaps) ⁃ Collating and correlating information from multiple sources, formats and semantics ⁃ Prioritizing the metrics applicable to the use case ⁃ Setting probes properly to receive telemetry data ⁃ Correctly fitting models to interpret the data E.g., the following are determined: • Computing resources: 10% utilized per user; 10x increase proportional to users • Memory resources: 5% utilized per user; 1x increase proportional to users
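As an illustration of the per-user consumption example above, here is a minimal sketch of fitting a linear per-user resource model from telemetry and using the residuals as a crude veracity check. The telemetry samples are hypothetical placeholders, and this is not the RECAP analytics pipeline:

```python
# Minimal sketch (not the RECAP analytics pipeline) of fitting a
# per-user linear resource-consumption model from telemetry, in the
# spirit of the slide's "X% utilized per user" example. The telemetry
# samples below are hypothetical placeholders.
import numpy as np

users = np.array([10, 20, 40, 80, 160])           # concurrent users sampled
cpu_util = np.array([1.1, 2.0, 4.2, 7.9, 16.3])   # CPU utilisation (%)

# Least-squares fit: cpu_util ~ slope * users + intercept
slope, intercept = np.polyfit(users, cpu_util, deg=1)
print(f"~{slope:.3f}% CPU per user (intercept {intercept:.2f}%)")

# Residuals as a crude data-veracity guard: a large residual is more
# likely a telemetry error (gap, mis-set probe) than real behaviour.
residuals = cpu_util - (slope * users + intercept)
if np.abs(residuals).max() > 3 * residuals.std():
    print("Warning: outlier sample, check probe configuration and data gaps")
```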
  13. RECAP Simulations & Planning 14 Simulation in the loop 1. Development of two simulators to support diverse use-case requirements 2. Support for the Infrastructure Optimizer to validate placement and deployment scenarios
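A toy illustration of "simulation in the loop": before the Infrastructure Optimizer enacts a placement, a simulator replays a workload trace against the candidate and rejects it if capacity would be exceeded. This hand-rolled discrete-event sketch with made-up values is purely illustrative; the actual RECAP simulators (DES/DTS) are far richer:

```python
# Toy discrete-event simulation for "simulation in the loop"
# (illustrative only, not the RECAP DES/DTS simulators): replay a
# workload trace against a candidate placement and reject it if any
# site's concurrency would exceed capacity. All values are hypothetical.
import heapq

def simulate(placement: dict, arrivals: list, service_time: float = 2.0) -> dict:
    """Return the peak number of concurrent requests per site."""
    # placement: request type -> site; arrivals: list of (time, request type)
    events = [(t, "arrive", placement[rt]) for t, rt in arrivals]
    heapq.heapify(events)
    busy, peak = {}, {}
    while events:
        t, kind, site = heapq.heappop(events)
        if kind == "arrive":
            busy[site] = busy.get(site, 0) + 1
            peak[site] = max(peak.get(site, 0), busy[site])
            heapq.heappush(events, (t + service_time, "depart", site))
        else:
            busy[site] -= 1
    return peak

arrivals = [(0.5, "video"), (0.6, "video"), (1.0, "web"), (1.1, "video")]
candidate = {"video": "edge-1", "web": "core"}
peaks = simulate(candidate, arrivals)
capacity = 2  # hypothetical per-site concurrency limit
print("accept" if all(p <= capacity for p in peaks.values()) else "reject", peaks)
```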
  14. Lessons Learned: Separation of Concerns 15 Why? Infrastructure and applications (e.g., vCDN infrastructure vs. virtual caches): 1. Belong to different commercial organisations, even competitors 2. Respond on very different timescales 3. Have different topologies 4. Have different aggregate load patterns 5. Have optimisation policies with different priorities ⇒ Infrastructure and applications have to be managed and optimised separately
  15. Infrastructure and Network Management 16 NFV Management Use Case provided by Tieto
  16. Introduction to Use Case A 17 [Diagram: service requirements spanning low to high latency and capacity: mobility, throughput, latency, availability, reliability, energy efficiency, device cost, device volume, integrity] Fundamental Challenges • Fulfilment of end-to-end QoS requirements • (Measurement of end-to-end QoS in live networks) • Increased network dynamics and system complexity • On-demand service provisioning Main Objectives • Automated service and infrastructure deployment • Automated orchestration and optimization of services • Profiling of infrastructure & network functions
  17. Utilizing Testbed and Simulations to Build Trust 18 Lab network (testbed): + Build trust with real applications and traffic (workload) + Measured results in selected scenarios + Real-time aspects, failures and tail-response + Profiling of infrastructure and applications + Prototyping & emulation of entities is possible - Scale: selected scope constrained by lab(s) - Time: lead time to get long-term results & HW changes RECAP simulation: + Scalable in scope with limited hardware + Scalable in time with short lead-time + Models for infrastructure, applications & workload + Calculated output (simulation results) - Approximations with selected granularity Live network: + Live & real - Don't touch - Proof/trust needed - Availability of scenarios - Availability of apps [Diagram: the lab network (testbed) comprises MME, RCF, UPF, virtualization, VNFM, VIM, EMS, MANO, O&M, communication service management, traffic generators and a city simulator; user, workload, infrastructure and application models plus KPIs & metrics feed the RECAP simulation; high-level, anonymous service-utilization models from the live network support validation of scenarios]
  18. Traffic Generation to Train, Validate and Explore 19 1 Goal: Explore Scenarios • Realistic artificial telecom workload • Disaster scenarios • Events • Region expansion 2 Input: Realistic Artificial Data Models • Buildings & roads (OpenStreetMap) • Demographic data (Umeå Kommun) • Household, work, commuting data • Service usage data • Radio network models 3 Tool: Data-Driven User Simulation • Mobility behavior • Service usage behavior • Service categories - eMBB (best-effort web, VoLTE, …) - mMTC (IoT, …) - cMTC (emerging use cases) [Map visualization: operational cells in blue, idle users in red, connected users within a cell in cyan; underlying models: World, Thing, Population, Mobility, Service, Radio Network, Application, Infrastructure]
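The sketch below illustrates data-driven traffic generation in miniature: a population of users attaches to cells and picks service categories (eMBB/mMTC/cMTC), and per-cell load is aggregated, so scenarios such as a local event can be explored by changing the inputs. All probabilities, rates and names are hypothetical; this is not Tieto's city simulator:

```python
# Minimal sketch of data-driven traffic generation (illustrative only,
# not Tieto's city simulator). All probabilities, rates and cell names
# are hypothetical placeholders.
import random

SERVICE_MIX = {        # service category -> selection probability
    "eMBB": 0.6,       # best-effort web, VoLTE, ...
    "mMTC": 0.3,       # IoT, ...
    "cMTC": 0.1,       # emerging low-latency use cases
}
LOAD_KBPS = {"eMBB": 500, "mMTC": 5, "cMTC": 50}

def tick(population: int, cells: list[str]) -> dict[str, int]:
    """One simulation step: each user attaches to a cell and uses a service."""
    per_cell = {c: 0 for c in cells}
    for _ in range(population):
        cell = random.choice(cells)  # stub for a real mobility model
        service = random.choices(
            list(SERVICE_MIX), weights=list(SERVICE_MIX.values()))[0]
        per_cell[cell] += LOAD_KBPS[service]
    return per_cell

cells = ["cell-A", "cell-B", "cell-C"]
print("baseline:", tick(1_000, cells))   # normal load
print("event:   ", tick(3_000, cells))   # e.g. a local event triples demand
```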
  19. Instrumentation and Control for the RECAP Closed Loop 20 [Diagram: Tieto's testbed as the system under test: NFV infrastructure, RAN/EPC VNF applications, VIM and VNFM, with traffic generators and a regional user simulator creating demand; infrastructure probes, E2E KPIs and application metrics feed RECAP data collection and analytics; RECAP models (workload, application, infrastructure, landscape, QoS) drive RECAP management: application orchestration, infrastructure optimization, elastic scaling, remediation, simulation and provisioning]
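The closed loop in the figure can be summarised as collect → analyse → optimise → validate → actuate. The following sketch shows that control flow with hypothetical stand-in functions for the RECAP components (probes, analytics, simulation-in-the-loop validation, VIM/VNFM actuation); it is illustrative only, not a real API:

```python
# Illustrative sketch of the closed control loop (collect -> analyse ->
# optimise -> validate -> actuate). Every function below is a
# hypothetical stand-in for a RECAP component, not a real API.
def collect_metrics() -> dict:
    """Stand-in for infrastructure probes / E2E KPI collection."""
    return {"cpu": 0.92, "latency_ms": 38}

def needs_scaling(metrics: dict) -> bool:
    """Stand-in for the analytics component deciding on remediation."""
    return metrics["cpu"] > 0.85

def validate_in_simulation(action: dict) -> bool:
    """Stand-in for simulation-in-the-loop: accept only if the
    predicted KPIs stay within the SLA."""
    return True

def actuate(action: dict) -> None:
    """Stand-in for orchestration calls towards the VIM/VNFM."""
    print("enacting:", action)

def control_loop(iterations: int = 3) -> None:
    for _ in range(iterations):
        metrics = collect_metrics()
        if needs_scaling(metrics):
            action = {"scale_out": "UPF", "replicas": 1}
            if validate_in_simulation(action):   # simulator/human in the loop
                actuate(action)

control_loop()
```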
  20. Summary: Lessons Learned • A repeatable methodology for generating the models ⁃ RECAP has developed a framework for this: there are tangible, reusable artifacts ⁃ Configuration through the models • Telemetry quality is important in a model-driven approach ⁃ Ensure the right instrumentation of the system ⁃ Ensure telemetry quality ⁃ How do you do the training? ⁃ Getting the models right is key ⁃ Very use-case-specific → hard to get a generic application model • Validate the results of machine learning-based optimization ⁃ Human in the loop ⁃ Simulator in the loop • DTS (large distributed systems) • DES (fine-grained data center level) • Separation of concerns ⁃ Infrastructure operator versus application operator 21
  21. THANK YOU http://recap-project.eu recap2020 RECAP Project ■ H2020 ■ Grant Agreement #732667 Call: H2020-ICT-2016-2017 ■ Topic: ICT-06-2016 THIS PROJECT HAS RECEIVED FUNDING FROM THE EUROPEAN UNION’S HORIZON 2020 RESEARCH AND INNOVATION PROGRAMME UNDER GRANT AGREEMENT NUMBER 732667 https://recap-project.eu/ https://recap-project.eu/about/public-deliverables/ Public Deliverables