This presentation summarizes the SDxCentral 2017 report on the NFV industry and its trends. It gives beginners a jump start for navigating the NFV landscape, providing the necessary details and expanding understanding by elaborating on each piece of the puzzle.
2. • This presentation is an extract of key learnings from the SDxCentral report on
NFV.
• (2017 Network Functions Virtualization Report Series – 3rd Edition)
DISCLAIMER
3. • Cloud computing matured and telco operators realized its cost benefits.
• When servers can be commoditized, why not the network?
• NFV Initiated by Operators
• The Industry Specification Group (ISG) for NFV was created within the European
Telecommunications Standards Institute (ETSI) and has grown from
7 to more than 290 member companies.
• ETSI took the lead on NFV in 2012 with a breakthrough “Network Functions
Virtualization” white paper.
• ETSI published more than 50 NFV documents so far.
HOW IT STARTED
5. • NFV gained momentum after ETSI released its model framework.
• Some operators took the initiative and started developing NFV platforms,
putting together funding and technology.
• OSM/OpenMANO from Telefonica
• ONAP comprising ECOMP from AT&T
• Open-O from China Mobile
• Most of that work was then donated to Open Source programs.
NFV GAINS MOMENTUM
6. • Vn-Nf Data Path & Nf-Vi Control Path
• VNF runs on top of NFVI
• VIM responsible for managing NFVI
NFV INFRASTRUCTURE (NFVI)
8. • The Open Compute Project (OCP) is creating standards and specifications to
be complied with by hardware vendors offering servers for cloud infrastructure.
• For NFV workloads, some vendors are starting to provide converged and
hyper-converged systems and to tune them for more I/O-
centric network services.
• Edge computing/fog computing is becoming more relevant in the NFV
space.
• At the customer edge, a host of devices is being replaced by one lightweight box
hosting VMs and network functions such as VRF, DHCP, NAT, etc., offered as a
service.
NFVI – PHYSICAL INFRA.
9. • The NFVI virtualization layer sits on top of the hardware and is a software
platform that typically involves a hypervisor.
• Hypervisors split up the resources of the physical machine and offer the
equivalent of a physical machine to the application.
• Three main functions of the hypervisor :
• split up the resources of the physical machine
• provide isolation between different VMs (CPU level)
• emulate all the necessary peripherals e.g. NIC cards
• Main hypervisors used for NFV are VMware vSphere® and KVM.
NFVI-VIRTUALIZATION LAYER
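The first hypervisor function above — splitting a physical machine's resources among VMs — can be sketched as a simple capacity check. This is a minimal illustration only; the class and method names are my own, not any real hypervisor API, and real hypervisors additionally support controlled oversubscription:

```python
from dataclasses import dataclass, field

@dataclass
class PhysicalHost:
    """A physical server whose resources a hypervisor carves into VMs."""
    cpus: int
    memory_gb: int
    vms: dict = field(default_factory=dict)  # name -> (cpus, memory_gb)

    def free_cpus(self) -> int:
        return self.cpus - sum(c for c, _ in self.vms.values())

    def free_memory_gb(self) -> int:
        return self.memory_gb - sum(m for _, m in self.vms.values())

    def create_vm(self, name: str, cpus: int, memory_gb: int) -> bool:
        # Refuse the VM if it would exceed the host's remaining capacity.
        if cpus > self.free_cpus() or memory_gb > self.free_memory_gb():
            return False
        self.vms[name] = (cpus, memory_gb)
        return True

host = PhysicalHost(cpus=16, memory_gb=64)
assert host.create_vm("vFirewall", cpus=4, memory_gb=16)
assert host.create_vm("vRouter", cpus=8, memory_gb=32)
assert not host.create_vm("vEPC", cpus=8, memory_gb=32)  # only 4 CPUs left
```

The isolation and peripheral-emulation functions are what a real hypervisor adds on top of this accounting.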
10. • vSphere®:
• vSphere is a proprietary hypervisor from VMware.
• It is very mature (15+ years since the first release).
• vSphere also has several rich features such as VM migration across hypervisors and high
availability.
• It is a Type 1 or bare-metal hypervisor.
• KVM (Kernel-based Virtual Machine):
• Open-source hypervisor project (10+ years since first release).
• A Type 2 or hosted hypervisor; typically runs on top of a Linux OS.
• Commonly used host operating systems are RHEL, SUSE or Ubuntu.
HYPERVISORS
11. • Virtual Machines
• VMs are created by hypervisors.
• Hypervisors present APIs to create, destroy, migrate and manage VMs.
• KVM – libvirt, VMware – vCenter
• Virtual Storage
• Virtualizing block or file storage lies with the SAN, NAS or SDS software; the VM is simply
presented with a LUN (logical unit number) or a file share.
• Virtual Networking
• A hypervisor contains a virtual switch or router providing functions such as security,
gateway connectivity, overlay, inter-VM communication, etc.
VIRTUAL INFRA.
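The virtual-switch role above — inter-VM communication inside one host — can be sketched with a simple forwarding table. This is purely illustrative (class and method names are my own); real virtual switches such as Open vSwitch are far richer:

```python
class VirtualSwitch:
    """Toy software switch: forwards frames between VMs on the same host."""

    def __init__(self):
        self.ports = {}  # address -> per-VM receive queue

    def attach(self, address):
        """Plug a VM's virtual NIC into the switch; returns its receive queue."""
        queue = []
        self.ports[address] = queue
        return queue

    def send(self, src, dst, payload):
        if dst in self.ports:
            # Known destination: forward directly to that VM's port.
            self.ports[dst].append((src, payload))
        else:
            # Unknown destination: flood to every port except the sender's.
            for addr, queue in self.ports.items():
                if addr != src:
                    queue.append((src, payload))

vswitch = VirtualSwitch()
vm_a = vswitch.attach("aa:aa")
vm_b = vswitch.attach("bb:bb")
vswitch.send("aa:aa", "bb:bb", "hello")
assert vm_b == [("aa:aa", "hello")]
assert vm_a == []
```

Security, gateway connectivity and overlays would layer on top of this basic forwarding behavior.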
12. • NFV is a very young field with a high velocity of innovation.
• New specialized IO processors that target NFV are likely to emerge.
• The virtualization layer is going through a renaissance.
• Container technologies are bringing a paradigm shift in how workloads are
deployed.
• DevOps and microservices are gaining traction.
NFVI - FUTURE TRENDS
15. • The virtualized infrastructure manager (VIM) is a key component of the NFV-
MANO architectural framework.
• The NFVI layer consists of the hardware and virtualization software that the
VNFs run on. This layer is simply a data path and does not deal with any of
the scheduling, orchestration, provisioning, monitoring, service assurance, etc.
intelligence.
• The VIM manages the NFVI and serves as a conduit for control-path
interaction between VNFs and the NFVI.
VIRTUAL INFRASTRUCTURE MANAGER
(VIM)
17. • There are two main VIM software stacks prevalent in
NFV:
• OpenStack®
• VMware vRealize/vCloud NFV
• Other candidates like CloudStack have not been prominent
in NFV in the last 2-3 years.
VIM - PLAYERS
18. “OpenStack is open source software for managing telecommunications
infrastructure for NFV, 5G, IoT and business applications. Global telecoms
including AT&T, China Mobile, Orange, NTT DOCOMO and Verizon deploy
OpenStack as an integration engine with APIs to orchestrate bare metal, virtual
machine and container resources on a single network. OpenStack is a global
community of more than 70,000 individuals across 183 countries supported by
the OpenStack Foundation.”
(Note from OpenStack Foundation)
VIM-OPENSTACK
20. • Operators often rely on the VMware solution owing to the stability of its commercial
offerings vis-à-vis open-source quirkiness.
• High-availability and VM migration features are differentiators.
• VMware has made some improvements with respect to the I/O performance needed by NFV.
• VMware also adds the option to run its own flavor of an OpenStack VIM (VIO)
when clients desire to do so.
VIM –VCLOUD NFV
21. • The VNFM is responsible for the life-cycle management of VNFs under
the control of the NFVO, which it achieves by providing direction to the VIM.
• VNFM operations include:
• Instantiation of VNFs
• Scaling of VNFs
• Updating and/or upgrading VNFs
• Termination of VNFs
VNFM (VNF MANAGER)
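The four VNFM operations listed above amount to a life-cycle state machine per VNF. A minimal sketch (all class, method and state names are illustrative assumptions, not the ETSI reference-point APIs):

```python
from enum import Enum, auto

class VnfState(Enum):
    INSTANTIATED = auto()
    TERMINATED = auto()

class VnfManager:
    """Toy VNFM: tracks each VNF's state, instance count and version."""

    def __init__(self):
        self.vnfs = {}

    def instantiate(self, name, instances=1):
        # Instantiation: bring the VNF into existence.
        self.vnfs[name] = {"state": VnfState.INSTANTIATED,
                           "instances": instances, "version": 1}

    def scale(self, name, delta):
        # Scaling: adjust instance count, never dropping below one.
        vnf = self.vnfs[name]
        vnf["instances"] = max(1, vnf["instances"] + delta)

    def upgrade(self, name, version):
        # Updating/upgrading: record the new software version.
        self.vnfs[name]["version"] = version

    def terminate(self, name):
        # Termination: end of the life cycle.
        self.vnfs[name]["state"] = VnfState.TERMINATED

vnfm = VnfManager()
vnfm.instantiate("vEPC", instances=2)
vnfm.scale("vEPC", +2)
assert vnfm.vnfs["vEPC"]["instances"] == 4
vnfm.upgrade("vEPC", 2)
vnfm.terminate("vEPC")
assert vnfm.vnfs["vEPC"]["state"] is VnfState.TERMINATED
```

In the real architecture each of these operations would translate into resource requests toward the VIM, as the slide notes.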
22. • NFVO performs two main functions:
• Resource Orchestration
• Network Service Orchestration
• Resource orchestration ensures VNFs have adequate compute, storage and
network resources. The NFVO interacts with the VIM to manage/execute activities in
the NFVI.
• To ensure service fulfilment, the NFVO coordinates with different VNFs through
the VNFM as well as the VIM to create an end-to-end service.
• Ericsson Cloud Manager primarily plays the NFVO role, bridging
integration with OSS/BSS.
NFV ORCHESTRATOR
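The two NFVO functions above — resource orchestration via the VIM and service orchestration via the VNFM — can be sketched end to end. Everything here (class names, the chain model, the vCPU accounting) is an illustrative assumption, not the ETSI interfaces:

```python
class Vim:
    """Stub VIM: grants or refuses NFVI resources."""
    def __init__(self, vcpus):
        self.vcpus = vcpus

    def reserve(self, vcpus):
        if vcpus > self.vcpus:
            raise RuntimeError("insufficient NFVI resources")
        self.vcpus -= vcpus

class Vnfm:
    """Stub VNFM: instantiates VNFs on request from the NFVO."""
    def __init__(self):
        self.running = []

    def instantiate(self, vnf):
        self.running.append(vnf)

class Nfvo:
    """Toy orchestrator: reserves resources, then chains VNFs into a service."""
    def __init__(self, vim, vnfm):
        self.vim, self.vnfm = vim, vnfm

    def deploy_service(self, chain):
        # Resource orchestration: ensure the NFVI can host every VNF first.
        self.vim.reserve(sum(vcpus for _, vcpus in chain))
        # Network service orchestration: bring the VNFs up in chain order.
        for vnf, _ in chain:
            self.vnfm.instantiate(vnf)
        return " -> ".join(vnf for vnf, _ in chain)

vim, vnfm = Vim(vcpus=16), Vnfm()
nfvo = Nfvo(vim, vnfm)
service = nfvo.deploy_service([("vFW", 4), ("vNAT", 2), ("vRouter", 4)])
assert service == "vFW -> vNAT -> vRouter"
assert vim.vcpus == 6
```

The design point the sketch captures is that the NFVO itself touches no traffic: it only coordinates the VIM and VNFM control paths to assemble an end-to-end service.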
23. • NFV/SDN deployments need to be integrated with OSS/BSS to seamlessly
fulfill customer demands to provision associated services in network.
• The ETSI MANO layer leaves OSS/BSS integration largely out of scope and focuses more on VNF
and NFVI management.
• To operationalize NFV services and to ensure that CSPs can bill for these
services, integration with OSS/BSS systems is crucial.
• So, other standards bodies and industry groups started working on extending the
ETSI MANO architecture.
OSS/BSS INTEGRATION
30. • Ericsson Network Manager and Ericsson Cloud Manager are positioned as a MANO
solution.
• “Ericsson Network Manager is a unified multi-layer, multi-domain (NFV, SDN, radio,
transport & core) management system and provides various functions such as
VNFM, VNF application & network slice orchestration and network analytics.
Ericsson Cloud Manager does the cloud infrastructure management, G-VNFM and
NFVO part of MANO.”
ERICSSON MANO SOLUTION
31. • HPE Service Director
• Huawei FusionSphere
• VMWare vCloud NFV
• Amdocs Network Cloud Service Orchestrator
COMPETITION
32. • The workhorse of NFV
• These are the actual network functions that provide the desired network services.
• The NFV Infrastructure (NFVI) hosts these services and provides the appropriate virtualization
capabilities.
• NFV MANO orchestrates and manages the VNFs.
• Services increasingly being virtualized:
• vCPE(virtual customer premise equipment)
• SD-WAN (software-defined wide-area networks)
• vEPC (virtual evolved packet core) and others including vRAN
• Typically, layer 4-7 services are adapted as VNFs.
VNFS (VIRTUAL NETWORK
FUNCTIONS)
33. • Reduces total cost of management.
• Agility and simplicity
• Scale up and scale down as needed
• No vendor lock-in
• Level playing field and more options for operators
BENEFITS OF VNF
34. • VNF and NFV Washing
• VNF Licensing—Work in Progress
• Open Source vs. Commercial
• VNF Performance—Not a Solved Problem
• NFV Process and Orchestration
• Co-existence with bare metal network functions
VNF CHALLENGES
Editor's notes
Operators gathered within the European Telecommunications
Standards Institute (ETSI) and created the Industry Specification Group (ISG) for NFV to accelerate the progress of
virtualizing network functions. Launched in January 2013, the ETSI ISG for NFV has been working to develop the
requirements and architecture of virtualized network functions in a telecommunications network. It included these
components of the NFV framework:
• NFV Infrastructure (NFVI) – The physical resources (compute, storage, network) and the virtual instantiations
that make up the infrastructure.
• Virtualized Network Functions (VNFs) – The software implementation of a network function.
• NFV Management and Orchestration (NFV MANO) - The management and control layer that focuses on all
the virtualization-specific management tasks required throughout the lifecycle of the VNF.
Since the original ETSI model for NFV was released, some operators have clamored for a more rapid and
organic development of NFV platforms. Some operators have even put together their own technology
programs, including developing their own open source projects which they then donate to the community (e.g.
OSM/OpenMANO from Telefonica, ONAP comprising ECOMP from AT&T and Open-O from China Mobile).
*) The NFVI layer primarily interacts with two other NFV framework components: VNFs and the Virtualized
Infrastructure Manager (VIM).
*) the VNF software runs on NFVI
*) The VIM is responsible for provisioning and managing the virtual infrastructure.
*) The VNF to NFVI interface (Vn-Nf) constitutes a data path through which network traffic traverses, while the NFVI to VIM interface
(Nf-Vi) constitutes a control path that is used solely for management, not for any network traffic.
*) NFVI consists of three distinct layers: physical infrastructure, virtualization layer and the virtual infrastructure.
*) Open Compute Project (OCP)
*) the VNF characteristics dictate the VM “flavor” (or in the future, the Container environment as Containers enter the fray as a viable VNF form factor). The VM flavors in turn dictate the exact specification of the server. This choice needs to be made very carefully to avoid underutilization of hardware. Over engineering any one parameter could massively degrade your total cost of ownership economics.
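The flavor-selection concern in this note — pick a flavor that fits the VNF without over-engineering any one parameter — can be sketched as choosing the smallest flavor that satisfies all requirements. The flavor catalog and function below are illustrative assumptions, loosely modeled on OpenStack-style sizing, not any operator's actual catalog:

```python
# Illustrative flavor catalog: name -> resource specification.
FLAVORS = {
    "small":  {"vcpus": 2, "ram_gb": 4,  "disk_gb": 20},
    "medium": {"vcpus": 4, "ram_gb": 8,  "disk_gb": 40},
    "large":  {"vcpus": 8, "ram_gb": 16, "disk_gb": 80},
}

def pick_flavor(vcpus, ram_gb, disk_gb):
    """Return the smallest flavor meeting every requirement, or None.

    Choosing the smallest sufficient flavor avoids the hardware
    under-utilization the note warns about.
    """
    fits = [
        (f["vcpus"], f["ram_gb"], name)
        for name, f in FLAVORS.items()
        if f["vcpus"] >= vcpus and f["ram_gb"] >= ram_gb and f["disk_gb"] >= disk_gb
    ]
    return min(fits)[2] if fits else None

assert pick_flavor(3, 6, 30) == "medium"   # small is too small, large wastes capacity
assert pick_flavor(16, 64, 500) is None    # no flavor fits: needs a bigger server class
```

In practice the same matching would also cover I/O-centric parameters (NIC count, SR-IOV, NUMA placement), which is where over-engineering one dimension gets expensive.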
*) For NFV workloads, there are major CSP vendors who are starting to provide converged and hyper-converged workloads and starting to tune them for more I/O-centric network services.
*) edge computing (also called Fog Computing) where either a carrier grade server is required (e.g. in a central office) or a light-weight box is required to host just a few VMs (e.g. for a consumer’s house).
*) the customer edge is morphing from having multiple dedicated boxes that provide different functions (line termination, firewalling, routing, WAN optimization, IP PBX etc) into foundational edge Virtual CPE (vCPE) platforms that run virtualization software including flavors of hypervisors, OSes with Linux Container support, stripped down OSes with ability to install VNF packages.
(VNFs may run in a regional data center, a central office edge location or on the CPE device itself). Corresponding business models are also being tested, including having CSPs that lease such NFVI vCPE platforms to its customers, with the ability to rent software on demand (security, acceleration, etc), or having enterprises purchase these
platforms and rent OTT (over the top) services in the cloud.
Core factors: image size, boot-up time.
Kernel sharing in container technologies creates security problems.
Running containers in VMs solves kernel sharing but creates performance overhead.
Clear Containers are an answer to the above problem: initiated by Intel to showcase containers in VMs, using a stripped-down, lightweight version of a VM.
Unikernels are VMs; however, rather than dragging around the entire Guest OS, unikernels link the application only to the libraries used by the application.
The virtualized infrastructure manager (VIM) is a key component of the NFV-MANO architectural framework. It is responsible for controlling and managing the NFVI compute, storage, and network resources, usually within one operator’s infrastructure domain.
In reality, there are use cases in which case a VIM is not necessary - for example in a relatively static environment where longer provisioning times are acceptable. The elimination of a VIM will dramatically reduce the benefits of NFV, but may be an easier way to try out NFV before embracing all of the ETSI architecture. In a deployment without a VIM, you can use virt-manager for KVM and VMware vCenter for vSphere.
VNFs are critical to realizing the business benefits outlined by the NFV architecture. They deliver the actual network functions that create value, and include the components that make up the mobile core (Virtual EPC), or the virtual edge (Virtual CPE). But they aren’t autonomous and require management. VNFMs are critical for scaling, changing
operations, adding new resources, and communicating the states of VNFs to other functional blocks in the NFV-MANO architecture.
A VNFM may be assigned the management of a single VNF instance or multiple VNF instances. The managed VNFs can be of the same or different types.
the VNFM maintains the virtualized resources that support the VNF functionality without interfering with the logical functions performed by the VNFs. The services provided by the VNFM can be employed by authenticated and properly authorized NFV management and orchestration functions.
The MEF has started to work with its members on LSO (Lifecycle Service Orchestration) efforts to help bridge the gap between NFV and OSS/BSS.
LSO binds together legacy OSS and Billing Support Systems (BSS) services, newer SDN and NFV software, and telecom hardware infrastructure.
It is a new software platform that can take a customer order and implement service provisioning, orchestration, fulfillment, service assurance, and monitoring, serving as the central control panel for how services are delivered across the sprawling service provider network.
RIFT.io has taken the approach of commercializing the OpenMANO open-source implementation provided by ETSI with full product support.
Telefonica contributed its original implementation of OpenMano to ETSI, RIFT.io provided an open-source component for orchestration, and Canonical provided Juju as a generic VNFM in this implementation.
OpenBaton enables virtual Network Services deployments on top of heterogeneous NFV Infrastructures and integrates with OpenStack, and also provides a plugin mechanism for supporting additional VIM types.
It provides auto-scaling and fault management based on monitoring information coming from the NFVI.
Tacker is an official OpenStack project building a Generic VNF Manager (VNFM) and a NFV Orchestrator (NFVO) to deploy and operate Network Services and Virtual Network Functions (VNFs) on an NFV infrastructure platform like OpenStack.
The OpenECOMP architecture: note that OpenECOMP is the open-source name of the ECOMP project after AT&T contributed 8M lines of code to open source as part of the creation of a Linux Foundation project.
The NFV framework is a virtualization framework aimed at telco workloads. Many organizations are deploying NFV frameworks in conjunction with SDN (software-defined networking) frameworks that span both physical and virtual appliances, including routers and switching gear.
a few vendors have engaged in “NFV washing” by simply porting a HW-based network product into a VM and pretending they have a full NFV VNF. Or by positioning offerings that require proprietary NFVI, or their own proprietary MANO, as an NFV VNF when in truth that product is still more akin to a virtual appliance rather than a true NFV application.
Managing the underlying NFVI and pairing it with the appropriate VNFs (workload placement) is an area of active development. Other elements that continue to complicate matters include dealing with noisy neighbors on a VM implementation (where VMs that take more resources crowd out other VMs), and managing the mix of bare metal, VMs and container implementations of various VNFs.