Monday, August 23, 2021

CNF Migration for Telco

 

This blog discusses possible approaches to deploying Telco CNFs (Container Network Functions).

In late 2014, telecom operators started a new technological trend of applying application-virtualization principles under the name NFV, i.e. Network Functions Virtualization. NFV was primarily implemented using OpenStack cloud infrastructure. Under the NFV framework, a virtualized instance of a telecom application is referred to as a VNF (Virtual Network Function).

As other enterprise infrastructure moves from virtual machines to containers, telco providers have also started thinking about deploying CNFs, i.e. Container Network Functions. Just as OpenStack was the cloud orchestration framework for VNFs, a container orchestration framework such as Kubernetes can be used for CNF orchestration.

CNF Migration

Three main specifications have been published that provide high-level architectural guidance for CNF migration:

1) Cloud iNfrastructure Telecom Taskforce (CNTT) Reference Architecture 2 (RA-2):

CNTT RA-2 describes the high-level system components and their interactions based on the CNTT Reference Model, and maps them to real-world containerized components, including Kubernetes (and related) elements. One key aspect of RA-2 is to identify the relevant Kubernetes features (e.g. SR-IOV, CPU pinning) and extensions that are best used to support telecom network deployments. (Ref: https://cntt-n.github.io/CNTT/)

2) GSMA NG.126 Cloud Infrastructure Reference Model v1.0: This document is based on CNTT RA-2. The specification provides a common set of virtual infrastructure resource details that a telco cloud infrastructure will need in order to host typical VNF/CNF telco workloads. (Ref: https://www.gsma.com/newsroom/wp-content/uploads//NG.126-v1.0-1.pdf)

3) ETSI GR NFV-IFA 029 V3.3.1 (2019-11): This specification analyses the potential impact on the NFV architecture of providing "PaaS"-type capabilities and supporting VNFs that follow "cloud-native" design principles, in particular the use of container technologies. (Ref: https://www.etsi.org/deliver/etsi_gr/NFV-IFA/001_099/029/03.03.01_60/gr_NFV-IFA029v030301p.pdf)


GSMA NG.126 describes three approaches to deploying CNF workloads in a telco cloud (see Section 3.7 of GSMA NG.126), as shown in the figure below:

1) a typical IaaS using VMs and a hypervisor for virtualization

2) a CaaS on a VM/hypervisor

3) a CaaS on bare metal.



The picture above can be simplified as shown below:



Container inside VM (Approach B)

This blog focuses on Approach B, as specified in ETSI GR NFV-IFA 029 V3.3.1 (2019-11), section 5.3.4.





Telcos have spent considerable time transforming their legacy applications to run as virtual machines. Therefore, the transition from OpenStack-based VNFs to Kubernetes-based CNFs will not happen overnight. Approach B (as suggested above) can provide a smooth migration roadmap for CNF deployment, for the following reasons:

-      Applications can be refactored into microservices using the VM's operating system. This requires less development work compared to running directly on bare metal.

 

-      There will be no shared kernel among telco Pods running in different VMs. This addresses security concerns. Each CNF can run in its own namespace (see the manifest sketch after this list).

 

-     Running a K8s cluster on VMs enhances portability. For example, if a CNF needs to run on a hyperscaler offering such as Azure Kubernetes Service (AKS), the user can create an Azure virtual machine (VM) / K8s node to host the CNF without worrying about the underlying OS structure.
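To make the namespace-per-CNF point concrete, below is a minimal sketch (the smf name is hypothetical, not taken from any spec): a dedicated Kubernetes namespace for an SMF CNF, plus a default-deny NetworkPolicy so that Pods in that namespace accept no ingress traffic unless a more specific policy allows it.

# Hypothetical example: a dedicated namespace for an SMF CNF.
apiVersion: v1
kind: Namespace
metadata:
  name: smf
---
# Default-deny ingress: Pods in the smf namespace receive no
# traffic unless a more specific NetworkPolicy permits it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: smf
spec:
  podSelector: {}        # empty selector = applies to all Pods in the namespace
  policyTypes:
    - Ingress            # no ingress rules listed, so all ingress is denied

Applying this with kubectl apply -f and then deploying all SMF Pods into the smf namespace keeps each CNF's workloads and policies cleanly separated. Note that NetworkPolicy enforcement depends on the cluster's CNI plugin supporting it.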

Approach B Requirements:

The requirements to implement Approach B are as follows:

-      The telco workload's monolithic software architecture should be broken down into smaller microservices.

-      Those microservices should be able to be hosted as Pods, orchestrated by Kubernetes.

-      A number of Pods will make up a CNF (just as a number of VMs made up a VNF). Each CNF should have its own namespace.

-      Kubernetes PaaS/CaaS elements should be able to manage the microservices: ingress/egress traffic management, log and metrics collection, state preservation in an external DB, tracing, secured communication between microservices, etc.

-      The K8s node VM should have sufficient capacity to host the designed CNF Pods (see the Deployment sketch after this list).

-      A number of CNFs can make up a K8s cluster. A single K8s cluster can consist of all the Pods delivering one network function, e.g. UPF, or a single K8s cluster can belong to a particular telco vendor and provide multiple functions.

-      Each K8s cluster will have master (control-plane) and worker nodes. The master node hosts control-plane services such as the etcd server, kube-apiserver and kube-scheduler, plus mesh control-plane components such as istiod; worker nodes host the application Pods, each running a kubelet. (More details on K8s cluster components: https://kubernetes.io/docs/concepts/overview/components/)
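As a sketch of the capacity requirement above (the image name and resource figures are assumptions for illustration, not from any vendor), a Deployment for one SMF microservice can declare CPU/memory requests and limits so that the Kubernetes scheduler only places its Pods on a node VM with sufficient free capacity, while the replica count provides Pod-level redundancy:

# Hypothetical SMF microservice with explicit capacity demands.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: smf-session-mgr           # hypothetical microservice name
  namespace: smf
spec:
  replicas: 2                     # Pod-level redundancy via the underlying ReplicaSet
  selector:
    matchLabels:
      app: smf-session-mgr
  template:
    metadata:
      labels:
        app: smf-session-mgr
    spec:
      containers:
        - name: session-mgr
          image: registry.example.com/smf/session-mgr:1.0   # hypothetical image
          resources:
            requests:             # scheduler places the Pod only on a node VM
              cpu: "500m"         # with at least this much unreserved capacity
              memory: 512Mi
            limits:
              cpu: "1"
              memory: 1Gi

Summing the requests across all of a CNF's Pods gives a lower bound on the K8s node VM sizing.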


The figure below illustrates an Approach B implementation, where:

 

1)   Three Linux VMs are created to form a single K8s cluster.

2)   VM 1 acts as a K8s worker node, where four Pods/microservices deliver the functionality of the 5G SMF element. All SMF Pods run in the SMF namespace.

3)   VM 2 acts as a K8s worker node, where four Pods/microservices deliver the functionality of the 5G AMF element. All AMF Pods run in the AMF namespace.

4)   VM 3 acts as the K8s control-plane node, where four Pods/microservices deliver the K8s control services (e.g. etcd, kube-apiserver, kube-scheduler).

5)   Microservices are interconnected via a service mesh provided by Istio (see the sketch after this list).

6)   Pod-level redundancy is provided by the ReplicaSet feature of Kubernetes.
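A minimal sketch of how the Istio mesh in this figure is typically wired up (the service and port names are hypothetical): labelling a namespace with istio-injection: enabled makes Istio inject an Envoy sidecar proxy into every Pod created in that namespace, and a standard Kubernetes Service gives each microservice a stable in-mesh address.

# Hypothetical AMF namespace with automatic Istio sidecar injection.
apiVersion: v1
kind: Namespace
metadata:
  name: amf
  labels:
    istio-injection: enabled     # Istio injects an Envoy sidecar into each Pod here
---
# Stable in-mesh address for one hypothetical AMF microservice.
apiVersion: v1
kind: Service
metadata:
  name: amf-registration         # hypothetical microservice name
  namespace: amf
spec:
  selector:
    app: amf-registration        # routes to Pods carrying this label
  ports:
    - name: http-sbi             # port name/number are assumptions for illustration
      port: 80
      targetPort: 8080

With the sidecars in place, Istio can provide mutual TLS, traffic routing, and telemetry between the SMF and AMF microservices without changes to the application code.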




As explained above, ETSI and GSMA have defined three approaches for the telco industry's CNF migration. We have discussed the approach where application containers run inside host VMs. In this scenario, the telco requires the OpenStack and Kubernetes orchestration systems to work in parallel.

In upcoming blogs, I will share my knowledge of CNF networking and K8s PaaS services.


 
