Saturday, March 7, 2015

NFV Considerations: LXC or KVM ???


Telcos can choose between KVM, hypervisor-based virtualization, and LXC, container-based virtualization, to virtualize their telecom functions, an approach known as NFV (Network Functions Virtualization).

Hypervisor based Virtualization:
In hypervisor virtualization on Linux, a full guest operating system runs on top of the host operating system, often a mixture of different operating systems. Hypervisor virtualization on Linux is realized by adding a kernel module called KVM. KVM (Kernel-based Virtual Machine) allows user-space programs to use the hardware virtualization features of various processors. KVM consists of a loadable kernel module (kvm.ko), which provides the core virtualization infrastructure, and a processor-specific module, e.g. kvm-intel.ko or kvm-amd.ko. KVM is used in conjunction with QEMU to emulate other hardware resources such as the network card, hard disk, RAM, CPU etc. When used as a virtualizer, QEMU achieves near-native performance by executing guest code directly on the host CPU.
QEMU is software sitting in Linux user space as a hardware emulator for the guest OS. It emulates a virtual CPU, virtual disk and virtual RAM for the guest OS, while KVM lets these user-space instances access the hardware virtualization features of the processor; QEMU uses KVM's hardware acceleration for its virtual instances.
Together, KVM and QEMU provide virtualization capability to the Linux environment.
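To make this concrete, here is a minimal sketch of launching a KVM-accelerated guest by invoking QEMU from Python; the disk image name, memory size and vCPU count are illustrative assumptions, not a recommended production setup.

```python
# Minimal sketch: boot a KVM-accelerated guest by invoking QEMU from Python.
# The image path, memory size and vCPU count are illustrative assumptions.
import subprocess

def start_guest(image="guest.qcow2", mem_mb=1024, vcpus=2):
    cmd = [
        "qemu-system-x86_64",
        "-enable-kvm",                            # use kvm.ko + kvm-intel.ko/kvm-amd.ko acceleration
        "-m", str(mem_mb),                        # guest RAM provided by QEMU
        "-smp", str(vcpus),                       # number of virtual CPUs
        "-drive", f"file={image},format=qcow2",   # emulated virtual disk
        "-nographic",                             # serial console instead of a graphical display
    ]
    return subprocess.Popen(cmd)

if __name__ == "__main__":
    start_guest().wait()
```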
 
Container based Virtualization:
In container-based virtualization (also called operating-system virtualization), the system runs applications as user-space instances on the same OS. In contrast to hypervisor virtualization, container virtualization runs directly on the host kernel. The kernel provides process isolation and performs resource management. All containers on a host run under the same kernel, as opposed to virtualization solutions like Xen or KVM where each VM runs its own guest OS kernel. Applications are hosted as separate user-space instances. LXC relies on the Linux kernel control groups (cgroups) functionality and namespace isolation so that each user-space instance (VM application) has its own network stack (sockets, routing tables, etc.), filesystem, memory and so on.
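As a rough illustration of those kernel primitives (not of LXC itself), the sketch below uses ctypes to call unshare(2) for a new UTS namespace and writes a cgroup-v1 memory limit; the cgroup name "demo" and the 256 MiB limit are assumptions, and the script must run as root.

```python
# Minimal sketch of the kernel primitives containers build on:
# unshare(2) for namespace isolation and a cgroup (v1 layout) for resource limits.
# Requires root; the cgroup name and memory limit are illustrative assumptions.
import ctypes
import os
import socket

CLONE_NEWUTS = 0x04000000   # new hostname/domainname namespace

libc = ctypes.CDLL("libc.so.6", use_errno=True)
if libc.unshare(CLONE_NEWUTS) != 0:
    raise OSError(ctypes.get_errno(), "unshare failed")

# The hostname change below is now invisible to the rest of the system.
socket.sethostname("demo-container")

# cgroup v1 memory controller: cap this process at 256 MiB.
cg = "/sys/fs/cgroup/memory/demo"
os.makedirs(cg, exist_ok=True)
with open(os.path.join(cg, "memory.limit_in_bytes"), "w") as f:
    f.write(str(256 * 1024 * 1024))
with open(os.path.join(cg, "tasks"), "w") as f:
    f.write(str(os.getpid()))

print("pid", os.getpid(), "hostname", socket.gethostname())
```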

 
Docker
Docker is an open-source project that automates the creation and deployment of Linux containers. Docker utilizes the LXC toolkit, which is currently available only for Linux.
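For example, a container can be created and torn down with a couple of Docker CLI calls; in the sketch below the image, container name and port mapping are illustrative assumptions.

```python
# Minimal sketch: create and run a container with the Docker CLI from Python.
# The image, container name and port mapping are illustrative assumptions.
import subprocess

subprocess.check_call([
    "docker", "run",
    "-d",                   # detach: run the container in the background
    "--name", "demo-nginx",
    "-p", "8080:80",        # map host port 8080 to container port 80
    "nginx",                # public image, pulled from the registry if absent
])

# Inspect the running container, then stop it when done.
print(subprocess.check_output(["docker", "ps", "--filter", "name=demo-nginx"]).decode())
subprocess.check_call(["docker", "stop", "demo-nginx"])
```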


In a nutshell, a VM sits on a guest OS, and the guest OS is installed as a separate instance with its own virtualized hardware resources, while a container is deployed as a separate user-space process that interacts directly with the kernel, as shown in the figure below:









KVM Packet forwarding path

 
1)      A packet is received on the physical interface (NIC). The NIC copies the packet to the main memory of the host system.

2)      NAPI schedules the packet on a CPU queue for further processing.

3)      The correct virtual interface (VIF) is identified during the physical-to-virtual mapping process by Open vSwitch or the Linux bridge.

 
4)      In a KVM environment, QEMU system emulation provides a full virtual machine inside the QEMU user-space process, so the details of what runs inside the guest are not directly visible from the host. QEMU provides a slab of guest RAM, the ability to execute guest code, and emulated hardware devices; the host operating system therefore has no ability to peek inside the guest domain. As a result, the packet payload must be transferred from kernel memory space to guest virtual memory space. This requires a copy operation to move the packet from the kernel to the guest OS. A context switch, e.g. transferring packet state information from the physical CPU to the virtual CPU, is also required. This adds significant overhead.

5)      A TAP device is established between kernel space and the guest VM in user space. A TAP device is a virtual layer-2 device (i.e. virtual Ethernet) which bridges kernel and user space: a packet written at one end is available for reading at the other end, as sketched after this list.
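As an illustration of step 5, the sketch below creates a TAP interface by opening /dev/net/tun and issuing the TUNSETIFF ioctl; the interface name tap0 is an assumption and root privileges are required.

```python
# Minimal sketch: create a TAP device by opening /dev/net/tun and issuing the
# TUNSETIFF ioctl. Requires root; the interface name "tap0" is an assumption.
import fcntl
import os
import struct

TUNSETIFF = 0x400454ca
IFF_TAP   = 0x0002      # layer-2 (Ethernet) frames, as opposed to IFF_TUN (layer 3)
IFF_NO_PI = 0x1000      # no extra packet-information header

tap_fd = os.open("/dev/net/tun", os.O_RDWR)
ifr = struct.pack("16sH", b"tap0", IFF_TAP | IFF_NO_PI)
fcntl.ioctl(tap_fd, TUNSETIFF, ifr)

# Frames written by the host side of tap0 (e.g. via a Linux bridge or Open vSwitch)
# can now be read from tap_fd, and vice versa. The read below blocks until a frame
# arrives; the interface must first be brought up (e.g. "ip link set tap0 up").
frame = os.read(tap_fd, 2048)
print(len(frame), "bytes received on tap0")
```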




LXC Packet forwarding path

LXC has a similar packet forwarding path, as shown in the figure below. In contrast to KVM, LXC performs packet forwarding in kernel space. It does not require any packet copy or context-switch operation, and the packet is immediately available to a container after switching (mapping). Removing packet copying and context switching from the forwarding path eliminates the related overhead, resulting in faster processing. Kernel-space processing is an advantage from a performance point of view, but a potential drawback from a resource-isolation perspective. Packet handling inside the kernel is mainly done through softirq processing, which serves all virtual interfaces in round-robin fashion (CPU queuing) with equal priority. This may lead to poor resource isolation and contention.
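For comparison, container networking of this kind is commonly wired up with a veth pair whose host end attaches to the bridge or Open vSwitch; the sketch below does this with iproute2 commands, where the namespace, interface names and addresses are illustrative assumptions (root required).

```python
# Minimal sketch: wire a container's network namespace to the host with a veth
# pair using iproute2. Namespace, interface names and addresses are illustrative
# assumptions; requires root.
import subprocess

def ip(*args):
    subprocess.check_call(["ip"] + list(args))

ip("netns", "add", "vnf1")                          # stand-in for a container's network namespace
ip("link", "add", "veth-host", "type", "veth", "peer", "name", "veth-vnf1")
ip("link", "set", "veth-vnf1", "netns", "vnf1")     # move one end into the namespace
ip("addr", "add", "10.0.0.1/24", "dev", "veth-host")
ip("link", "set", "veth-host", "up")
ip("netns", "exec", "vnf1", "ip", "addr", "add", "10.0.0.2/24", "dev", "veth-vnf1")
ip("netns", "exec", "vnf1", "ip", "link", "set", "veth-vnf1", "up")
```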






NFV Considerations
LXC is limited to a single operating system across its containers. With a hypervisor-based system, there is a much larger choice of guest operating systems.
 
While VMs excel at isolation, they add overhead when sharing data between the hypervisor and the guest operating system. Container-based virtualization is much faster than hypervisor virtualization: a container boots in seconds, compared to minutes for a KVM VM. The LXC architecture removes the guest OS, so the instance density per host is higher than with the hypervisor architecture. On the other hand, VM isolation under a hypervisor is more stringent than container isolation in an LXC environment.
 
From an NFV perspective, LXC containers provide faster packet processing but are locked to a single host OS, while KVM provides flexibility in that regard. KVM also has a better resource-isolation mechanism. LXC's faster packet processing helps NFV meet near-native latency requirements. LXC can be beneficial for virtualizing those telecom functions which use a similar flavor of Linux. LXC avoids DPDK and other performance-acceleration complexities in achieving near-native NFV performance. Of course, LXC has complexities of its own to deal with.
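As a rough sketch of what the container side can look like, the commands below stand up an LXC 1.x container that could host such a function; the container name and Ubuntu template are illustrative assumptions, and root privileges are required.

```python
# Minimal sketch: stand up a container for a VNF with the LXC 1.x command-line
# tools. The container name and template are illustrative assumptions; requires root.
import subprocess

NAME = "vnf-demo"

subprocess.check_call(["lxc-create", "-n", NAME, "-t", "ubuntu"])    # build a rootfs from the template
subprocess.check_call(["lxc-start", "-n", NAME, "-d"])               # start the container detached
subprocess.check_call(["lxc-attach", "-n", NAME, "--", "hostname"])  # run a command inside it
print(subprocess.check_output(["lxc-ls", "--fancy"]).decode())       # list containers and their state
```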

 
