As hardware gets commoditized, the responsibility for functional delivery of the network increasingly lies with software engineering. Many networking tasks are moving from the hardware domain to software, e.g. switching. Open vSwitch (where 'v' stands for virtual) is one such conversion, where a hardwired switching fabric is converted into intelligent software code.
Open vSwitch is a key function in many NFV deployments. Being open source, Open vSwitch is the Telco's first choice. One of the limitations of Open vSwitch is scalability: Open vSwitch needs to match a throughput of 40 Gbps (two 10GE uplinks). Apart from that, Open vSwitch must also be able to handle large packets (>1500 bytes) for heavy Telco workloads, e.g. performance data.
OVS Regular Operation
In the Linux world, a VM instance is placed in user space. A copy operation is required to move a packet from user space to the kernel. All packet switching for VM traffic (internal/external) happens in the kernel, as Open vSwitch (OVS) resides in the kernel. For inter-VM traffic, a packet is moved from user space to the kernel and then vSwitched to a new memory space. This movement involves a minimum of two context-switch operations, as shown in figure 1. The copy operations and associated context switching add overhead and latency, and are thus not suitable for Telco-grade scaling.
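To get a feel for why the copy itself hurts, here is a toy Python sketch (illustrative only, not OVS code): it compares copying a packet-sized buffer, as every user-space/kernel crossing must do, against handing over a zero-copy reference to the same buffer.

# Toy illustration, not OVS code: copying a packet-sized buffer vs.
# passing a zero-copy reference to it.
import timeit

PACKET = bytes(9000)  # a jumbo-frame-sized payload

def with_copy():
    # Simulates the copy performed when a packet crosses memory spaces.
    return bytes(PACKET)

def zero_copy():
    # Simulates handing over a reference to the same buffer instead.
    return memoryview(PACKET)

if __name__ == "__main__":
    n = 100_000
    print("copy     :", timeit.timeit(with_copy, number=n))
    print("zero-copy:", timeit.timeit(zero_copy, number=n))

The copy variant is markedly slower, and the gap grows with packet size, which is exactly why large Telco packets suffer most.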
High-volume workloads such as Telco workloads require throughput-optimized datapaths to deliver traffic to the guest. The standard kernel vhost implementation is not able to provide adequate performance to support Telco workloads. Intel(R) DPDK Userspace vHost is a high-throughput implementation of the standard QEMU vhost interface for QEMU versions < 2.0.
Zero Copy Operation
"Zero-copy" describes computer operations
in which the CPU does not perform the task of copying data from one memory area
to another. This is frequently used to save CPU cycles and memory bandwidth
when transmitting a file over a network. Goal is to minimize latency by
reducing number of context switching between Guest OS’s virtual memory to Kernel
space memory. Zero Copy operation provides significant improvement, specially
for large data(huge memory page), by concept of ‘shared memory among VMs’.
Each VM has access to the shared memory. All shared memory is mapped into the virtual address space of each VM, and VMs read and write directly to it. A guest does not copy a shared object into its own memory, because the shared memory is already part of its memory, so the copying overhead of this method is zero. With this, the VM can obtain transparency in the guest OS's kernel, meaning that the guest OS is not aware of the shared memory's properties and behaviors. (read http://ccsenet.org/journal/index.php/cis/article/view/6209)
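As a minimal sketch of this idea, using POSIX shared memory from Python's standard library (3.8+): two processes map the same region, and the reader sees the writer's data without any intermediate copy. The segment name 'pkt_ring' is illustrative only.

# Minimal sketch of shared-memory, zero-copy access between processes.
from multiprocessing import Process, shared_memory

def reader(name: str) -> None:
    shm = shared_memory.SharedMemory(name=name)  # attach to existing segment
    view = memoryview(shm.buf)                   # direct, zero-copy access
    print("reader sees:", bytes(view[:5]))
    view.release()
    shm.close()

if __name__ == "__main__":
    shm = shared_memory.SharedMemory(name="pkt_ring", create=True, size=4096)
    shm.buf[:5] = b"hello"                       # writer fills the region in place
    p = Process(target=reader, args=("pkt_ring",))
    p.start(); p.join()
    shm.close()
    shm.unlink()                                 # free the segment

The reader never receives a copy of the payload; both processes operate on the same physical pages, which is the essence of the VM-to-VM shared-memory approach described above.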
DPDK deployment in Openstack
There are two options to deploy DPDK in an Openstack environment:
1) DPDK Unaware Openstack deployment
2) DPDK Aware Openstack deployment
DPDK Unaware Openstack deployment
During instance creation, Nova compute collects network L2 and L3 information from Neutron. Nova passes the collected network information to Open vSwitch, residing in the compute node's hypervisor, through libvirt APIs. In this deployment, NO modification of Openstack services is required. The Neutron service is unaware of the fact that the OVS agent resides in user space and has DPDK capabilities. Neutron treats the user-space OVS agent as a normal OVS agent (as shown in the figure below).
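For illustration, this is roughly the shape of the Neutron port binding in this case (the values are examples, though binding:vif_type and binding:vif_details are real Neutron attributes): the vif_type stays plain "ovs", so Nova and libvirt plug the VM in exactly as they would with a kernel OVS agent.

# Illustrative only: port binding attributes in a DPDK-unaware deployment.
# binding:vif_type stays "ovs", hiding the user-space/DPDK nature of the agent.
port_binding = {
    "binding:host_id": "compute-1",
    "binding:vif_type": "ovs",        # indistinguishable from kernel OVS
    "binding:vif_details": {
        "port_filter": True,
    },
}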
The negative aspect of this deployment is that all VM instances on the compute node will use DPDK, regardless of their application requirements. As shown in the figure below, VM C does not require DPDK capabilities, but it will be connected to the user-space OVS agent anyway, as Openstack services are not aware of the OVS deployment. (read: https://lists.01.org/pipermail/dpdk-ovs/2014-September/001826.html)
DPDK Aware Openstack deployment
To support a DPDK-aware OVS user-space vHost agent and a kernel OVS agent in a single Openstack deployment, the OVS agent has to provide node-specific info (dpdkovssupport: True or False) to the Neutron plugin. This information is passed to Nova networking by adding a new field (OVS_USE_Dpdk) to the port_binding details. To achieve the above-mentioned changes, the Neutron and Nova API services and the plugin-agent ML2 driver (mech_openvswitch) require updates (as shown in the figure below).
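A simplified, self-contained sketch of the idea (not the actual mech_openvswitch code; the flag and field names follow the ones above, while VIF_TYPE_VHOSTUSER is an assumed constant): the driver inspects the agent's dpdkovssupport flag and records OVS_USE_Dpdk in the binding details that Nova later reads.

# Simplified sketch, not actual Neutron code.
VIF_TYPE_OVS = "ovs"
VIF_TYPE_VHOSTUSER = "vhostuser"   # assumed name for the DPDK user-space type

class OvsDpdkMechanismDriverSketch:
    def bind_port(self, port: dict, agent: dict) -> dict:
        """Choose binding details based on the agent's reported capability."""
        use_dpdk = agent.get("configurations", {}).get("dpdkovssupport", False)
        return {
            "vif_type": VIF_TYPE_VHOSTUSER if use_dpdk else VIF_TYPE_OVS,
            "vif_details": {"OVS_USE_Dpdk": use_dpdk},
        }

# Example: an OVS agent on a DPDK-enabled compute node
agent = {"agent_type": "Open vSwitch agent",
         "configurations": {"dpdkovssupport": True}}
print(OvsDpdkMechanismDriverSketch().bind_port({"id": "port-1"}, agent))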
The application developer can describe the requirement for DPDK in the Glance image metadata. Following the image metadata instructions and affinity rules, the Nova scheduler will select the OVS DPDK user-space vHost agent, using the information in the updated port_binding details.
By using correct affinity rules, the administrator will be able to launch VM instances that require DPDK alongside VM instances that do not, in a single Openstack deployment. (read https://review.openstack.org/#/c/107797/1/specs/juno/ml2-use-dpdkvhost.rst)
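As a sketch of that scheduling decision (the metadata key hw_ovs_dpdk and the capability name are hypothetical, not official Nova/Glance properties): a filter-style predicate matches the image's DPDK request against the host's reported OVS capability, so DPDK VMs land on DPDK-capable nodes while other VMs can go anywhere.

# Hypothetical filter-style check; key and capability names are assumptions.
def host_passes(image_metadata: dict, host_capabilities: dict) -> bool:
    wants_dpdk = image_metadata.get("hw_ovs_dpdk", "false") == "true"
    has_dpdk = host_capabilities.get("dpdkovssupport", False)
    # Only DPDK-requesting images are restricted to DPDK-capable hosts.
    return has_dpdk if wants_dpdk else True

image = {"hw_ovs_dpdk": "true"}
print(host_passes(image, {"dpdkovssupport": True}))   # True  -> schedulable
print(host_passes(image, {"dpdkovssupport": False}))  # False -> filtered out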