Tuesday, March 31, 2015

Business Case for Telecom Equipment Vendors in the NFV/SDN World


NFV/SDN will separate telecom hardware from software, and as a result Equipment Vendors (EVs) may become merely software suppliers. At first glance we might expect EVs to dislike the idea of NFV/SDN, since it reduces the ambit of their business. But that is not the case, as it seems.

Look at the following news stories:

·        Swedish communications technology and services provider giant Ericsson has signed a five-year software licensing deal with Mirantis that is worth a reported $30 million and is thought to be the largest OpenStack-related deal on record. (https://www.mirantis.com/openstack-portal/external-news/ericsson-engages-mirantis-record-breaking-openstack-deal/ )

·        Mirantis and Juniper Networks are partnering to enable rapid delivery of high-performance, massively-scalable OpenStack clouds with Juniper Contrail software-defined networking (SDN). (https://www.mirantis.com/partners/mirantis-technology-partners/mirantis-partners-juniper/)

·        Huawei Technologies Co. Ltd. outlined its "SoftCOM" software-defined networking (SDN) strategy to industry analysts and media in London. (http://www.lightreading.com/carrier-sdn/sdn-architectures/huawei-unfolds-sdn-roadmap/d/d-id/701139)

 
Ericsson, Alcatel-Lucent, Huawei, NEC and most of the leading telecom EVs have adopted NFV/SDN as a future technology, and they have embraced it quite fast. WHY?

Emerging Business Case
When Telcos move to the NFV/SDN domain they will use commoditized hardware for cost benefits, but they will need to set up a cloud orchestration platform for virtual applications, so software expense will be higher. As shown in the chart below, in the NFV/SDN domain a Telco's hardware expense will decrease, but software expense will correspondingly rise due to cloud orchestration platform development. The software SDLC has a higher operational cost (upgrades, feature expansion, etc.) than hardware. EVs see their profits coming from software plug-ins and services, rather than from the traditional integrated hardware/software box.
[Chart: Telco hardware expense falls while software expense rises in the NFV/SDN domain]
EVs are targeting the Telco's cloud architecture to find new opportunities. OpenStack will be the first choice for cloud management since it is no-cost, open sourced and available for Telco-specific customization. Ericsson and Juniper have designed their cloud platforms to integrate with OpenStack APIs and have announced partnerships with Mirantis, a leading OpenStack distributor. By including OpenStack in their offerings, EVs are claiming their products to be cloud-ready.
OpenStack Neutron Plug-in
As cloud manager, OpenStack will provide the API framework to configure hypervisors, storage and network elements. The OpenStack network service, Neutron, exposes an API that will configure the EV's virtualized network applications (switches, routers, etc.). EVs have developed Neutron-specific plug-ins to integrate with the Neutron API service. The plug-in passes the API calls on to hardware switches (such as Arista) or to an SDN controller such as OpenContrail or Nuage.
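To make this concrete, here is a minimal sketch of the kind of northbound call Neutron accepts; whichever plug-in is configured then translates it to the vendor backend. The controller address, token and network name are placeholder values, not from any specific deployment:

# Create a tenant network via the Neutron v2.0 REST API.
# The configured plug-in maps this request onto the backend
# (hardware switch or SDN controller).
curl -s -X POST http://controller:9696/v2.0/networks \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"network": {"name": "ev-demo-net", "admin_state_up": true}}'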
In either case, the EV's core business will remain providing network functionality, this time through a Neutron plug-in, an SDN controller and an SDN agent at the hypervisor (a vSwitch such as Open vSwitch, or a vRouter), rather than a proprietary integrated appliance.
Neutron plug-in upgrades, API enhancements and SDN application development will be an ongoing service business for EVs, in the same way as current licensed software upgrades.
 
Most EVs already have clear SDN offerings, such as Juniper OpenContrail, the ALU Nuage SDN controller, Huawei's SoftCOM controller, the NEC ProgrammableFlow controller, etc.
 
EV Cloud
NFV/SDN will enable EVs to run their own clouds for Tier 2/3 Telcos, MVNOs, enterprise customers, etc. Instead of deploying and managing physical boxes on-premise, an EV can share its products' virtual instances with customers, saving significant CapEx and OpEx. Cloud-enabled managed services will open new business opportunities for both Telcos and vendors, as shown in the figure below:
[Figure: cloud-enabled managed services between EVs, Telcos and their customers]
Summary
In the NFV/SDN world, EVs will compensate for their loss in hardware by offering cloud-specific software products such as Neutron plug-ins, SDN applications and their hypervisor agents. Software-based products will require cyclic upgrades and feature enhancements, as in the traditional network, which will keep the telecom ecosystem evolving. EVs who can adapt to the changed conditions will survive. Surely NFV/SDN will break the closed telecom ecosystem and bring in new players (e.g. Affirmed Networks) who will challenge the established telecom equipment vendors. Innovation will thrive and competition will be stiff.

Saturday, March 7, 2015

NFV Considerations: LXC or KVM ???


Telcos have the option of selecting KVM, hypervisor-based virtualization, or LXC, container-based virtualization, to virtualize their telecom functions (NFV).

Hypervisor-based Virtualization:
In hypervisor virtualization on Linux, we run a full operating system on top of a host operating system, usually a mixture of operating systems. Hypervisor virtualization in Linux is realized by adding a kernel module called KVM. KVM (Kernel-based Virtual Machine) allows user-space programs to utilize the hardware virtualization features of various processors. KVM consists of a loadable kernel module (kvm.ko) that provides the core virtualization infrastructure, and a processor-specific module, e.g. kvm-intel.ko or kvm-amd.ko. KVM is used in conjunction with QEMU to emulate the other hardware resources such as the network card, hard disk, RAM, CPU, etc. When used as a virtualizer with KVM, QEMU achieves near-native performance by executing the guest code directly on the host CPU.
QEMU is software sitting in Linux user space as a hardware emulator for the guest OS. It emulates a virtual CPU, vDisk and vRAM for the guest OS, while KVM allows user-space programs (the virtual instances) to access the hardware virtualization features of the processor. QEMU uses KVM's hardware acceleration features for its virtual instances.
KVM & Qemu both together provide, virtualization capability to linux environment.
 
Container-based Virtualization:
In container-based virtualization (also called operating system virtualization), the system runs applications as user-space instances on the same OS. In contrast to hypervisor virtualization, container virtualization runs directly on the kernel. The kernel provides process isolation and performs resource management. All containers under a host run under the same kernel, as opposed to virtualization solutions like Xen or KVM where each VM runs its own guest OS kernel. Applications are hosted as separate user-space instances. LXC relies on the Linux kernel control groups (cgroups) functionality and namespace isolation so that each user-space instance (VM application) has its own network stack (sockets, routing tables, etc.), filesystem, memory, etc.
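For illustration, the LXC user-space tools wrap these cgroup and namespace features; a minimal session might look like this (the template and container names are examples):

# Create a container from the Ubuntu template, start it, and get a shell
lxc-create -t ubuntu -n vnf1
lxc-start -n vnf1 -d
lxc-attach -n vnf1

# List containers and their state
lxc-ls --fancy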

 
Docker
Docker is an open-source project that automates the creation and deployment of Linux containers. Docker utilizes the LXC toolkit, which is currently available only for Linux.
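For example (the image and command are illustrative), a container with an interactive shell is one command away:

# Pull the ubuntu image if needed and start a container running bash
docker run -it ubuntu /bin/bash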


In a nutshell, a VM sits on a guest OS, and the guest OS is installed as a separate instance with its own virtualized hardware resources, while a container is deployed as a separate user-space process which interacts directly with the kernel, as shown in the figure below:
[Figure: KVM stacks a guest OS per VM; LXC runs containers as user-space processes on a shared kernel]
KVM Packet forwarding path

[Figure: KVM packet forwarding path]
1)      The packet is received on the physical interface (NIC). The NIC copies the packet to the main memory of the host system.

2)      NAPI schedules the packet in a queue for the CPU for further processing.

3)      The correct virtual interface (VIF) is identified during the physical-to-virtual mapping process by Open vSwitch or the Linux bridge.

 
4)      In the KVM environment, QEMU system emulation provides a full virtual machine inside the QEMU user-space process; the details of what processes are running inside the guest are not directly visible from the host. QEMU provides a slab of guest RAM, the ability to execute guest code and emulated hardware devices, and therefore the host operating system has no ability to peek inside the guest domain. As a result the packet payload has to be transferred from kernel memory space to guest virtual memory space. This requires a copy operation to move the packet from the kernel to the guest OS. A context switch, e.g. transferring packet state information from the physical CPU to the virtual CPU, is also required. This adds significant overhead.

5)      A TAP device is established between kernel space and the guest VM in user space. The TAP device is a virtual layer-2 device (i.e. virtual Ethernet) which bridges kernel and user space: a packet written at one end is available for reading at the other end. A sketch of how such a device is wired up follows this list.
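As a minimal sketch (the device and bridge names are examples), this is how a TAP device is created and attached to a Linux bridge so the hypervisor can hand packets to a guest:

# Create a TAP device and bring it up
ip tuntap add dev tap0 mode tap
ip link set tap0 up

# Attach it to a bridge alongside the physical uplink
brctl addbr br0
brctl addif br0 tap0
ip link set br0 up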




LXC Packet forwarding path

LXC has a similar packet forwarding path, as shown in the figure below. In contrast to KVM, LXC performs packet forwarding in kernel space. It does not require any packet copy or context switch, and the packet is immediately available to the container after switching (mapping). Removing packet copying and context switching from the forwarding path eliminates the related overhead, resulting in faster processing. Kernel-space processing is an advantage from a performance point of view, but a potential drawback from a resource isolation perspective. Packet handling inside the kernel is mainly done through SoftIRQ processing. SoftIRQ serves all virtual interfaces in round-robin fashion (CPU queuing) with equal priority. This may lead to poor resource isolation and contention.
[Figure: LXC packet forwarding path]

NFV Considerations
LXC is limited to a single operating system across its containers. With a hypervisor-based system there is a larger choice of guest operating systems.
 
While VMs excel at isolation, they add overhead when sharing data between the hypervisor and the guest operating system. Container-based virtualization is much faster than hypervisor virtualization: a container boots in seconds, compared to minutes for a KVM guest. The LXC architecture removes the guest OS, so VM density per host is higher than in a hypervisor architecture. On the other hand, VM isolation under a hypervisor is more stringent than container isolation in an LXC environment.
 
From an NFV perspective, LXC containers provide faster packet processing but are locked to a single host OS, while KVM provides flexibility in that regard. KVM has the better resource isolation mechanism as well. LXC's faster packet processing helps NFV deal with near-native latency requirements. LXC can be beneficial for virtualizing those telecom functions which use a similar flavor of Linux OS. LXC avoids DPDK and other performance-acceleration complexities in reaching near-native NFV performance. Of course, LXC will have its own complexities to deal with.

 

Tuesday, March 3, 2015

Devstack: First Experience

Having spent nearly 14 years in core telecom, I was surrounded by terms such as RNC, MME, VoLTE, IMS, bearers, CDRs, etc. I was totally disconnected from terms such as Linux, Ubuntu, sudo, Python, REST, etc.
But as the cloud is coming to telecom, it was essential for me to upgrade my skills to maintain my marketability.
So I entered the world of Openstack. I am sharing my first experience of deploying Devstack on my old Pentium laptop with a 60 GB hard disk and 3 GB RAM.
 
About DEVSTACK

From https://wiki.openstack.org/wiki/DevStack: "DevStack's mission is to provide and maintain tools used for the installation of the central OpenStack services from source (git repository master, or specific branches) suitable for development and operational testing. DevStack is an opinionated script to quickly create an OpenStack development environment…"

So what I understand is that Devstack is a development framework for Openstack. If you want hands-on experience with Openstack, install Devstack on a laptop and do some hands-on exercises.

What confused me was the phrase opinionated script. So, when in doubt: Google. One has to be an avid Google searcher to learn coding.

After reading a few blogs (God bless those authors!!), I found that software can be opinionated or un-opinionated. Opinionated software has defined workflows/execution steps; e.g. Devstack has defined installation steps: 1) install git, 2) git clone Devstack, 3) configure localrc, 4) run stack.sh.

If you stick to these steps your life will be easy; otherwise you will face roadblocks in the Devstack deployment. The whole opinionated path, condensed, is shown below.
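Condensed into shell form (the branch shown is the one I use later in this post), the happy path is:

sudo apt-get update -y
sudo apt-get install -y git
git clone https://github.com/openstack-dev/devstack.git -b stable/icehouse
cd devstack
# edit localrc / local.conf if needed, then:
./stack.sh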

Infra Requirements

1)     Need Ubuntu 12.04 or newer (Devstack works with other OSs as well)

2)     Need 3 GB (minimum) or more RAM (8 GB is cool)

3)     Need 10 GB of storage

4)     If Devstack is installed on a VM, Oracle VirtualBox is a good option (give the VM at least 3 GB of RAM)

Linux requirements

1)     The user needs sudo authority (the ability to run commands with superuser/admin rights) to deploy Devstack

2)     Internet connectivity is needed, and any proxies should allow downloading of Ubuntu's git client and access to sites such as git.openstack.org and pypi.python.org (if proxy settings are troubling you, unset the proxy variables, as shown below)
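For example, on a bash shell the proxy variables can be cleared for the current session (the exact variable names depend on how your environment is set up):

# Clear any proxy settings that block the downloads
unset http_proxy https_proxy
unset HTTP_PROXY HTTPS_PROXY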

Detailed Workflow

1)      Start Ubuntu (on a standalone node or a VM)

2)      Open a Terminal (if you are totally new to Ubuntu, find Terminal via the Search option)

3)      Log into the Terminal


4)      Write the following commands:
a.      sudo apt-get update -y
We use sudo because we need admin rights to install/update packages. APT stands for Advanced Packaging Tool. This command refreshes the index of packages available to Ubuntu.
b.      sudo apt-get install git   >> We are installing the Git client from the Ubuntu repositories


About GIT

Version control systems let you keep track of changes, revert to previous stages, and branch to create alternate versions of files and directories. Git is one of the most popular version control systems, originally developed by Linus Torvalds. Many projects maintain their files in a Git repository, and sites like GitHub and Bitbucket are used for sharing and contributing code. We need to install the Git client to use a Git repository.

5)      Execute git clone

We download (git clone) the Devstack files from https://github.com/openstack-dev/devstack.git, and we want the icehouse branch.

Type the command: git clone https://github.com/openstack-dev/devstack.git -b stable/icehouse



6)      Configure localrc / local.conf



Once the copying (cloning) is done, we need to visit the local configuration file: the older localrc, or its successor local.conf.
While installing Windows/Ubuntu we make local configurations such as timezone, WiFi access point, user password, etc. Similarly, Devstack gives the user the option to configure certain local parameters, stored in the local.conf file. If we don't make any config changes, default values are used. It is good to know about the local.conf file, as we might need to change floating IP ranges, module passwords and many more things.
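For reference, a minimal local.conf might look like this (the passwords and the floating range are example values; pick your own):

[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=$ADMIN_PASSWORD
RABBIT_PASSWORD=$ADMIN_PASSWORD
SERVICE_PASSWORD=$ADMIN_PASSWORD
FLOATING_RANGE=172.24.4.0/24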

I used the cp command, but learnt that cp is for copying files; the cp command did not work for opening the localrc file.





Finally I found the nano command, which triggers a text editor. I used "nano local.conf" to edit the local.conf file, and the nano editor opened.



7)      Change the local.conf file.
I added two lines, to set the log file location and to capture the screen session, for study purposes.
Type at the end of the file:
LOGFILE=/opt/vadan/logs/stack.sh.log  # log file location
USE_SCREEN=True


 
8)     Run ./stack.sh
Type: ./stack.sh (inside the devstack directory)

Now we are ready to run ./stack.sh. It installs and configures various combinations of Ceilometer, Cinder, Glance, Heat, Horizon, Keystone, Nova, Neutron, and Swift. It is like an .exe file: a successful run of stack.sh means Devstack is deployed.

A developer can make changes in the Openstack modules (Ceilometer, Cinder, Nova, etc.) and re-run stack.sh; a successful run will deploy Devstack with the updated code. stack.sh will ask for passwords for the database and other modules.



9)     Bump 1: oslo.middleware Error
During the stack.sh run, I encountered an oslo.middleware requirement error.





I had no clue about this error. I learnt that the Oslo project produces a set of common Python libraries. I assumed that maybe my editing of the local.conf file was not good, so I decided to use the local.conf default values.
I deleted my additions from step 7) and re-ran stack.sh.

10)     Bump 2: Permission Error
After correcting the local.conf file my Oslo error went away, but I landed in another error: a MySQL permission issue. My account (vadan) did not have permission to deploy the MySQL DB.




After some googling I found (God bless bloggers!) that I needed to add my account to the sudoers list, with the NOPASSWD directive. Type visudo in the terminal to edit the sudoers file.

visudo will open the following screen. I added these lines in the visudo screen:

#under User Privilege specification:

vadan  ALL=(ALL:ALL) ALL        # give all privileges to user vadan

#includedir /etc/sudoers.d
vadan  ALL=(ALL) NOPASSWD:ALL   # remove the password restriction




11)     Successful Stack.sh

Finally stack.sh ran successfully :) :) :) :) It ended by providing the IP address of the Horizon dashboard.





12)     Launching Horizon Dashboard

I logged into Horizon from Firefox, using the IP address, user name and password printed by stack.sh, to access the Horizon dashboard. If you are using a VM, use Firefox inside the VM, not on the host.





13)     Launch exercise.sh

I ran exercise.sh. We can see one virtual machine spawned in Horizon.






Deploying Devstack on a personal laptop was a wonderful experience. It has connected me to the world of cloud management and orchestration. Happy Stacking :)