- February 22, 2016
- Posted by: Sivan Barzily
- Category: Uncategorized
We hear the term data plane acceleration (DPA) tossed around a lot in the context of NFV (Network Functions Virtualization), because the successful adoption of NFV depends heavily on the ability of a virtualized cloud infrastructure to deliver performance comparable to today's proprietary network appliances.
This applies primarily to network functions that are I/O-intensive, where many packets are processed in a short time frame, the EPC for example. Today's generic hardware is simply not equipped or configured to handle this kind of load at the performance telcos and carriers require. To achieve that performance, you need hardware optimized for DPA: advanced processor and networking technologies embedded in silicon and in PCIe devices that enable the extreme throughput NFV workloads demand. But the story doesn't end there. Even if your hardware is optimized for I/O-intensive workloads, you need to ensure your VNFs are ultimately able to leverage these capabilities.
This means that your VNFs will need to declare what they require from the hardware, and the environment must be set up to honor those declarations. Imagine you are deploying a vRouter that can run at great performance, provided that DPDK is enabled on the hosts it runs on. There needs to be some way to match what the vRouter requires (DPDK) with what the environment supports and can provide. This is where Enhanced Platform Awareness (EPA), coupled with intelligent workload-aware orchestration, comes into play.
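To make the capability side of this matching concrete, here is a minimal sketch of how a discovery agent might detect one DPDK prerequisite, configured hugepages, by parsing `/proc/meminfo` on a Linux host. The function names are illustrative assumptions, not part of any specific EPA implementation; the `/proc/meminfo` field names are standard Linux.

```python
import re

def parse_hugepages(meminfo_text):
    """Extract the count and page size of configured hugepages
    from /proc/meminfo-formatted text."""
    total = int(re.search(r"HugePages_Total:\s+(\d+)", meminfo_text).group(1))
    size_kb = int(re.search(r"Hugepagesize:\s+(\d+) kB", meminfo_text).group(1))
    return total, size_kb

def host_supports_dpdk(meminfo_text):
    """DPDK requires a hugepage pool; treat any nonzero pool as support.
    (Illustrative heuristic only; real checks would also inspect NICs/drivers.)"""
    total, _ = parse_hugepages(meminfo_text)
    return total > 0

# Sample /proc/meminfo excerpt (on a real host: open("/proc/meminfo").read())
sample = """HugePages_Total:    1024
HugePages_Free:      512
Hugepagesize:       2048 kB"""

print(host_supports_dpdk(sample))  # True
```

An agent reporting results like this per host is one plausible way the platform's capabilities get exposed to the orchestrator.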
EPA and EPA-enabled orchestration are the glue between your workload's requirements and your environment's capabilities, enabling automation and operational efficiency in an NFV environment without compromising telco-grade performance.
So, how does it all work?
In a blueprint describing your VNF or service, you specify the hardware requirements (SR-IOV, DPDK, etc.). The orchestrator receives those requirements and determines the optimal placement for the workload, based on the platform capabilities that EPA exposes. The NFVI is then configured to support the required acceleration methods, and that's it!
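The placement step above can be sketched in a few lines. This is a simplified illustration, not an actual orchestrator implementation: the feature names, host names, and data structures are assumptions standing in for what a blueprint would declare and what EPA would report.

```python
# Hypothetical blueprint-declared requirements per VNF.
blueprint_requirements = {"vRouter": {"dpdk", "sr-iov"}}

# Hypothetical per-host capabilities as exposed via EPA.
host_capabilities = {
    "compute-1": {"dpdk", "sr-iov", "hugepages"},
    "compute-2": {"hugepages"},
    "compute-3": {"sr-iov"},
}

def candidate_hosts(vnf, requirements, capabilities):
    """Return the hosts whose reported capabilities satisfy
    every requirement the VNF declares."""
    needed = requirements[vnf]
    return sorted(h for h, caps in capabilities.items() if needed <= caps)

print(candidate_hosts("vRouter", blueprint_requirements, host_capabilities))
# ['compute-1']
```

Only `compute-1` offers both DPDK and SR-IOV, so the vRouter lands there; a real orchestrator would then go on to configure the NFVI accordingly.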
Below you can see a reference architecture for how this would work.
You'll then get optimized placement and configuration for your workload, resulting in the performance you are after, in a manner that supports optimized lifecycle management of your service in the NFV environment, and without restricting that environment to a single VIM, since the solution spans OpenStack and VMware.
Leveraging the proper EPA methods for the relevant VNFs is where TOSCA-based orchestration comes in as the smart orchestrator. Through the ability to define nodes, types, placement within the service chain, and more, it enables intelligent matching with the hardware, calling the right APIs, and proper placement on the relevant hosts.
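To illustrate what "calling the right APIs" might mean on an OpenStack VIM, here is a hedged sketch that translates declared acceleration features into VIM-side settings. The feature names and mapping structure are assumptions for illustration; the `hw:mem_page_size` flavor extra spec (hugepage-backed guests) and the `binding:vnic_type=direct` port attribute (SR-IOV) are standard Nova and Neutron mechanisms.

```python
# Illustrative mapping from declared features to OpenStack settings.
FEATURE_TO_CONFIG = {
    "dpdk": {"flavor_extra_specs": {"hw:mem_page_size": "large"}},
    "sr-iov": {"port_attributes": {"binding:vnic_type": "direct"}},
}

def build_vim_config(features):
    """Merge the VIM settings needed for each feature a VNF declares."""
    config = {"flavor_extra_specs": {}, "port_attributes": {}}
    for feature in features:
        for section, settings in FEATURE_TO_CONFIG[feature].items():
            config[section].update(settings)
    return config

print(build_vim_config(["dpdk", "sr-iov"]))
```

In practice the orchestrator would hand these settings to the VIM's APIs (e.g. when creating the flavor and ports for the VNF), rather than just assembling a dictionary.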
So, with many cloud data centers being built around heterogeneous platforms, some offering only basic compute and others offering accelerated performance and higher efficiency for intensive workloads, consistently achieving uniform, extreme performance is often a challenge. EPA combined with DPA is an enabler for such performance, while also increasing workload availability in a geographically distributed cloud environment. This intelligent placement of VNFs and hardware optimization is critical to the performance and reliability of today's large-scale NFV systems.
Watch the video demo below.
Stay tuned for some more great content during Mobile World Congress on everything NFV.
And don’t forget to join us for the 8th OpenStack & Beyond podcast directly from the event on Thursday, Feb 25th at 11am CET.