Kubernetes Cloud Native Orchestration

Learn how Cloudify connects Kubernetes with the rest of the world.

Making Kubernetes Interoperable With Any Service

Kubernetes is a cloud native platform by design. That means it was built for the cloud world. However, most companies have heterogeneous architectures and workloads that require each piece to talk with the others. With open source orchestration, Cloudify makes Kubernetes interoperable with external services such as databases, services running on VMs, legacy applications, serverless workloads, and more, enabling end-to-end automation that is not bound to any one specific platform. Read more about Kubernetes use cases with Cloudify below.
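
As a rough illustration of what "interoperable" means at the Kubernetes level, the hedged sketch below uses the official Python Kubernetes client to expose a database running on an external VM as a native in-cluster Service (a selector-less Service plus an Endpoints object). This is a generic Kubernetes pattern rather than Cloudify-specific code; the names, namespace, IP, and port are placeholders.

```python
# Minimal sketch using the official Python Kubernetes client (pip install kubernetes).
# It shows the plain-Kubernetes side of the interoperability story: exposing a
# database that runs on an external VM as an in-cluster Service so pods can reach
# it by DNS name. Names, namespace, IP, and port are illustrative placeholders.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() inside a pod
core = client.CoreV1Api()

# A Service without a selector only reserves the in-cluster DNS name and port...
core.create_namespaced_service(
    namespace="default",
    body={
        "metadata": {"name": "legacy-db"},
        "spec": {"ports": [{"port": 5432}]},
    },
)

# ...and a matching Endpoints object points that name at the external VM.
core.create_namespaced_endpoints(
    namespace="default",
    body={
        "metadata": {"name": "legacy-db"},
        "subsets": [{
            "addresses": [{"ip": "10.0.0.42"}],   # placeholder address of the VM
            "ports": [{"port": 5432}],
        }],
    },
)
```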

Watch Kubernetes Webinar

Edge Computing with Kubernetes

Edge computing and the Internet of Things (IoT) are the new distributed networks, and they are changing how network automation evolves. With millions of devices, a great deal of data is generated at the edge, and real-time decisions need to be made locally rather than in one centralized place.

Orchestration for edge computing presents some difficulties, such as the need for edge orchestration to continue without an active internet connection, the need for each point to be secured with proper access control, and the ability to scale to millions, even billions, of devices.

Orchestrating Kubernetes at the Edge

Kubernetes is a lightweight, fast, and scalable container orchestrator that is ideal for use on edge devices. With our low-footprint capabilities for orchestrating and managing Kubernetes clusters and application deployments from day 0 to day 2, Cloudify is a perfect fit for any edge computing workload.

Read The Whitepaper

Multi-Cloud Kubernetes Orchestration

Why orchestrate Kubernetes?

Kubernetes is the poster child for container scheduling, management, and orchestration. However, because it was built to operate within the scope of a single cluster, it is incredibly difficult and time consuming, if even possible, to get two clusters – one on AWS and the other on OpenStack, let’s say – to talk with each other, scale together, etc. Cloudify brings a higher level of orchestration – it understands the whole ecosystem of clouds, cloud management tools, and the total DevOps stack, including your applications. If you’re running apps on multiple clouds with different tools, you need a global orchestrator to ensure your container orchestration tasks work simultaneously across cloud environments.

Cloudify provides infrastructure management capabilities such as installation, auto-healing, and auto-scaling. When orchestrating services on these platforms, Cloudify integrates seamlessly with native descriptors to not only support container cluster service deployment, but also enable orchestration that encompasses systems beyond the edges of the container cluster.
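
As a hedged sketch of what that higher-level orchestration can look like in practice, the snippet below uses the cloudify-rest-client Python package to upload one Kubernetes cluster blueprint and deploy it to two clouds from a single Cloudify Manager. The blueprint path, deployment names, inputs, and credentials are placeholders, and a real run would wait for each deployment environment and execution to finish.

```python
# Hedged sketch using the cloudify-rest-client Python package (assumed installed).
# One blueprint, two clouds: the blueprint path, deployment IDs, inputs, and
# credentials below are placeholders, not a specific Cloudify example.
from cloudify_rest_client import CloudifyClient

cfy = CloudifyClient(host="cloudify-manager.example.com",
                     username="admin", password="<password>",
                     tenant="default_tenant")

cfy.blueprints.upload("k8s-cluster/blueprint.yaml", "k8s-cluster")

for deployment_id, inputs in [
    ("k8s-on-aws", {"infra": "aws", "region": "us-east-1"}),
    ("k8s-on-openstack", {"infra": "openstack", "region": "RegionOne"}),
]:
    cfy.deployments.create("k8s-cluster", deployment_id, inputs=inputs)
    # In practice you would poll until the deployment environment is created
    # before starting the install workflow.
    cfy.executions.start(deployment_id, "install")

# Day-2 operations use the same workflow interface, e.g. the built-in scale workflow:
cfy.executions.start("k8s-on-aws", "scale",
                     parameters={"scalable_entity_name": "k8s_node", "delta": 2})
```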

Access FREE Lab

ONAP and Kubernetes

ONAP has made significant progress with its plan to build the de facto open source platform for orchestration and automation of physical and virtual network functions. One of the ways it has taken this mission a step further is by integrating Kubernetes, which it utilizes to containerize the various ONAP components.

Cloudify is proud to be a code contributor to a number of core components, including the ONAP Operations Manager (OOM), as well as the integration of ARIA at the service orchestration layer.

Telcos are already lining up to implement ONAP, and Cloudify is the orchestrator of choice for those workloads.

Learn More About ONAP

Kubernetes Plugin

The Cloudify Kubernetes Plugin enables users to create and delete resources hosted by a Kubernetes cluster (which can itself be deployed using Cloudify).

With this plugin, users can add new nodes to a cluster, orchestrate hybrid cloud scenarios with regular services and microservices in a single deployment, associate pods with particular nodes in your cluster, and more.
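
The plugin itself is driven from Cloudify blueprints, but the calls it wraps are ordinary Kubernetes resource operations. The hedged sketch below uses the Python Kubernetes client to show one such operation, creating a pod pinned to a specific node via a nodeSelector and then deleting it; the pod name, image, and node hostname are placeholders.

```python
# Hedged sketch (Python kubernetes client): create a pod pinned to one node, then
# delete it. The pod name, image, and node hostname are placeholders; the plugin
# performs equivalent create/delete calls from blueprint definitions.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

pod = {
    "metadata": {"name": "edge-worker"},
    "spec": {
        "nodeSelector": {"kubernetes.io/hostname": "node-3"},  # target node
        "containers": [{"name": "worker", "image": "nginx:1.25"}],
    },
}
core.create_namespaced_pod(namespace="default", body=pod)

# Teardown is the mirror-image call:
core.delete_namespaced_pod(name="edge-worker", namespace="default")
```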
 
Learn More

Kubernetes Provider

The Cloudify Kubernetes Provider allows users to utilize Cloudify as the infrastructure manager for K8s, meaning you can natively scale and auto-scale nodes, configure networking and load balancing, customize storage and compute, and get native multi-cloud support. The Provider also enables Kubernetes-driven management of the infrastructure lifecycle through an open infrastructure with a native Kubernetes interface.
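
From inside the cluster, capabilities like load balancing surface through standard Kubernetes objects, and the infrastructure manager wired in underneath is what fulfils them. The hedged sketch below uses the Python Kubernetes client to request a load balancer the ordinary way, by creating a Service of type LoadBalancer, and then reads back the address the underlying provider eventually assigns; the service name, selector, and ports are placeholders.

```python
# Hedged sketch (Python kubernetes client): request a load balancer through a
# standard Service of type LoadBalancer and read back the provisioned address.
# Service name, selector, and ports are placeholders.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

core.create_namespaced_service(
    namespace="default",
    body={
        "metadata": {"name": "web-frontend"},
        "spec": {
            "type": "LoadBalancer",
            "selector": {"app": "web-frontend"},
            "ports": [{"port": 80, "targetPort": 8080}],
        },
    },
)

# The external address appears once the underlying provider has provisioned it.
svc = core.read_namespaced_service(name="web-frontend", namespace="default")
print(svc.status.load_balancer.ingress)
```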

See The Code