Finally I have some time to write a few lines about my experience at KubeCon EU 2018, which I attended last month in Copenhagen, Denmark.
It was not only my first KubeCon, but also the first non-strictly-networking conference I’ve ever attended.
Did I feel like a fish out of water? Not really: there was plenty of networking going on and, actually, it felt pretty good to see networks regain a front-row seat in IT.
Yes, there was a lot of it. Inter-container comms are really en vogue these days, with service meshes at the forefront of the current hype.
My understanding of a service mesh is a constellation of elements (the sidecar containers) that know how to talk to each other through a centralised coordination function. The actual application micro-services talk to their local sidecar, instead of having to talk directly to other micro-services.
All in all, to me, this is an L7 router: it routes application-level data flows using L7 rules, rather than our beloved L3/L4 rules.
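To make the "L7 router" idea concrete, here is roughly what an L7 routing rule looks like in Istio, one of the service meshes on show at the conference (the service names and header below are illustrative, borrowed from Istio's own demo style):

```yaml
# Hypothetical Istio VirtualService: route on an HTTP header value,
# something no L3/L4 rule could ever express.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews            # the micro-service behind the sidecars
  http:
    - match:
        - headers:
            end-user:
              exact: jason
      route:
        - destination:
            host: reviews
            subset: v2   # canary version, only for this user
    - route:
        - destination:
            host: reviews
            subset: v1   # everyone else
```

The sidecars enforce this rule transparently: the application still just calls `reviews`, and the mesh decides where the request actually lands.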
What caught my attention, however, was the sheer number of presentations on containers and K8s networking, most of which were about (or related to) the CNI (Container Network Interface).
This is a particularly interesting concept to me, as it allows you (amongst many other things) to use multiple CNI plugins, such as Calico, Contiv, Romana or Cilium, at the same time, assigning multiple vNICs to the same container, each provided by a different plugin.
Being able to mix different types of networks and routing designs in the same deployment is a great step forward towards non-siloed clusters, where applications with very different networking requirements could easily coexist.
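One way to get those multiple vNICs today is the Multus meta-plugin: each additional network is declared as a NetworkAttachmentDefinition and referenced from the pod. A sketch, with illustrative names, interface and subnet:

```yaml
# Multus CRD: an extra macvlan network alongside the cluster's
# default CNI plugin (names and addresses are made up).
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvlan-net
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "macvlan",
    "master": "eth1",
    "ipam": { "type": "host-local", "subnet": "192.168.10.0/24" }
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: multi-homed-pod
  annotations:
    # Multus attaches this network as a second vNIC (net1),
    # in addition to the default cluster network (eth0).
    k8s.v1.cni.cncf.io/networks: macvlan-net
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
```

The pod ends up with `eth0` on the cluster network and `net1` on the macvlan network, each potentially served by a completely different routing design.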
The popularity of BGP+EVPN as a control plane for the VXLAN overlay in DC architectures makes plugins such as Calico even more attractive, as the BGP domain can be extended further down, to the application itself.
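With Calico, for instance, extending the BGP domain down to the nodes is just a matter of declaring peers as resources (the peer IP and AS numbers below are placeholders):

```yaml
# Calico BGPPeer: peer every node with the ToR switch / route reflector
# instead of running a full node-to-node mesh.
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: tor-peer
spec:
  peerIP: 10.0.0.1       # placeholder: ToR / route-reflector address
  asNumber: 64512        # placeholder AS
---
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  nodeToNodeMeshEnabled: false   # let the fabric do the reflection
  asNumber: 64513                # placeholder AS for the nodes
```

At that point pod routes are just another set of BGP prefixes in the fabric, which is exactly what makes the EVPN crowd feel at home.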
The real cherry on the cake was the discovery of KubeVirt and Kata Containers. KubeVirt is a wrapper around libvirt that allows you to manage the lifecycle of a VM using the K8s APIs, and its networking using the CNI.
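In practice this means a VM becomes just another K8s object. A minimal sketch of a KubeVirt VirtualMachine, using the demo container-disk image from the KubeVirt project (API version as of the time of writing; sizes are illustrative):

```yaml
# KubeVirt: a VM declared and scheduled like any other K8s resource.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: testvm
spec:
  running: true            # lifecycle is driven through the K8s API
  template:
    spec:
      domain:
        devices:
          disks:
            - name: containerdisk
              disk:
                bus: virtio
        resources:
          requests:
            memory: 64Mi   # illustrative sizing
      volumes:
        - name: containerdisk
          containerDisk:
            image: kubevirt/cirros-container-disk-demo
```

You `kubectl apply` it, and the VM's networking is wired up by whatever CNI plugin the cluster already uses for pods.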
Why is this important? I think we’re at an important crossroads in cloud technologies today, with OpenStack being seriously challenged by Kubernetes as a VIM (Virtualised Infrastructure Manager), not just as a container scheduler.
Being able to streamline multiple technologies, including VMs, under the same API gives us the opportunity to adopt K8s from day zero, without the overhead of running a very complex OpenStack instance, and to manage VM workloads exactly the same way we manage container pods.
The usual showstopper for that was the perception that the networking stack for containerised applications was less flexible and less rock-solid than the one for VMs. Well… I think those days are long gone.
The other “excuse” for delaying container adoption was security: with Docker containers there is a lot of dodgy stuff you can do to compromise your own security.
Kata Containers come to the rescue here, as they promise to “feel and perform like containers, but provide the workload isolation and security advantages of VMs.”
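With containerd's CRI plugin configured to use the Kata runtime for untrusted workloads, opting a pod into that VM-grade isolation is, at the time of writing, a single annotation (an early integration path; the annotation name is the one used by cri-containerd):

```yaml
# Sketch: this pod runs inside a lightweight VM via Kata, assuming
# the node's containerd is configured with kata-runtime as its
# "untrusted workload" runtime.
apiVersion: v1
kind: Pod
metadata:
  name: isolated-pod
  annotations:
    io.kubernetes.cri.untrusted-workload: "true"
spec:
  containers:
    - name: app
      image: nginx
```

Everything else (scheduling, networking via CNI, logs) stays exactly the same as for a regular pod.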
Let’s wrap it up then. Why is this all important?
I work for a big Telco, and one of our biggest concerns is the lack of cloud-readiness of the NFV solutions most vendors offer: they still rely on very old HA concepts (VRRP/active-standby, anyone?) that make them really hard to scale horizontally and difficult to deploy on a private-cloud infrastructure.
My personal belief is that if we adopted K8s right away, we could do a better job of pushing the VNF suppliers to work harder to containerise their solutions and make them cloud-native (see the definition of that word here), whilst keeping KubeVirt as a stop-gap where containerisation is not immediately doable.