Decoupled, microservices-based applications rely heavily on networking to mimic the tight coupling once available in the monolithic era. Networking, in general, is not the easiest concept to understand and implement.
Kubernetes is no exception. As a containerized microservices orchestrator, it needs to address four distinct networking challenges:
- Container-to-container communication inside Pods
- Pod-to-Pod communication on the same node and across cluster nodes
- Pod-to-Service communication within the same namespace and across cluster namespaces
- External-to-Service communication for clients to access applications in a cluster
All these networking challenges must be addressed before deploying a Kubernetes cluster. Let’s see how we solve these challenges.
1 Container-to-Container Communication Inside Pods
Making use of the underlying host operating system’s kernel features, a container runtime creates an isolated network space for each container it starts.
On Linux, that isolated network space is referred to as a network namespace. A network namespace can be shared across containers, or with the host operating system.
When a Pod is started, a network namespace is created for the Pod, and all containers running inside the Pod share that network namespace, allowing them to talk to each other via localhost.
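As a minimal sketch, the hypothetical two-container Pod manifest below shows this shared network namespace in action; the container names, image tags, and Pod name are illustrative, not from the original text:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo      # hypothetical name
spec:
  containers:
  - name: web
    image: nginx:1.25          # listens on port 80 inside the Pod
    ports:
    - containerPort: 80
  - name: sidecar
    image: busybox:1.36
    # Shares the Pod's network namespace, so it can reach
    # the web container at localhost:80 without any Service.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80; sleep 10; done"]
```

Both containers see the same network interfaces and the same localhost, which is why the sidecar can poll the web server without knowing any Pod IP.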
2 Pod-to-Pod Communication Across Nodes
In a Kubernetes cluster, Pods are scheduled on nodes in an unpredictable fashion. Regardless of their host nodes, Pods are expected to be able to communicate with all other Pods in the cluster, all without the implementation of Network Address Translation (NAT).
This is a fundamental requirement of any networking implementation in Kubernetes.
The Kubernetes network model aims to reduce complexity, and it treats Pods as VMs on a network, where each VM receives an IP address; thus, each Pod receives an IP address.
This model is called "IP-per-Pod" and ensures Pod-to-Pod communication, just as VMs on the same network are able to communicate with each other.
3 Pod-to-Service Communication
Let’s not forget about containers, though. They share the Pod’s network namespace and must coordinate port assignment inside the Pod, just as applications would on a VM, all while being able to communicate with each other on localhost inside the Pod.
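A short, illustrative sketch of this behavior: two processes on one host stand in for two containers in one Pod. One binds a port on localhost and the other connects to it, while a second attempt to bind the same port fails, which is exactly why containers in a Pod must coordinate port assignment:

```python
import socket
import threading

# "Container A": bind a port on localhost (kernel picks a free one).
def serve(sock):
    conn, _ = sock.accept()
    conn.sendall(b"hello from container A")
    conn.close()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
port = server.getsockname()[1]
server.listen(1)
threading.Thread(target=serve, args=(server,), daemon=True).start()

# "Container B" reaches "container A" via localhost, as in a Pod.
client = socket.create_connection(("127.0.0.1", port))
reply = client.recv(1024)
client.close()
print(reply.decode())  # -> hello from container A

# The port space is shared: binding the same port again fails.
dup = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    dup.bind(("127.0.0.1", port))
except OSError:
    print("port already taken")
```

In a real Pod the sharing is enforced by the common network namespace rather than by running on one machine, but the port-collision behavior is the same.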
However, containers are integrated with the overall Kubernetes networking model through the Container Network Interface (CNI), supported by CNI plugins. CNI is a set of specifications and libraries that allow plugins to configure networking for containers.
While there are a few core plugins, most CNI plugins are 3rd-party Software Defined Networking (SDN) solutions implementing the Kubernetes networking model.
In addition to addressing the fundamental requirement of the networking model, some networking solutions offer support for Network Policies. Flannel, Weave, Calico are only a few of the SDN solutions available for Kubernetes clusters.
The container runtime offloads IP assignment to CNI, which connects to the underlying configured plugin, such as Bridge or MACvlan, to obtain the IP address.
Once the IP address is provided by the respective plugin, CNI forwards it back to the requesting container runtime.
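To make this flow concrete, a minimal CNI network configuration for the core bridge plugin might look like the following (the network name, bridge device, and subnet are illustrative); the runtime hands this to the plugin, which attaches the container to the bridge and returns an IP from the configured IPAM range:

```json
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipMasq": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.244.0.0/24",
    "gateway": "10.244.0.1"
  }
}
```

Here the host-local IPAM plugin performs the actual address allocation, which is the step the container runtime delegates to CNI.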
For more details, you can explore the Kubernetes documentation.
4 Pod-to-External World Communication
A successfully deployed containerized application running in Pods inside a Kubernetes cluster may need to be accessible from the outside world.
Kubernetes enables external accessibility through Services, complex constructs that encapsulate networking rule definitions on cluster nodes.
By exposing Services to the external world with the help of kube-proxy, applications become accessible from outside the cluster over a virtual IP address.
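As a sketch, the hypothetical NodePort Service below exposes Pods labeled `app: web` on a static port of every cluster node (the Service name, label, and port numbers are illustrative); kube-proxy programs the node-level rules that forward traffic from that port to the Service's virtual IP and on to the backing Pods:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service        # hypothetical name
spec:
  type: NodePort
  selector:
    app: web               # targets Pods carrying this label
  ports:
  - port: 80               # the Service's virtual (ClusterIP) port
    targetPort: 80         # the container port inside the Pods
    nodePort: 30080        # static port opened on every node
```

An external client can then reach the application at any node's IP address on port 30080, without knowing which node actually runs the Pods.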