A worker node provides a running environment for client applications. These applications are containerized microservices encapsulated in Pods, controlled by the cluster control plane agents running on the master node.
Pods are scheduled on worker nodes, where they find the required compute, memory, and storage resources to run, as well as networking to talk to each other and to the outside world.
A Pod is the smallest scheduling unit in Kubernetes. It is a logical collection of one or more containers scheduled together. We will explore them further in later chapters.
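As a preview, a minimal Pod manifest looks like the following sketch; the names and image tag (`nginx-pod`, `nginx:1.17`) are illustrative, not prescribed:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod          # illustrative name
spec:
  containers:
  - name: nginx
    image: nginx:1.17      # illustrative image tag
    ports:
    - containerPort: 80
```

The `containers` field is a list, reflecting that a Pod is a logical collection of one or more containers scheduled together.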
Also, to access the applications from the external world, we connect to the worker nodes and not to the master node.
Worker Node Components
A worker node has the following components:
- Container runtime
- kubelet
- kube-proxy
- Addons for DNS, Dashboard, cluster-level monitoring and logging.
Now, let’s discuss them in more detail.
Container Runtime
Although Kubernetes is described as a “container orchestration engine”, it does not have the capability to directly handle containers.
In order to run and manage a container’s lifecycle, Kubernetes requires a container runtime on the node where a Pod and its containers are to be scheduled. Kubernetes supports many container runtimes:
- Docker – although it is a full container platform that uses containerd as its container runtime, Docker is the most widely used container runtime with Kubernetes
- CRI-O – a lightweight container runtime for Kubernetes, it also supports Docker image registries
- containerd – a simple and portable container runtime providing robustness
- rkt – a pod-native container engine, it also runs Docker images
- rktlet – a Kubernetes Container Runtime Interface (CRI) implementation using rkt.
kubelet
The kubelet is an agent running on each node. It communicates with the control plane components from the master node.
It receives Pod definitions, primarily from the API server, and interacts with the container runtime on the node to run containers associated with the Pod. It also monitors the health of the Pod’s running containers.
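The health monitoring mentioned above is driven by probes declared in the Pod spec. A sketch of a liveness probe the kubelet would run (the path, port, and timings are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                 # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.17       # illustrative image tag
    livenessProbe:
      httpGet:              # kubelet issues an HTTP GET against the container
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10     # probe interval; values here are examples
```

If the probe fails repeatedly, the kubelet asks the container runtime to restart the container.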
The kubelet connects to the container runtime using the Container Runtime Interface (CRI). CRI consists of protocol buffers, the gRPC API, and libraries.
The kubelet, acting as a gRPC client, connects to the CRI shim, acting as a gRPC server, to perform container and image operations.
CRI implements two services: ImageService and RuntimeService.
The ImageService is responsible for all the image-related operations, while the RuntimeService is responsible for all the Pod and container-related operations.
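The split between the two services can be sketched in Python. This is a toy, in-memory model, not the real CRI: the actual interface is a set of protocol-buffer definitions served over gRPC, and the method names below (`pull_image`, `run_pod_sandbox`, and so on) merely mirror a few of the RPCs defined there.

```python
from abc import ABC, abstractmethod

class ImageService(ABC):
    """Image-related operations (mirrors CRI's ImageService)."""
    @abstractmethod
    def pull_image(self, image_ref): ...
    @abstractmethod
    def list_images(self): ...

class RuntimeService(ABC):
    """Pod- and container-related operations (mirrors CRI's RuntimeService)."""
    @abstractmethod
    def run_pod_sandbox(self, pod_config): ...
    @abstractmethod
    def create_container(self, sandbox_id, container_config): ...
    @abstractmethod
    def start_container(self, container_id): ...

class ToyShim(ImageService, RuntimeService):
    """A toy shim implementing both services, the way a CRI shim
    sits between the kubelet and the underlying runtime."""
    def __init__(self):
        self.images = []
        self.sandboxes = {}
        self.containers = {}

    def pull_image(self, image_ref):
        self.images.append(image_ref)
        return image_ref

    def list_images(self):
        return list(self.images)

    def run_pod_sandbox(self, pod_config):
        sandbox_id = f"sandbox-{len(self.sandboxes)}"
        self.sandboxes[sandbox_id] = pod_config
        return sandbox_id

    def create_container(self, sandbox_id, container_config):
        container_id = f"container-{len(self.containers)}"
        self.containers[container_id] = {"sandbox": sandbox_id,
                                         "config": container_config,
                                         "state": "CREATED"}
        return container_id

    def start_container(self, container_id):
        self.containers[container_id]["state"] = "RUNNING"

# The kubelet side: given a Pod definition, drive the shim.
shim = ToyShim()
shim.pull_image("nginx:1.17")                       # ImageService call
sid = shim.run_pod_sandbox({"name": "web"})         # RuntimeService calls
cid = shim.create_container(sid, {"image": "nginx:1.17"})
shim.start_container(cid)
print(shim.containers[cid]["state"])  # RUNNING
```

Note how the kubelet-side code never touches the runtime directly; everything goes through the two service interfaces, which is what makes the runtime swappable.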
Container runtime support used to be hard-coded into Kubernetes, but with the development of CRI, Kubernetes can now use different container runtimes without the need to recompile.
Any container runtime that implements CRI can be used by Kubernetes to manage Pods, containers, and container images.
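In practice, the runtime is selected by pointing the kubelet at the runtime's CRI socket. A configuration sketch (the socket path shown is containerd's conventional default; your node may differ):

```shell
# Point the kubelet at a CRI-compatible runtime's gRPC endpoint
kubelet --container-runtime-endpoint=unix:///run/containerd/containerd.sock
```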
kubelet – CRI shims
Below you will find some examples of CRI shims:
With dockershim, containers are created using Docker installed on the worker nodes. Internally, Docker uses containerd to create and manage containers.
With cri-containerd, we can directly use Docker’s smaller offspring containerd to create and manage containers.
CRI-O enables using any Open Container Initiative (OCI) compatible runtimes with Kubernetes. At the time this course was created, CRI-O supported runC and Clear Containers as container runtimes. However, in principle, any OCI-compliant runtime can be plugged-in.
kube-proxy
The kube-proxy is the network agent that runs on each node and is responsible for dynamic updates and maintenance of all networking rules on the node. It abstracts the details of Pod networking and forwards connection requests to Pods.
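The rules kube-proxy maintains are derived from Service objects. A sketch of a Service for which kube-proxy would forward traffic (the name, selector, and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc             # illustrative name
spec:
  selector:
    app: web                # matches Pods labeled app=web
  ports:
  - port: 80                # port clients connect to
    targetPort: 8080        # port the Pods listen on
```

On every node, kube-proxy translates this Service into forwarding rules so that connections to the Service reach one of the matching Pods.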
Addons
Addons are cluster features and functionality not yet available in Kubernetes, and are therefore implemented through 3rd-party pods and services.
- DNS – cluster DNS is a DNS server required to assign DNS records to Kubernetes objects and resources
- Dashboard – a general-purpose web-based user interface for cluster management
- Monitoring – collects cluster-level container metrics and saves them to a central data store
- Logging – collects cluster-level container logs and saves them to a central log store for analysis.