Kubernetes Node Distribution and Availability
Introduction
A Kubernetes instance is provided as a cluster, which consists of a set of machines called nodes. A cluster is composed of a control plane and at least one worker node. The control plane manages the worker nodes and thereby the pods in the cluster by making decisions about scheduling, event detection and rights management. The control plane consists of multiple components, which can be replicated and distributed over multiple nodes inside the cluster. Typically, no user workloads are run on control plane nodes, in order to separate the controller components from user workloads, which could otherwise pose a security risk.
Glossary
The following terms are used throughout this document:
| Term | Meaning |
|---|---|
| Worker | Virtual or bare-metal machine, which hosts workloads of customers |
| Control Plane | Virtual or bare-metal machine, which hosts the container orchestration layer that exposes the API and interfaces to define, deploy, and manage the lifecycle of containers. |
| Machine | Virtual or bare-metal entity with computational capabilities |
Motivation
In normal day-to-day operation, operational failures are not unusual, be it due to wear and tear of hardware, software misconfigurations, external problems or user errors. Whatever the source of such an outage, it always means downtime for operations and users and possibly even data loss. Therefore, a Kubernetes cluster in a production environment should be distributed over multiple "failure zones" in order to provide fault tolerance and high availability. This is especially important for the control plane of the cluster, since it contains the state of the whole cluster. A failure of this component could mean an unrecoverable failure of the whole cluster.
Design Considerations
Most design considerations of this standard follow the previously written Decision Record Kubernetes Nodes Anti Affinity as well as the Kubernetes documents about High Availability and Best practices for large clusters.
SCS favors distributed, highly available systems because of their advantages, such as fault tolerance and data redundancy. At the same time, it acknowledges the costs and overhead this imposes on providers, since the infrastructure needs to include hardware whose only purpose is to provide failover safety or duplication.
The document Best practices for large clusters describes the concept of a failure zone. The term isn't defined any further there, but in this context it can be described as a set of physical (computing) machines in such a vicinity to each other (through physical or logical interconnection of some kind) that specific problems inside this zone would put all of these machines at risk of failure or shutdown. Important data or services therefore must not be confined to a single failure zone. How a failure zone should be defined depends on the risk model of the service/data and its owner as well as on the capabilities of the provider. Zones could range from single machines or racks up to whole datacenters or even regions, which could, for example, be coupled by a shared electrical grid. They are therefore purely logical entities, which shouldn't be defined further in this document.
Decision
This standard formulates the requirement for the distribution of Kubernetes nodes in order to provide a fault-tolerant and available Kubernetes cluster infrastructure.
The control plane nodes MUST be distributed over multiple physical machines. Kubernetes provides best practices on this topic, which are also RECOMMENDED by SCS.
At least one control plane instance MUST be run in each "failure zone" used for the cluster; more instances per "failure zone" are possible in order to provide fault tolerance inside a zone.
Worker nodes are RECOMMENDED to be distributed over multiple zones. This policy makes it OPTIONAL to provide a worker node in each "failure zone", meaning that worker nodes can also be scaled vertically first before scaling horizontally.
To provide metadata about the node distribution, to possibly allow workloads to be scheduled efficiently, and to enable testing of this standard, providers MUST label their K8s nodes with the labels listed below (see the example after the list). These labels MUST be kept up to date with the current state of the deployment.
- `topology.kubernetes.io/zone`

  Corresponds with the label described in the K8s labels documentation. It provides a logical failure zone on the side of the provider, e.g. a server rack in the same electrical circuit or multiple machines connected to the internet through a single network structure. How this is defined exactly is up to the plans of the provider. The label is mostly populated automatically, either by the kubelet or by external mechanisms like the cloud controller.

- `topology.kubernetes.io/region`

  Corresponds with the label described in the K8s labels documentation. It describes the combination of one or more failure zones into a region or domain, therefore representing a larger logical failure zone. An example for this could be a building containing the racks that are assigned to such zones, since they're all prone to failure if, e.g., the power for the building is cut. How this is defined exactly is also up to the provider. The label is mostly populated automatically, either by the kubelet or by external mechanisms like the cloud controller.

- `topology.scs.community/host-id`

  This is an SCS-specific label; it MUST contain the hostID of the physical machine running the hypervisor (NOT: the hostID of a virtual machine). Here, the hostID is an arbitrary identifier, which need not contain the actual hostname, but it should nonetheless be unique to the host. This helps identify the distribution over the underlying physical machines, which would be masked if VM hostIDs were used.
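For illustration, the following sketch shows how a workload could consume the `topology.kubernetes.io/zone` label to spread its pods over the available failure zones. It uses the Kubernetes Python client and a hypothetical `web` Deployment; it is an example of how these labels can be used, not a requirement of this standard, and assumes a configured kubeconfig.

```python
# Sketch: spread a hypothetical "web" Deployment over failure zones using the
# topology.kubernetes.io/zone node label (requires the "kubernetes" package).
from kubernetes import client, config

config.load_kube_config()

labels = {"app": "web"}
pod_spec = client.V1PodSpec(
    containers=[client.V1Container(name="web", image="nginx:1.25")],
    topology_spread_constraints=[
        client.V1TopologySpreadConstraint(
            max_skew=1,                           # zones may differ by at most one pod
            topology_key="topology.kubernetes.io/zone",
            when_unsatisfiable="ScheduleAnyway",  # prefer, but do not require, even spread
            label_selector=client.V1LabelSelector(match_labels=labels),
        )
    ],
)
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="web"),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels=labels),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels=labels), spec=pod_spec
        ),
    ),
)
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

The same constraint could equally be expressed in a plain manifest; the point is that a correctly maintained zone label is what makes such scheduling decisions possible in the first place.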
Conformance Tests
The script `k8s-node-distribution-check.py` checks the nodes available with a user-provided kubeconfig file. Based on the labels `topology.scs.community/host-id`, `topology.kubernetes.io/zone`, `topology.kubernetes.io/region` and `node-role.kubernetes.io/control-plane`, the script determines whether the nodes are distributed according to this standard. If this isn't the case, the script produces an error. It also produces warnings and informational outputs, e.g., if labels don't seem to be set.
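The following sketch is not the actual conformance script, but it illustrates in simplified form the kind of check these labels enable: it collects the labels of all control plane nodes and reports whether they are concentrated on a single physical host or in a single failure zone. It assumes the `kubernetes` Python package and a configured kubeconfig.

```python
# Simplified illustration of a node-distribution check (not the actual
# k8s-node-distribution-check.py): verify that control plane nodes are
# spread over more than one physical host and more than one failure zone.
from kubernetes import client, config

config.load_kube_config()  # uses the user-provided kubeconfig

hosts, zones, regions = set(), set(), set()
for node in client.CoreV1Api().list_node().items:
    labels = node.metadata.labels or {}
    if "node-role.kubernetes.io/control-plane" not in labels:
        continue  # only the control plane distribution is mandatory
    host_id = labels.get("topology.scs.community/host-id")
    if host_id is None:
        print(f"WARNING: {node.metadata.name} is missing the host-id label")
    else:
        hosts.add(host_id)
    zones.add(labels.get("topology.kubernetes.io/zone", "unknown"))
    regions.add(labels.get("topology.kubernetes.io/region", "unknown"))

if len(hosts) < 2:
    print("ERROR: control plane is not distributed over multiple physical machines")
if len(zones) < 2:
    print("WARNING: control plane nodes appear to share a single failure zone")
print(f"INFO: control plane spans {len(hosts)} hosts, {len(zones)} zones, {len(regions)} regions")
```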
Previous standard versions
This is version 2 of the standard; it extends version 1 with the requirements regarding node labeling.