
Kubernetes – Scaling the cluster

As with the choice between managed and self-hosted clusters, you have several options for scaling up your production Kubernetes cluster.

On GKE and AKS

When scaling a GKE cluster, all you need to do is issue a command that changes the number of instances in a node pool. You can resize the node pools that make up your cluster with the following:

gcloud container clusters resize [CLUSTER_NAME] \
 --node-pool [POOL_NAME] \
 --num-nodes [NUM_NODES]
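
For example, to grow a hypothetical pool named default-pool in a cluster called my-cluster to five nodes (the cluster, pool, and zone names here are placeholders):

gcloud container clusters resize my-cluster \
 --node-pool default-pool \
 --num-nodes 5 \
 --zone us-central1-a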

Keep in mind that new nodes are created with the same configuration as the current machines in your node pool. When additional pods are scheduled, they’ll be scheduled on the new nodes. Existing pods will not be relocated or rebalanced to the new nodes.

Scaling up an AKS cluster is a similar exercise: specify the cluster name and resource group, and set --node-count to your required number of nodes:

az aks scale --name myAKSCluster --resource-group gsw-k8s-group --node-count 1
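
You can confirm the new count afterwards, either from the Azure side or from Kubernetes itself (assuming the default agent pool layout in the az output; the names match the example above):

az aks show --name myAKSCluster --resource-group gsw-k8s-group \
 --query "agentPoolProfiles[0].count"
kubectl get nodes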

DIY clusters

When you add resources to your hand-rolled Kubernetes cluster, you’ll need to do more work. To have nodes join automatically as you add them, whether via a scaling group or manually via infrastructure as code, you’ll need to ensure that automatic registration is enabled with the kubelet’s --register-node flag. When this flag is turned on, new nodes will attempt to register themselves with the API server. This is the default behavior.
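
As a rough sketch, a kubelet started on a freshly provisioned worker might look something like the following; the file paths are placeholders, and in practice these flags usually live in a systemd unit or kubelet configuration file:

kubelet --kubeconfig=/etc/kubernetes/kubelet.conf \
 --config=/var/lib/kubelet/config.yaml \
 --register-node=true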

You can also join nodes to your cluster manually, using a pre-vetted token. If you initialize kubeadm with the following token:

kubeadm init --token=101tester101 --kubernetes-version $(kubeadm version -o short)

You can then join additional nodes to the cluster by pointing them at the API server's address and port:

kubeadm join --discovery-token-unsafe-skip-ca-verification --token=101tester101 [MASTER_IP]:6443

In a production kubeadm install, you would normally not specify the token yourself; instead, you would extract the token generated by the kubeadm init command and store it securely.
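
If you didn't record the token at init time, kubeadm can list the existing tokens or mint a fresh one; the --print-join-command flag even emits a ready-to-use join line:

kubeadm token list
kubeadm token create --print-join-command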

Node maintenance

If you’re scaling your cluster up or down, it’s essential to know how the manual process of deregistering and draining a node works. We’ll use the kubectl drain command here to evict all pods from a node before removing it from the cluster. Removing all pods from the node ensures that there are no running workloads on your instance or VM when you remove it.

Let’s get a list of available nodes using the following command:

kubectl get nodes

Once we have the node list, the command to drain nodes is fairly simple:

kubectl drain <node>
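
In practice, drain often refuses to evict DaemonSet-managed pods or pods using emptyDir storage unless you pass the corresponding flags; depending on your Kubernetes version, this looks something like the following (on older releases the second flag is spelled --delete-local-data):

kubectl drain <node> --ignore-daemonsets --delete-emptydir-data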

This command will take some time to execute, as it has to evict the workloads on the node so that their controllers can reschedule them onto other machines with available resources. Once the draining is complete, you can remove the node via your preferred programmatic API. If you’re merely removing the node for maintenance, you can mark it schedulable again with the uncordon command:

kubectl uncordon <node>
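
If you're retiring the node permanently rather than taking it down for maintenance, you can also deregister it from the API server directly:

kubectl delete node <node>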
