Kubernetes – Upgrading the cluster


In order to run your cluster over long periods of time, you'll need to update it as needed. There are several ways to manage cluster upgrades, and their difficulty is determined by the platform you chose previously. As a general rule, hosted Platform as a Service (PaaS) options are simpler, while roll-your-own options leave you to manage cluster upgrades yourself.

Upgrading PaaS clusters

Upgrading PaaS clusters is a lot simpler than updating your hand-rolled clusters. Let’s check out how the major cloud service providers update their hosted Kubernetes platforms.

With Azure, it’s relatively straightforward to manage an upgrade of both the control plane and nodes of your cluster. You can check which upgrades are available for your cluster with the following command:

az aks get-upgrades --name "myAKSCluster" --resource-group myResourceGroup --output table
Name     ResourceGroup  MasterVersion  NodePoolVersion  Upgrades
-------  -------------  -------------  ---------------  -------------------
default  gsw-k8s-aks    1.8.10         1.8.10           1.9.1, 1.9.2, 1.9.6

When upgrading AKS clusters, you have to upgrade through minor versions one at a time. AKS handles adding a new node to your cluster and manages the cordon and drain process in order to prevent any disruption to your running applications. You can see how the drain process works in a later section.
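AKS automates the cordon and drain sequence, but the same steps can be run by hand with kubectl; a sketch, where the node name `aks-nodepool1-0` is a placeholder for one of your own nodes:

```shell
# Mark the node unschedulable so no new pods land on it
kubectl cordon aks-nodepool1-0

# Evict the existing pods, honoring any PodDisruptionBudgets;
# --ignore-daemonsets is needed because DaemonSet pods cannot be evicted
kubectl drain aks-nodepool1-0 --ignore-daemonsets

# Once the node has been upgraded, allow scheduling on it again
kubectl uncordon aks-nodepool1-0
```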

You can run the upgrade command as follows. You should experiment with this feature before running on production workloads so you can understand the impact on running applications:

az aks upgrade --name myAKSCluster --resource-group myResourceGroup --kubernetes-version 1.9.6

You should see a lot of output that identifies the update, which will look something like this:

  "id": "/subscriptions/<Subscription ID>/resourcegroups/myResourceGroup/providers/Microsoft.ContainerService/managedClusters/myAKSCluster",
  "location": "eastus",
  "name": "myAKSCluster",
  "properties": {
    "accessProfiles": {
      "clusterAdmin": {
        "kubeConfig": "..."
      "clusterUser": {
        "kubeConfig": "..."
    "agentPoolProfiles": [
        "count": 1,
        "dnsPrefix": null,
        "fqdn": null,
        "name": "myAKSCluster",
        "osDiskSizeGb": null,
        "osType": "Linux",
        "ports": null,
        "storageProfile": "ManagedDisks",
        "vmSize": "Standard_D2_v2",
        "vnetSubnetId": null
    "dnsPrefix": "myK8sClust-myResourceGroup-4f48ee",
    "fqdn": "myk8sclust-myresourcegroup-4f48ee-406cc140.hcp.eastus.azmk8s.io",
    "kubernetesVersion": "1.9.6",
    "linuxProfile": {
      "adminUsername": "azureuser",
      "ssh": {
        "publicKeys": [
            "keyData": "..."
    "provisioningState": "Succeeded",
    "servicePrincipalProfile": {
      "clientId": "e70c1c1c-0ca4-4e0a-be5e-aea5225af017",
      "keyVaultSecretRef": null,
      "secret": null
  "resourceGroup": "myResourceGroup",
  "tags": null,
  "type": "Microsoft.ContainerService/ManagedClusters"

You can additionally show the current version:

az aks show --name myAKSCluster --resource-group myResourceGroup --output table
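Since AKS only moves through minor versions one at a time, you can sanity-check a target version against the current one before kicking off an upgrade. A minimal sketch in plain shell; the `can_upgrade` helper is our own illustration, not an az subcommand:

```shell
# can_upgrade CURRENT TARGET -> "yes" if TARGET is the same minor version
# or one minor version ahead of CURRENT (the AKS rule), otherwise "no"
can_upgrade() {
    cur=$(echo "$1" | cut -d. -f2)
    tgt=$(echo "$2" | cut -d. -f2)
    step=$((tgt - cur))
    if [ "$step" -ge 0 ] && [ "$step" -le 1 ]; then
        echo yes
    else
        echo no
    fi
}

can_upgrade 1.8.10 1.9.6     # one minor version ahead: allowed
can_upgrade 1.8.10 1.10.3    # skips 1.9 entirely: upgrade to 1.9.x first
```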

To upgrade a GCE cluster, you'll follow a similar procedure. In GCE's case, there are two mechanisms that allow you to update your cluster:

  • For master node upgrades, GCP deletes and recreates the master nodes, reusing the same Persistent Disk (PD) to preserve your state across upgrades
  • For your worker nodes, you'll use GCP's managed instance groups and perform a rolling upgrade of your cluster, wherein each node is destroyed and replaced to avoid interruption to your workloads

You can upgrade your cluster master to a specific version:

cluster/gce/upgrade.sh -M v1.0.2

Or, you can update your full cluster (both master and nodes) by omitting the -M flag:

cluster/gce/upgrade.sh v1.0.2

To upgrade a Google Kubernetes Engine cluster, you have a simple, user-initiated option. You’ll need to set your project ID:

gcloud config set project [PROJECT_ID]

And, make sure that you have the latest set of gcloud components:

gcloud components update

When updating Kubernetes clusters on GCP, you get the following benefits (note that you can downgrade your nodes, but you cannot downgrade your master):

  • GKE will handle node and pod drainage without application interruption
  • Replacement nodes will be recreated with the same configuration as their predecessors
  • GKE will update software for the following pieces of the cluster:
    • kubelet
    • kube-proxy
    • Docker daemon
    • OS

You can see what options your server has for upgrades with this command:

gcloud container get-server-config

Keep in mind that data stored in hostPath and emptyDir volumes will be deleted during the upgrade; only data on PDs is preserved. With GKE, you can turn on automatic node upgrades for your cluster, or you can perform them manually.
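Before upgrading, it is worth knowing which pods stand to lose data. This jq filter, applied to `kubectl get pods --all-namespaces -o json` output, lists pods that mount emptyDir or hostPath volumes; here it runs against a tiny canned example for illustration:

```shell
# Pods whose volumes use emptyDir or hostPath lose that data when the
# node is replaced; pods backed by PDs (like pd-pod below) are safe
at_risk=$(cat <<'EOF' | jq -r '.items[]
    | select(.spec.volumes[]? | has("emptyDir") or has("hostPath"))
    | "\(.metadata.namespace)/\(.metadata.name)"'
{"items":[
 {"metadata":{"namespace":"default","name":"cache-pod"},
  "spec":{"volumes":[{"name":"scratch","emptyDir":{}}]}},
 {"metadata":{"namespace":"default","name":"pd-pod"},
  "spec":{"volumes":[{"name":"data","gcePersistentDisk":{"pdName":"my-pd"}}]}}
]}
EOF
)
echo "$at_risk"
```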

To turn on automatic node upgrades, read this: https://cloud.google.com/kubernetes-engine/docs/concepts/node-auto-upgrades.

You can also create clusters with this behavior enabled by default using the --enable-autoupgrade flag:

gcloud container clusters create [CLUSTER_NAME] --zone [COMPUTE_ZONE] \
    --enable-autoupgrade

If you’d like to update your clusters manually, you can issue specific commands. It is recommended for production systems to turn off automatic upgrades and to perform them during periods of low traffic or during maintenance windows to ensure minimal disruption for your applications. Once you build confidence in updates, you may be able to experiment with auto-upgrades.

To manually kick off a node upgrade, you can run the following command:

gcloud container clusters upgrade [CLUSTER_NAME]

If you’d like to upgrade to a specific version of Kubernetes, you can add the --cluster-version flag.
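For instance, where the cluster name and version number are placeholders:

```shell
# Upgrade the nodes of my-cluster to a specific Kubernetes version
gcloud container clusters upgrade my-cluster --cluster-version 1.9.6
```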

You can see a running list of operations on your cluster to keep track of the update operation:

gcloud beta container operations list
operation-1505407677851-8039e369 CREATE_CLUSTER us-west1-a my-cluster DONE 20xx-xx-xxT16:47:57.851933021Z 20xx-xx-xxT16:50:52.898305883Z
operation-1505500805136-e7c64af4 UPGRADE_CLUSTER us-west1-a my-cluster DONE 20xx-xx-xxT18:40:05.136739989Z 20xx-xx-xxT18:41:09.321483832Z
operation-1505500913918-5802c989 DELETE_CLUSTER us-west1-a my-cluster DONE 20xx-xx-xxT18:41:53.918825764Z 20xx-xx-xxT18:43:48.639506814Z

You can then describe your particular upgrade operation with the following:

gcloud beta container operations describe [OPERATION_ID]

The previous command will tell you details about the cluster upgrade action:

gcloud beta container operations describe operation-1507325726639-981f0ed6
endTime: '20xx-xx-xxT21:40:05.324124385Z'
name: operation-1507325726639-981f0ed6
operationType: UPGRADE_CLUSTER
selfLink: https://container.googleapis.com/v1/projects/.../zones/us-central1-a/operations/operation-1507325726639-981f0ed6
startTime: '20xx-xx-xxT21:35:26.639453776Z'
status: DONE
targetLink: https://container.googleapis.com/v1/projects/.../zones/us-central1-a/clusters/...
zone: us-central1-a
