Google Cloud Platform – Administering a cluster

Let's do some hands-on labs to deploy our cluster. Our goal here is to deploy a simple Kubernetes cluster and review its different functions.

When you create a cluster, you specify the number of node instances; these instances form the default node pool. A node pool is a grouping of nodes that are identical to each other. You can create multiple node pools, or add nodes to an existing pool, as required. Node pools are useful when you need to schedule pods that require different sets of resources. For example, you can create a node pool of small compute instances and another node pool of SSD-backed storage instances. Node pools also allow you to make changes to nodes without affecting the rest of the cluster and other node pools:

  1. Log in to your GCP account and create a new project called KubeCluster.
  2. If this is the first time, Google Cloud will take some time to prepare and enable its backend API services.
  3. Click on CREATE CLUSTER. What you are doing here is deploying a master and its nodes.
  4. Pick a Name for the cluster. Pick either Zonal or Regional deployment. A Zonal deployment is tied to a single zone in a region, while a Regional deployment can spread the cluster's master VMs across multiple zones within a region for higher availability:
  5. You can select the Kubernetes version and set the machine size and the number of nodes:
  6. Click Create when done:
  7. Our cluster is being deployed and will take a few minutes. When you click on Compute Engine, you will see the three nodes being deployed that form the default node pool:
  8. In Instance groups, you will notice a new instance group created with your Kubernetes cluster name. This group contains your three nodes:
  9. Clicking on the group shows the three nodes that are currently being deployed:

When your cluster is deployed, this is what you should see:

  10. Click on the cluster to get more details and perform other actions, such as upgrades and/or deletions:

You see that the master version is 1.9.7-gke.6 and that an upgrade is available. Scrolling down shows you more details about your node pools:

  11. To add more node pools, click on EDIT and then click on the Add node pool button (a gcloud equivalent is sketched below).
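
If you prefer the command line, a cluster and an additional node pool can be created with gcloud along these lines. The cluster name, zone, machine type, and pool name here are illustrative placeholders, not values taken from the screenshots:

$ gcloud container clusters create kubecluster --zone us-central1-a --num-nodes 3
$ gcloud container node-pools create extra-pool --cluster kubecluster --zone us-central1-a --machine-type n1-standard-2 --num-nodes 2

The second command adds a pool of two n1-standard-2 nodes alongside the default pool, without touching the existing nodes.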

Let’s connect to our cluster:

  1. Click on CONNECT, which should give you the command line that you need to connect to your cluster. Remember that you need to have the Google Cloud SDK installed on your machine, along with the kubectl component for the Cloud SDK:
  2. Once done, you can run the command shown in the popup:

This gets the credentials and stores them, so now you can run the kubectl commands and manage the cluster.
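
The popup command follows this general pattern; the cluster name, zone, and project ID below are placeholders, so substitute the values shown in your own popup:

$ gcloud container clusters get-credentials kubecluster --zone us-central1-a --project my-project-id
$ kubectl get nodes

If the credentials were stored correctly, kubectl get nodes should list the three nodes from the default node pool.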

Let’s first deploy some workloads:

  1. In the side panel menu under Kubernetes, click on Workloads.
  2. Here you can deploy workloads. Also, take a moment to review the different system workloads that make Kubernetes run:

  3. Clicking on Deploy shows the deployment pane. I will simply keep the defaults in place and click Deploy. You can add more containers (pods or workloads) to this deployment or stay with the default nginx:latest deployment. For custom images, you can upload them to the Container Registry (a kubectl equivalent is sketched after this list).
  4. Once deployed, you will see your containers (workloads or pods) deployed and in a running state:
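
If you would rather create a similar deployment from the terminal, something along these lines works; the deployment name mywebapp is a placeholder and not the name the console generates:

$ kubectl create deployment mywebapp --image=nginx:latest
$ kubectl scale deployment mywebapp --replicas=3

This gives you an nginx:latest deployment with three replicas, roughly matching the console's defaults.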

At this point, you have a deployment, which is a replicated, stateless application running on your cluster. You can create stateful applications as well. Stateful applications save the internal state of the application and require persistent storage mapped to your pods. A container's root filesystem is not suitable for storing persistent data. Remember that containers are disposable entities: depending on the scenario, a cluster manager may delete or reschedule any container at any time. Any data stored locally in a container will be lost, so local storage is not suitable for holding an application's state.

This is why we create PersistentVolumes (PVs) and PersistentVolumeClaims (PVCs) to store persistent data. A PV is a storage unit in a cluster that can be dynamically provisioned by Kubernetes or manually provisioned by an administrator. Persistent volumes can be backed by a GCP persistent disk, an NFS share, and so on. A PersistentVolumeClaim is a request for storage by a user that can be satisfied by a PersistentVolume. For example, if a persistent volume is 250 GB, a user can create a PersistentVolumeClaim of 10 GB if that is all their application needs. This claim can then be mapped to a mount point in a pod (container or workload).

The important thing to remember is that PersistentVolumes and PersistentVolumeClaims are independent of a pod's life cycle. Events such as restarts and deletions of pods will not delete any persistent data stored on these volumes. For the most part, you will not have to create PersistentVolumes yourself: Kubernetes will automatically provision a persistent disk for you when you create a PersistentVolumeClaim.
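
For reference, a manually provisioned PersistentVolume backed by an existing Compute Engine persistent disk would look roughly like the following sketch. The disk name my-existing-disk is an assumption; such a disk would have to be created in GCP beforehand:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: myvolume
spec:
  capacity:
    storage: 250Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: my-existing-disk
    fsType: ext4

In this lab, however, we will rely on dynamic provisioning and only write a claim.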

Let's create a PersistentVolumeClaim using Kubernetes and let it provision the PersistentVolume for us. To do this, open your terminal so we can write a YAML file and apply it with the kubectl commands:

kind: PersistentVolumeClaim 
apiVersion: v1 
metadata: 
  name: myvolumeclaim 
spec: 
  accessModes: 
    - ReadWriteOnce 
  resources: 
    requests: 
      storage: 250Gi 

Save this code in a file with the name pvc.yaml.

In the terminal, type:
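
Assuming the file was saved as pvc.yaml, creating the claim typically looks like this (kubectl create -f pvc.yaml works as well):

$ kubectl apply -f pvc.yaml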

You can now see that a volume claim has been created and bound to a volume 250 GB in size. Kubernetes automatically created this disk for us when we requested the claim, which reduces management overhead. The claim also has ReadWriteOnce access, and the storage class is standard, which is the default storage class used when none is specified.
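
You can confirm the same details from the terminal; these are standard kubectl commands rather than output captured from this lab:

$ kubectl get pvc myvolumeclaim
$ kubectl get storageclass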

Three different access modes are supported:

  • ReadWriteOnce: The volume can be mounted as read-write by a single node
  • ReadOnlyMany: The volume can be mounted as read-only by many nodes
  • ReadWriteMany: The volume can be mounted as read-write by many nodes; volumes backed by Compute Engine persistent disks do not support this mode

Under Kubernetes Engine, click on Storage. You will see a PVC created called myvolumeclaim:

The Storage classes tab shows the storage class information:

Go back to your Kubernetes clusters view and click on your cluster. Go to the Storage tab:

You can see that a persistent volume has been provisioned. Now, any new container that gets deployed will have this volume presented to it and will have access to all the data.

Notice the active revisions with a revision number and a name. This is the replica set. A replica set ensures that a specified number of pod replicas are running at any given time. You can define the desired, minimum, and maximum number of replicas, and Kubernetes will ensure that pods are deployed to satisfy those conditions. It is important to remember that a deployment manages replica sets.
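
You can inspect the replica set and change the replica count from the terminal as well; <deployment-name> below is a placeholder for whatever your deployment is called:

$ kubectl get replicasets
$ kubectl scale deployment <deployment-name> --replicas=3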

Listing the pods in the terminal shows three pods running.
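
That listing comes from the standard command:

$ kubectl get pods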

One more step is to mount the newly created volume to our pods. For this, we will need to do the following steps:

  1. Edit the deployment YAML file so we can add the volume mount point and the volume claim. This takes effect when the pods (containers or workloads) are deployed again.
  2. Kubernetes will automatically deploy a new container once the YAML is updated and saved. Alternatively, we can redeploy the containers ourselves (you can also redeploy the entire deployment if needed).
  3. We will then log in to a specific container to see whether our disk was mounted and write a sample file to it.
  4. Let's edit the YAML file for our already created deployment. In an ideal environment, you will have all the disks needed by your application created and mapped as part of your application deployment, so editing the YAML is needed only if the application has added dependencies.
  5. Get on your terminal and type:
$ kubectl get deployments 
NAME         DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE 
mywebapp-1   1         1         1            1           5m 
For this exercise, I recreated my deployment and called it mywebapp-1.
  6. Let's edit this deployment so we can map it to the volume claim we created earlier:
$ kubectl edit deployment mywebapp-1 
  7. This opens up the YAML for the mywebapp-1 deployment. Scroll down to the spec: containers: section and add the following lines:
    volumeMounts: 
    - mountPath: /mnt/ 
      name: myvol-mount 
  8. Under the spec: section, map the volumeMounts to a volume claim:
  volumes: 
  - name: myvol-mount 
    persistentVolumeClaim: 
      claimName: myvolumeclaim 

Notice that the claimName is the same claim we created earlier. Also, make sure the name in both volumeMounts and volumes is the same.

The following is what the final YAML block should look like:

spec: 
  containers: 
  - image: nginx:latest 
    imagePullPolicy: Always 
    name: nginx 
    resources: {} 
    terminationMessagePath: /dev/termination-log 
    terminationMessagePolicy: File 
    volumeMounts: 
    - mountPath: /mnt/ 
      name: myvol-mount 
  dnsPolicy: ClusterFirst 
  restartPolicy: Always 
  schedulerName: default-scheduler 
  securityContext: {} 
  terminationGracePeriodSeconds: 30 
  volumes: 
  - name: myvol-mount 
    persistentVolumeClaim: 
      claimName: myvolumeclaim 

Save and exit the editor using :wq!. Kubernetes performs a YAML check, and if it fails, you will see an error.

Once successful, you will see a deployment mywebapp-1 edited message.

You should see Kubernetes redeploying the containers once the YAML is saved.
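
You can also watch the rollout from the terminal; this is a standard kubectl command rather than one taken from the lab screenshots:

$ kubectl rollout status deployment mywebapp-1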

Let’s log in to our container to see whether the disk was mounted:

$ kubectl get pods 
NAME                          READY     STATUS    RESTARTS   AGE 
mywebapp-1-68fb69df68-4tcpp   1/1       Running   0          5m 

We see one pod running. Let’s quickly log in to this container. We should now see the disk mounted:

In a production environment, you should never log in to the shell of the container.
$ kubectl exec -it mywebapp-1-68fb69df68-4tcpp -- /bin/bash 

This lets us open a bash shell inside our container:

root@mywebapp-1-68fb69df68-4tcpp:/# ls 
bin   dev  home  lib64  mnt  proc  run   srv  tmp  var 
boot  etc  lib   media  opt  root  sbin  sys  usr 
 
root@mywebapp-1-68fb69df68-4tcpp:/# df -h 
Filesystem      Size  Used Avail Use% Mounted on 
overlay          95G  2.7G   92G   3% / 
tmpfs           1.9G     0  1.9G   0% /dev 
tmpfs           1.9G     0  1.9G   0% /sys/fs/cgroup 
/dev/sdb    246G   61M  233G   1% /mnt 
/dev/sda1        95G  2.7G   92G   3% /etc/hosts 
shm              64M     0   64M   0% /dev/shm 
tmpfs           1.9G   12K  1.9G   1% /run/secrets/kubernetes.io/serviceaccount 
tmpfs           1.9G     0  1.9G   0% /sys/firmware 

As you can see, the volume is now mounted in the container. If this container is deleted, everything except the volume will be deleted with it. Earlier, I stored a sample file called helloWorld in this volume; let's see whether it exists:

root@mywebapp-1-68fb69df68-4tcpp:/# ls mnt 
helloWorld  lost+found 
 
root@mywebapp-1-68fb69df68-4tcpp:/# cat /mnt/helloWorld 
Hello World! 

You have now created and attached a persistent volume to your container for stateful applications!
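
As a quick sanity check (not part of the original lab), you can delete the pod and confirm that the data survives on the replacement pod that the deployment creates; the new pod's name below is a placeholder:

$ kubectl delete pod mywebapp-1-68fb69df68-4tcpp
$ kubectl get pods
$ kubectl exec -it <new-pod-name> -- cat /mnt/helloWorld

The new pod should still show the helloWorld file with its contents intact, because the PersistentVolume outlives the pod.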
