Google Cloud Platform – Kubernetes concepts


In the first chapter, we briefly looked at Kubernetes, its concepts, and even deployed a cluster. Before we dive deeper into Kubernetes, let’s review its concepts once again.

Kubernetes provides a managed environment for deploying and managing your containerized applications. Multiple Google Compute Engine instances are grouped together to form a container cluster that is managed by the Kubernetes engine. It is important to note that the Kubernetes engine only works with containerized applications, which means you must package your applications into containers before you deploy them on the Kubernetes engine. In Kubernetes, these containers are called workloads. At the time of this writing, Kubernetes only supports Docker containers.
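Since the Kubernetes engine only runs containerized workloads, an application has to be wrapped in a container image before it can be deployed. As a rough sketch (the application, file names, and base-image version here are illustrative assumptions, not taken from the text), a minimal Dockerfile for a hypothetical Node.js web service might look like this:

```dockerfile
# Hypothetical Node.js web service packaged as a Docker container image.
FROM node:18-slim

# Install dependencies first so Docker can cache this layer.
WORKDIR /app
COPY package*.json ./
RUN npm install --production

# Copy the application source and declare the port it listens on.
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```

Building this image and pushing it to a registry is what turns the application into a workload the Kubernetes engine can schedule.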


The basic architecture of the Kubernetes engine is made up of a cluster master and worker machines called nodes. The master and the node machines together form the cluster orchestration system. The cluster master is the core of the Kubernetes engine and runs the control plane processes, which include the API server, scheduler, and core resource controllers. The cluster master’s API server process is the hub of all communication, as all interactions happen via Kubernetes API calls. This makes the API server the single source of truth for the entire cluster.


A container cluster typically contains one or more nodes: Compute Engine VMs, known as worker machines, that run your containerized workloads. By default, these virtual machines are of the standard VM type with 1 virtual CPU and 3.75 GB of RAM, but these values are customizable. The nodes (worker machines) are automatically created by the Kubernetes engine when you create a cluster. The master controls each of these nodes, and the Kubernetes engine can perform automatic repairs and upgrades on them. All the services necessary to support Docker containers run on these nodes. The nodes also run the Kubernetes node agent (kubelet), which communicates with the master and starts and stops containers.
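As an illustrative command-line sketch (the cluster name, zone, node count, and machine type are assumptions, not values from the text), creating a cluster and overriding the default node size could look like this with the gcloud CLI:

```shell
# Create a hypothetical three-node cluster with a custom machine type.
gcloud container clusters create demo-cluster \
    --zone us-central1-a \
    --num-nodes 3 \
    --machine-type n1-standard-2

# Fetch credentials so that kubectl can talk to the new cluster's master.
gcloud container clusters get-credentials demo-cluster --zone us-central1-a
```

The Kubernetes engine then provisions the worker VMs for you; you never create the node machines by hand.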

One thing to remember is that every node reserves some resources to run Kubernetes services, so there will be a difference between the node’s total resources and the node’s allocatable resources.
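You can see this difference on a live cluster. A sketch of how to check it (the node name below is hypothetical; list your own nodes first):

```shell
# List the nodes in the cluster.
kubectl get nodes

# Describe one node: the output includes a "Capacity" section (the node's
# total CPU and memory) and an "Allocatable" section (what remains for
# your pods after Kubernetes system services take their share).
kubectl describe node gke-demo-default-pool-1a2b3c4d-node-1
```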

Let’s describe the different components of this architecture:

  • Kubernetes master: This functions as the API server, receiving requests from developers. These requests can range from creating new container pods to adding more nodes (Compute Engine virtual machines).
  • etcd: This functions as the backing store for the API server; it is a distributed key-value system that holds the cluster state. Ideally, etcd runs on separate machines to ensure availability.
  • Controller manager: This has many controller functions, such as the replication controller, endpoint controller, and namespace controller. The primary function of the controller manager is to watch the shared state of the cluster and move it toward the desired state. For example, the replication controller ensures that the right number of replica pods is running for each application.
  • Scheduler: This takes care of pod placement on the nodes and ensures a balanced deployment.
  • Node components: These run on every node that is part of the cluster. At a minimum, a node contains a container runtime, such as Docker, to run the containers.
  • Kubelet service: This executes containers (pods) on the node and ensures their health.
  • Kube-proxy: This functions as a network proxy or load balancer that allows for service abstraction.

Kubernetes nodes run pods. A pod is the most basic object in Kubernetes. A pod consists of one or more containers, although typically a single container runs per pod. When multiple containers run in one pod, they function together as a single entity. In essence, a pod can be visualized as a self-contained logical host that has everything in place for its containers to run. A pod also gives its containers access to networking and storage resources. Each pod is automatically assigned a unique IP address, and the containers in a pod share this IP address and its network ports.
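As a sketch of these ideas (the pod name, image names, and versions are illustrative assumptions), a manifest for a pod with two containers sharing the pod’s IP address and ports might look like this:

```yaml
# Hypothetical pod with two containers that share one network namespace:
# the sidecar reaches the web container via localhost on the shared IP.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
  - name: probe-sidecar
    image: busybox:1.36
    # Poll the web container over the shared loopback interface.
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80/ >/dev/null; sleep 30; done"]
```

Because both containers share the pod’s network namespace, the sidecar can reach the web server on localhost without any service discovery.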

It is important to remember that pods are ephemeral: once a pod is deleted, it cannot be recovered or brought back. If a host running multiple pods fails, the host is redeployed and a controller determines whether its pods need to be rescheduled. Typically, you do not create or delete pods yourself; instead, you deploy a controller that manages the creation and deletion of pods. The controller manages the entire process for you, including operations such as rolling updates across all pods.
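As an illustrative sketch of such a controller (the names, labels, and replica count are assumptions), a Deployment — a modern successor to the replication controller mentioned above — that keeps three replica pods running could be declared like this:

```yaml
# Hypothetical Deployment: the controller watches the cluster state and
# creates or deletes pods so that three replicas are always running,
# and it performs rolling updates when the pod template changes.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25
```

Deleting one of these pods by hand would simply cause the controller to create a replacement, which is why pods are normally managed through controllers rather than directly.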
