Google Cloud Platform – Kubernetes engine


One of the most interesting and sought-after features of Google Cloud Platform is its Kubernetes Engine. Google Kubernetes Engine provides a way to deploy, manage, and scale your containerized applications. Kubernetes provides an environment made up of multiple Compute Engine instances that work together as a container cluster. Kubernetes Engine gives you all the benefits of running clusters, such as load balancing, automatic scaling, and automatic upgrades.

Kubernetes is a hot topic and one of the most sought-after skills in the market. Learning Kubernetes the GCP way is a valuable skill.

In Kubernetes, a container cluster consists of one or more cluster masters and multiple machine instances called nodes. These machine instances are Compute Engine instances that act as cluster nodes. All your containers run on top of this container cluster. You can create multiple container clusters to run different containerized applications as needed:

All the control plane processes run on the cluster master. This includes the Kubernetes API server, the scheduler, and the core resource controllers. The Kubernetes Engine manages the cluster master, including upgrades to the Kubernetes version running on it. The cluster master's API server acts as the hub for all communication with the cluster. All internal processes act as clients of the API server, which makes it the single source of truth for the entire cluster.
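Once a cluster is up and kubectl is configured against it, you can see the API server acting as that hub. A minimal sketch (assuming kubectl is already pointed at your cluster):

```shell
# Show the cluster master's API server endpoint -- every kubectl
# command (and every internal component) talks to this endpoint.
kubectl cluster-info

# Each of these queries is answered by the API server, the single
# source of truth for cluster state.
kubectl get nodes
kubectl get pods --all-namespaces
```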

Upgrades can be set to automatic or manual depending on your preference.

The cluster master is responsible for scheduling workloads (containerized applications) onto the cluster's nodes, and for managing their life cycle, scaling, and upgrades.
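To see this scheduling in action, you could create a small workload and let the master place its Pods onto the nodes. A sketch, assuming kubectl is configured for your cluster (the deployment name and image are arbitrary examples):

```shell
# Create a Deployment and scale it to three replicas; the cluster
# master's scheduler decides which node each replica runs on.
kubectl create deployment hello-web --image=nginx
kubectl scale deployment hello-web --replicas=3

# The NODE column shows where the scheduler placed each Pod.
kubectl get pods -o wide
```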

Let’s look at how a Kubernetes cluster is deployed using the Kubernetes engine:

  1. Log in to your GCP portal and click on the top menu. Make sure a project is selected.
  2. Click on Kubernetes Engine:

If this is the first time deploying Kubernetes, GCP takes a few minutes to initialize the service.

  3. Select Kubernetes clusters and click Create Cluster:
  4. Enter a Name for the cluster and a Description. Clusters can be either Zonal or Regional (beta). A zonal cluster runs a single cluster master in one zone, which can manage nodes deployed across multiple zones in that region. A regional cluster replicates the cluster master across multiple zones within a region, with nodes also spread across those zones.

Cluster Version here shows you the version of Kubernetes that will get deployed on the cluster master. You can choose a different version if you like:

Google offers two supported node images that are deployed on the Compute Engine instances (nodes) that make up this Kubernetes cluster. You can pick either Container-Optimized OS (cos), which is maintained and managed by Google, or Ubuntu:

Next, select the Size of the cluster. Entering 3 here causes the cluster to deploy three nodes (Compute Engine instances), and you will be billed for all three. These nodes are deployed with the specifications you picked under machine type, and you will see the total cores and total memory for the three nodes combined. Also note that the cluster instances (nodes) use ephemeral local disks; if a node is lost, so is the data on its disk. If your containers need persistent storage, you will need to attach persistent disks:
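Because node boot disks are ephemeral, data that must outlive a node belongs on a persistent disk. On Kubernetes Engine, a PersistentVolumeClaim provisions a Compute Engine persistent disk through the default storage class; a minimal sketch (the claim name and size are arbitrary examples):

```shell
# Request a 10 GiB volume; the default storage class provisions a
# Compute Engine persistent disk and binds it to the claim, so the
# data survives node loss.
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
```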

Automatic node upgrades allow the Kubernetes version on your nodes to be upgraded automatically whenever an upgrade is available. You can Enable or Disable this option during deployment or at a later time:
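The same option can also be toggled later from the command line; a sketch using gcloud (the cluster name, node pool name, and zone are placeholders):

```shell
# Enable automatic node upgrades on the default node pool of an
# existing cluster (names and zone are example values).
gcloud container node-pools update default-pool \
  --cluster=cluster-1 \
  --zone=us-central1-a \
  --enable-autoupgrade
```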

The Automatic node repair (beta) feature helps you to keep the nodes in the cluster in a functional state. The Kubernetes engine makes periodic checks on the health state of each of the cluster nodes. If the Kubernetes engine detects a node failure, it initiates repair processes that involve recreating the node:

Stackdriver Logging and Stackdriver Monitoring allow you to capture all the logs and monitor the cluster performance:

Under the Advanced options, you can opt for more customizations, including additional zones, autoscaling, preemptible nodes, and even boot disk size. In addition, you can define custom networks, select SSD disks, and set a Maintenance window (beta).

  5. Click Create to create your cluster:

You will see your cluster along with three nodes deployed in the compute engine:
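The same deployment can be driven entirely from the command line; a sketch assuming the gcloud SDK is installed and authenticated (the cluster name, zone, and machine type are example values):

```shell
# Create a three-node zonal cluster, mirroring the console steps above.
gcloud container clusters create cluster-1 \
  --zone=us-central1-a \
  --num-nodes=3 \
  --machine-type=n1-standard-1

# Fetch credentials so kubectl talks to the new cluster master.
gcloud container clusters get-credentials cluster-1 --zone=us-central1-a

# List the three nodes (Compute Engine instances).
kubectl get nodes
```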

Each node is managed by the cluster master (cluster-1 in the preceding example) and receives updates as needed. Each node runs the services necessary to support the Docker containers that make up the cluster's workloads: the Docker runtime and the Kubernetes node agent (kubelet), which communicates with the master and is responsible for starting and running Docker containers on that node. These services (along with other agents that handle tasks such as log collection and intra-cluster network connectivity) consume some CPU and memory on each node. The overhead is minimal, but it needs to be considered when deciding on the resources needed for your containers.
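You can see this per-node overhead directly: Kubernetes reports both a node's raw Capacity and the Allocatable portion left over for your containers. A sketch:

```shell
# Compare Capacity (total CPU/memory on the node) with Allocatable
# (what remains for Pods after the kubelet and system daemons
# reserve their share).
kubectl describe nodes | grep -A 6 -E "^(Capacity|Allocatable):"
```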

At this time, the Kubernetes engine only supports Docker containers on the nodes.
