Kubernetes – Cluster life cycle

There are a few more key items we should cover so that you’re fully equipped to create highly available Kubernetes clusters. Let’s discuss how you can use admission controllers, workloads, and custom resource definitions to extend your cluster.

Admission controllers

Admission controllers are pieces of Kubernetes code that intercept requests to the Kubernetes API server after they have been authenticated and authorized. There are standard admission controllers included with the core Kubernetes system, and people also write their own. Two controllers are more important than the rest:

  • The MutatingAdmissionWebhook is responsible for calling webhooks that mutate, in serial, a given request. This controller only runs during the mutating phase of the admission process. You can use a controller like this to build business logic into your cluster and customize admission logic for operations such as CREATE, DELETE, and UPDATE. You can also do things like automate the provisioning of storage: say that a deployment creates a PersistentVolumeClaim; a webhook can automate the provisioning of the StorageClass in response. With the MutatingAdmissionWebhook, you can also do things such as injecting a sidecar into a pod before it is created.
  • The ValidatingAdmissionWebhook runs in the validation phase and calls any webhooks that will validate a given request. Here, webhooks are called in parallel, in contrast to the serial nature of the MutatingAdmissionWebhook. It is key to understand that none of the webhooks it calls are allowed to mutate the original object. An example of validating-phase work is incrementing a quota, which needs to see each object in its final, post-mutation form (see the registration sketch after this list).
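
To make this concrete, here’s a minimal sketch of how a validating webhook gets registered with the API server. The webhook name, backing Service, path, and CA bundle are placeholders you would replace with your own:

apiVersion: admissionregistration.k8s.io/v1beta1
kind: ValidatingWebhookConfiguration
metadata:
  name: pod-policy.example.com         # hypothetical webhook name
webhooks:
- name: pod-policy.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CREATE"]
    resources: ["pods"]
  clientConfig:
    service:
      namespace: webhook-system        # hypothetical namespace hosting your webhook server
      name: pod-policy-svc             # hypothetical Service in front of it
      path: "/validate"
    caBundle: <base64-encoded CA certificate>
  failurePolicy: Fail                  # reject requests if the webhook can't be reached

The API server POSTs an AdmissionReview to the Service at the given path, and the webhook answers with an allowed or denied verdict.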

Admission controllers and their mutating and validating webhooks are very powerful and, importantly, provide Kubernetes operators with additional control without having to recompile binaries such as kube-apiserver. The most powerful example is Istio, which uses webhooks to inject its Envoy sidecar in order to implement load balancing, circuit breaking, and deployment capabilities. You can also use webhooks to restrict which namespaces can be created in multi-tenant systems.

You can think of mutation as a change and validation as a check in the Kubernetes system. As the associated ecosystem of software grows, it will become increasingly important from a security and validation standpoint to use these types of capabilities. You can use controllers, with their change and check capabilities, to do things such as overriding image pull policies in order to enable or prevent certain images from being used on your cluster.
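
In fact, Kubernetes ships a built-in mutating controller for exactly that image pull scenario: AlwaysPullImages rewrites every container’s imagePullPolicy to Always, which prevents pods on a multi-tenant cluster from reusing images another tenant has already pulled to a node. It’s enabled like any other plugin:

kube-apiserver --enable-admission-plugins=AlwaysPullImages,<your other plugins>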

These admission controllers are essentially part of the cluster control plane, and can only be run by cluster administrators. 

Here’s a very simple example, in which an admission controller checks that a namespace exists.

NamespaceExists: This admission controller checks all requests on namespaced resources other than Namespace itself. If the namespace referenced from a request doesn’t exist, the request is rejected. You can read more about this at https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/#namespaceexists.

First, let’s spin up Minikube for our cluster and check which namespaces exist:

master $ kubectl get namespaces
NAME          STATUS    AGE
default       Active    23m
kube-public   Active    23m
kube-system   Active    23m

Great! Now, let’s try and create a simple deployment, where we put it into a namespace that doesn’t exist. What do you think will happen?

master $ kubectl run nodejs --image nodejs --namespace not-here
Error from server (NotFound): namespaces "not-here" not found

So, why did that happen? If you guessed that the NamespaceExists admission controller picked up on that request and rejected it, you’d be correct!
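
As a quick sanity check, creating the namespace first lets the same command through (a sketch; the exact output of kubectl run varies by Kubernetes version):

master $ kubectl create namespace not-here
namespace "not-here" created
master $ kubectl run nodejs --image nodejs --namespace not-here
deployment.apps "nodejs" created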

Using admission controllers

You can turn admission controllers on and off with two different kube-apiserver flags. Depending on how your server was configured and how you started kube-apiserver, you may need to make changes against systemd, or against a manifest that you created to start up the API server in the first place.

Generally, to enable specific admission plugins, you’ll start the server with the following flag:

kube-apiserver --enable-admission-plugins=<comma-separated list of plugins>

And to disable them, you’ll change that to the following:

kube-apiserver --disable-admission-plugins=<comma-separated list of plugins>

If you’re running Kubernetes 1.10 or later, there is a recommended set of admission controllers. You can enable them with the following:

kube-apiserver --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
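
Where you put these flags depends on your installer. On a kubeadm-provisioned cluster, for instance, they live in the API server’s static pod manifest, and the kubelet restarts the server automatically when the file changes. A sketch, assuming the kubeadm default path:

# /etc/kubernetes/manifests/kube-apiserver.yaml (kubeadm default)
spec:
  containers:
  - command:
    - kube-apiserver
    - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota
    # ...leave the rest of the existing flags in place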

In earlier versions of Kubernetes, there weren’t separate concepts of mutating and validating admission, so you’ll have to read the documentation to understand the implications of using admission controllers on earlier versions of the software.

The workloads API

The workloads API is an important concept to grasp in order to understand how managing objects has stabilized over the course of many Kubernetes releases. In the early days of Kubernetes, the pod was the core workload: a group of containers that shared CPU, networking, storage, and life cycle events. Kubernetes then introduced concepts such as replication, deployments, and labels, which helped operators manage 12-factor applications. StatefulSets were introduced as Kubernetes operators moved into stateful workloads.

Over time, the concept of the Kubernetes workload became a collective of several parts:

  • Pods
  • ReplicationController
  • ReplicaSet
  • Deployment
  • DaemonSet
  • StatefulSet

These pieces are the current state of the art for orchestrating a reasonable swath of workload types in Kubernetes, but unfortunately the API was spread across many different parts of the Kubernetes codebase. The solution was many months of hard work to centralize all of this code into the apps/v1 API, after making many backwards-compatibility-breaking changes. Several key decisions were made in the move to apps/v1:

  • Default selector behavior: In apps/v1, unspecified label selectors are no longer defaulted from the pod template labels; you must specify spec.selector explicitly (see the sketch after this list)
  • Immutable selectors: While changing selectors is useful in some cases, such as promoting canary-type deployments and relabeling pods, it has always been against Kubernetes recommendations to mutate a selector, so in apps/v1 selectors are immutable after creation
  • Default rolling updates: The Kubernetes programmers wanted RollingUpdate to be the default update strategy, and now it is
  • Garbage collection: In 1.9 and apps/v1, garbage collection is more aggressive, and you won’t see pods hanging around any more after DaemonSets, ReplicaSets, StatefulSets, or Deployments are deleted
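
A minimal apps/v1 Deployment (the name and image here are arbitrary) shows the first three decisions in practice: the selector is spelled out, it must match the template labels, and RollingUpdate applies without being named:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # arbitrary example name
spec:
  replicas: 2
  selector:                     # required in apps/v1; immutable once created
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx              # must match spec.selector above
    spec:
      containers:
      - name: nginx
        image: nginx:1.15
  # no strategy block needed: apps/v1 defaults to RollingUpdate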

If you’d like more insight into these decisions, you can join the Apps Special Interest Group, which can be found here: https://github.com/kubernetes/community/tree/master/sig-apps.

For now, you can consider the workloads API to be stable and backwards compatible.

Custom resource definitions

The last piece we’ll touch on in our HA chapter is custom resources. These are an extension of the Kubernetes API, and complement the admission controllers we discussed previously. There are several methods for adding custom resources to your cluster, and we’ll discuss them here.

As a refresher, keep in mind that a non-custom resource in Kubernetes is an endpoint in the Kubernetes API that stores a collection of similar API objects. You can use custom resources to enhance a particular Kubernetes installation; we’ll see examples of this with Istio in later chapters, which uses CRDs to put its prerequisites into place. Custom resources can be created, modified, and removed with kubectl, just like built-in objects.
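
For example, once a definition is installed, the familiar kubectl verbs work against it. A sketch, using the crontabs resource we’ll define shortly:

kubectl get crds                                  # list installed custom resource definitions
kubectl get crontabs                              # list objects of the custom type
kubectl delete crd crontabs.stable.example.com    # remove the definition (and its objects with it)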

When you pair custom resources with controllers, you have the ability to create a declarative API, which allows you to set the desired state for your resources outside of the cluster’s own life cycle. We touched on an example of the custom controller and custom resource pattern earlier in this book with the operator pattern. You have a couple of options when deciding whether or not to create a custom resource in Kubernetes; the documentation provides a decision table to help you choose, at https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#should-i-add-a-custom-resource-to-my-kubernetes-cluster.

A key point when deciding to write a custom resource is to ensure that your API is declarative. If it’s declarative, it’s a good fit for a custom resource. You can create custom resources in two ways: with custom resource definitions (CRDs) or through API aggregation. API aggregation requires programming, and we won’t be getting into that topic in this chapter, but you can read more about it here: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/apiserver-aggregation/.

Using CRDs

While aggregated APIs are more flexible, CRDs are easier to use. Let’s try to create the example CRD from the Kubernetes documentation.

First, you’ll need a cluster. We’ll use a GKE cluster on GCP here, but you could instead use one of your own clusters or a playground such as Katacoda. Let’s jump into a Google Cloud Shell and give this a try.

Once on your GCP home page, click the Cloud Shell icon in the console’s top navigation bar.

Once you’re in the shell, create a quick Kubernetes cluster. You may need to modify the cluster version if this older release is no longer supported:

gcloud container clusters create gsk8s \
  --cluster-version 1.10.6-gke.2 \
  --zone us-east1-b \
  --num-nodes 1 \
  --machine-type n1-standard-1
<lots of text>
...
Creating cluster gsk8s...done.
Created [https://container.googleapis.com/v1/projects/gsw-k8s-3/zones/us-east1-b/clusters/gsk8s].
To inspect the contents of your cluster, go to: https://console.cloud.google.com/kubernetes/workload_/gcloud/us-east1-b/gsk8s?project=gsw-k8s-3
kubeconfig entry generated for gsk8s.
NAME   LOCATION    MASTER_VERSION  MASTER_IP      MACHINE_TYPE   NODE_VERSION  NUM_NODES  STATUS
gsk8s  us-east1-b  1.10.6-gke.2    35.196.63.146  n1-standard-1  1.10.6-gke.2  1          RUNNING

Next, add the following text to resourcedefinition.yaml:

apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # name must match the spec fields below, and be in the form: <plural>.<group>
  name: crontabs.stable.example.com
spec:
  # group name to use for REST API: /apis/<group>/<version>
  group: stable.example.com
  # version name to use for REST API: /apis/<group>/<version>
  version: v1
  # either Namespaced or Cluster
  scope: Namespaced
  names:
    # plural name to be used in the URL: /apis/<group>/<version>/<plural>
    plural: crontabs
    # singular name to be used as an alias on the CLI and for display
    singular: crontab
    # kind is normally the CamelCased singular type. Your resource
    # manifests use this.
    kind: CronTab
    # shortNames allow shorter string to match your resource on the CLI
    shortNames:
    - cront

Once you’ve added that, we can create it:

anonymuse@cloudshell:~ (gsw-k8s-3)$ kubectl apply -f resourcedefinition.yaml
customresourcedefinition "crontabs.stable.example.com" created

Great! This means that our RESTful endpoint will now be available at the following URI: /apis/stable.example.com/v1/namespaces/*/crontabs/. We can now use this endpoint to manage custom objects, which is the other half of the value a CRD provides.
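
If you’d like to see the raw endpoint for yourself, you can proxy the API server locally and query it (kubectl proxy listens on port 8001 by default):

anonymuse@cloudshell:~ (gsw-k8s-3)$ kubectl proxy &
anonymuse@cloudshell:~ (gsw-k8s-3)$ curl http://127.0.0.1:8001/apis/stable.example.com/v1/namespaces/default/crontabs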

Let’s create a custom object in a file called os-crontab.yaml so that we can insert some arbitrary data into the object. In our case, we’re going to add OS metadata for cron and the crontab interval.

Add the following:

apiVersion: "stable.example.com/v1"
kind: CronTab
metadata:
  name: cron-object-os-01
spec:
  intervalSpec: "* * 8 * *"
  os: ubuntu

Once the file is saved, create the object:

anonymuse@cloudshell:~ (gsw-k8s-3)$ kubectl create -f os-crontab.yaml
crontab "cron-object-os-01" created

Once you’ve created the resource, you can get it as you would any other Deployment, StatefulSet, or other Kubernetes object:

anonymuse@cloudshell:~ (gsw-k8s-3)$ kubectl get crontab
NAME                AGE
cron-object-os-01   38s

If we inspect the object, we would expect to see a bunch of standard configuration, plus the intervalSpec and os data that we encoded into it. Let’s check and see if it’s there.

We can use the alternative name, cront, that we gave in the CRD in order to look it up. You’ll find our data in the spec section near the bottom of the output. Nice work!

anonymuse@cloudshell:~ (gsw-k8s-3)$ kubectl get cront -o yaml
apiVersion: v1
items:
- apiVersion: stable.example.com/v1
  kind: CronTab
  metadata:
    clusterName: ""
    creationTimestamp: 2018-09-03T23:27:27Z
    generation: 1
    name: cron-object-os-01
    namespace: default
    resourceVersion: "2449"
    selfLink: /apis/stable.example.com/v1/namespaces/default/crontabs/cron-object-os-01
    uid: eb5dd081-afd0-11e8-b133-42010a8e0095
  spec:
    intervalSpec: '* * 8 * *'
    os: ubuntu
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
