
Kubernetes – Managing applications

At the time of this book’s writing, new software has emerged that hopes to tackle the problem of managing Kubernetes applications from a holistic perspective. As application installation and ongoing management grow more complex, software such as Helm aims to ease the pain for cluster operators by letting them create, version, publish, and export application installation and configuration for other operators. You may have also heard the term GitOps, which uses Git as the source of truth from which all Kubernetes instances can be managed.

While we’ll dive deeper into Continuous Integration and Continuous Delivery (CI/CD) in the next chapter, let’s see what advantages can be gained from package management within the Kubernetes ecosystem. First, it’s important to understand what problem we’re trying to solve when it comes to package management in Kubernetes. Helm and programs like it have a lot in common with package managers such as apt, yum, rpm, dpkg, Aptitude, and Zypper. These pieces of software helped users cope during the early days of Linux, when programs were simply distributed as source code, with installation documents, configuration files, and the necessary moving pieces left to the operator to set up. These days, of course, Linux distributions ship a great many pre-built packages, which are made available to the user community for consumption on their operating system of choice. In many ways, we’re in those early days of software management for Kubernetes, with many different methods for installing software within many different layers of the Kubernetes system. But are there other reasons for wanting a GNU/Linux-style package manager for Kubernetes? Perhaps you feel confident that, by using containers, or Git and configuration management, you can manage applications on your own.

Keep in mind that there are several important dimensions to consider when it comes to application management in a Kubernetes cluster:

  1. You want to be able to leverage the experience of others. When you install software in your cluster, you want to be able to take advantage of the expertise of the teams that built the software you’re running, or of experts who’ve configured it to perform at its best.
  2. You want a repeatable, auditable method of maintaining the application-specific configuration of your cluster across environments (see the sketch after this list). It’s difficult to build in environment-specific memory settings, for example, across environments using simpler tools such as cURL, or within a makefile or other package compilation tools.
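
As a quick sketch of what that looks like with Helm, a chart’s configuration can be kept in per-environment values files, checked into source control, and applied at install time. The file names (values-staging.yaml, values-production.yaml) and the resources.requests.memory key below are hypothetical; the exact keys depend on the chart you’re installing:

# Hypothetical per-environment values files kept in source control
$ helm install stable/mysql -f values-staging.yaml
$ helm install stable/mysql -f values-production.yaml
# Individual values can also be overridden on the command line
$ helm install stable/mysql --set resources.requests.memory=512Mi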

In short, we want to take advantage of the expertise of the ecosystem when deploying technologies such as databases, caching layers, web servers, key/value stores, and other technologies that you’re likely to run on your Kubernetes cluster. There are a lot of potential players in this part of the ecosystem, such as Landscaper (https://github.com/Eneco/landscaper), Kubepack (https://github.com/kubepack/pack), Flux (https://github.com/weaveworks/flux), Armada (https://github.com/att-comdev/armada), and helmfile (https://github.com/roboll/helmfile). In this section in particular, we’re going to look at Helm (https://github.com/helm/helm), which has recently been accepted into the CNCF as an incubating project, and its approach to the problems we’ve described here.

Getting started with Helm

We’ll see how Helm makes it easier to manage Kubernetes applications using charts, which are packages containing a description of the package in the form of a Chart.yaml file, along with several templates that render into the manifests Kubernetes uses to create and manage objects within the cluster.
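
If you’d like to see what a chart looks like on disk, helm create scaffolds a minimal chart once the Helm client is installed (we’ll do that shortly). The chart name mychart is arbitrary, and the exact set of generated files varies by Helm version, but the layout looks roughly like this:

$ helm create mychart
Creating mychart
$ find mychart -type f
mychart/Chart.yaml
mychart/values.yaml
mychart/templates/deployment.yaml
mychart/templates/service.yaml
mychart/templates/_helpers.tpl
mychart/templates/NOTES.txt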

Note: Kubernetes is built around the philosophy of the operator defining a desired end state, with Kubernetes working over time, through eventual consistency, to enforce that state. Helm’s approach to application management follows the same principles. Just as you can manage objects via kubectl with imperative commands, imperative object configuration, and declarative object configuration, Helm takes advantage of the declarative object style, which offers the most functionality at the cost of the steepest learning curve.
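
To make that distinction concrete, here’s a minimal sketch; nginx-deployment.yaml is a hypothetical manifest file describing the desired state of a Deployment:

# Imperative command: tell the cluster exactly what to run right now
$ kubectl run nginx --image=nginx
# Declarative object configuration: describe the desired state in a file and
# let Kubernetes converge to it
$ kubectl apply -f nginx-deployment.yaml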

Let’s get started quickly with Helm. First, SSH into the Kubernetes cluster that we’ve been using; you can also use a local installation of Kubernetes from Minikube. You’ll notice that, as with many pieces of the Kubernetes ecosystem, we’re going to use Kubernetes itself to install Helm’s server-side component. Before anything else, check that kubectl is set to use the correct cluster:

$ kubectl config current-context
kubernetes-admin@kubernetes
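
If the current context isn’t the cluster you intend to manage, you can list the available contexts and switch before continuing (the context name shown here is just the one from this environment):

$ kubectl config get-contexts
$ kubectl config use-context kubernetes-admin@kubernetes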

Next up, let’s grab the Helm install script and install it locally. Make sure to read the script through first so you’re comfortable with what it does!

You can read through the script contents here: https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get.

Now, let’s run the install script and grab the pieces:

master $ curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6740  100  6740    0     0  22217      0 --:--:-- --:--:-- --:--:-- 22244
master $ chmod 700 get_helm.sh
master $ ./get_helm.sh
Helm v2.9.1 is available. Changing from version v2.8.2.
Downloading https://kubernetes-helm.storage.googleapis.com/helm-v2.9.1-linux-amd64.tar.gz
Preparing to install into /usr/local/bin
helm installed into /usr/local/bin/helm
Run 'helm init' to configure helm
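
At this point, you can verify that the Helm client is installed and on your path; the version reported will be whatever the script just downloaded:

master $ which helm
/usr/local/bin/helm
master $ helm version --client
Client: &version.Version{SemVer:"v2.9.1", ...}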

Now that we’ve pulled down and installed Helm, we can install Tiller, the server-side component, on the cluster using helm init. You can also run Tiller locally for development, but for production installations and this demo, we’ll run Tiller inside the cluster itself. Tiller will use the current kubectl context when configuring itself, so make sure that you’re pointed at the correct endpoint:

master $ helm init
Creating /root/.helm
Creating /root/.helm/repository
Creating /root/.helm/repository/cache
Creating /root/.helm/repository/local
Creating /root/.helm/plugins
Creating /root/.helm/starters
Creating /root/.helm/cache/archive
Creating /root/.helm/repository/repositories.yaml
Adding stable repo with URL: https://kubernetes-charts.storage.googleapis.com
Adding local repo with URL: http://127.0.0.1:8879/charts
$HELM_HOME has been configured at /root/.helm.
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Please note: by default, Tiller is deployed with an insecure 'allow unauthenticated users' policy.
For more information on securing your installation see: https://docs.helm.sh/using_helm/#securing-your-helm-installation
Happy Helming!
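
On clusters with RBAC enabled, Tiller may also need a service account with sufficient permissions before charts will install successfully. A common, if permissive, sketch is to bind a tiller service account to the cluster-admin role; you’ll want something tighter for production:

master $ kubectl create serviceaccount tiller --namespace kube-system
master $ kubectl create clusterrolebinding tiller --clusterrole=cluster-admin --serviceaccount=kube-system:tiller
master $ helm init --service-account tiller --upgrade

You can confirm that Tiller is up by checking for its pod with kubectl get pods --namespace kube-system, and by running helm version, which reports both client and server versions once Tiller is reachable.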

Now that we’ve installed Helm, let’s see what it’s like to manage applications directly by installing MySQL using one of the official stable charts. We’ll make sure we have the latest repositories and then install it:

$ helm repo update
Hang tight while we grab the latest from your chart repositories...
...Skip local chart repository
...Successfully got an update from the "stable" chart repository
Update Complete.  Happy Helming!
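
Before installing, you can search the repositories for a chart and read its documentation and default values; helm inspect is roughly Helm’s version of a man page for a chart:

$ helm search mysql
$ helm inspect stable/mysql
$ helm inspect values stable/mysql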

You can get a sneak preview of the power of Helm-managed MySQL by running the install command, helm install stable/mysql, which creates a release and reports the Kubernetes resources it has asked the cluster for:

$ helm install stable/mysql
NAME:   guilded-otter
LAST DEPLOYED: Mon Jun  4 01:49:46 2018
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1beta1/Deployment
NAME                 DESIRED  CURRENT  UP-TO-DATE  AVAILABLE  AGE
guilded-otter-mysql  1        1        1           0          0s

==> v1/Pod(related)
NAME                                  READY  STATUS   RESTARTS  AGE
guilded-otter-mysql-5dd65c77c6-46hd4  0/1    Pending  0         0s

==> v1/Secret
NAME                 TYPE    DATA  AGE
guilded-otter-mysql  Opaque  2     0s

==> v1/ConfigMap
NAME                      DATA  AGE
guilded-otter-mysql-test  1     0s

==> v1/PersistentVolumeClaim
NAME                 STATUS   VOLUME  CAPACITY  ACCESS MODES  STORAGECLASS  AGE
guilded-otter-mysql  Pending                                                0s

==> v1/Service
NAME                 TYPE       CLUSTER-IP    EXTERNAL-IP  PORT(S)   AGE
guilded-otter-mysql  ClusterIP  10.105.59.60  <none>       3306/TCP  0s
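
The stable/mysql chart also prints usage notes after the resource listing (omitted above); you can re-display them at any time with helm status. Assuming the chart stores the generated root password under the mysql-root-password key of the Secret shown above, you can retrieve it like this:

$ helm status guilded-otter
$ kubectl get secret guilded-otter-mysql -o jsonpath="{.data.mysql-root-password}" | base64 --decode; echo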

Helm installs a number of pieces here, which we recognize as Kubernetes objects, including a Deployment, a Secret, and a ConfigMap. You can view your installation of MySQL with helm ls and delete it with helm delete <release_name>. You can also scaffold your own charts with helm create <chart_name> and lint them with helm lint <chart_name>, as sketched below.
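
Here’s a minimal sketch of that lifecycle, using the release name from the install above and a hypothetical chart named mychart:

$ helm ls
$ helm delete guilded-otter
$ helm create mychart
$ helm lint mychart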

If you’d like to learn more about the powerful tools available to you with Helm, check out the docs: https://docs.helm.sh/. We’ll also dive into more comprehensive examples in the next chapter when we look at CI/CD.

