Kubernetes – Setting up federation

While we can use the cluster we had running for the rest of the examples, I would highly recommend that you start fresh. The default naming of the clusters and contexts can be problematic for the federation system. Note that the --cluster-context and --secret-name flags are there to help you work around the default naming, but for first-time federation, it can still be confusing and less than straightforward.

Hence, starting fresh is how we will walk through the examples in this chapter. Either use new and separate cloud provider (AWS and/or GCE) accounts or tear down the current cluster and reset your Kubernetes control environment by running the following commands:

$ kubectl config unset contexts
$ kubectl config unset clusters

Double-check that nothing is listed using the following commands:

$ kubectl config get-contexts
$ kubectl config get-clusters

Next, we will want to get the kubefed command on our path and make it executable. Navigate back to the folder where you have the Kubernetes download extracted. The kubefed command is located in the /kubernetes/client/bin folder. Run the following commands to copy it onto your path and set its execution permissions:

$ sudo cp kubernetes/client/bin/kubefed /usr/local/bin
$ sudo chmod +x /usr/local/bin/kubefed
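Before moving on, it's worth a quick sanity check that the binary is actually on your path (depending on the release, kubefed version should also print the client version):

$ which kubefed
$ kubefed version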

Contexts

Contexts are used to keep authentication and cluster configuration stored for multiple clusters. This allows us to access and manage multiple clusters from the same kubectl instance. You can always see the contexts available with the get-contexts command that we used earlier.
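For example, once multiple clusters are configured later in this chapter, the following commands show which context is currently active and switch to another one (the context name here is only a placeholder):

$ kubectl config current-context
$ kubectl config use-context <context-name>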

New clusters for federation

Again, make sure you navigate to wherever Kubernetes was downloaded and move into the cluster sub-folder:

$ cd kubernetes/cluster/

Before we proceed, make sure you have the GCE command line and the AWS command line installed, authenticated, and configured. Refer to Chapter 1, Introduction to Kubernetes, if you need assistance doing so on a new box.
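A quick way to confirm both CLIs are installed and on your path is to ask each for its version:

$ gcloud version
$ aws --version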

First, we will create the AWS cluster. Note that we are adding an environment variable named OVERRIDE_CONTEXT, which will allow us to set the context name to something that complies with the DNS naming standards. DNS is a critical component for federation as it allows us to do cross-cluster discovery and service communication. This is important in a federated world where clusters may be in different data centers and even providers.

Run these commands to create your AWS cluster:

$ export KUBERNETES_PROVIDER=aws
$ export OVERRIDE_CONTEXT=awsk8s
$ ./kube-up.sh

Next, we will create a GCE cluster, once again using the OVERRIDE_CONTEXT environment variable:

$ export KUBERNETES_PROVIDER=gce
$ export OVERRIDE_CONTEXT=gcek8s
$ ./kube-up.sh

If we take a look at our contexts now, we will notice both awsk8s and gcek8s, which we just created. The star in front of gcek8s denotes that it’s where kubectl is currently pointing and executing against:

$ kubectl config get-contexts

The preceding command should list both the awsk8s and gcek8s contexts, with the asterisk marking the current one.

Initializing the federation control plane

Now that we have two clusters, let’s set up the federation control plane in the GCE cluster. First, we’ll need to make sure that we are in the GCE context, and then we will initialize the federation control plane:

$ kubectl config use-context gcek8s
$ kubefed init master-control --host-cluster-context=gcek8s --dns-zone-name="mydomain.com" 

The preceding command creates a new context just for federation called master-control. It uses the gcek8s cluster/context to host the federation components (such as API server and controller). It assumes GCE DNS as the federation’s DNS service. You’ll need to update dns-zone-name with a domain suffix you manage.
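If you have not already created a managed zone for that domain in Google Cloud DNS, you can do so with something like the following sketch (it assumes the gcloud CLI, that mydomain.com is a domain you control, and the zone name federation is arbitrary):

$ gcloud dns managed-zones create federation --description "Kubernetes federation zone" --dns-name "mydomain.com."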

By default, the DNS provider is GCE. You can use --dns-provider="aws-route53" to set it to AWS Route 53; however, the out-of-the-box implementation still has issues for many users.

If we check our contexts once again, we will now see three contexts:

$ kubectl config get-contexts

The preceding command should now list three contexts: awsk8s, gcek8s, and the new master-control context.

Let’s make sure we have all of the federation components running before we proceed. The federation control plane uses the federation-system namespace. Use the kubectl get pods command with the namespace specified to monitor the progress. Once you see two API server pods and one controller pod, you should be set:

$ kubectl get pods --namespace=federation-system

Now that we have the federation components set up and running, let’s switch to that context for the next steps:

$ kubectl config use-context master-control

Adding clusters to the federation system

Now that we have our federation control plane, we can add the clusters to the federation system. First, we will join the GCE cluster and then the AWS cluster:

$ kubefed join gcek8s --host-cluster-context=gcek8s --secret-name=fed-secret-gce
$ kubefed join awsk8s --host-cluster-context=gcek8s --secret-name=fed-secret-aws
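At this point, the joined clusters should be visible from the federation context. A quick check (both clusters should eventually report a Ready status):

$ kubectl --context=master-control get clusters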

Federated resources

Federated resources allow us to deploy across multiple clusters and/or regions. Currently, Kubernetes version 1.5 supports a number of core resource types in the federation API, including ConfigMaps, DaemonSets, Deployments, Events, Ingresses, Namespaces, ReplicaSets, Secrets, and Services.

Let’s take a look at a federated deployment that will allow us to schedule pods across both AWS and GCE. Save the following file as node-js-deploy-fed.yaml:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: node-js-deploy
  labels:
    name: node-js-deploy
spec:
  replicas: 3
  template:
    metadata:
      labels:
        name: node-js-deploy
    spec: 
      containers: 
      - name: node-js-deploy 
        image: jonbaier/pod-scaling:latest 
        ports: 
        - containerPort: 80

Create this deployment with the following command:

$ kubectl create -f node-js-deploy-fed.yaml

Now, let’s try listing the pods from this deployment:

$ kubectl get pods

Because we are still using the master-control (federation) context, which does not itself run pods, this command will not list any pods. We will, however, see the deployment in the federation plane and, if we inspect the events, we will see that the deployment was in fact created on both of our federated clusters:

$ kubectl get deployments
$ kubectl describe deployments node-js-deploy

In the output of the describe command, notice that the Events: section shows the deployment being created in both our GCE and AWS clusters.

We can also see the federated events using the following command:

$ kubectl get events

It may take a moment for all three pods to run. Once that happens, we can switch to each cluster context and see some of the pods on each. Note that we can now use get pods since we are on the individual clusters and not on the control plane:

$ kubectl config use-context awsk8s
$ kubectl get pods
$ kubectl config use-context gcek8s
$ kubectl get pods

We should see the three pods spread across the clusters, with two on one and the third on the other. Kubernetes has spread them across the two clusters without any manual intervention. Any pods that fail will be restarted, but now we have the added redundancy of two cloud providers.
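If you would rather not keep switching contexts, you can also query each cluster directly to see how the replicas were split; a quick check using the context names we chose earlier:

$ kubectl --context=awsk8s get deployment node-js-deploy
$ kubectl --context=gcek8s get deployment node-js-deploy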

Federated configurations

In modern software development, it is common to separate configuration variables from the application code itself. In this way, it is easier to make updates to service URLs, credentials, common paths, and so on. Having these values in external configuration files means we can easily update configuration without rebuilding the entire application.

This separation solves the initial problem, but true portability comes when you can remove the dependency from the application completely. Kubernetes offers a configuration store for exactly this purpose. ConfigMaps are simple constructs that store key-value pairs. 

Kubernetes also supports Secrets for more sensitive configuration data. This will be covered in more detail in Chapter 10, Cluster Authentication, Authorization, and Container Security. You can use the example there either on single clusters or on the federation control plane, as we are demonstrating with ConfigMaps here.

Let’s take a look at an example that will allow us to store some configuration and then consume it in various pods. The following listings will work for both federated and single clusters, but we will continue using a federated setup for this example.

The ConfigMap kind can be created using literal values, flat files and directories, and finally YAML definition files. The following listing is a YAML definition of the configmap-fed.yaml file:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-application-config
  namespace: default
data:
  backend-service.url: my-backend-service
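As an aside, the same ConfigMap could also be created directly from a literal value instead of a YAML file; a roughly equivalent one-liner, assuming the same key and value:

$ kubectl create configmap my-application-config --from-literal=backend-service.url=my-backend-service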

Let’s first switch back to our federation plane:

$ kubectl config use-context master-control

Now, create this listing with the following command:

$ kubectl create -f configmap-fed.yaml

Let’s display the configmap object that we just created. The -o yaml flag helps us to display the full information: 

$ kubectl get configmap my-application-config -o yaml

Now that we have a ConfigMap object, let’s start up a federated ReplicaSet that can use the ConfigMap object. This will create replicas of pods across our cluster that can access the ConfigMap object. ConfigMaps can be accessed via environment variables or mount volumes. This example will use a mount volume that provides a folder hierarchy and the files for each key with the contents representing the values. Save the following file as configmap-rs-fed.yaml:

apiVersion: extensions/v1beta1
kind: ReplicaSet
metadata:
  name: node-js-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      name: node-js-configmap-rs
  template:
    metadata:
      labels:
        name: node-js-configmap-rs
    spec:
      containers:
      - name: configmap-pod
        image: jonbaier/node-express-info:latest
        ports:
        - containerPort: 80
          name: web
        volumeMounts:
        - name: configmap-volume
          mountPath: /etc/config
      volumes:
      - name: configmap-volume
        configMap:
          name: my-application-config

Create this pod with kubectl create -f configmap-rs-fed.yaml. After creation, we will need to switch contexts to one of the clusters where the pods are running. You can choose either, but we will use the GCE context here:

$ kubectl config use-context gcek8s

Now that we are on the GCE cluster specifically, let’s check configmaps here:

$ kubectl get configmaps

As you can see, the ConfigMap is propagated locally to each cluster. Next, let’s find a pod from our federated ReplicaSet:

$ kubectl get pods

Let’s take one of the node-js-rs pod names from the listing (yours will have a different suffix) and run a bash shell with kubectl exec:

$ kubectl exec -it node-js-rs-6g7nj bash

Then, let’s change directories to the /etc/config folder that we set up in the pod definition. Listing this directory reveals a single file named after the key we defined in the ConfigMap earlier:

$ cd /etc/config
$ ls

If we then display the contents of the file with the following command, we should see the value we entered earlier, my-backend-service:

$ echo $(cat backend-service.url)

If we were to look in any of the pods across our federated cluster, we would see the same values. This is a great way to decouple configuration from an application and distribute it across our fleet of clusters.
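Incidentally, if you prefer environment variables over a volume mount, the same ConfigMap key could be consumed with a snippet like the following in the container spec (a sketch only, not part of our example files; the variable name is arbitrary):

env:
- name: BACKEND_SERVICE_URL
  valueFrom:
    configMapKeyRef:
      name: my-application-config
      key: backend-service.url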

Federated horizontal pod autoscalers

Let’s look at another example of a newer resource that you can use with the federated model: horizontal pod autoscalers (HPAs).  

Here’s what the architecture of these looks like in a single cluster:

Credit: https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/#how-does-the-horizontal-pod-autoscaler-work

These HPAs will act in a similar fashion to normal HPAs, with the same functionality and the same API compatibility; the difference is that, with federation, management traverses your clusters. This is an alpha feature, so it is not enabled by default on your cluster. In order to enable it, you'll need to run federation-apiserver with the --runtime-config=api/all=true option. Currently, the only metrics that work to manage HPAs are CPU utilization metrics.
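Since we stood up the control plane with kubefed init, one way to pass this option is the --apiserver-arg-overrides flag, assuming your kubefed version supports it; a sketch of what re-initializing with the flag might look like:

$ kubefed init master-control --host-cluster-context=gcek8s --dns-zone-name="mydomain.com" --apiserver-arg-overrides="--runtime-config=api/all=true"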

First, let’s create a file that contains the HPA configuration, called node-hpa-fed.yaml:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nodejs
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: nodejs
  minReplicas: 5
  maxReplicas: 20
  targetCPUUtilizationPercentage: 70

We can add this to our cluster with the following command:

kubectl --context=federation-cluster create -f node-hpa-fed.yaml

In this case, --context=federation-cluster is telling kubectl to send the request to the federation API server (federation-apiserver) instead of kube-apiserver. Note that the remaining examples in this section use the generic context names federation-cluster and gce-cluster-01; in the setup we built earlier, these correspond to master-control and gcek8s, so substitute accordingly.

If, for example, you want to restrict this HPA to a subset of your Kubernetes clusters, you can use cluster selectors to restrict the federated object by using the federation.alpha.kubernetes.io/cluster-selector annotation. It's similar in function to nodeSelector, but it acts upon full Kubernetes clusters. Cool! You'll need to create the annotation in JSON format. Here's a specific example of a cluster-selector annotation:

metadata:
  annotations:
    federation.alpha.kubernetes.io/cluster-selector: '[{"key": "hipaa", "operator": "In", "values": ["true"]}, {"key": "environment", "operator": "NotIn", "values": ["nonprod"]}]'

This example will place the annotated object only on clusters that carry the hipaa: true label, and keep it out of clusters labeled environment: nonprod.
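For the selector to have anything to match, the cluster objects registered in the federation control plane need the corresponding labels. Assuming the context names from our earlier setup, labeling the GCE cluster might look like the following sketch (the label values are illustrative):

$ kubectl --context=master-control label clusters gcek8s hipaa=true environment=prod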

For a full list of Top Level Federation API objects, see the following: https://kubernetes.io/docs/reference/federation/

You can check your clusters to see whether the HPA was created in an individual location by specifying the context:

kubectl --context=gce-cluster-01 get hpa nodejs

Once you’re finished with the HPA, it can be deleted with the following kubectl command:

kubectl --context=federation-cluster delete hpa nodejs

How to use federated HPAs

HPAs used in the previous manner are an essential tool for ensuring that your clusters scale up as their workloads increase. The default behavior for HPA spreading ensures that the maximum replicas are first spread evenly across all clusters. Let's say that you have 10 registered Kubernetes clusters in your federation control plane. If you set spec.maxReplicas = 30, each of the clusters will receive the following HPA spec:

spec.maxReplicas = 3

If you then set spec.minReplicas = 5, some of the clusters will receive the following:

spec.minReplicas = 1

This is because a local HPA cannot have a replica count of 0. It's important to note that federation only manipulates the min/max replicas of the HPAs it creates on the federated clusters; it does not directly monitor the target object's metrics (in our case, CPU). The federated HPA controller relies on the HPAs within each federated cluster to monitor CPU utilization, and those local HPAs in turn make changes to fields such as current and desired replicas.

Other federated resources

So far, we have seen federated Deployments, ReplicaSets, Events, and ConfigMaps in action. DaemonSets, Ingress, Namespaces, Secrets, and Services are also supported. Your specific setup will vary and you may have a set of clusters that differ from our example here. As mentioned earlier, these resources are still in beta, so it’s worth spending some time to experiment with the various resource types and understand how well the federation constructs are supported for your particular mix of infrastructure.

Let’s look at some examples that we can use to leverage other common Kubernetes API objects from a federated perspective.

Events

If you want to see the events that are stored only in the federation control plane, you can use the following command:

kubectl --context=federation-cluster get events

Jobs

When you go to create a job, you’ll use similar concepts as before. Here’s what that looks like when you create a job within the federation context:

kubectl --context=federation-cluster create -f fedjob.yaml
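The fedjob.yaml file referenced above is not shown in this section; a minimal sketch of what such a job definition might contain (the image and command are placeholders):

apiVersion: batch/v1
kind: Job
metadata:
  name: fedjob
spec:
  parallelism: 2
  completions: 4
  template:
    metadata:
      name: fedjob
    spec:
      containers:
      - name: fedjob
        image: busybox
        command: ["sleep", "30"]
      restartPolicy: Never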

You can check whether the job was created in one of the underlying clusters by specifying that cluster's context:

kubectl --context=gce-cluster-01 get job fedjob

As with HPAs, you can spread your jobs across multiple underlying clusters with the appropriate specs. The relevant definitions are spec.parallelism and spec.completions, and they can be modified by specifying the correct ReplicaAllocationPreferences with the federation.kubernetes.io/job-preferences key.
