
Kubernetes – DaemonSets

While ReplicationControllers and Deployments are great at making sure that a specific number of application instances are running, they place those instances wherever they fit best. The scheduler looks for nodes that meet resource requirements (available CPU, particular storage volumes, and so on) and tries to spread the replicas across nodes and zones.

This works well for creating highly available and fault-tolerant applications, but what about cases where we need an agent to run on every single node in the cluster? While the default spread does attempt to use different nodes, it does not guarantee that every node will have a replica; it will only fill as many nodes as the replica count specified in the ReplicationController or Deployment specification.

To ease this burden, Kubernetes introduced the DaemonSet, which defines a pod to run on every single node in the cluster, or on a defined subset of those nodes. This can be very useful for a number of production-related activities, such as monitoring and logging agents, security agents, and filesystem daemons.
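Limiting a DaemonSet to a subset of nodes is typically done with a nodeSelector in the pod template, matched against labels on the nodes. A minimal sketch, assuming a hypothetical role: logging node label:

```yaml
# Hypothetical fragment: run the daemon only on nodes labeled role=logging.
# A node would receive the label with: kubectl label node <node-name> role=logging
spec:
  template:
    spec:
      nodeSelector:
        role: logging
```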

In Kubernetes version 1.6, RollingUpdate was added as an update strategy for the DaemonSet object. This functionality allows you to perform serial updates to your pods whenever spec.template changes. In the next version, 1.7, revision history was added so that operators can roll an update back to an earlier revision of spec.template.
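The strategy itself is configured on the DaemonSet spec. A minimal sketch of what enabling RollingUpdate looks like (the maxUnavailable value shown is just an illustrative choice):

```yaml
spec:
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      # Replace the pod on at most one node at a time
      maxUnavailable: 1
```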

You can inspect a DaemonSet's rollout history, including the details of a specific revision, with the following kubectl example command:

$ kubectl rollout history ds example-app --revision=2 
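To actually roll back, kubectl rollout undo is used rather than rollout history; a sketch, assuming the same example-app DaemonSet:

```shell
# Roll the DaemonSet back to revision 2 from its recorded history
$ kubectl rollout undo ds example-app --to-revision=2

# Follow the rollback as it proceeds node by node
$ kubectl rollout status ds example-app
```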

In fact, Kubernetes already uses these capabilities for some of its core system components. If we recall from Chapter 1, Introduction to Kubernetes, we saw node-problem-detector running on the nodes. That pod actually runs on every node in the cluster as a DaemonSet. We can see this by querying DaemonSets in the kube-system namespace:

$ kubectl get ds --namespace=kube-system
(Output listing: DaemonSets in the kube-system namespace)

You can find more information about node-problem-detector, as well as its full YAML definition, in the node-problem-detector documentation at http://kubernetes.io/docs/admin/node-problem/#node-problem-detector:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-problem-detector-v0.1
  namespace: kube-system
  labels:
    k8s-app: node-problem-detector
    version: v0.1
    kubernetes.io/cluster-service: "true"
spec:
  selector:
    matchLabels:
      k8s-app: node-problem-detector
  template:
    metadata:
      labels:
        k8s-app: node-problem-detector
        version: v0.1
        kubernetes.io/cluster-service: "true"
    spec:
      hostNetwork: true
      containers:
      - name: node-problem-detector
        image: gcr.io/google_containers/node-problem-detector:v0.1
        securityContext:
          privileged: true
        resources:
          limits:
            cpu: "200m"
            memory: "100Mi"
          requests:
            cpu: "20m"
            memory: "20Mi"
        volumeMounts:
        - name: log
          mountPath: /log
          readOnly: true
      volumes:
      - name: log
        hostPath:
          path: /var/log/
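A manifest like the preceding one would be saved to a file and created in the usual way; a sketch, with a hypothetical filename:

```shell
$ kubectl apply -f node-problem-detector.yaml
$ kubectl get ds node-problem-detector-v0.1 --namespace=kube-system
```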
