
Kubernetes – Planning a cluster

Looking back over the work we’ve done so far in this book, you have a lot of options when it comes to building a Kubernetes cluster. Let’s briefly highlight the choices available to you when planning a cluster, along with a few key areas to investigate ahead of time.

Picking what’s right

The first and arguably most important step when planning a cluster is to pick the right platform for your Kubernetes cluster. At a high level, here are the choices you have:

  • Local solutions include the following:
    • Minikube: A single-node Kubernetes cluster
    • Ubuntu on LXD: This uses LXD to deploy a nine-instance cluster of Kubernetes
    • IBM’s Cloud Private-CE: This uses VirtualBox to deploy Kubernetes on n+1 instances
    • kubeadm-dind (Docker-in-Docker): This allows for multi-node Kubernetes clusters
  • Hosted solutions include the following:
    • Google Kubernetes Engine
    • Amazon Elastic Container Service for Kubernetes (EKS)
    • Azure Kubernetes Service
    • Stackpoint
    • OpenShift Online
    • IBM Cloud Kubernetes Service
    • Giant Swarm
  • Turnkey solutions: on all of the aforementioned clouds and more, there are many turnkey solutions that allow you to spin up full clusters with community-maintained scripts

As of this book’s publishing, the Kubernetes documentation maintains the current list of these projects and solutions.

Check out this link for more turnkey solutions: https://kubernetes.io/docs/setup/pick-right-solution/#turnkey-cloud-solutions.

Securing the cluster

As we’ve discussed, there are several areas of focus when securing a cluster. Ensure that you have read through and made configuration changes (in code) to your cluster configuration in the following areas:

  • Logging: Ensure that audit logging is enabled for your cluster. You can read more about audit logging here: https://kubernetes.io/docs/tasks/debug-application-cluster/audit/.
  • Authentication: Make sure you have authentication enabled so that your users, operators, and services each identify themselves with a unique identity. Read more about authentication here: https://kubernetes.io/docs/reference/access-authn-authz/authentication/.
  • Authorization: Ensure that you have proper separation of duties, role-based access control (RBAC), and fine-grained privileges; a minimal RBAC sketch appears later in this section. You can read more about authorization here: https://kubernetes.io/docs/reference/access-authn-authz/authorization/.
  • API access: Make sure that you have locked down the API to specific permissions and groups. You can read more about controlling access to the API here: https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/.
  • Admission control: When appropriate, enable admission controllers to further validate requests after they have passed the authentication and authorization controls. These controllers can take additional, business-logic-based validation steps to secure your cluster further. Read more about admission controllers here: https://kubernetes.io/docs/reference/access-authn-authz/admission-controllers/.
  • Sysctls: Tune Linux system parameters via the sysctl interface. This allows you to modify kernel parameters for node-level and namespaced sysctl features. Kubernetes distinguishes between safe and unsafe sysctls, and several kernel subsystems can be tweaked this way. Possible parameter groups are as follows:
    • abi: Execution domains and personalities
    • fs: Specific filesystems, filehandle, inode, dentry, and quota tuning
    • kernel: Global kernel information/tuning
    • net: Networking
    • sunrpc: SUN Remote Procedure Call (RPC)
    • vm: Memory management tuning, buffer, and cache management
    • user: Per-user, per-user-namespace limits

You can read more about using sysctls in a Kubernetes cluster here: https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/.

You can enable unsafe sysctl values by running the following command:

kubelet --allowed-unsafe-sysctls 'net.ipv4.route.min_pmtu'
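
If you manage your kubelets through a configuration file rather than command-line flags, the same setting can be expressed there. Here's a minimal sketch, assuming your nodes use the kubelet config file mechanism:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Allow pods on this node to request this otherwise-unsafe sysctl
allowedUnsafeSysctls:
- "net.ipv4.route.min_pmtu"

Because unsafe sysctls are enabled per node, it's common to taint such nodes and deliberately schedule the pods that need those sysctls onto them.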

Authentication, authorization, and admission control work together in sequence: every API request is first authenticated, then authorized, and finally passed through any configured admission controllers before the API server persists the object.
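
As a concrete illustration of the authorization step, here is a minimal RBAC sketch that grants read-only access to pods in a single namespace. The namespace, role name, and user (demo, pod-reader, and jane) are illustrative only:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: demo          # illustrative namespace
  name: pod-reader         # illustrative role name
rules:
- apiGroups: [""]          # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
  namespace: demo
subjects:
- kind: User
  name: jane               # must match an identity established at authentication
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io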

Tuning examples

If you’d like to experiment with modifying sysctls, you can set a security context as follows, per pod:

apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example
spec:
  securityContext:
    sysctls:
    - name: kernel.shm_rmid_forced
      value: "0"
    - name: net.core.somaxconn
      value: "10000"
    - name: kernel.msgmax
      value: "65536"
    - name: net.ipv4.ip_local_port_range
      value: "1024 65535"
  containers:
  - name: example          # illustrative container; any image will do
    image: nginx
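
Note that only some of these sysctls (kernel.shm_rmid_forced and net.ipv4.ip_local_port_range) are in the default safe set; net.core.somaxconn and kernel.msgmax are treated as unsafe, so this pod will only launch on nodes whose kubelet explicitly allows them, as shown earlier. Assuming you save the manifest as sysctl-example.yaml (an illustrative filename), you can create the pod as usual:

kubectl apply -f sysctl-example.yaml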

You can also tune variables such as the ARP cache, as Kubernetes consumes a lot of IP addresses at scale, which can exhaust space in the ARP cache. Changing these settings is common in large-scale HPC clusters and can help with address exhaustion in Kubernetes as well. You can set these values as follows:

net.ipv4.neigh.default.gc_thresh1 = 90000
net.ipv4.neigh.default.gc_thresh2 = 100000
net.ipv4.neigh.default.gc_thresh3 = 120000
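
These neighbor-table thresholds are typically treated as node-level settings rather than namespaced ones, so the usual approach is to set them on the node itself instead of through a pod's securityContext. For example, assuming you control node provisioning, you can place the three lines above in a file under /etc/sysctl.d/ (an illustrative path would be /etc/sysctl.d/90-arp-cache.conf) and reload them with the following command:

sudo sysctl --system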
