
Kubernetes cluster security

Kubernetes has continued to add a number of security features in its recent releases and has a well-rounded set of control points that can be used in your cluster: everything from secure node communication to pod security and even the storage of sensitive configuration data.

Secure API calls

During every API call, Kubernetes applies a number of security controls. This security life cycle is depicted here:

API call life cycle

After secure TLS communication is established, the API server runs the request through authentication and then authorization. Finally, an admission controller loop is applied before the request is accepted by the API server.

Secure node communication

Kubernetes supports the use of secure communication channels between the API server and any client, including the nodes themselves. Whether it’s a GUI or command-line utility such as kubectl, we can use certificates to communicate with the API server. Hence, the API server is the central interaction point for any changes to the cluster and is a critical component to secure.

In deployments such as GCE, the kubelet on each node is deployed for secure communication by default. This setup uses TLS bootstrapping and the Certificates API to establish a secure connection with the API server using TLS client certificates and a cluster-level Certificate Authority (CA).
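If you want to see this process in action, the requests generated during TLS bootstrapping surface as CertificateSigningRequest objects. On clusters where approval is not automated, you can inspect and approve them with something like the following (the CSR name is whatever appears in your own listing):

$ kubectl get csr
$ kubectl certificate approve [CSR Name]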

Authorization and authentication plugins

The plugin mechanisms for authentication and authorization in Kubernetes are still being developed. They have come a long way, but still have plugins in beta stages and enhancements in the works. There are also third-party providers that integrate with the features here, so bear that in mind when building your hardening strategy.

Authentication is currently supported in the form of tokens, passwords, and certificates, with plans to add the plugin capability at a later stage. OpenID Connect tokens are supported, and several third-party implementations, such as Dex from CoreOS and User Account and Authentication (UAA) from Cloud Foundry, are available.
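As a rough sketch of the OpenID Connect path, the API server is pointed at your identity provider with flags such as the following. The issuer URL, client ID, and claim names here are placeholders for whatever your provider (Dex, UAA, or otherwise) actually exposes:

--oidc-issuer-url=https://auth.example.com
--oidc-client-id=kubernetes
--oidc-username-claim=email
--oidc-groups-claim=groups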

Authorization already supports several modes. The role-based access control (RBAC) mode recently went to general availability in the 1.8 release and brings the standard role-based authorization model to Kubernetes. Attribute-based access control (ABAC) has long been supported and lets a user define privileges via attributes in a file.

Additionally, a Webhook mechanism is supported, which allows for integration with third-party authorization via REST web service calls. Finally, we have the new node authorization method, which grants permissions to kubelets based on the pods they are scheduled to run.
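For the Webhook mode, the API server is started with the --authorization-webhook-config-file flag pointing at a kubeconfig-style file that describes the remote authorizer. The following is a minimal sketch, assuming a hypothetical service at authz.example.com and placeholder certificate paths:

apiVersion: v1
kind: Config
clusters:
- name: authz-service
  cluster:
    certificate-authority: /etc/kubernetes/authz-ca.pem
    server: https://authz.example.com/authorize
users:
- name: kube-apiserver
  user:
    client-certificate: /etc/kubernetes/authz-client.pem
    client-key: /etc/kubernetes/authz-client-key.pem
contexts:
- name: webhook
  context:
    cluster: authz-service
    user: kube-apiserver
current-context: webhook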

You can learn more about each area at the following links:

  • http://kubernetes.io/docs/admin/authorization/
  • http://kubernetes.io/docs/admin/authentication/
  • https://kubernetes.io/docs/reference/access-authn-authz/node/

Admission controllers

Kubernetes also provides a mechanism for integrating additional verification as a final step. This could be in the form of image scanning, signature checks, or anything else that is able to respond in the specified fashion.

When an API call is made, the hook is called and that server can run its verification. Admission controllers can also be used to transform requests and add or alter the original request. Once the operations are run, a response is then sent back with a status that instructs Kubernetes to allow or deny the call.

This can be especially helpful for verifying or testing images, as we mentioned in the last section. The ImagePolicyWebhook plugin provides an admission controller that allows for integration with additional image inspection.
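The ImagePolicyWebhook is configured through the file passed to the API server's --admission-control-config-file flag. A minimal sketch of that configuration, assuming a kubeconfig file that points at your image review backend, looks roughly like this:

imagePolicy:
  kubeConfigFile: /etc/kubernetes/image-policy-kubeconfig.yaml
  allowTTL: 50          # cache approvals for 50 seconds
  denyTTL: 50           # cache denials for 50 seconds
  retryBackoff: 500     # retry backoff in milliseconds
  defaultAllow: false   # fail closed if the backend is unreachable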

For more information, visit the Using Admission Controllers page in the documentation: https://kubernetes.io/docs/admin/admission-controllers/.

RBAC

As mentioned earlier in this chapter, Kubernetes has now made RBAC a central component of authorization within the cluster. Kubernetes offers two levels for this kind of control. First, there is a ClusterRole, which provides cluster-wide authorization to resources. This is handy for enforcing access control across multiple teams or products, or to cluster-wide resources such as the underlying cluster nodes. Second, we have a Role, which simply provides access to resources within a specific namespace.

Once you have a role, you need a way to provide users with membership to that role. These are referred to as Bindings, and again we have ClusterRoleBinding and RoleBinding. As with the roles themselves, the former is meant for cluster-wide access and the latter is meant to apply within a specific namespace.
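To make this concrete, here is a minimal sketch of a namespaced Role and its RoleBinding; the namespace, role name, and user name are placeholders:

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""]          # "" refers to the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  namespace: default
  name: read-pods
subjects:
- kind: User
  name: jane               # placeholder user
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io

A ClusterRole and ClusterRoleBinding look much the same, minus the namespace field in the metadata.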

We will not dive into the details of RBAC in this book, but it is something you'll want to explore as you get ready for production-grade deployments. The PodSecurityPolicy discussed in the next section typically utilizes Roles and RoleBindings to control which policies each user has access to.

For more information, please refer to the documentation here: https://kubernetes.io/docs/reference/access-authn-authz/rbac/.

Pod security policies and context

One of the latest additions to the Kubernetes security arsenal is pod security policies and contexts. These allow us to control the users and groups for container processes and attached volumes, limit the use of host networks or namespaces, and even set the root filesystem to read-only. Additionally, we can limit the capabilities available and set SELinux options for the labels that are applied to the containers in each pod.
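Many of these controls can also be expressed directly on a pod through its security context. The following is a minimal sketch using the image from our earlier examples; the user ID and the dropped capability are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: secure-context-example
spec:
  securityContext:
    runAsUser: 1000          # run container processes as a non-root UID
    runAsNonRoot: true
  containers:
  - name: web
    image: jonbaier/node-express-info:latest
    securityContext:
      readOnlyRootFilesystem: true
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["NET_RAW"]    # drop an example capability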

In addition to SELinux, Kubernetes also added beta support for using AppArmor with your pods by using annotations. For more information, refer to the following documentation page: https://kubernetes.io/docs/admin/apparmor/.
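The AppArmor annotation is keyed by container name. A minimal sketch that applies the runtime's default profile to a container named web would look like this:

apiVersion: v1
kind: Pod
metadata:
  name: apparmor-example
  annotations:
    # profile for the container named "web"; use localhost/<profile> for a custom profile
    container.apparmor.security.beta.kubernetes.io/web: runtime/default
spec:
  containers:
  - name: web
    image: jonbaier/node-express-info:latest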

PodSecurityPolicies are enforced using the admission controller we spoke of earlier in this book. By default, Kubernetes doesn’t enable PodSecurityPolicy, so if you have a GKE cluster running, you can try the following:

$ kubectl get psp

You should see 'No resources found.', assuming you haven’t enabled them. 

Let’s try an example by using the Docker image from our previous chapters. If we use the following run command on a cluster with no PodSecurityPolicy applied, it will happily run:

$ kubectl run myroottest --image=jonbaier/node-express-info:latest

Follow this with kubectl get pods and in a minute or so we should see a pod starting with myroottest in the listings. 

Go ahead and clean this up with the following code before proceeding:

$ kubectl delete deployment myroottest

Enabling PodSecurityPolicies

Now, let’s try this with a cluster that can utilize PodSecurityPolicies. If you are using GKE, it is quite easy to create a cluster with PodSecurityPolicy enabled. Note you will need the Beta APIs enabled for this:

$ gcloud beta container clusters create [Cluster Name] --enable-pod-security-policy --zone=[Zone To Deploy Cluster]

If you have an existing GKE cluster, you can enable it with a command similar to the preceding one. Simply replace the create keyword with update.

For clusters created with kube-up, like we saw in Chapter 1, Introduction to Kubernetes, you’ll need to enable the admission controller on the API server. Take a look here for more information: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#enabling-pod-security-policies.

Once you have PodSecurityPolicy enabled, you can see the applied policies by using the following code:

$ kubectl get psp

GKE default pod security policies

You’ll notice a few predefined policies that GKE has already defined. You can explore the details and the YAML used to create these policies with the following code:

$ kubectl get psp/[PSP Name] -o yaml
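The output is a full PodSecurityPolicy object. To give you a feel for the shape of these objects, the following is a trimmed, illustrative sketch rather than GKE's actual defaults:

apiVersion: policy/v1beta1    # older clusters may use extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted-example
spec:
  privileged: false
  allowPrivilegeEscalation: false
  requiredDropCapabilities: ["ALL"]
  hostNetwork: false
  hostPID: false
  hostIPC: false
  readOnlyRootFilesystem: true
  volumes: ["configMap", "secret", "emptyDir", "persistentVolumeClaim"]
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny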

It's important to note that PodSecurityPolicies work with the RBAC features of Kubernetes. There are a few default roles, role bindings, and namespaces that are defined by GKE. As such, we will see different behaviors depending on how we interact with Kubernetes. For example, when using kubectl in Google Cloud Shell, you may be sending commands as a cluster admin and therefore have access to all policies, including gce.privileged. However, using the kubectl run command, as we did previously, will create the pods through the kube-controller-manager, which is restricted to the policies bound to its role. Thus, if you simply create a pod with kubectl, it will be created without an issue, but using the run command will leave us restricted.
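You can check this for yourself with kubectl auth can-i, which asks the API server whether a given identity may use a particular policy. For example, with the gce.privileged policy mentioned previously (the service account name in the second command is an assumption about how your cluster names the replica set controller):

$ kubectl auth can-i use podsecuritypolicies/gce.privileged
$ kubectl auth can-i use podsecuritypolicies/gce.privileged --as=system:serviceaccount:kube-system:replicaset-controller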

Sticking to our previous method of using kubectl run, let’s try the same deployment as the preceding one:

$ kubectl run myroottest --image=jonbaier/node-express-info:latest

Now, if we follow this with kubectl get pods, we won’t see any pods prefaced with myroottest. We can dig a bit deeper by describing our deployment:

$ kubectl describe deployment myroottest

By using the name of the replica set listed in the output from the preceding command, we can then get the details on the failure. Run the following command:

$ kubectl describe rs [ReplicaSet name from deployment describe]

Under the events at the bottom, you will see the following pod security policy validation error:

Replica set pod security policy validation error

Again, because the run command uses the controller manager and that role has no bindings that allow the use of the existing PodSecurityPolicies, we are unable to run any pods.
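If you do want workloads created through controllers to pass validation, the usual remedy is an RBAC grant of the use verb on a specific policy. A minimal sketch follows; the policy name and the subject are placeholders you would adjust for your own cluster:

kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-restricted-user
rules:
- apiGroups: ["policy"]                   # older clusters may use "extensions"
  resources: ["podsecuritypolicies"]
  resourceNames: ["restricted-example"]   # placeholder policy name
  verbs: ["use"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-restricted-binding
  namespace: default
roleRef:
  kind: ClusterRole
  name: psp-restricted-user
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: Group
  name: system:serviceaccounts            # all service accounts; narrow this in practice
  apiGroup: rbac.authorization.k8s.io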

It is important to understand that running containers securely is not merely the task of administrators adding constraints. The work must be done in collaboration with developers, who will properly create the images.

You can find all of the possible parameters for PodSecurityPolicies in the source code, but I’ve created the following table for convenience. You can find more handy lookups like this on my new site, http://www.kubesheets.com:

Parameter | Type | Description | Required
--------- | ---- | ----------- | --------
Privileged | bool | Allows or disallows running a pod as privileged. | No
DefaultAddCapabilities | []v1.Capability | Defines a default set of capabilities that are added to the container unless the pod explicitly drops them. Values are strings of POSIX capabilities minus the leading CAP_; for example, CAP_SETUID would be SETUID (http://man7.org/linux/man-pages/man7/capabilities.7.html). | No
RequiredDropCapabilities | []v1.Capability | Defines a set of capabilities that must be dropped from a container. The pod cannot specify any of these capabilities. Values use the same format as DefaultAddCapabilities. | No
AllowedCapabilities | []v1.Capability | Defines a set of capabilities that are allowed and can be added to a container. The pod can specify any of these capabilities. Values use the same format as DefaultAddCapabilities. | No
Volumes | []string | Defines which volume types can be used. Leave this empty to allow all types (https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/apis/extensions/v1beta1/types.go#L1127). | No
HostNetwork | bool | Allows or disallows the pod to use the host network. | No
HostPorts | []HostPortRange | Restricts the host ports that can be exposed. | No
HostPID | bool | Allows or disallows the pod to use the host PID namespace. | No
HostIPC | bool | Allows or disallows the pod to use the host IPC namespace. | No
SELinux | SELinuxStrategyOptions | Set to one of the strategy options, as defined at https://kubernetes.io/docs/concepts/policy/pod-security-policy/#selinux. | Yes
RunAsUser | RunAsUserStrategyOptions | Set to one of the strategy options, as defined at https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups. | Yes
SupplementalGroups | SupplementalGroupsStrategyOptions | Set to one of the strategy options, as defined at https://kubernetes.io/docs/concepts/policy/pod-security-policy/#users-and-groups. | Yes
FSGroup | FSGroupStrategyOptions | Set to one of the strategy options, as defined at https://kubernetes.io/docs/user-guide/pod-security-policy/#strategies. | Yes
ReadOnlyRootFilesystem | bool | Setting this to true will either deny the pod or force it to run with a read-only root filesystem. | No
allowedHostPaths | []AllowedHostPath | Provides a whitelist of host paths that can be used as volumes. | No
allowedFlexVolumes | []AllowedFlexVolume | Provides a whitelist of flex volumes that can be mounted. | No
allowPrivilegeEscalation | bool | Governs whether setuid can be used to change the user a process is running under. Defaults to true. | No
defaultAllowPrivilegeEscalation | bool | Sets the default for allowPrivilegeEscalation. | No

Additional considerations

In addition to the features we just reviewed, Kubernetes has a number of other constructs that should be considered in your overall cluster hardening process. Earlier in this book, we looked at namespaces that provide a logical separation for multi-tenancy. While the namespaces themselves do not isolate the actual network traffic, some of the network plugins, such as Calico and Canal, provide additional capability for network policies. We also looked at quotas and limits that can be set for each namespace, which should be used to prevent a single tenant or project from consuming too many resources within the cluster.
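For example, with a network plugin that supports policy enforcement, a default deny-ingress NetworkPolicy per namespace is a common baseline. The following is a minimal sketch; the namespace name is a placeholder:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: team-a          # placeholder namespace
spec:
  podSelector: {}            # selects all pods in the namespace
  policyTypes:
  - Ingress                  # no ingress rules are defined, so all ingress is denied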

