Google Cloud Platform – Configuring cluster networking

Now that we have a cluster deployed with three nodes and a deployment with a varying number of pods, let's look at how to expose these pods so we can access our application. Kubernetes services are exposed by using a load balancer. Load balancing allows your cluster services to be available on a single IP address. In Kubernetes, you can also create internal load balancers, which make it easier to expose your services to other GCP applications, if needed.
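
Although we will use an external load balancer in this exercise, here is a minimal sketch of how an internal load balancer can be declared programmatically, assuming the official kubernetes Python client and GKE's internal load balancer annotation; the service name, selector label, and ports below are placeholders, not values from this chapter:

    # Sketch: an internal load balancer Service, defined with the official
    # `kubernetes` Python client. Names, labels, and ports are placeholders.
    from kubernetes import client, config

    config.load_kube_config()  # uses your local kubeconfig/gcloud credentials

    internal_svc = client.V1Service(
        metadata=client.V1ObjectMeta(
            name="my-internal-service",  # placeholder name
            # GKE treats a LoadBalancer Service with this annotation as internal
            annotations={"networking.gke.io/load-balancer-type": "Internal"},
        ),
        spec=client.V1ServiceSpec(
            type="LoadBalancer",
            selector={"app": "my-backend"},  # placeholder pod label
            ports=[client.V1ServicePort(port=80, target_port=8080)],
        ),
    )
    client.CoreV1Api().create_namespaced_service(namespace="default", body=internal_svc)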

For our exercise here, exposing the deployed pods to the internet is easy. As you may have noticed, we deployed the nginx image as our workload. Let's look at exposing it to the internet so we can access our pods:

  1. Open up your workload and click on Expose:
  2. The next screen allows you to map an external port to an internal port. The internal port is the port your application listens on inside the pod. In our example, nginx, being a web server, listens on port 80. The default value is the same as the internal port number:
  3. Next, we need to select the Service type. A service is defined as a policy by which you access a set of pods (or a workload/deployment). Kubernetes services support TCP (default) and UDP:
    • Cluster IP: This exposes the service on an internal IP, and choosing this option makes the service reachable only from within the cluster. This is also the default service type.
    • Node port: This mode exposes the service on each node's IP address at a static port. You will be able to access the service by going to any node's IP address and the associated port number.
    • Load balancer: This exposes your service externally via a load balancer and makes it accessible to the internet:
  4. Let's deploy a load balancer and check whether we can access our website running on containers (a programmatic sketch of this step follows this list):
  5. Click on Expose when you are done.
  6. You will see a new service created, as shown here:
  7. Notice that the status reads Creating Service Endpoints. This is because the Kubernetes Engine is deploying an external load balancer.
  8. Go to the main side menu, Networking | Network services, and click on Load balancing:
  9. We see a load balancer deployed with a backend of three node instances. Click on the load balancer:
  10. As you can see, the backend node instances are the three Kubernetes-managed nodes from our cluster. Notice the external IP address and the port as well.
  11. Open up a browser, and copy and paste the IP:Port info (your IP will certainly be different from what I have here). You should see the default nginx welcome page:
  12. The load balancer receives our request and passes it on to our nodes. Kubernetes takes the request and forwards it to a pod to serve it.
  13. Open up the Services pane in the Kubernetes Engine. You should see the service deployed and mapped to the external load balancer (a sketch for reading the external IP programmatically follows this list):
  14. Remember what we discussed earlier: the cluster IP is an internal IP on which Kubernetes pods communicate. The external load balancer passes all requests over to this cluster IP, which then routes them to the node where the pod is running. If you are wondering about firewall rules, those have already been added by the Kubernetes Engine when we chose to expose the service.
  15. Go to the main panel, Networking | VPC network. We see that a rule allowing ingress (incoming) traffic from anywhere (0.0.0.0/0) on port 80 has been created (a sketch for listing these rules programmatically also follows this list). Let's click on it:

  16. You will see more specific details on the rule and also that it is enabled:
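
If you prefer to drive the Expose step from code instead of the console, the following is a minimal sketch using the official kubernetes Python client. It assumes your kubeconfig points at the cluster and that the nginx pods carry an app: nginx label; the service name nginx-service is a hypothetical name, not one taken from the console:

    # Sketch: the equivalent of the console's Expose dialog, using the official
    # `kubernetes` Python client. Assumes kubeconfig access to the cluster and
    # that the nginx deployment's pods are labelled app=nginx (an assumption).
    from kubernetes import client, config

    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    service = client.V1Service(
        metadata=client.V1ObjectMeta(name="nginx-service"),  # hypothetical name
        spec=client.V1ServiceSpec(
            selector={"app": "nginx"},  # assumed pod label
            ports=[client.V1ServicePort(
                port=80,         # port exposed by the service
                target_port=80,  # internal port nginx listens on
            )],
            # The three service types discussed above map to these values:
            # "ClusterIP" (default), "NodePort", "LoadBalancer"
            type="LoadBalancer",
        ),
    )
    core_v1.create_namespaced_service(namespace="default", body=service)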
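
Similarly, the external IP that the console displays once the load balancer is ready can be read back from the service status. This sketch assumes the same client setup and the hypothetical nginx-service name used above:

    # Sketch: wait for the external load balancer IP to appear on the service
    # status, then print the URL to open in a browser.
    import time

    from kubernetes import client, config

    config.load_kube_config()
    core_v1 = client.CoreV1Api()

    while True:
        svc = core_v1.read_namespaced_service(name="nginx-service", namespace="default")
        ingress = svc.status.load_balancer.ingress
        if ingress:  # empty while the status still reads Creating Service Endpoints
            print(f"http://{ingress[0].ip}:{svc.spec.ports[0].port}")
            break
        time.sleep(10)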
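
The firewall rules that the Kubernetes Engine adds on your behalf can also be inspected programmatically. This sketch assumes the google-cloud-compute client library, a placeholder project ID, and that GKE's auto-created rules use its usual k8s- name prefix; verify the names in your own project:

    # Sketch: list the project's VPC firewall rules and print the ones that
    # appear to be GKE-managed. The project ID is a placeholder.
    from google.cloud import compute_v1

    firewalls = compute_v1.FirewallsClient()
    for rule in firewalls.list(project="my-gcp-project"):  # placeholder project ID
        if rule.name.startswith("k8s-"):  # typical prefix for GKE-created rules
            print(rule.name, rule.direction, list(rule.source_ranges))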

Multi-zone clusters

If you did the previous example, you must have noticed that we deployed our cluster and its workloads in just one zone. By default, a cluster deploys all of its components (the cluster master and its nodes) in a single zone. In production environments, multi-zone or regional clusters are deployed to improve availability and to make deployments resilient to failures.

There is a difference between regional and multi-zone clusters. In a multi-zone cluster, nodes are created in multiple zones and a single cluster master is created in a specific zone. In a regional cluster, by default, you create three cluster masters in three zones and nodes in multiple zones, depending on the number you need. You choose to create a multi-zone or regional cluster at the time of creation. However, you cannot downgrade or migrate a cluster once it has been created, so appropriate planning of your deployment is very important.
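
For completeness, here is a minimal sketch of creating a regional cluster programmatically with the google-cloud-container client library; the project ID, region, zones, node count, and cluster name are placeholders, and a real deployment would also set machine types, node pools, and networking options:

    # Sketch: create a regional cluster (control plane replicated across zones)
    # with the google-cloud-container client library. All values are placeholders.
    from google.cloud import container_v1

    gke = container_v1.ClusterManagerClient()

    cluster = container_v1.Cluster(
        name="regional-cluster",     # placeholder name
        initial_node_count=1,        # one node per selected zone
        locations=[                  # zones in which to place the nodes
            "us-central1-a", "us-central1-b", "us-central1-c",
        ],
    )

    # Passing a region (rather than a zone) in `parent` creates a regional cluster.
    operation = gke.create_cluster(
        parent="projects/my-gcp-project/locations/us-central1",  # placeholder project
        cluster=cluster,
    )
    print(operation.name, operation.status)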
