
Windows Server 2019 – Configuring a load-balanced website


Enough talk; it’s time to set this up for ourselves and give it a try. I have two web servers running on my lab network, WEB1 and WEB2. They both use IIS to host an intranet website. My goal is to provide my users with a single DNS record for them to communicate with, but have all of that traffic be split between the two servers with some real load balancing. Follow along for the steps on making this possible.

Enabling NLB

First things first, we need to make sure that WEB1 and WEB2 are prepared to do NLB, because it is not installed by default. NLB is a feature available in Windows Server 2019, and you add it just like any other role or feature, by running through the Add roles and features wizard. Add this feature on all of the servers that you want to be part of the NLB array:
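If you prefer PowerShell over clicking through the wizard, the same feature can be installed from an elevated prompt. This is just a quick sketch that assumes your web servers are reachable as WEB1 and WEB2:

```powershell
# Install the Network Load Balancing feature (plus its management tools)
# on each server that will be part of the NLB array.
foreach ($server in 'WEB1', 'WEB2') {
    Install-WindowsFeature -Name NLB -IncludeManagementTools -ComputerName $server
}
```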

Enabling MAC address spoofing on VMs

Remember when we talked about unicast NLB and how the physical MAC address of the NIC gets replaced with a virtual MAC address that is used for NLB array communications? Yeah, virtual machines don’t like that. If you are load balancing physical servers with physical NICs, you can skip this section. But many of you will be running web servers that are VMs. Whether they are hosted with Hyper-V, VMware, or some other virtualization technology, there is an extra option in the configuration of the virtual machine itself that you will have to make, so that your VM will happily comply with this MAC addressing change.

The name of this setting will be something along the lines of Enable MAC address spoofing, though the specific name of the function could be different depending on what virtualization technology you use. The setting should be a simple checkbox that you have to enable in order to make MAC spoofing work properly. Make sure to do this for all of your virtual NICs upon which you plan to utilize NLB. Keep in mind, this is a per-NIC setting, not a per-VM setting. If you have multiple NICs on a VM, you may have to check the box for each NIC, if you plan to use them all with load balancing.

The VM needs to be shut down in order to make this change, so I have shut down my WEB1 and WEB2 servers. Now find the checkbox and enable it. Since everything that I use is based on Microsoft technology, I am of course using Hyper-V as the platform for my virtual machines here in the lab. Within Hyper-V, if I right-click on my WEB1 server and head into the VM’s settings, I can then click on my network adapter to see the various pieces that are changeable on WEB1’s virtual NIC. In the latest versions of Hyper-V, this setting is listed underneath the NIC properties, inside the section titled Advanced Features. And there it is, my Enable MAC address spoofing checkbox. Simply click on that to enable, and you’re all set:

If Enable MAC address spoofing is grayed out, remember that the virtual machine must be completely shut down before the option appears. Shut it down, then open up Settings and take another look. The option should now be available to select.
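If you happen to be on Hyper-V, the same change can be scripted from the host instead of clicking through Settings. This sketch assumes the VMs are named WEB1 and WEB2 in Hyper-V Manager:

```powershell
# Run on the Hyper-V host. The VMs are shut down first so the setting can be changed.
Stop-VM -Name WEB1, WEB2

# Enable MAC address spoofing on every virtual NIC attached to each VM
Set-VMNetworkAdapter -VMName WEB1, WEB2 -MacAddressSpoofing On

Start-VM -Name WEB1, WEB2
```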

Configuring NLB

Let’s summarize where we are at this point. I have two web servers, WEB1 and WEB2, and they each currently have a single IP address. Each server has IIS installed, which is hosting a single website. I have enabled MAC address spoofing on each (because these servers are virtual machines), and I just finished installing the NLB feature onto each web server. We now have all of the parts and pieces in place to be able to configure NLB and get that web traffic split between both servers.

I will be working from WEB1 for the initial configuration of NLB. Log into WEB1, and you will see that we have a new tool in the list of tools available inside Server Manager, called the Network Load Balancing Manager. Go ahead and open up that console. Once you have NLB Manager open, right-click on Network Load Balancing Clusters and choose New Cluster, as shown in the following screenshot:

When you create a new cluster, it is important to note that the cluster starts out with zero machines in it. Even the server where we are running this console is not automatically added, so we must remember to add it manually on this screen. So first, I am going to type in the name of my WEB1 server and click on Connect. After doing that, the NLB Manager will query WEB1 for NICs and give me a list of available NICs upon which I could potentially set up NLB:

Since I only have one NIC on this server, I simply leave it selected and click on Next. The next screen gives you the opportunity to input additional IP addresses on WEB1, but since we are only running one IP address, I will leave this screen as is and click on Next again.

Now we have moved on to a window asking us to input cluster IP addresses. These are the VIPs that we intend to use to communicate with this NLB cluster. As stated earlier, my VIP for this website is going to be 10.10.10.42, so I click on the Add… button and input that IPv4 address along with its corresponding subnet mask:

One more click of the Next button, and we can now see our option for which Cluster operation mode we want to run. Depending on your network configuration, choose between Unicast, Multicast, and IGMP multicast:
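For reference, this cluster-creation portion of the wizard can also be done with the NetworkLoadBalancingClusters PowerShell module. The sketch below is run from WEB1; the interface name, cluster name, and choice of Multicast mode are assumptions to adjust for your environment:

```powershell
Import-Module NetworkLoadBalancingClusters

# Create the cluster on WEB1's NIC, with 10.10.10.42 as the cluster VIP.
New-NlbCluster -HostName WEB1 -InterfaceName 'Ethernet' `
    -ClusterName 'intranet.contoso.local' `
    -ClusterPrimaryIP 10.10.10.42 -SubnetMask 255.255.255.0 `
    -OperationMode Multicast
```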

The next screen of the NLB wizard allows you to configure port rules. By default, there is a single rule that tells NLB to load balance any traffic coming in on any port, but you can change this if you want. I don’t see a lot of people in the field specifying rules here to distribute specific ports to specific destinations, but one neat feature on this screen is the ability to disable certain ranges of ports.

That function could be very useful if you want to block unnecessary traffic at the NLB layer. For example, the following screenshot shows a configuration that would block ports 81 and higher from being passed through the NLB mechanism:
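If you wanted to script that same idea, a rough sketch with the NLB cmdlets (run on a cluster host) might look like the following; the exact rule layout is an assumption, so double-check the result with Get-NlbClusterPortRule afterwards:

```powershell
# Shrink the default "all ports" rule so that only ports 0-80 are load balanced...
Get-NlbClusterPortRule | Set-NlbClusterPortRule -NewEndPort 80

# ...and explicitly disable (drop) everything from port 81 upward at the NLB layer.
Add-NlbClusterPortRule -StartPort 81 -EndPort 65535 -Protocol Both -Mode Disabled
```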

Finish that wizard, and you have now created an NLB cluster! However, at this point we have only specified information about the VIP, and about the WEB1 server. We have not established anything about WEB2. We are running an NLB array, but currently that array has just a single node inside of it, so traffic to the array is all landing on WEB1. Right-click on the new cluster and select Add Host To Cluster:

Input the name of our WEB2 server, click on Connect, and walk through the wizard in order to add the secondary NLB node of WEB2 into the cluster. Once both nodes are added to the cluster, our NLB array, or cluster, is online and ready to use. (See, I told you that the word cluster is used in a lot of places, even though this is not talking about a failover cluster at all!)
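The same join can be done from PowerShell on WEB1; the interface names below are again assumptions:

```powershell
# Add WEB2 as a second node in the existing cluster
Get-NlbCluster -HostName WEB1 -InterfaceName 'Ethernet' |
    Add-NlbClusterNode -NewNodeName WEB2 -NewNodeInterface 'Ethernet'

# Both nodes should eventually report a state of Converged
Get-NlbClusterNode
```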

If you take a look inside the NIC properties of our web servers, and click on the Advanced button inside the TCP/IPv4 properties, you can see that our new cluster IP address of 10.10.10.42 has been added to the NICs. Each NIC will now contain both the DIP address assigned to it, as well as the VIP address shared in the array:
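You can confirm the same thing from PowerShell on each node; the interface alias here is an assumption:

```powershell
# The NIC should now carry both the dedicated IP and the 10.10.10.42 cluster VIP
Get-NetIPAddress -InterfaceAlias 'Ethernet' -AddressFamily IPv4 |
    Select-Object IPAddress, PrefixLength
```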

The traffic that is destined for the 10.10.10.42 IP address is now starting to be split between the two nodes, but right now the websites that are running on the WEB1 and WEB2 servers are configured to only be running on the dedicated 10.10.10.40 and 10.10.10.41 IP addresses, so we need to make sure to adjust that next.

Configuring IIS and DNS

Just a quick step within IIS on each of our web servers should get the website responding on the appropriate IP address. Now that the NLB configuration has been established and we confirmed that the new 10.10.10.42 VIP address has been added to the NICs, we can use that IP address as a website binding. Open up the IIS management console, and expand the Sites folder so that you can see the properties of your website. Right-click on the site name, and choose Edit Bindings…:

Once inside Site Bindings, choose the binding that you want to manipulate, and click on the Edit… button. This intranet website is just a simple HTTP site, so I am going to choose my HTTP binding for this change. The binding is currently set to 10.10.10.40 on WEB1, and 10.10.10.41 on WEB2. This means that the website is only responding to traffic that comes in on these IP addresses. All I have to do is change that IP address drop-down menu to the new VIP, which is 10.10.10.42. After making this change (on both servers) and clicking on OK, the website is immediately responding to traffic coming in through the 10.10.10.42 IP address:
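If you would rather script the binding change, something like the following sketch works on each server; the site name and the old binding information are assumptions to be adjusted for your environment:

```powershell
Import-Module WebAdministration

# Re-point the existing HTTP binding from the dedicated IP to the 10.10.10.42 VIP.
# On WEB2, the -BindingInformation value would be '10.10.10.41:80:' instead.
Set-WebBinding -Name 'Default Web Site' `
    -BindingInformation '10.10.10.40:80:' `
    -PropertyName IPAddress -Value 10.10.10.42
```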

Now we come to the last piece of the puzzle: DNS. Remember, we want the users to have the ability to simply enter http://intranet into their web browsers in order to browse this new NLB website, so we need to configure a DNS host A record accordingly. That process is exactly the same as any other DNS host record; simply create one and point intranet.contoso.local to 10.10.10.42:
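On a DNS server that hosts the contoso.local zone, the same record can be created with a one-liner:

```powershell
# Create the host (A) record so that http://intranet resolves to the cluster VIP
Add-DnsServerResourceRecordA -ZoneName 'contoso.local' -Name 'intranet' -IPv4Address '10.10.10.42'
```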

Testing it out

Is NLB configured? Check.

Are the IIS bindings updated? Check.

Has the DNS record been created? Check.

We are ready to test this thing out. If I open up an internet browser on a client computer and browse to http://intranet, I can see the website:

But how can we determine that load balancing is really working? If I continue refreshing the page, or browse to http://intranet from another client, eventually the NLB mechanism will decide that a new request should be sent over to WEB2 instead of WEB1. When that happens, I am presented with this page instead:

As you can see, I modified the content between WEB1 and WEB2 so that I could distinguish between the different nodes, just for the purposes of this test. If this were a real production intranet website, I would want to make sure that the content of both sites was exactly the same, so that users were completely unaware of the NLB even happening. All they need to know is that the website is going to be available and working, all of the time.
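One simple way to watch the distribution happen is to request the page in a loop and look at which node's test content comes back. Keep in mind that with the default Single affinity, requests from the same client tend to stick to one node, so testing from a couple of different clients gives a clearer picture:

```powershell
# Request the intranet site repeatedly; because the test pages on WEB1 and WEB2
# intentionally differ, the response content shows which node answered.
1..10 | ForEach-Object {
    (Invoke-WebRequest -Uri 'http://intranet' -UseBasicParsing).Content
    Start-Sleep -Seconds 1
}
```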

Flushing the ARP cache

Earlier, we had a little discussion about how switches keep a cache of ARP information, which shortens the time it takes them to decide where packets should flow. When you assign a NIC an IP address, the MAC address of that NIC gets associated with the IP address inside the ARP table of certain pieces of networking equipment. Switches, routers, and firewalls commonly have what we refer to as an ARP table, and the set of data held in that table is known as the ARP cache.

When configuring NLB, particularly unicast, the NIC’s MAC address gets replaced with a new, virtual MAC address. Sometimes the switches and networking equipment are very quick to catch on to this change, they associate the new MAC address with the new IP address, and everything works just fine. However, I find that the following is generally true: the smarter and more expensive your networking equipment is, the dumber it gets when you configure NLB. What I mean is that the equipment might continue to hold onto the old MAC address information stored in its ARP table, and it never gets updated to reflect the new MAC addressing.

What does this look like in real life? Network traffic stops flowing to or from those NICs. Sometimes, as soon as NLB comes online, all network traffic will suddenly stop cold to or from those network interfaces. What do you need to do to fix this situation? Sometimes you can wait it out, and within a few minutes, hours, or even days the switches will drop the old ARP information and allow the new virtual MACs to register themselves in that table. What can you do to speed up this process? Flush the ARP cache.

The procedure for doing this will be different depending on what kind of networking equipment you are working on: whether it is a switch or a router, what brand it is, what model it is, and so on. But each of these devices should have this capability, and it should be named something along the lines of flushing the ARP cache. When you run this function on your equipment, it cleans out that ARP table, getting rid of the old information that is causing you problems and allowing the new MAC addresses to register themselves appropriately in the fresh table.
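Those vendor commands are outside the scope of this chapter, but if the stale entry happens to be on a Windows machine in the path (or on the client you are testing from), you can clear its local ARP cache with either of the following from an elevated prompt:

```powershell
# Clears the local ARP (neighbor) cache on a Windows machine.
# This only helps for that machine itself; switches and routers have their own
# vendor-specific commands for flushing their ARP tables.
netsh interface ip delete arpcache

# The classic alternative:
arp -d *
```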

I only wanted to point this out in the event that you configure NLB, only to see traffic flow cease on your server. More than likely, you are dealing with a stale ARP cache on one or more pieces of network equipment that are trying to shuttle traffic to and from your server.

