Windows Server 2019 – Software-defined networking

The flexibility and elasticity of cloud computing cannot be denied, and most technology executives are currently exploring their options for utilizing cloud technologies. One of the big stumbling blocks to adoption is trust. Cloud services provide enormous computing power, all immediately accessible at the press of a button. In order for a company to store its data on these systems, the level of trust that the organization has in that cloud provider must be very high. After all, you don’t own any of the hardware or networking infrastructure that your data is sitting on when it’s in the cloud, and so your control of those resources is limited at best. Seeing this hurdle, Microsoft has made many efforts in recent releases to bring cloud-like technology into the local data center. Introducing server elasticity into our data centers means virtualization. We have been virtualizing servers for many years now, though the capabilities there are continually being improved. Now that we can spin up new servers so easily through virtualization technologies, it makes sense that the next hurdle would be our ability to easily move those virtual servers around whenever and wherever we need to.

Do you have a server that you want to move into a data center across the country? Are you thinking of migrating an entire data center into a new colocation across town? Maybe you have recently acquired a new company and need to bring its infrastructure into your network, but have overlapping network configurations. Have you bought some space at a cloud service provider and are now trying to wade through the mess of planning the migration of all your servers into the cloud? These are all questions that need an answer, and that answer is Software-Defined Networking (SDN).

SDN is a broad, general term that umbrellas many technologies working together to make this idea possible. Its purpose is to extend your network boundaries whenever and wherever you need. Let’s take a look at some of the parts and pieces available in Windows Server 2019 that work in tandem to create a virtual networking environment, the first step in adopting our software-defined networking ideology.

Hyper-V Network Virtualization

The biggest component being focused on right now that brings the ability to pick up your networks and slide them around on a layer of virtualization lies within Hyper-V. This makes sense, because this is the same place you are touching and accessing to virtualize your servers. With Hyper-V Network Virtualization, we are creating a separation between the virtual networks and the physical networks. You no longer need to accommodate IP scheme limitations on the physical network when you set up new virtual networks, because the latter can ride on top of the physical network, even if the configurations of the two networks would normally be incompatible.

This concept is a little bit difficult to wrap your head around if this is the first time you are hearing about it, so let’s discuss some real-world situations that would benefit from this kind of separation.

Private clouds

Private clouds are steamrolling through data centers around the world, because they make a tremendous amount of sense. Anyone interested in bringing the big benefits of the cloud into their environment, while at the same time staying away from the negatives of the cloud, can benefit from building a private cloud. It gives you the ability to have dynamically expanding and shrinking compute resources, and the ability to host multiple tenants or divisions within the same compute infrastructure. It provides management interfaces directly to those divisions so that the nitty-gritty setup and configuration work can be done by the tenant, and you don’t have to expend time and resources at the infrastructure-provider level making small, detailed configurations.

Private clouds enable all of these capabilities while staying away from the big scare of your data being hosted in a cloud service-provider’s data center that you have no real control over, and all of the privacy concerns surrounding that.

In order to provide a private cloud inside your infrastructure, particularly one where you want to provide access to multiple tenants, the benefits of network virtualization become apparent, and even a requirement. Let’s say you provide computing resources to two divisions of a company, and they each have their own needs for hosting some web servers. No big deal, but these two divisions both have administrative teams who want to use IP schemes within the 10.0.0.0 range. They both need to be able to use the same IP addresses, on the same core network that you are providing, yet you need to keep all of their traffic completely segregated and separated. These requirements would have been impossible on a traditional physical network, but by employing the power of network virtualization, you can easily grant each division whatever IP subnets and address schemes it chooses. They can run servers on whatever subnets and IP addresses they like, and all of the traffic is encapsulated uniquely so that it remains separated, completely unaware of the other traffic running around on the same physical core network that runs beneath the virtualization layer. This scenario also plays out well with corporate acquisitions. Two companies that are joining forces at the IT level often have conflicts with domains and network subnetting. With network virtualization, you can allow the existing infrastructure and servers to continue running with their current network config, but bring them within the same physical network by employing Hyper-V Network Virtualization.
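
To make the overlapping-subnet idea concrete, here is a rough PowerShell sketch of what defining two isolated tenant networks through Network Controller (covered later in this chapter) can look like. The connection URI, tenant names, and resource IDs are placeholders for illustration, and the sketch is simplified from the full object model documented in Microsoft’s SDN reference:

# A minimal sketch: two tenants, both using 10.0.0.0/24, kept isolated
# by Hyper-V Network Virtualization. Values below are hypothetical.
$uri = "https://nc.contoso.local"    # Network Controller REST endpoint

foreach ($tenant in "DivisionA","DivisionB") {
    $subnet = New-Object Microsoft.Windows.NetworkController.VirtualSubnet
    $subnet.ResourceId = "$tenant-Subnet1"
    $subnet.Properties = New-Object Microsoft.Windows.NetworkController.VirtualSubnetProperties
    $subnet.Properties.AddressPrefix = "10.0.0.0/24"   # same prefix for both tenants

    $vnetProperties = New-Object Microsoft.Windows.NetworkController.VirtualNetworkProperties
    $vnetProperties.AddressSpace = New-Object Microsoft.Windows.NetworkController.AddressSpace
    $vnetProperties.AddressSpace.AddressPrefixes = @("10.0.0.0/24")
    $vnetProperties.Subnets = @($subnet)
    # A real deployment also sets $vnetProperties.LogicalNetwork to
    # reference the underlying provider (physical) network.

    # Each virtual network becomes its own isolated routing domain
    New-NetworkControllerVirtualNetwork -ConnectionUri $uri `
        -ResourceId "$tenant-VNet" -Properties $vnetProperties
}

Because each virtual network is its own routing domain, both tenants can hand out 10.0.0.x addresses without ever seeing each other’s traffic.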

Another simpler example is one where you simply want to move a server within a corporate network. Maybe you have a legacy line-of-business server that many employees still need access to, because their daily workload requires the LOB application to be available at all times. The problem with moving the server is that the LOB application on the client computers is configured with a static IPv4 address by which it communicates with the server. When a user opens their app, it does something such as talk to the server at 10.10.10.10. Traditionally, that could turn into a dealbreaker for moving the server, because moving it from its current data center into a new location would mean changing its IP address, which would break everyone’s ability to connect to it. With virtual networks, this is not an issue. With the ability to ride network traffic and IP subnets on the virtualization layer, that server can move from New York to San Diego and retain all of its IP address settings, because the physical network running underneath doesn’t matter at all. All of the traffic is encapsulated before it is sent over the physical network, so the IP address of the legacy server can remain at 10.10.10.10, and it can be picked up and moved anywhere in your environment without interruption.

Hybrid clouds

While adding flexibility to your corporate networks is already a huge benefit, the capabilities provided by virtualizing your networks expand exponentially when you finally decide to start delving into real cloud resources. If and when you make the decision to move some resources to be hosted by a public cloud service provider, you will likely run a hybrid cloud environment. This means that you will build some services in the cloud, but you will also retain some servers and services on-site. I foresee most companies staying in a hybrid cloud scenario for the rest of eternity, as a 100% move to the cloud is simply not possible given the ways that many of our companies do business. So, now that you want to set up a hybrid cloud, we are again looking at all kinds of headaches associated with the movement of resources between our physical and cloud networks. When I want to move a server from on-site into the cloud, I need to adjust everything so that the networking configuration is compatible with the cloud infrastructure, right? Won’t I have to reconfigure the NIC on my server to match the subnet that is running in my cloud network? Nope, not if you have your network virtualization infrastructure up and running. Once again, software-defined networking saves the day, giving us the ability to retain the existing IP address information on the servers that are moving, and simply run them with those IP addresses in the cloud. Again, since all of the traffic is encapsulated before being transported, the physical network being provided by the cloud does not have to be compatible with our virtual network, and this gives us the ability to seamlessly shuttle servers back and forth from on-premise to the cloud without having to make special accommodations for networking.

How does it work?

So far it all sounds like a little bit of magic; how does this actually work and what pieces need to fit together in order to make network virtualization a reality in our organization? Something this comprehensive surely has many moving parts, and cannot be turned on by simply flipping a switch. There are various technologies and components running within a network that has been enabled for network virtualization. Let’s do a little explaining here so that you have a better understanding of the technologies and terminology that you will be dealing with once you start your work with software-defined networking.

System Center Virtual Machine Manager

Microsoft System Center is a key piece of the puzzle for creating your software-defined networking model, particularly the Virtual Machine Manager (VMM) component of System Center. The ability to pick up IP addresses and move them to other locations around the world requires some coordination of your networking devices, and VMM is here to help. This is the component that you interface with as your central management point to define and configure your virtual networks. System Center is an enormous topic, with many options and data points that won’t fit in this book, so I will leave you with a link as a starting point on VMM learning: https://docs.microsoft.com/en-us/previous-versions/system-center/system-center-2012-R2/gg610610(v=sc.12).

Network controller

Microsoft’s Network Controller is a role that was first introduced in Windows Server 2016, and as the name implies, it is used to control network resources inside your organization. In most cases, it works side by side with VMM in order to make network configuration as centralized and seamless as possible. Network Controller is a standalone role and can be installed onto Server 2016 or 2019 and then accessed directly, without VMM, but I don’t foresee many deployments leaving it at that. Interfacing with Network Controller directly is possible by tapping into its APIs with PowerShell, but it is made even better by adding a graphical interface from which you can configure new networks, monitor existing networks and devices, or troubleshoot problems within the virtual networking model. That graphical interface is System Center VMM.
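
If you want to experiment with the role itself, deploying it is a multi-step process rather than a single checkbox. The sketch below shows the general flow on a single server, with placeholder names; production deployments typically cluster three or more nodes, and the full parameter set (certificates, REST endpoints, and so on) is in Microsoft’s documentation:

# Add the role to the server
Install-WindowsFeature -Name NetworkController -IncludeManagementTools

# Define this machine as a Network Controller node (names are placeholders)
$node = New-NetworkControllerNodeObject -Name "NC-Node1" `
    -Server "nc01.contoso.local" -FaultDomain "fd:/rack1/host1" `
    -RestInterface "Ethernet"

# Stand up the cluster first, then the Network Controller application itself
Install-NetworkControllerCluster -Node $node -ClusterAuthentication Kerberos
Install-NetworkController -Node $node -ClusterAuthentication Kerberos `
    -RestIpAddress "10.0.0.50/24"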

Network Controller can be used to configure many different aspects of your virtual and physical networks. You can configure IP subnets and addresses, configure Hyper-V switches and VLANs, and even configure the NICs on your VMs. Network Controller also allows you to create and manage Access Control List (ACL)-style rules within the Hyper-V switch so that you can build your own firewalling solution at this level, without needing to configure local firewalls on the VMs themselves or deploy dedicated firewall hardware. Network Controller can even be used to configure load balancing and provide VPN access through RRAS servers.
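
As an example of that ACL capability, here is a hedged sketch following the Network Controller PowerShell object model. It defines a single inbound rule allowing traffic from one subnet; the resource IDs, addresses, and connection URI are hypothetical:

# Build one inbound ACL rule (values are examples only)
$ruleProperties = New-Object Microsoft.Windows.NetworkController.AclRuleProperties
$ruleProperties.Protocol = "All"
$ruleProperties.SourcePortRange = "0-65535"
$ruleProperties.DestinationPortRange = "0-65535"
$ruleProperties.Action = "Allow"
$ruleProperties.SourceAddressPrefix = "192.168.0.0/24"
$ruleProperties.DestinationAddressPrefix = "*"
$ruleProperties.Priority = "100"
$ruleProperties.Type = "Inbound"

$aclRule = New-Object Microsoft.Windows.NetworkController.AclRule
$aclRule.ResourceId = "AllowSubnetInbound"
$aclRule.Properties = $ruleProperties

# Wrap the rule in an ACL and push it to Network Controller
$aclProperties = New-Object Microsoft.Windows.NetworkController.AccessControlListProperties
$aclProperties.AclRules = @($aclRule)
New-NetworkControllerAccessControlList -ConnectionUri "https://nc.contoso.local" `
    -ResourceId "Subnet1-ACL" -Properties $aclProperties

The resulting ACL can then be attached to a virtual subnet or to an individual VM NIC, which is what makes the Hyper-V switch behave like a distributed firewall.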

Generic Routing Encapsulation

Generic Routing Encapsulation (GRE) is just a tunneling protocol, but it is essential to making network virtualization happen successfully. Earlier, we talked about moving IP subnets around, and about how you can sit virtual networks on top of physical networks without regard for whether their IP configurations are compatible; all of that functionality is provided at the core by GRE. When your physical network is running 192.168.0.x but you want to host some VMs in that data center on a different subnet, you can create a virtual network of 10.10.10.x without a problem, but that traffic needs to be able to traverse the physical 192.168 network in order for anything to work. This is where routing encapsulation comes into play: all of the packets from the 10.10.10.x network are encapsulated before being transported across the physical 192.168.0.x network.

There are two different specific routing-encapsulation protocols that are supported in our Microsoft Hyper-V Network Virtualization environment. In previous versions of the Windows Server operating system, we could only use Network Virtualization Generic Routing Encapsulation (NVGRE), since this was the only protocol supported by the Windows flavor of network virtualization. However, there is another protocol, called Virtual Extensible Local Area Network (VXLAN), that has existed for quite some time, and many of the network switches (particularly Cisco switches) that you have in your environment are more likely to support VXLAN than NVGRE. So, for the network-virtualization platforms provided within Windows Server 2016 and later, we are now able to use either NVGRE or VXLAN, whichever best fits the needs of your company.
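
One practical consideration: encapsulation adds per-packet work, and many modern NICs can offload NVGRE/VXLAN processing to hardware. If you want a quick look at what your Hyper-V hosts support, the NetAdapter module can report it; the cmdlets below exist in-box, though the exact output fields vary by NIC driver:

# Check whether the physical NICs on this Hyper-V host support
# hardware offload for encapsulated (NVGRE/VXLAN) packets
Get-NetAdapterEncapsulatedPacketTaskOffload

# If supported but disabled, it can be switched on per adapter
# ("Ethernet 2" is a placeholder adapter name)
Enable-NetAdapterEncapsulatedPacketTaskOffload -Name "Ethernet 2"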

You don’t necessarily have to understand how these GRE protocols work in order to make them work for you, since they will be configured for you by the management tools that exist in the Hyper-V Network Virtualization stack. But it is important to understand, within the overall concept of this virtual networking environment, that GRE exists and that it is the secret to making all of this work.

Microsoft Azure Virtual Network

Once you have Hyper-V Network Virtualization running inside your corporate network and get comfortable with the mentality of separating the physical and virtual networks, you will more than likely want to explore the possibilities around interacting with cloud service-provider networks. When you utilize Microsoft Azure as your cloud service provider, you have the ability to build a hybrid cloud environment that bridges your on-premise physical networks with remote virtual networks hosted in Azure. Azure Virtual Network is the component within Azure that allows you to bring your own IP addresses and subnets into the cloud. You can get more info (and even sign up for a free trial of Azure Virtual Network) here: https://azure.microsoft.com/en-us/services/virtual-network/.
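
For a taste of what defining one of those networks looks like, here is a minimal sketch using the Az PowerShell module. The resource group, region, and address ranges are placeholders; notice that you choose the address space yourself, which is the bring-your-own-IP-addresses idea in action:

Connect-AzAccount

# Define a subnet and a virtual network with an address space we choose
$subnet = New-AzVirtualNetworkSubnetConfig -Name "Servers" `
    -AddressPrefix "10.10.10.0/24"

New-AzVirtualNetwork -Name "CorpVNet" -ResourceGroupName "CorpRG" `
    -Location "EastUS" -AddressPrefix "10.10.0.0/16" -Subnet $subnet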

Windows Server Gateway/SDN Gateway

When you are working with physical networks, virtual networks, and virtual networks that are stored in cloud environments, you need some component to bridge those gaps, enabling the networks to interact and communicate with each other. This is where a Windows Server Gateway (also called an SDN Gateway) comes into play. Windows Server Gateway is the newer term; it was previously and is sometimes still called the Hyper-V Network Virtualization Gateway, so you might see that lingo in some of the documentation. A Windows Server Gateway’s purpose is pretty simple: to be the connection between virtual and physical networks. These virtual networks can be hosted in your local environment, or in the cloud. In either case, when you want to connect networks, you will need to employ a Windows Server Gateway. When you are creating a bridge between on-premise and the cloud, your cloud service provider will utilize a gateway on their side, which you would tap into from the physical network via a VPN tunnel.

A Windows Server Gateway is generally a virtual machine, and is integrated with Hyper-V Network Virtualization. A single gateway can be used to route traffic for many different customers, tenants, or divisions. Even though these different customers have networks that need to remain separated from the traffic of the other customers, a cloud provider (public or private) can still utilize a single gateway to manage this traffic, because the gateways retain complete isolation between those traffic streams.
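
To give a sense of the plumbing, the Windows side of such a tunnel is typically built on RRAS. Below is a minimal sketch of standing up a site-to-site interface; the destination address, shared secret, and routed subnet are placeholders, and a real multi-tenant SDN gateway would be provisioned through Network Controller or VMM rather than configured by hand like this:

# Enable RRAS in site-to-site mode on the gateway server
Install-RemoteAccess -VpnType VpnS2S

# Define a tunnel to the remote gateway (values are examples only)
Add-VpnS2SInterface -Name "ToRemoteSite" -Destination "203.0.113.10" `
    -Protocol IKEv2 -AuthenticationMethod PSKOnly -SharedSecret "ExamplePSK" `
    -IPv4Subnet @("10.10.10.0/24:100")   # remote subnet, with a route metric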

The Windows Server Gateway functionality existed in Server 2016, but once it was put into practice, some performance limitations that restricted network traffic throughput were discovered. Those throughput limits have been raised dramatically in Windows Server 2019, meaning that you can flow more traffic and more tenants through a single gateway than was previously possible.

Virtual network encryption

Security teams are continually concerned with the encryption of data. Whether that data is at rest or on the move, making sure that it is properly secured and safe from tampering is essential. Prior to Server 2019, encrypting traffic while it moved within the network was generally the responsibility of the software application itself; it was not the network’s job. If your software has the ability to encrypt traffic while it is flowing between the client and server, or between the application server and the database server, great! If your application does not have native encryption capabilities, it is likely that the communications from that application are flowing in cleartext between the client and server. Even for applications that do encrypt, encryption ciphers and algorithms are sometimes cracked and compromised, and as new vulnerabilities are discovered in the future, hopefully the way that your application encrypts its traffic can be updated to support newer and better encryption methods.

Fortunately, Windows Server 2019 brings us a new capability within the boundaries of software-defined networking. This capability is called virtual network encryption, and it does just what the name implies. Entire subnets can be flagged for encryption, which means that all traffic moving between virtual machines and between Hyper-V servers within those subnets is automatically encrypted at the virtual networking level. The VM servers and the applications running on those servers don’t have to be configured or changed in any way to take advantage of this encryption, as it happens within the network itself, automatically encrypting all traffic that flows on that network.

With Server 2019 SDN, any subnet in a virtual network can be flagged for encryption by specifying a certificate to use for that encryption. If the future happens to bring the scenario where the current encryption standards are out of date or insecure, the SDN fabric can be updated to new encryption standards, and those subnets will continue to be encrypted using the new methods, once again without having to make changes to your VMs or applications. If you are using SDN and virtual networks in your environments, enabling encryption on those subnets is a no-brainer!
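
The rough flow for flagging a subnet is: store a certificate credential in Network Controller, then set the encryption properties on the virtual subnet. The sketch below follows the Network Controller object model but uses placeholder names and paths, and the exact property names should be confirmed against the Microsoft SDN docs (which also cover distributing the certificate to your Hyper-V hosts):

$uri = "https://nc.contoso.local"

# Store the encryption certificate as a Network Controller credential
$certData = Get-Content "C:\Certs\subnet-encryption.cer" -Encoding Byte
$credProperties = New-Object Microsoft.Windows.NetworkController.CredentialProperties
$credProperties.Type = "X509Certificate"
$credProperties.Value = [System.Convert]::ToBase64String($certData)
$cred = New-NetworkControllerCredential -ConnectionUri $uri `
    -ResourceId "EncryptionCert" -Properties $credProperties

# Flag the first subnet of an existing virtual network for encryption
$vnet = Get-NetworkControllerVirtualNetwork -ConnectionUri $uri -ResourceId "Tenant1-VNet"
$vnet.Properties.Subnets[0].Properties.EncryptionEnabled = $true
$vnet.Properties.Subnets[0].Properties.EncryptionCredential = $cred

# Writing the object back with the same ResourceId updates it in place
New-NetworkControllerVirtualNetwork -ConnectionUri $uri `
    -ResourceId "Tenant1-VNet" -Properties $vnet.Properties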

Bridging the gap to Azure

Most companies that host servers in Microsoft Azure still have physical, on-premise networks, and one of the big questions that always needs to be answered is: how are we going to connect our physical data center to our Azure data center? Usually, companies will establish one of two different methods to make this happen. You can deploy gateway servers on the edges of both your on-site and Azure networks and connect them using a Site-to-Site VPN, which establishes a continuous tunnel between the two networks. Alternatively, Microsoft provides a service called Azure ExpressRoute that effectively accomplishes the same thing: it creates a permanent connection between your physical network and your Azure virtual networks. Either of these methods works great once configured, but they might be considered overkill by small organizations that only have a few on-premise servers that need to be connected to the Azure cloud.
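
For reference, a heavily trimmed Az PowerShell sketch of the Site-to-Site option is below. Every name and address is a placeholder, and it assumes the virtual network from the earlier example already contains the special GatewaySubnet that Azure requires:

# Public IP and VPN gateway on the Azure side (names/addresses are examples)
$vnet = Get-AzVirtualNetwork -Name "CorpVNet" -ResourceGroupName "CorpRG"
$gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$gwIp = New-AzPublicIpAddress -Name "GwPip" -ResourceGroupName "CorpRG" `
    -Location "EastUS" -AllocationMethod Dynamic
$gwIpConfig = New-AzVirtualNetworkGatewayIpConfig -Name "gwIpConfig" `
    -SubnetId $gwSubnet.Id -PublicIpAddressId $gwIp.Id
$azureGw = New-AzVirtualNetworkGateway -Name "AzureGW" -ResourceGroupName "CorpRG" `
    -Location "EastUS" -IpConfigurations $gwIpConfig -GatewayType Vpn `
    -VpnType RouteBased -GatewaySku VpnGw1

# Represent the on-premise edge, then tie the two together with IPsec
$onPrem = New-AzLocalNetworkGateway -Name "OnPremGW" -ResourceGroupName "CorpRG" `
    -Location "EastUS" -GatewayIpAddress "203.0.113.5" -AddressPrefix "10.10.0.0/16"
New-AzVirtualNetworkGatewayConnection -Name "SiteToAzure" -ResourceGroupName "CorpRG" `
    -Location "EastUS" -VirtualNetworkGateway1 $azureGw -LocalNetworkGateway2 $onPrem `
    -ConnectionType IPsec -SharedKey "ExamplePSK"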
