Windows Server 2019 – Storage Spaces Direct (S2D)

S2D is a clustering technology, but I list it here separately from general failover clustering because S2D is a core component of the software-defined data center (SDDC). It has received so much improvement focus over the past few years that it really belongs in a category of its own.

In a nutshell, S2D is a way to build an extremely efficient and redundant, centralized, network-based storage platform entirely from Windows Servers. While it serves the same general purpose (file storage) as a traditional NAS or SAN device, S2D takes an entirely different approach: it requires no specialized hardware and no special cables or connectivity between the nodes of the S2D cluster.

To build S2D, all you need is Windows Servers; the faster the better, but they can be normal, everyday servers. These servers must be connected through networking, but there are no special requirements here; they simply all connect to a network, just like any other server in your environment. Once these servers are running, you can use clustering technologies or the new Windows Admin Center (WAC) to bind them together into S2D arrays.
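The binding process described above can be sketched in PowerShell using the standard failover clustering cmdlets. This is a minimal outline only; the node and cluster names are placeholders, and a production deployment involves more validation and networking work than shown here:

```powershell
# Validate the prospective nodes, including the S2D-specific tests
# (server names below are placeholders for your own machines)
Test-Cluster -Node "S2D-Node1","S2D-Node2","S2D-Node3" `
    -Include "Storage Spaces Direct","Inventory","Network","System Configuration"

# Create the cluster without claiming any shared storage...
New-Cluster -Name "S2D-Cluster" -Node "S2D-Node1","S2D-Node2","S2D-Node3" -NoStorage

# ...then enable Storage Spaces Direct, which claims the eligible
# local drives in each node and builds the storage pool
Enable-ClusterStorageSpacesDirect
```

The `-NoStorage` switch matters: it stops cluster creation from grabbing the local disks as traditional cluster disks, leaving them free for S2D to claim when it is enabled.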

S2D is part of the overall Hyper-Converged Infrastructure (HCI) story, and is a wonderful way to provide extremely fast and protected storage for anything, but especially for workloads such as clusters of Hyper-V servers. As you already know, when building a Hyper-V server cluster, the nodes of that cluster must have access to shared storage upon which the virtual machine hard disk files will reside. S2D is the best way to provide that centralized storage.

S2D will take the hard drives inside your S2D cluster node servers, and combine all of their space together into software-defined pools of storage. These storage pools are configured with caching capabilities, and even built-in fault tolerance. You obviously wouldn't want a single S2D node, or even a single hard drive going offline, to cause a hiccup in your S2D solution, and of course Microsoft doesn't want that to happen either. So when you group servers and all of their hard drives together into these large pools of S2D storage, they are automatically configured with parity among those drives, so that a particular component going offline does not result in lost data, or even slow the system down.
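Once S2D has built the pool, you carve usable volumes out of it, choosing the resiliency you want. A minimal sketch, assuming the default pool naming and a hypothetical volume name and size:

```powershell
# Carve a volume out of the auto-created S2D pool (its friendly name
# starts with "S2D"). Volume name and size are placeholders.
# CSVFS_ReFS is the file system Microsoft recommends for S2D volumes,
# and Mirror resiliency keeps extra copies of the data across drives
# and nodes so a failure does not lose data.
New-Volume -StoragePoolFriendlyName "S2D*" `
    -FriendlyName "Volume01" `
    -FileSystem CSVFS_ReFS `
    -ResiliencySettingName "Mirror" `
    -Size 1TB
```

The resulting volume appears as a Cluster Shared Volume on every node, which is exactly the shared storage a Hyper-V cluster needs.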

S2D is the best storage platform for both Scale-Out File Server (SOFS) and Hyper-V clusters.

While Server 2016-based S2D was configured mostly through PowerShell (which unfortunately means that a lot of administrators haven't tried it yet), Windows Server 2019 brings us the new WAC toolset, and WAC now includes built-in options for configuring an S2D environment.

S2D is one of those technologies that warrants its own book, but anyone looking to try out or get started with this amazing storage technology should start at https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-direct-overview.

New in Server 2019

For those of you already familiar with the concept of S2D who want to know what is new or different in the Server 2019 flavor, here are some of the improvements that have come with this latest version of the operating system:

  • Improved use of Resilient File System (ReFS) volumes: We now have deduplication and compression capabilities on ReFS volumes hosted by S2D.
  • USB witness: We already discussed this one briefly; when using a witness to oversee an S2D cluster of only two nodes, you can now utilize a USB key plugged into a piece of networking equipment, rather than running a third server for this witnessing purpose.
  • WAC: WAC now includes tools and functionality for defining and managing S2D clusters. This will make adoption much easier for folks who are not overly familiar with PowerShell.
  • Improved scale: We can now host four petabytes of storage per cluster.
  • Improved speed: While S2D has been fast since the very first version, Server 2019 brings some efficiency improvements. At last year's Ignite conference, Microsoft showcased an 8-node S2D cluster capable of achieving 13,000,000 IOPS. Holy moly!
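To tie the USB witness item back to PowerShell: under the hood it is configured as a file share witness pointing at the share that the USB device exposes on your router or switch. A sketch, with a placeholder path and assuming your networking gear supports this feature:

```powershell
# Point the two-node cluster's quorum witness at the file share
# backed by the USB key on the networking device. The UNC path is
# a placeholder; you will be prompted for the share's credentials.
Set-ClusterQuorum -FileShareWitness "\\router\witness" `
    -Credential (Get-Credential)
```

The `-Credential` parameter is the Server 2019 addition that makes this scenario work, since the networking device is not a domain member.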
