Ubuntu Server 18.04 – Setting up a virtual machine server

I’m sure many of you have already used a virtualization solution before. In fact, I bet a great many readers are following along with this tutorial while using a virtual machine running in a solution such as VirtualBox, Parallels, VMware, or one of the others. In this section, we’ll see how to use an Ubuntu Server in place of those solutions. While there’s certainly nothing wrong with solutions such as VirtualBox, Ubuntu has virtualization built right in, in the form of a dynamic duo consisting of the Kernel-based Virtual Machine (KVM) and the Quick Emulator (QEMU), which together form a virtualization suite that enables Ubuntu (and Linux in general) to run virtual machines without the need for a third-party solution. KVM is built right into the Linux kernel, and handles the low-level instructions needed to separate tasks between a host and a guest. QEMU takes that a step further and emulates system hardware devices. Many people do run VirtualBox on their Ubuntu Server, and that’s perfectly fine. But there’s something to be said for a native solution, and KVM offers a very fast interface to the Linux kernel that runs your virtual machines at near-native speeds, depending on your use case. QEMU/KVM (which I’ll refer to simply as KVM going forward) is about as native as you can get.

I bet you’re eager to get started, but there are a few quick things to consider before we dive in. First, of all the activities I’ve walked you through in this tutorial so far, setting up our own virtualization solution will be the most expensive from a hardware perspective. The more virtual machines you plan on running, the more resources your server will need to have available (especially RAM). Thankfully, most computers nowadays ship with 8 GB of RAM at a minimum, with 16 GB or more being fairly common. With most modern computers, you should be able to run virtual machines without too much of an impact. Depending on what kind of machine you’re using, CPU and RAM may present a bottleneck, especially when it comes to legacy hardware.
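
If you’re not sure what your intended host has to offer, two standard commands will quickly show you how much memory and how many CPU cores are available:

free -h
nproc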

For the purposes of this chapter, it’s recommended that you have a PC or server available with a processor that supports virtual machine extensions. Without these extensions, virtual machine performance will suffer, if you can get VMs to run at all. The majority of CPUs in modern computers offer this support, though some may not. To be sure, run the following command on the machine you intend to use as your KVM host to find out whether your CPU supports virtualization extensions. A result of 1 or more means that your CPU does support them; a result of 0 means it does not:

egrep -c '(vmx|svm)' /proc/cpuinfo 
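
Alternatively, Ubuntu provides a small package called cpu-checker that includes the kvm-ok command, which performs a similar check and also reports whether the extensions are enabled in your BIOS. If you’d like a second opinion, you can install and run it like so:

sudo apt install cpu-checker
kvm-ok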

Even if your CPU does support virtualization extensions, it’s usually a feature that’s disabled by default with most end-user PCs sold today. To enable these extensions, you may need to enter the BIOS setup screen for your computer and enable the option. Depending on your CPU and chipset, this option may be called VT-x or AMD-V. Some chipsets may simply call this feature virtualization support or something along those lines. Unfortunately, I won’t be able to walk you through how to enable the virtualization extensions for your hardware, since the instructions will differ from one machine to another. If in doubt, refer to the documentation for your hardware.

There are two ways in which you can use and interface with KVM. You can choose to set up virtualization on your desktop or laptop computer, replacing a solution such as VirtualBox. Alternatively, you can set up a server on your network, and manage the VMs running on it remotely. It’s really up to you, as neither solution will impact how you follow along with this chapter. Later on, I’ll show you how to connect to a local KVM instance as well as a remote one. It’s really simple to do, so feel free to set up KVM on whichever machine you prefer. If you have a spare server available, it will likely make a great KVM host. Not all of us have spare servers lying around though, so use what you have.

One final note: I’m sure many of you are using VirtualBox, as it seems to be a very popular solution for those testing out Linux distributions (and rightfully so, it’s great!). However, you can’t run both VirtualBox and KVM virtual machines on the same machine simultaneously. This probably goes without saying, but I wanted to mention it just in case you didn’t already know. You can certainly have both solutions installed on the same machine, you just can’t have a VirtualBox VM up and running, and then expect to start up a KVM virtual machine at the same time. The virtualization extensions of your CPU can only work with one solution at a time.

Another consideration to bear in mind is the amount of space the server has available, as virtual machines can take quite a bit of space. The default directory for KVM virtual machine images is /var/lib/libvirt/images. If your /var directory is part of the root filesystem, you may not have a lot of space to work with here. One trick is that you can mount an external storage volume to this directory, so you can store your virtual machine disk images on another volume. Or, you can simply create a symbolic link that will point this directory somewhere else. The choice is yours. If your root filesystem has at least 10 GB available, you should be able to create at least one virtual machine without needing to configure the storage. I think it’s a fair estimate to assume at least 10 GB of hard drive space per virtual machine.
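
To see how much room you actually have to work with, check the free space on the filesystem that holds /var. If you do decide to relocate the images later on (after the KVM packages from this section are installed and the directory exists), one possible approach is to move the directory to a larger volume and symlink it back; the /mnt/vm-storage path below is just a placeholder for wherever your extra storage is mounted:

df -h /var
# /mnt/vm-storage is a placeholder; substitute your own mount point
sudo mv /var/lib/libvirt/images /mnt/vm-storage/images
sudo ln -s /mnt/vm-storage/images /var/lib/libvirt/images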

We’ll also need to create a group named kvm as we’re going to allow members of this group to manage virtual machines:

sudo groupadd kvm 
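
You can verify that the group was created by querying the group database; the following command should echo back an entry for kvm:

getent group kvm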

Even though KVM is built into the Linux kernel, we’ll still need to install some packages in order to properly interface with it. These packages will require a decent number of dependencies, so it may take a few minutes for everything to install:

sudo apt install bridge-utils libvirt-bin qemu-kvm qemu-system 

You’ll now have an additional service running on your server, libvirtd. Once you’ve finished installing KVM’s packages, this service will be started and enabled for you. Feel free to take a look at it to see for yourself:

systemctl status libvirtd 

Let’s stop this service for now, as we have some additional configuration to do:

sudo systemctl stop libvirtd 

Next, make the root user and the kvm group the owner of the /var/lib/libvirt/images directory:

sudo chown root:kvm /var/lib/libvirt/images 

Let’s set the permissions of /var/lib/libvirt/images such that anyone in the kvm group will be able to modify its contents:

sudo chmod g+rw /var/lib/libvirt/images 
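
If you’d like to confirm both changes, a quick ls -ld on the directory should show root as the owner and kvm as the group, with write access for the group:

ls -ld /var/lib/libvirt/images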

The primary user account you use on the server should be a member of the kvm group. That way, you’ll be able to manage virtual machines without switching to root first. Make sure you log out and log in again after executing the next command, so the changes take effect:

sudo usermod -aG kvm <user> 
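
After you’ve logged back in, you can confirm that the change took effect by listing your group memberships; kvm should appear in the output:

groups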

At this point, we should be clear to start the libvirtd service:

sudo systemctl start libvirtd 

Next, check the status of the service to make sure that there are no errors:

sudo systemctl status libvirtd 

On your laptop or desktop (or the machine you’ll be managing KVM from), you’ll need a few additional packages:

sudo apt install ssh-askpass virt-manager 

The last package we installed with the previous command was virt-manager, which is a graphical utility for managing KVM virtual machines. It’s a Linux-only tool, so you won’t be able to install it on a Windows or macOS workstation. There is a way to manage VMs via the command line, which we’ll get to near the end of this chapter, but virt-manager is definitely recommended. If all else fails, you can install this utility inside a Linux VM running on your workstation.

We now have all the tools installed that we will need, so all that we need to do is configure the KVM server for our use. There are a few configuration files we’ll need to edit. The first is /etc/libvirt/libvirtd.conf. There are a number of changes you’ll need to make to this file, which I’ll outline below. First, you should make a backup copy of this file in case you make a mistake:

sudo cp /etc/libvirt/libvirtd.conf /etc/libvirt/libvirtd.conf.orig 

Next, look for the following line:

unix_sock_group = "libvirtd" 

Change the previous line to the following:

unix_sock_group = "kvm" 

Now, find this line:

unix_sock_ro_perms = "0777" 

Change it to this:

unix_sock_ro_perms = "0770" 
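
Before restarting the service, it doesn’t hurt to double-check both edits with a quick grep (this will also match any commented-out copies of these lines, so make sure your active entries read as shown above):

grep -E 'unix_sock_(group|ro_perms)' /etc/libvirt/libvirtd.conf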

Afterwards, restart libvirtd:

sudo systemctl restart libvirtd 
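
As a quick sanity check, try asking libvirt for its list of virtual machines as your regular user (no sudo); the virsh utility comes along with the libvirt packages we installed earlier. An empty list with no permission errors means the group and socket settings are working:

virsh --connect qemu:///system list --all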

Next, open virt-manager on your administration machine. It should be located in the Applications menu of your desktop environment, usually listed as Virtual Machine Manager under the System Tools section. If you have trouble finding it, simply run virt-manager at your shell prompt:

The virt-manager application

The virt-manager utility is especially useful as it allows us to manage both remote and local KVM servers. From one utility, you can create connections to any of your KVM servers, including an external server or localhost if you are running KVM on your laptop or desktop. To create a new connection, click on File and select Add Connection. A new screen will appear, where we can fill out the details for the KVM server we wish to connect to:

Adding a new connection to virt-manager

In the Add Connection window, you can simply leave the defaults if you’re connecting to localhost (meaning your local machine is where you installed the KVM packages). If you installed the KVM packages on a remote server, enter the details here. In the screenshot, you can see that I first checked the Connect to remote host box, then selected SSH as my connection Method, entered jay as my Username, and 172.16.250.130 as the IP address of the server I installed KVM on. Fill out the details specific to your KVM server to set up your connection. Keep in mind that for this to work, the username you enter here must be able to access the server via SSH and have permission to use the hypervisor (that is, be a member of the kvm group we added earlier), and the libvirtd unit must be running on the server. If all of these requirements are met, you’ll have a new connection to your KVM server when you click Connect. You might see a pop-up dialog box with the text Are you sure you wish to continue connecting (yes/no)? If you do, type yes and press Enter.

Either way, you should be prompted for your password to your KVM server; type that in and press Enter. You should now have a connection listed in your virt-manager application. You can see the connection I added in the following screenshot; it’s the second one on the list. The first connection is localhost, since I also have KVM running on my local laptop in addition to having it installed on a remote server:

virt-manager with a new connection added
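
Incidentally, the same remote connection can be tested from the command line with virsh, which understands the SSH transport as well. Here’s a quick sketch using the example username and IP address from my screenshots (substitute your own details):

virsh -c qemu+ssh://jay@172.16.250.130/system list --all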

We’re almost at a point where we’ll be able to test our KVM server. But first, we’ll need a storage group for ISO images, for use when installing operating systems on our virtual machines. When we create a virtual machine, we can attach an ISO image from our ISO storage group to our VM, which will allow it to install the operating system. To create this storage group, open virt-manager if it isn’t open already. Right-click on the listing for your server connection and then click on Details. You’ll see a new window that will show details regarding your KVM server. Click on the Storage tab:

The storage tab of the virt-manager application

At first, you’ll only see the default storage pool, which is backed by the /var/lib/libvirt/images directory we configured earlier. Now, we can add our ISO storage pool. Click on the plus symbol to create the new pool:

The first screen while setting up a new storage pool

In the Name field, type ISO. You can actually name it anything you want, but ISO makes sense, considering it will be storing ISO images. Leave the Type setting as dir: Filesystem Directory and click Forward to continue to the next screen:

The second screen while setting up a new storage pool

For the Target Path field, you can leave the default if you want to, which will create a new directory at /var/lib/libvirt/images/ISO. If this default path works for you, then you should be all set. Optionally, you can enter a different path here if you prefer to store your ISO images somewhere else. Just make sure the directory exists first. Also, we should update the permissions for this directory:

sudo chown root:kvm /var/lib/libvirt/images/ISO 
sudo chmod g+rw /var/lib/libvirt/images/ISO
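
If you chose a custom path, make sure the directory exists (a simple sudo mkdir -p will do) before finishing the wizard. And if you’d prefer to create the pool from the command line rather than through virt-manager, virsh can do the same job; here’s a rough sketch using the default path from above:

sudo virsh pool-define-as ISO dir --target /var/lib/libvirt/images/ISO
sudo virsh pool-build ISO
sudo virsh pool-start ISO
sudo virsh pool-autostart ISO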

Congratulations! You now have a fully configured KVM server for creating and managing virtual machines. Our server has a place to store virtual machines as well as ISO images. You should also be able to connect to this instance using virt-manager, as we’ve done in this section. Next, I’ll walk you through the process of setting up your first VM. Before we get to that, I recommend you copy some ISO images over to your KVM server. It doesn’t really matter which ISO image you use; any operating system should suffice. If in doubt, you can download the Ubuntu minimal ISO image from the following wiki article:

https://help.ubuntu.com/community/Installation/MinimalCD

The mini ISO file is a special version of Ubuntu that installs only a very small base set of packages and is the smallest download of Ubuntu available.

After you’ve chosen an ISO file and you’ve downloaded it, copy it over to your server via scp or rsync. Both of those utilities were covered in Chapter 8, Sharing and Transferring Files. Once the file has been copied over, move the file to your storage directory for ISO images. The default, again, is /var/lib/libvirt/images/ISO if you didn’t create a custom path.
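
As a concrete example, copying a downloaded image to your home directory on the KVM server and then moving it into place might look something like the following; the filename, username, and IP address are placeholders from my setup, so substitute your own:

scp mini.iso jay@172.16.250.130:
ssh -t jay@172.16.250.130 sudo mv mini.iso /var/lib/libvirt/images/ISO/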
