AWS – Attaching storage

Ideally, you will have defined all your storage requirements (disk size, IOPs, and so on) up-front as code using a service such as CloudFormation. However, sometimes, that isn’t possible due to application restrictions or changing requirements.

You can easily add additional storage to your instances while they are running by attaching a new volume.

Getting ready

For this recipe, you will need the following:

  • A running instance’s ID. It will start with i- followed by alphanumeric characters.
  • The AZ the instance is running in. This looks like the region name with a letter after it; for example, us-east-1a.

In this recipe, we are using an Amazon Linux 2 instance. If you are using a different operating system, the steps to mount the volume will be different. We will be running an instance in the us-east-1a AZ.
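If you know the instance ID but not the AZ, you can look it up with the AWS CLI. This is a minimal sketch; the instance ID is a placeholder:

      aws ec2 describe-instances \
        --instance-ids <your-instance-id> \
        --query 'Reservations[0].Instances[0].Placement.AvailabilityZone' \
        --output text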

You must have configured your AWS CLI tool with working credentials for this to work.
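A quick way to confirm that your credentials are working is to ask AWS who you are authenticated as:

      aws sts get-caller-identity

If this returns your account ID and ARN rather than an error, the CLI is configured correctly.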

How to do it…

Follow these steps to create an Elastic Block Store (EBS) volume and attach it to an instance. EBS is the AWS service that provides disk storage to EC2 instances. While some instance types come with local disk storage, in the vast majority of cases, you will be working with EBS. You can think of EBS as being similar to a Storage Area Network (SAN) or Network Attached Storage (NAS), but there are significant advantages to EBS over those technologies, which will be covered later:

  1. Create a volume:
      aws ec2 create-volume --availability-zone us-east-1a --size 8
Take note of the returned VolumeId in the response. It will start with vol- followed by alphanumeric characters.
  2. Attach the volume to the instance using the volume ID we noted in the previous step and the instance ID you started with (a way to verify the attachment is shown after these steps):
      aws ec2 attach-volume \
        --volume-id <your-volume-id> \
        --instance-id <your-instance-id> \
        --device /dev/sdf
  3. Run the lsblk command on the instance. You will see that the device name has been changed from /dev/sdf to /dev/xvdf:
      sh-4.2$ lsblk
      NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
      xvda    202:0    0   8G  0 disk
      `-xvda1 202:1    0   8G  0 part /
      xvdf    202:80   0   8G  0 disk
  4. On Amazon Linux 2, you can see that /dev/sd* is linked to /dev/xvd*:
      sh-4.2$ ls -l /dev/sd*
      lrwxrwxrwx 1 root root 4 Jun 17 20:12 /dev/sda -> xvda
      lrwxrwxrwx 1 root root 5 Jun 17 20:12 /dev/sda1 -> xvda1
      lrwxrwxrwx 1 root root 4 Jun 17 20:15 /dev/sdf -> xvdf

  5. Create a filesystem on the device with the mkfs command. Make sure that you use the correct identifier for the new, unformatted device, as you might corrupt an existing data drive if you get it wrong:
      sudo mkfs -t xfs /dev/xvdf
  6. Create a new directory and run the mount command on the instance to mount the volume device:
      sudo mkdir /mydata
      sudo mount /dev/xvdf /mydata
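As mentioned in step 2, you can verify that the attachment succeeded by querying the volume’s attachment state with the AWS CLI. This is a minimal sketch; the volume ID is a placeholder:

      aws ec2 describe-volumes \
        --volume-ids <your-volume-id> \
        --query 'Volumes[0].Attachments[0].State' \
        --output text

It should print attached once the operation has completed.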

After running those commands, the new EBS volume will be mounted to /mydata and will be available for use.
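Note that this mount will not persist across reboots. If you need it to, one common approach is to add an /etc/fstab entry keyed on the filesystem’s UUID. The following is a sketch only; look up the real UUID with blkid and substitute it for the placeholder:

      sudo blkid /dev/xvdf
      echo 'UUID=<your-filesystem-uuid>  /mydata  xfs  defaults,nofail  0 2' | sudo tee -a /etc/fstab

The nofail option stops the instance from hanging at boot if the volume is ever detached.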

How it works…

In this recipe, we started by creating a volume. Volumes are created from snapshots. If you don’t specify a snapshot ID, it uses a blank snapshot, and you get a blank volume.
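For example, to create a volume pre-populated with the contents of an existing snapshot, pass its ID to create-volume (the snapshot ID here is a placeholder):

      aws ec2 create-volume \
        --availability-zone us-east-1a \
        --snapshot-id <your-snapshot-id>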

While volumes are hosted redundantly, they are only hosted in a single AZ, so they must be provisioned in the same AZ the instance is running in. The data on a volume is stored in several places in the AZ to ensure a high level of durability, but they are only made available in a single AZ to ensure consistent low latency performance.

The create-volume command returns a response that includes the newly created volume’s VolumeId. We then use this ID in the next step.

It can sometimes take a few seconds for a volume to become available. If you are scripting these commands, use the aws ec2 wait command to wait for the volume to become available.
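For example, the following blocks until the volume reaches the available state (the volume ID is a placeholder):

      aws ec2 wait volume-available --volume-ids <your-volume-id>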

In step 2, we attached a volume to the instance. When attaching a volume, you must specify the device name under which it will be presented to the operating system. Unfortunately, this doesn’t guarantee what the device will appear as. In the case of Amazon Linux, /dev/sdf becomes /dev/xvdf.

Device naming is kernel-specific, so if you are using something other than Amazon Linux, the device name may be different. See http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/device_naming.html for full details.

See also

  • The Launching an instance recipe
  • The Working with network storage recipe in Chapter 3, AWS Storage and Content Delivery
