CentOS 7 – Installing and configuring OpenStack

After a brief explanation of cloud computing and OpenStack, we can now move on to OpenStack installation on a CentOS 7 Linux server. First of all, we are going to make a few basic environment configurations and then set it up.

For this installation, we will have our cloud infrastructure as follows:

  • The router/gateway server that provides Internet access to external websites, with the IP address 10.0.1.1
  • The cloud server that hosts OpenStack, with the IP address 10.0.1.2
  • The hosts that will be used for cloud computing, with the IP addresses 10.0.1.4, 10.0.1.5, and 10.0.1.6

To keep OpenStack well secured, the community has integrated many services that protect data access and user authentication by encrypting data in transit. For this, we need OpenSSL installed on our cloud server so that OpenStack can use it to run its services:


$ sudo yum install openssl

To have a clean installation without errors, we need to stop the firewall, if there is one, like this:


$ sudo systemctl stop firewalld.service
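
If we also want to prevent the firewall from coming back at the next boot, we can disable the service as well (optional, and assuming firewalld is the firewall in use):


$ sudo systemctl disable firewalld.service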

Then we need to make sure that the server is connected to the local network and has Internet access. To do so, we ping a machine on the local network and a reliable external web server (https://www.google.co.in/):


$ ping -c 5 10.0.1.1
PING 10.0.1.1 (10.0.1.1) 56(84) bytes of data.
64 bytes from 10.0.1.1: icmp_seq=1 ttl=255 time=1.21 ms
64 bytes from 10.0.1.1: icmp_seq=2 ttl=255 time=4.19 ms
64 bytes from 10.0.1.1: icmp_seq=3 ttl=255 time=4.32 ms
64 bytes from 10.0.1.1: icmp_seq=4 ttl=255 time=4.15 ms
64 bytes from 10.0.1.1: icmp_seq=5 ttl=255 time=4.01 ms
--- 10.0.1.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4007ms
rtt min/avg/max/mdev = 1.214/3.580/4.324/1.186 ms
$ ping -c 5 www.google.com

The result of the test should look like the following:

Then we need to add all the nodes involved (controller node, network node, compute node, object storage node, and block storage node) to the /etc/hosts file:


$ sudo nano /etc/hosts
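
The exact entries depend on the names we choose for our nodes. As a sketch, using the IP addresses of this example and hypothetical hostnames, the file could contain entries such as the following:


10.0.1.1    gateway
10.0.1.2    cloudserver
10.0.1.4    node1
10.0.1.5    node2
10.0.1.6    node3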

Next, to keep the nodes synchronized with each other, we need to set up a time server that provides a common time reference for all the servers. To do this, we will be using the NTP service. First, however, we need to install it:


$ sudo yum install ntp

Then we need to start it and make it run at system startup:


$ sudo systemctl enable ntpd.service
$ sudo systemctl start ntpd.service

To verify that the time synchronization is working, we use the following command:


$ sudo ntpq -c peers

To see the output of this command, have a look at the following:


$ sudo ntpq -c assoc

To see the output of this command, refer to the following:

We need to see sys.peer in the condition column on at least one line.

Note

We need to do the same for all the involved nodes.

Now, we put SELinux into permissive mode:


$ sudo nano /etc/selinux/config

Then consider this line:


SELINUX=enforcing

Change it to the following line:


SELINUX=permissive

Then we should reboot the system so that the change can take effect.
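
If we do not want to wait for the reboot, we can also switch SELinux to permissive mode immediately for the current session and check the result (the change in the config file above is still needed to make it permanent):


$ sudo setenforce 0
$ getenforce
Permissive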

After the system starts up, we can move on to the package source configuration. First, we need to make sure that our system packages are all updated:


$ sudo yum update -y

Then we install the EPEL repository:


$ sudo yum install epel-release

Next, we check whether the additional EPEL repository is enabled:


$ sudo nano /etc/yum.repos.d/epel.repo

We need to make sure that all the sections ([epel], [epel-debuginfo], and [epel-source]) are enabled:


enabled=1
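
For reference, a typical [epel] section of that file looks roughly like the following (the exact URLs can differ between EPEL releases):


[epel]
name=Extra Packages for Enterprise Linux 7 - $basearch
metalink=https://mirrors.fedoraproject.org/metalink?repo=epel-7&arch=$basearch
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7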

Then we install the yum-plugin-priorities package, which enables the assignment of relative priorities to repositories:


$ sudo yum install yum-plugin-priorities

Finally, we can set up the OpenStack repository:


$ sudo yum install https://repos.fedorapeople.org/repos/openstack/openstack-juno/rdo-release-juno-1.noarch.rpm

To make OpenStack automatically manage the security policies for its services, we need to install the openstack-selinux package:


$ sudo yum install openstack-selinux

Before installing the official OpenStack service packages, we will install some of the supporting tools that our cloud-computing platform needs. We start with the database server; for that, we install the Python MySQL library and the MariaDB server:


$ sudo yum install mariadb mariadb-server MySQL-python

After having MariaDB installed, we need to go ahead and configure it. First, we need to start the database server and add it to the system startup:


$ sudo systemctl enable mariadb.service
$ sudo systemctl start mariadb.service

By default, MariaDB is installed with no password for the root account. We need to change that on first use by performing a secure setup.
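
A common way to do this is MariaDB's interactive hardening script, which sets the root password and removes the anonymous users and the test database:


$ sudo mysql_secure_installation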

At this point, we have properly set up all the required tools and configurations, so we can start the OpenStack package installation. We can install each OpenStack component individually, or speed things up by installing and configuring all of them at the same time. To do so, we will be using the yum package manager:


$ sudo yum install -y openstack-packstack

For a single-node OpenStack deployment, we should use the following command to configure it:


$ sudo packstack --allinone

We should see a message that starts as follows, confirming that the installation was done correctly and that the configuration has started properly. This may take some time to finish.

The following screen appears if the configuration is done properly:

Once the configuration is done, two sets of authentication credentials are generated for the administrator. The first is for the Nagios server; the login and password appear on the screen, so we need to save them in order to change the password later. The second is for the OpenStack dashboard, and it is stored in a file in root's home directory, called keystonerc_admin.

The first of the two web interfaces should look like this as a confirmation that the node is running:

The second interface looks like what is shown in the following screenshot:

Now we can move on to the network-bridging configuration. We need to create a bridge interface:


$ sudo nano /etc/sysconfig/network-scripts/ifcfg-br-ex

After creating the file, we need to put the following code into it:


DEVICE=br-ex
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=10.0.1.2 # Old eth0 IP 
NETMASK=255.255.255.0 # the netmask
GATEWAY=10.0.1.1 # the gateway
DNS1=8.8.8.8 # the nameserver
ONBOOT=yes

Now we need to fix the eth0 configuration file, /etc/sysconfig/network-scripts/ifcfg-eth0, so that it looks like the following:


BOOTPROTO="none"
IPV4_FAILURE_FATAL="no"
IPV6INIT="yes"
IPV6_AUTOCONF="yes"
IPV6_DEFROUTE="yes"
IPV6_FAILURE_FATAL="no"
NAME="eth0"
UUID="XXXXXXXXXX"
ONBOOT="yes"
HWADDR="XXXXXXXXXXXXXX" # this is the Ethernet interface MAC address
IPV6_PEERDNS="yes"
IPV6_PEERROUTES="yes"
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-ex
ONBOOT=yes

Then we add the following lines to the [ovs] section of the Neutron plugin configuration file:


$ sudo nano /etc/neutron/plugin.ini
[ovs]
network_vlan_ranges = physnet1
bridge_mappings = physnet1:br-ex
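
If editing the file by hand feels error-prone, the same keys can also be set non-interactively. This is just an alternative sketch, assuming the openstack-utils package (which provides the openstack-config helper) is installed:


$ sudo openstack-config --set /etc/neutron/plugin.ini ovs network_vlan_ranges physnet1
$ sudo openstack-config --set /etc/neutron/plugin.ini ovs bridge_mappings physnet1:br-ex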

Next, we restart the network:


$ sudo systemctl restart network.service
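
To confirm that the br-ex bridge exists and that eth0 is attached to it as a port, we can inspect the Open vSwitch configuration (assuming the Open vSwitch tools installed with packstack are available):


$ sudo ovs-vsctl show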

The following part is optional; it shows in detail how to deploy the other nodes manually and how the interactive run proceeds, instead of relying on the automatic all-in-one setup.

If we want to deploy other nodes manually, we should be using packstack with the --install-hosts option and then put the other host IP address:


$ sudo packstack --install-hosts=10.0.1.4

If there are several hosts, we separate the IP addresses with commas (,):


$ sudo packstack --install-hosts=10.0.1.4,10.0.1.5,10.0.1.6

While this command is being executed, we will be asked to type the root password of each system individually so that packstack can connect to it, install OpenStack, and take control of it:


root@10.0.1.4's password:

We know that the installation is done when we see the following message:


**** Installation completed successfully ******

An answer file containing all the chosen configuration options is saved to the disk in the system from which we run packstack. This file can be used to automate future deployments:


* A new answerfile was created in: /root/packstack-answers-XXXXXXXX-XXXX.txt

A file containing the authentication details of the OpenStack admin user is saved to the disk in the system on which the OpenStack client tools were deployed. We will need these details to manage the OpenStack environment:


* To use the command line tools you need to source the file /root/keystonerc_admin created on 10.0.1.4
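
As a quick check of the deployment, we can source that file and run any OpenStack client command, for example (assuming the Nova client tools were installed on this node):


$ source /root/keystonerc_admin
$ nova list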

We can run packstack interactively to create both single-node and multiple-node OpenStack deployments:


$ sudo packstack

After running this command, we need to follow the list of steps to have the nodes deployed.

First, it will ask for the public key to be stored in the server to get automatic SSH access, so we need to have one generated already:


$ ssh-keygen -t rsa

Then we give its location, which is ~/.ssh/id_rsa.pub:


Enter the path to your ssh Public key to install on servers:

Next, we select the services that we need to deploy. We can choose whatever we need:


Should Packstack install Glance image service [y|n] [y] :
Should Packstack install Cinder volume service [y|n] [y] :
Should Packstack install Nova compute service [y|n] [y] :
Should Packstack install Horizon dashboard [y|n] [y] :
Should Packstack install Swift object storage [y|n] [y] :

Each selected service can be deployed on either a local or a remote system. Where each service is deployed will be determined based on the IP addresses that we provide later in the deployment process.

OpenStack includes a number of client tools. Enter y to install the client tools. A file containing the authentication values of the administrative user will also be created:


Should Packstack install OpenStack client tools [y|n] [y] :

Optionally, the packstack script will configure all servers in the deployment to retrieve date and time information using the Network Time Protocol (NTP). To use this facility, enter a comma-separated pool of NTP servers:


Enter a comma separated list of NTP server(s). Leave plain if Packstack should not install ntpd on instances.:

Optionally, the packstack script will install and configure Nagios to provide advanced facilities for monitoring the nodes in the OpenStack environment:


Should Packstack install Nagios to monitor openstack hosts [y|n] [n] : 

We now move on to the configuration of the MySQL instance. OpenStack services require a MySQL database to store their data. To configure the database, we go through the following steps.

We type the IP address of the server to deploy the MySQL database server on:


Enter the IP address of the MySQL server [10.0.1.1] :

Enter the password to be used for the MySQL administrative user. If we do not enter a value, it will be generated randomly. The generated password will be available in both the ~/.my.cnf file of the current user and the answer file:


Enter the password for the MySQL admin user :

OpenStack services use the Qpid messaging system to communicate. Enter the IP address of the server to deploy Qpid on:


Enter the IP address of the QPID service  [10.0.1.2] :

OpenStack uses keystone (openstack-keystone) for identity, token, catalog, and policy services. If the keystone installation has been selected, then enter the IP address of the server to deploy keystone on when prompted:


Enter the IP address of the Keystone server  [10.0.1.2] :

OpenStack uses glance (openstack-glance-*) to store, discover, and retrieve virtual machine images. If the glance installation has been selected, then enter the IP address of the server to deploy glance on when prompted:


Enter the IP address of the Glance server  [10.0.1.2] :

To provide volume storage services, OpenStack uses Cinder (openstack-cinder-*). Enter the IP address of the server to deploy Cinder on. If the installation of the volume services was selected, then these additional configuration prompts will be presented:


Enter the IP address of the Cinder server  [10.0.1.2] :

The packstack utility expects the storage for use with Cinder to be available in a volume group named cinder-volumes. If this volume group does not exist, then we will be asked whether we want it to be created automatically.

Answering yes means that packstack will create a raw disk image in /var/lib/cinder and mount it for use by Cinder using a loopback device:


Should Cinder's volumes group be created (for proof-of-concept installation)? [y|n] [y]:

If we chose to have packstack create the cinder-volumes volume group, then we will be prompted to enter its size in gigabytes (GB):


Enter Cinder's volume group size  [20G] :
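
If we would rather back the cinder-volumes group with a real disk instead of the loopback file, we can create the LVM volume group ourselves before running packstack. Here is a minimal sketch, assuming /dev/sdb is a spare disk available for this purpose:


$ sudo pvcreate /dev/sdb
$ sudo vgcreate cinder-volumes /dev/sdb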

OpenStack uses Nova to provide compute services. Nova is itself made up of a number of complementary services that must be deployed. If the installation of the compute services was selected, then these additional configuration prompts will be presented.

The Nova API service (openstack-nova-api) provides web service endpoints for authenticating and interacting with the OpenStack environment over HTTP or HTTPS. We type the IP address of the server to deploy the Nova API service on:


Enter the IP address of the Nova API service  [10.0.1.3] :

Nova includes a certificate management service (openstack-nova-cert). Enter the IP address of the server to deploy the Nova certificate management service on:


Enter the IP address of the Nova Cert service  [10.0.1.3] :

The Nova VNC proxy provides facilities to connect users of the Nova compute service to their instances running in the OpenStack cloud. Enter the IP address of the server to deploy the Nova VNC proxy on:


Enter the IP address of the Nova VNC proxy  [10.0.1.3] :

The packstack script is able to deploy one or more compute nodes. Enter a comma-separated list containing the IP addresses or hostnames of all the nodes that you wish to deploy compute services on:


Enter a comma separated list of IP addresses on which to install the Nova Compute services  [10.0.1.3] :

A private interface must be configured to provide DHCP services on the Nova compute nodes. Enter the name of the private interface to use:


Enter the Private interface for Flat DHCP on the Nova compute servers  [eth1] :

The Nova network service (openstack-nova-network) provides network services for compute instances. Enter the IP address of the server to deploy the Nova network service on:


Enter the IP address of the Nova Network service  [10.0.1.3] :

A public interface must be configured to allow connections from other nodes and clients. Enter the name of the public interface to use:


Enter the Public interface on the Nova network server  [eth0] :

A private interface must be configured to provide DHCP services on the Nova network server. Enter the name of the private interface to use:


Enter the Private interface for Flat DHCP on the Nova network server  [eth1] :

All compute instances are automatically assigned a private IP address. Enter the range within which these private IP addresses must be assigned:


Enter the IP Range for Flat DHCP [10.0.2.0/24] :

Compute instances can optionally be assigned publicly accessible floating IP addresses. Enter the range within which floating IP addresses will be assigned:


Enter the IP Range for Floating IP's [10.0.1.0/24] :

The Nova scheduler (openstack-nova-scheduler) is used to map compute requests to compute resources. Enter the IP address of the server on which you want to deploy the Nova scheduler:


Enter the IP address of the Nova Scheduler service  [10.0.1.4] :

In the default configuration, Nova allows overcommitment of physical CPU and memory resources. This means that more of these resources can be made available for running instances than actually physically exist on the compute node.

The amount of overcommitment that is permitted is configurable.

The default level of CPU overcommitment allows 16 virtual CPUs to be allocated for each physical CPU socket or core that exists on the physical compute node. Press Enter to accept the default level or enter a different value if desired:


Enter the CPU overcommitment ratio. Set to 1.0 to disable CPU overcommitment [16.0] : 
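
For example, with the default ratio of 16.0, a compute node that has 8 physical cores can have up to 128 virtual CPUs allocated across its instances.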

The default level of memory overcommitment allows up to 50% more virtual memory to be allocated than what exists on the physical compute node. Press Enter to accept the default or enter a different value if desired:


Enter the RAM overcommitment ratio. Set to 1.0 to disable RAM overcommitment [1.5] :
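
For example, with the default ratio of 1.5, a compute node with 64 GB of physical RAM can have up to 96 GB of memory allocated to its instances.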

If the installation of the client tools was selected, then enter the IP address of the server to install the client tools on when prompted:


Enter the IP address of the client server  [10.0.1.4] :

OpenStack uses Horizon (openstack-dashboard) to provide a web-based user interface, or dashboard, for access to OpenStack services, including Cinder, Nova, Swift, and Keystone. If the installation of the Horizon dashboard was selected, then these additional configuration values will be requested.

Enter the IP address of the server to deploy Horizon on:


Enter the IP address of the Horizon server  [10.0.1.4] :

To enable HTTPS communication with the dashboard, we enter y when prompted. Enabling this option ensures that user access to the dashboard is encrypted:


Would you like to set up Horizon communication over https [y|n] [n] : 

If we have already selected to install Swift object storage, then these additional configuration values will be requested.

Enter the IP address of the server that is to act as the Swift proxy. This server will act as the public link between clients and the Swift object storage:


Enter the IP address of the Swift proxy service  [10.0.1.2] :

Enter a comma-separated list of devices that the Swift object storage will use to store objects. Each entry must be specified in HOST/DEVICE format, where the Host is replaced by the IP address of the host the device is attached to, and Device is replaced by the appropriate path to the device:


Enter the Swift Storage servers e.g. host/dev,host/dev  [10.0.1.2] :
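
For instance, two storage devices attached to two of our hosts could be entered as 10.0.1.2/sdb,10.0.1.4/sdb (the device names here are only illustrative).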

The Swift object storage uses zones to ensure that each replica of a given object is stored separately. A zone might represent an individual disk drive or array, a server, all the servers in a rack, or even an entire data center.

When prompted, enter the number of Swift storage zones that must be defined. Note that the number provided must not be bigger than the number of individual devices specified, as follows:


Enter the number of swift storage zones, MUST be no bigger than the number of storage devices configured  [1] :

The Swift object storage relies on replication to maintain the state of objects, even in the event of a storage outage in one or more of the configured storage zones. Enter the number of replicas that Swift must keep of each object when prompted.

A minimum of three replicas is recommended to ensure a reasonable degree of fault tolerance in the object store. Note, however, that the number of replicas specified must not be greater than the number of storage zones, as it would result in one or more of the zones containing multiple replicas of the same object:


Enter the number of swift storage replicas, MUST be no bigger than the number of storage zones configured  [1] :

Currently, packstack supports the use of either Ext4 or XFS file systems for object storage. The default and recommended choice is ext4. Enter the desired value when prompted:


Enter FileSystem type for storage nodes [xfs|ext4]  [ext4] :

The packstack utility allows us to configure the target servers to retrieve software packages from a number of sources. We can leave this part blank to rely on the nodes’ default package sources:


Enter a comma-separated list of URLs to any additional yum repositories to install:

At this point, packstack displays all the deployment details we provided during the whole process and asks us to confirm them. After verifying that everything is set properly, we type yes at the following question and press Enter to continue with the deployment:


Proceed with the configuration listed above? (yes|no): yes

Now, packstack will commence deployment. Note that when packstack is setting up SSH keys, it will prompt us to enter the root password to connect to machines that are not already configured to use key authentication.

Applying the Puppet manifests to all machines involved in the deployment takes a significant amount of time. The packstack utility provides continuous updates, indicating which manifests are being deployed as it progresses through the deployment process. Once the process completes, a confirmation message will be displayed:


 **** Installation completed successfully ******
     (Please allow Installer a few moments to start up.....)
Additional information:
 * A new answerfile was created in: /root/packstack-answers-xxxxx-xxxxx.txt
 * Time synchronization was skipped. Please note that unsynchronized time on server instances might be a problem for some OpenStack components.
 * To use the command line tools source the file /root/keystonerc_admin created on 10.0.1.2
 * To use the console, browse to http://10.0.1.2/dashboard
 * The installation log file is available at: /var/tmp/packstack/xxxx-xxxx-TkY04B/openstack-setup.log
You have mail in /var/spool/mail/root

You have successfully deployed OpenStack using packstack.

The configuration details that we provided are also recorded in an answer file, which can be used to recreate the deployment in future. This answer file is stored in ~/answers.txt by default.
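
To replay a deployment from a saved answer file, we can point packstack at it. A sketch, assuming the file was kept at the default path mentioned above:


$ sudo packstack --answer-file=/root/answers.txt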

With this step, we have installed and configured OpenStack as a cloud-computing solution for a small infrastructure of CentOS 7 Linux servers.

The OpenStack dashboard is the cleanest way to visualize useful information about the status of the cloud infrastructure. It is extremely useful for system administrators who need to maintain the infrastructure and troubleshoot any issues. Here are some screenshots that show some of the dashboard overview pages:

Source: http://dachary.org/?p=2969

The following page presents the list of running machines (nodes) with some useful information about them, and also gives us options to manage them.

Source: http://assist-software.net

Then there is the network page, which shows the topology of the network connecting the cloud nodes.

Source: http://4.bp.blogspot.com

There is also another Nova API dashboard with a more polished interface intended for presentations, along with a large dashboard screen used specifically for monitoring big grid-computing infrastructures. The first dashboard screen shows information about the APIs in use:

Source: http://openstack-in-production.blogspot.com

The second dashboard screen shows the execution history of those APIs as a well-presented log:

Source: http://openstack-in-production.blogspot.com
