
Hosting Web Sites on Nginx


I started it as an experiment to boost the delivery of static content (my reference examples of that time were things like thttpd). But as soon as other people tried it in production, they immediately requested the proxy component, and the whole “web acceleration” direction had started. In short, NGINX evolved from a simple experiment with the idea of solving the C10k problem, to a complete solution for proxying, load balancing, SSL, static content acceleration, and a few unique capabilities.

—Igor Sysoev,
http://bit.ly/nginx_interview

The excerpt above from an interview implies that Nginx never really started as a server for dynamic languages like PHP. It evolved from a static-only server into a web accelerator, and so on. In short, Nginx is used for the strengths it brings to serving static files, and it uses its proxy capabilities to hand requests off to back-end servers or processes that handle the dynamic ones. This gives you the best of both worlds. In this chapter, the focus is on serving static content only. For brevity, only CentOS servers, running as virtual machines in VirtualBox, will be used.

Every website is different, not only from the content perspective, but also from the technology perspective. Primarily, you can categorize the applications as static or dynamic. It actually makes a lot of sense to host multiple websites on the same server if the server can handle it.

The static sites contain a lot of resources like images, stylesheets, JavaScript files, HTML, text, PDFs, and so on. The basic nature of this content is that it is created once and served many times to visitors. If you have to change the content, you edit the file appropriately and update the server so that the new content gets served to the audience.

The dynamic sites, on the other hand, have scripts and programming languages working at the back end, emitting pages that your browser can understand and render directly. The key difference is that the page you view is never really saved on the server’s disk. It is also possible that what you see on a page is completely different from what others see (for example, Facebook). These websites are very flexible in nature. However, keep in mind that even the most dynamic websites still use a lot of resources that are static in nature. Nginx is not a programming language or framework that allows you to create dynamic pages. But it does front-end dynamic applications with grace, serving static pages, scripts, style sheets, images, and other static content, while offloading dynamic content generation to the back-end servers.

You don’t always need to spin up multiple servers in order to serve multiple websites. That would be a huge waste of server resources, especially if the websites are not attracting a lot of hits. It is often a good idea to host multiple websites on one server and scale up or scale out as needed.

Scale Up vs. Scale Out

Scale Up: If you have a web server that attracts a lot of hits, adding more CPU or RAM might help, depending on the workload. An activity where you add more resources to the existing server is called scaling up.

Scale Out: Scaling up has its limitations, since you can only scale up as much as your hardware allows. Scaling out is the activity of adding more servers in order to keep up with the traffic. Most popular websites run on scaled-out infrastructure.

Server blocks in Nginx help you map website content and ensure that each domain points only to its own content. You can host multiple websites on the same server and differentiate them using server blocks. If you are coming from an Apache background, a server block is similar to a virtual host.
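As a quick preview, hosting two sites on one server comes down to two server blocks distinguished by their server_name (a sketch using the placeholder domains developed later in this chapter; the root paths are illustrative):

```nginx
# Two websites on one Nginx server, selected by the request's Host header.
server {
    listen      80;
    server_name site1.com;
    root        /usr/share/nginx/html/site1;
}
server {
    listen      80;
    server_name site2.com;
    root        /usr/share/nginx/html/site2;
}
```

Nginx picks the block whose server_name matches the host header of the incoming request.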

Web Server Setup

It is important that you practice as we go along. In this chapter, you will need to start afresh with two servers. You can use VirtualBox to create the CentOS servers. Before you create them, read through the article at http://attosol.com/centos-setup-and-networking-using-virtual-box. It will guide you step by step through the installation. Remember, you will need to set up the servers using different variables, as discussed below.

We will call our servers WFE1 and WFE2. In Chapter 8, you will learn about load balancing these servers. For now, creating two servers with the hostnames wfe1.localdomain and wfe2.localdomain should suffice.

Once your servers are provisioned, execute ip addr on both servers and you will notice that the output is exactly the same (similar to Figure 6-1). In simple words, this implies that they are in their own isolated networks and will not be able to ping each other. Let’s change this so that the two servers have different IPs.

Figure 6-1. Output of ip addr

VirtualBox creates the VMs in such a way that they are not interconnected by default, for security reasons. You can change this through the VirtualBox preferences. The idea is to simply create a NAT network called CentOSFarm (see Figure 6-2).

Figure 6-2. Creating a NAT network

Now that the NAT network is created, you need to change the server settings for both WFE1 and WFE2 as shown in Figure 6-3.

Figure 6-3. Changing NAT network settings for WFE1 and WFE2

Once you are done making these changes, you can run ip addr again on both terminals and you will find that they now have different IPs. If you try to use the ping command, you will find that both these servers are now connected and able to ping each other. Your lab setup, when complete, should have values as shown in Table 6-1. It is possible that you get a different set of IPs while creating your virtual machines. Write them down, so that you can make the necessary changes in the upcoming sections based on your own IPs.

Table 6-1. Server Naming Convention

Server Name    IP Address    HostName
WFE1           10.0.2.6      wfe1.localdomain
WFE2           10.0.2.7      wfe2.localdomain

Connecting Host and Guest Servers

So far you have your servers talking to each other over a NAT network. It is helpful to be able to connect to a server’s terminal from the host machine so that you can copy files over and perform management activities directly from your host. To make this possible, you will need to set up port forwarding so that VirtualBox forwards your requests to the guest servers.

You can click the port-forwarding button (shown in Figure 6-2) and configure the rules as per Figure 6-4. Notice that rules are created for both WFE1 and WFE2, and for both HTTP and SSH, so that you can use a terminal (on Mac/Linux) or PuTTY (on Windows) to log in using SSH. Also notice the IPs; they are the same as in Table 6-1.

Figure 6-4. Setting up port-forwarding rules

Once the rules are set up, you can connect to WFE1 and WFE2 using the following commands on OSX and Linux (for Windows, you can use PuTTY):

#ssh -p 3026 root@127.0.0.1
#ssh -p 3027 root@127.0.0.1

Let’s do a basic Nginx install now on both these servers. Execute the following commands on both servers sequentially (you have already learned what they do in chapter 2):

vi /etc/yum.repos.d/nginx.repo

Add the following text to the file:

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1

Save and exit. Run the following commands on both servers to install and start Nginx:

yum install -y nginx
nginx

At this point, if you try browsing from your host machine using the following URIs (the port-forwarding rules for HTTP were set in Figure 6-4), you should be able to view the pages hosted on your servers (Figure 6-5):

http://127.0.0.1:8006
http://127.0.0.1:8007

Figure 6-5. Browsing an Nginx website using port forwarding from a host machine

User Creation

So far, you have installed Nginx and connected the two servers using a NAT network. You have also been able to use secure shell (ssh) to connect to the servers using ssh -p 3026 root@127.0.0.1. However, there is a problem.

Connecting to the server using the root account is not considered a secure practice. Besides, in today’s world where everything is moving to the cloud, you often don’t have root access to begin with. If you are using an AWS EC2 instance or Azure to host your virtual servers, an account is provisioned for you automatically, and you use that account to access your servers. Since you are working locally at the moment on virtual machines hosted in VirtualBox, you have full liberty to play with all the different accounts. You will follow good practices nevertheless.

Putting that small detour aside, let’s start by creating normal users. Use the commands below (on both WFE1 and WFE2) to create a user and assign a password:

#useradd user1
#passwd user1

You can log out from the root prompt using the logout command and log back in with the following command:

ssh -p 3026 user1@127.0.0.1

At the prompt, type pwd and you should see /home/user1.

Sample Applications

Now it’s time to upload the website content to the web server. Instead of creating sample applications from scratch, you can visit https://github.com/attosol/nginx and download the zipped version of the repository. This repository was made solely for the purpose of this book and contains various samples curated from the open source community.

Once downloaded, extract the zip file and navigate to the folder called static. It contains two subfolders called site1 and site2. Both of them contain different website samples created using static content only. In this chapter, you will deal only with the static content.

Uploading Content

You can use a copy command (such as scp) or an FTP client on your host machine to make the data transfer easy. One popular tool is FileZilla; you can download it from https://filezilla-project.org/. It is open source and extremely powerful. You can use Site Manager in FileZilla to save your connection details so that uploading content is easy. Figure 6-6 shows the details that need to be filled in to connect to the virtual server.

Figure 6-6. Using Site Manager to create connections for frequent use

Notice the use of port and protocol. Also notice that you can add multiple entries to store all your connections at once. Once you connect, you will land inside /home/user1 by default.

Let’s assume that you are working on some server that you have not provisioned yourself. It may be a bit tricky at first to figure out the default web path and configuration file location for your server. In the Linux world, there is a wide variety of distros available, and the default locations vary a lot. When stuck, you can use the following method to find the default configuration path of any Nginx server:

Step 1: Execute nginx -V and take a look at the --conf-path:

nginx version: nginx/1.8.1
built by gcc 4.8.3 20140911 (Red Hat 4.8.3-9) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_stati

Step 2: Open the config file (/etc/nginx/nginx.conf) and locate the server block. By default, the root configuration file does not contain a server block. Instead, it is structured like the following:

user  nginx;                                                                      
...
http {
    ...
    include /etc/nginx/conf.d/*.conf;
}

Step 3: Open the default.conf file inside /etc/nginx/conf.d and locate the root directive. That location is the default root for the web server.

root   /usr/share/nginx/html;

Once the root is determined, you will want to start uploading the content. Well, there is another issue that you need to fix before you can do that. Recall that you are not using the root account any more. The user1 account that you added doesn’t have write access, and FileZilla will not be able to upload the content directly to the server. You can fix this in multiple ways:

  • You may be tempted to use chmod 777 /usr/share/nginx/html. NEVER do that, period! Most people who use chmod 777 on a web server don’t realize what they are doing. It opens up your web server for full access by anyone, and you don’t want everyone in the world to come over and mess around with your servers. If you have already done that by mistake, use chmod 755 /usr/share/nginx/html to fix the permissions.

  • You can change the ownership of the root folder so that the allowed users can upload the file. Assuming user1 is one of the allowed users, you can use the following command to allow access to user1:

    chown user1 /usr/share/nginx/html
  • The previous approach doesn’t scale well if you have multiple members uploading to the same directory (which is often the case). To fix that, you can create a group and add users to that group instead.

    • Create a new group called www by using this command:

    groupadd www
    • Modify the user1 information such that it belongs to this group www.

    usermod -a -G www user1
    • Make this group the owner (similar to the chown command used for a user) of the root path /usr/share/nginx/html. The -R switch ensures that group ownership is applied recursively.

    chgrp -R www /usr/share/nginx/html
    • Now, grant write permission to this group on the root directory.

    chmod -R g+w /usr/share/nginx/html

Almost done! This group exercise may appear a bit cumbersome, but it will keep your web server in good shape from a security perspective. You should now be ready to upload the folder to the root path. Upload the content that you downloaded earlier so that the structure looks like Figure 6-7. Please note that none of the file or directory names have been modified.

Figure 6-7. Uploading files

You can see the website named Shield Theme hosted inside a subdirectory under site1, whereas site2 contains another website called Landy along with its dependencies.

Hosting Websites

Your servers are now up and ready to host the websites. But there are still issues that you should be aware of. If you browse to http://localhost:8006/site2/index.html, you will find the website being rendered as Figure 6-8 depicts.

Figure 6-8.
Website rendering

as a relative path

As you can see, the URI is still localhost, and it is the path /site2/index.html that is being rendered in the browser. Even though the site renders, it is not an individual website. An isolated website should be such that when you type http://localhost:8006 as your URI, you should see this page, as in Figure 6-8. Not only that, but the content of site2 should not be reachable from the root site at all. At the moment, the root site is the only website Nginx knows about, and the configuration needs to be fixed.

As you have already learned, the default configuration file of Nginx (/etc/nginx/nginx.conf) contains an include directive (include /etc/nginx/conf.d/*.conf;) at the end, which ensures that every .conf file inside the conf.d directory gets loaded as part of the Nginx configuration.

It is a good idea to rename the default.conf file as a template (say, default.template) and base other websites’ configurations on it. The renaming ensures that the template doesn’t get included from nginx.conf via the include directive, since it no longer ends in .conf. To rename, use the following command:

# mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.template

Follow the renaming with a configuration reload (use nginx -s reload) and refresh your browser (http://localhost:8006/). The page will no longer load, and that is expected.

From here on in this book, wherever you read reload configuration, it means executing the command nginx -s reload. Also, most commands here work without sudo, but a few need it. If a command does not work without sudo, try it again with sudo.

It is a good practice to keep the name of the configuration file similar to your domain name. In this chapter, you will make two websites hosted at site1.com and site2.com. These websites should also render if someone uses www.site1.com or www.site2.com. Start by making a copy of the template using the following command:

# cp /etc/nginx/conf.d/default.template /etc/nginx/conf.d/site1.conf

After that, edit the site1.conf file so that it looks like the following:

server {
    listen       80;
    server_name  localhost;

    root  /usr/share/nginx/html/site1/Shield\ Theme;

    location / {
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

Reload configuration and browse http://localhost:8006. Contrary to what you might have guessed, it throws a 404 error. What went wrong?

Troubleshooting Tips

Try listing the directory and sure enough, it is there:

#ls /usr/share/nginx/html/site1/Shield\ Theme
assets  index.html

Instead of guessing around, you can take a few approaches to troubleshoot such issues without wasting time.

Approach 1: The first one is accessing the tail of the access logs like so:

#tail /var/log/nginx/access.log
/usr/share/nginx/html/site1/Shield\x5C Theme - / - GET / HTTP/1.1 - 404 - 570 -

Approach 2: Sometimes a plain status code doesn’t help much. In that case, you can use a command-line utility called strace. If it is not available on the CentOS version you are using, install it using yum.

#sudo yum install -y strace
  1. strace needs the process ID (PID) of the process you want to hook into. Use the following command to get the PID of the worker process.

    # ps -aux | grep nginx
    root     28524  0.0  0.2  48236  2052 ?        Ss   14:15   0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf
    nginx    28534  0.0  0.2  48240  2184 ?        S    14:18   0:00 nginx: worker process
  2. After you get the PID, use strace as follows and refresh the URI again in your browser:

    strace -p 28534 -e trace=file -f
  3. Once you refresh the browser, strace should output a few lines like so:

    Process 28534 attached
    stat("/usr/share/nginx/html/site1/Shield\\ Theme/index.html", 0x7fffd772cb90) = -1 ENOENT (No such file or directory)
    stat("/usr/share/nginx/html/site1/Shield\\ Theme", 0x7fffd772cb90) = -1 ENOENT (No such file or directory)

    As you can see, this gives you a much clearer picture of what is going on. Nginx has taken the backslash literally and is looking for a directory named Shield\ Theme (including the backslash), which doesn’t exist: in an Nginx configuration file, a space inside a path should be handled by quoting the string, not by shell-style escaping.

Once you know the reason, fixing it is easy. Fix the site1.conf by changing it like so:

server {
    listen       80;
    server_name  localhost;

    root  "/usr/share/nginx/html/site1/Shield Theme";

    location / {
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

Now, the browser should be happy and it should render the site well (see Figure 6-9).

Figure 6-9. Site1 rendered using localhost:8006 as URI

Websites Using Different Names

Try to host site2 using the same concept, with a site2.conf file (create a copy of site1.conf if you like and make the required changes) as follows:

server {
    listen       80;
    server_name  localhost;

    root  "/usr/share/nginx/html/site2";

    location / {
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

It might look simple, but when you try reloading the configuration, it won’t work; Nginx will throw a warning instead:

#nginx -s reload
nginx: [warn] conflicting server name "localhost" on 0.0.0.0:80, ignored

What went wrong?

Reading the warning carefully reveals that the name localhost was used with the same port in multiple server blocks, so Nginx ignored the later one. In other words, you cannot use the same server_name to distinguish different websites, for obvious reasons. Open site2.conf again and change the server_name directive from localhost to 127.0.0.1. Leave everything else as is.

server {
    listen       80;
    server_name  127.0.0.1;

    root  "/usr/share/nginx/html/site2";

    location / {
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

Reload the configuration and you will find that this time both websites work (see Figure 6-10). You now have an interesting configuration here! It might confuse you, especially if you are coming from an IIS background. When you access localhost:8006 you get site1, but when you use 127.0.0.1, you get site2. But isn’t localhost the same as 127.0.0.1? The answer lies in the fact that Nginx performs lookup and name matching a little differently. You will learn about it shortly in an upcoming section.

Figure 6-10. Both site1 and site2 running in parallel

Websites Using Domain Name

A typical website will have a domain name like www.site1.com, but you will find that many people type site1.com as well. From a search engine optimization (SEO) perspective, it is often considered better to have just one address. It can be either www.site1.com or site1.com, but it should be consistent; assuming site1.com is the chosen one, www.site1.com should redirect to site1.com. In this section, you start by making a basic change like the following in site1.conf:

server {
    listen       80;
    server_name  site1.com www.site1.com;

    root  "/usr/share/nginx/html/site1/Shield Theme";

    ...output trimmed...
}

Reload the configuration and try browsing to the following URI:

http://site1.com
http://www.site1.com

Did it work?

Well, it won’t, because site1.com is not a valid domain name, and your operating system tried to fetch the address for site1.com thinking it actually exists. In reality, you would need to buy a domain name called site1.com from a domain registrar and map it to the public IP of your server. For testing purposes, you can create host entries to fool your operating system into thinking that site1.com and site2.com point to 127.0.0.1.

In Windows, start Notepad as administrator and open the hosts file located here:

C:\Windows\System32\drivers\etc\hosts

Mac/Linux users should modify the following file:

/etc/hosts

Modify the file by adding two lines:

127.0.0.1               site1.com
127.0.0.1               site2.com                                          

Try browsing the site again, and it should work now. As mentioned earlier, this doesn’t really mean that your website is accessible publicly using the domain name, but for your machine and testing purposes it should suffice.
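You can also verify the entries from a terminal, without involving the browser. On Linux, getent consults the same name-resolution order the operating system uses (hosts file first, then DNS):

```shell
# localhost is defined in the hosts file on virtually every system,
# so this lookup always succeeds and prints its mapping.
getent hosts localhost

# After adding the entries above, the same lookup for site1.com should
# print "127.0.0.1  site1.com":
#   getent hosts site1.com
```

On macOS, `dscacheutil -q host -a name site1.com` provides a similar check.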

Internal Redirects

You now have both site1.com and www.site1.com pointing to the correct website. As pointed out in the previous section, it is better to redirect if someone types www.site1.com. Let’s fix that now. Instead of setting explicit locations, you can use two server blocks in the site1.conf file as follows:

server {
    listen      80;
    server_name www.site1.com;
    return      301 http://site1.com$request_uri;
}
server {
    listen       80;
    server_name  site1.com;

    root  "/usr/share/nginx/html/site1/Shield Theme";

    location / {
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

The first server block simply returns a 301, which in HTTP means “Moved Permanently.” Once the browser gets this response, it knows it has to make another request, which in this case is to:

http://site1.com$request_uri

The $request_uri variable is present there to ensure that if someone asks for http://www.site1.com/abc/foo, they get redirected to http://site1.com/abc/foo. If you don’t add $request_uri, you will end up redirecting the request to http://site1.com, and this can confuse your visitors.

To make it even more robust, you can use $scheme://site1.com$request_uri. $scheme is another variable; it ensures that the request stays on HTTP or HTTPS. The way it is configured now, if the page requested is https://www.site1.com/abc, it gets redirected to http://site1.com/abc, which is not good from a security perspective.
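Putting that together, the redirect server block can be hardened like this (a sketch; as elsewhere in the chapter, site1.com is the example domain):

```nginx
# Redirect www.site1.com to site1.com, preserving scheme and request path.
server {
    listen      80;
    server_name www.site1.com;
    return      301 $scheme://site1.com$request_uri;
}
```

Note that $request_uri already begins with a slash, so no extra / is needed after the domain name.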

Note

The redirection set like this might not work in port-forwarding solutions that you have set up using VirtualBox. In the real world, where your public IPs are exposed and mapped to the domain name, the redirection will work just fine.

Sites Using Different Ports

In specific cases, you can have different parts of your application exposed on different ports. In that case, you can configure server blocks in a way that multiple server blocks with the same name exist, but are listening on different ports.

To enable this capability, you will need to use a different port in the listen directive and do some extra tasks. First of all, update your site1.conf file so that it looks similar to the following:

server {
    listen      8080;
    server_name site1.com www.site1.com;

    root /usr/share/nginx/html/site3;

    location / {
        index index.html index.htm;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
server {
    listen       80;
    server_name  site1.com www.site1.com;

    root  "/usr/share/nginx/html/site1/Shield Theme";

    location / {
        index  index.html index.htm;
    }

    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
}

Notice the subtle change in the listen directive. In the first server block, it is bound to port 8080, whereas in the second block it is bound to port 80, which is the default port. Also notice that the first server block points to a different root (/usr/share/nginx/html/site3). You could host a totally different application there. For now, simply create a directory called site3 (at /usr/share/nginx/html), and create a text file called index.html with some text.

Before you check it from your host server, check it locally from the guest using curl:

#curl localhost:8080
hello from site 3!

If you get the output locally, you have done well, and you can now expose your website outside your guest server. But before you do that, open port 8080 using the following commands:

#firewall-cmd --permanent --zone=public --add-port=8080/tcp
#firewall-cmd --reload

With firewall ports opened, it is time to add one more forwarding rule to forward a request to the internal port. Use Figure 6-11 to ensure you have added the new rule correctly.

Figure 6-11. Adding port-forwarding rule for Site3 (HTTP – WFE1 – Site 3)

Once the rules are set, you should be able to browse to http://site1.com:8016 and get your page back from site3.

Wildcard Mapping

You can also set up server blocks with wildcards. In simple words, you can have a single server block handle requests to blog.site1.com, mobile.site1.com, and so on, by changing the server_name to *.site1.com. You can use the wildcard (*) as a prefix or a suffix; thus, *.site1.com or www.site1.* will match blog.site1.com or www.site1.co.us, respectively.
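For instance, a wildcard server block might be sketched like the following (illustrative; it assumes the subdomains all serve content from a common root):

```nginx
# One server block answering for every subdomain of site1.com.
server {
    listen      80;
    server_name *.site1.com;

    root  "/usr/share/nginx/html/site1/Shield Theme";
    ...output trimmed...
}
```

Exact names (such as blog.site1.com) can still be given their own server blocks; Nginx prefers an exact server_name match over a wildcard match.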

Blocking Access

Right now if you try accessing the site using 127.0.0.1 or site1.com, both will work and you will get the same website. If you want to block access to 127.0.0.1 but allow access to site1.com, you can add additional blocks to take care of it like so:

server {
    listen       80;
    server_name  127.0.0.1;
    return       444;
}
server {
    listen       80;
    server_name  site1.com;

    root  "/usr/share/nginx/html/site1/Shield Theme";
    ...output trimmed...
}

Return code 444 is a nonstandard code specific to Nginx that simply closes the connection.
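The same idea is commonly extended into a catch-all block, so that requests carrying a host header that matches none of your configured sites are closed instead of landing on the first server block (a sketch; the underscore is just a conventional placeholder name):

```nginx
# Close the connection for any host name not explicitly configured.
server {
    listen      80 default_server;
    server_name _;
    return      444;
}
```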

Domain Name Mapping

You have already seen that adding hosts file entries helps you resolve a name like www.site1.com to your Nginx server. This approach won’t help if you really want to take your website online. To do that, you will need to buy a domain name from one of the domain name registrars. There are plenty of them available, and a simple Internet search will take you to the most famous ones. Typically, you buy the name for a year (or multiple years) and map it on the portal of the registrar you purchased it from. For example, if you use GoDaddy.com to buy your domain, you can log in to their portal and configure the domain name so that it points to your public IP (Figure 6-12).

Figure 6-12. GoDaddy’s portal for managing a domain name. Try pinging attosol.com

A lot goes on behind the scenes when you type a name in your browser and hit Enter. Here is a very simplistic gist of it:

  1. Your browser asks the operating system to resolve the hostname.

    1. The operating system checks what the name resolves to using the hosts file. If the hosts file doesn’t have an entry, it checks your locally configured DNS servers.

    2. The operating system returns the IP address corresponding to the hostname to the browser. So far, by creating entries pointing site1.com to 127.0.0.1, you have been using the hosts file to your advantage. This ensures that your operating system doesn’t query the DNS servers at all, since it finds the entries in the hosts file.

  2. The browser creates an HTTP request with information like host header.

  3. The browser then sends the HTTP request.

  4. The server at the IP address receives the request from the browser (including the hostname).

  5. The server then processes the request and sends the response back to the client.

IP-Based Hosting

So far in this chapter, you have been using server_name to distinguish between server blocks. This approach is the most common one, since it allows you to share the IP address of the server. Another kind of configuration can be classified as IP-based hosting. Table 6-2 highlights the differences between the two.

Table 6-2. Differences between Name-Based and IP-Based Hosting

Name-Based:

  • No dedicated IP address is required.
  • Configured using the server_name directive.
  • Multiple websites share the same IP address and port.
  • Works at the Application layer of the OSI model.
  • Since all websites are hosted on a single IP address and NIC, there could be a performance impact.
  • Example:
    server_name www.app1.com *.app1.com someapp.app1.com;

IP-Based:

  • A dedicated IP address is required.
  • Configured using the listen directive.
  • Each website uses its own IP address and port.
  • Works at the Network and Transport layers of the OSI model.
  • A dedicated IP address and NIC help isolate website traffic.
  • Example:
    listen 80;
    listen 10.0.2.4:80;
    listen 10.0.2.5:8080;

Mixed Name-Based and IP-Based Servers

Take a look at a more practical example where both name-based and IP-based addresses are used. In the following configuration, Nginx first tests the IP address and port of the request against the listen directives. It then matches the host header of the request against the server_name entries. If no server name is found, the request is mapped to the default_server for that address:port pair. If no default_server is specified, the first server block takes care of the request.

server {
    listen      10.0.2.6:80;
    server_name site1.net www.site1.net;
    ...
}
server {
    listen      10.0.2.6:80 default_server;
    server_name site1.org www.site1.org;
    ...
}
server {
    listen      10.0.3.6:80 default_server;
    server_name site1.com www.site1.com;
    ...
}
server {
    listen      10.0.3.6:80;
    server_name site1.biz www.site1.biz;
    ...
}
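To make the selection order concrete, here is a rough Python model of how Nginx chooses among the four blocks above. This is an illustrative sketch, not Nginx's actual implementation (which also supports wildcard and regex server names):

```python
# A rough model of server block selection: filter by listen address:port,
# then match the Host header against server_name, then fall back to the
# default_server, and finally to the first matching block.
SERVERS = [
    {"listen": "10.0.2.6:80", "names": ["site1.net", "www.site1.net"], "default": False},
    {"listen": "10.0.2.6:80", "names": ["site1.org", "www.site1.org"], "default": True},
    {"listen": "10.0.3.6:80", "names": ["site1.com", "www.site1.com"], "default": True},
    {"listen": "10.0.3.6:80", "names": ["site1.biz", "www.site1.biz"], "default": False},
]

def pick_server(addr: str, host: str) -> dict:
    candidates = [s for s in SERVERS if s["listen"] == addr]
    for s in candidates:
        if host in s["names"]:          # exact server_name match
            return s
    for s in candidates:
        if s["default"]:                # explicit default_server
            return s
    return candidates[0]                # Nginx falls back to the first block

print(pick_server("10.0.2.6:80", "unknown.example")["names"][0])  # → site1.org
```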

Common Configuration Mistakes

This section goes over some common errors many users make and then offers suggestions on how to avoid or fix them.

Let’s Use 777

When a configuration doesn’t work as expected, some administrators take a shortcut: chmod 777. NO, don't ever do that! As explained earlier in the chapter, it is better not to troubleshoot by trial and error. Find out what is actually wrong by using the logs and tools suited to the situation.
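Instead of 777, grant only what a web server needs: read on files and traverse (execute) on directories. A minimal sketch, using a temporary directory as a stand-in for your real web root:

```shell
# Stand-in for the real web root (e.g. /usr/share/nginx/html)
WEBROOT=$(mktemp -d)
mkdir -p "$WEBROOT/css"
echo "hello" > "$WEBROOT/index.html"

# Directories need execute (traverse) permission; files only need read.
find "$WEBROOT" -type d -exec chmod 755 {} +
find "$WEBROOT" -type f -exec chmod 644 {} +

stat -c '%a' "$WEBROOT" "$WEBROOT/index.html"
```

On a real server you would also make sure the content is owned by (or at least readable to) the worker process user, typically nginx on CentOS.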

Root Inside Location Block

Notice the use of multiple root directives in the following configuration. It is a perfectly valid configuration, but not a good one. It creates two problems. The first is that you now have to add a root directive to every location block you add, which bloats your configuration with repeated lines. The second is that if you forget to provide a root for a location block, that block will have no root at all.

server {
    server_name www.site1.com;
    location / {
        root /usr/share/nginx/html;
        # [...]
      }
    location /somewhere {
        root /usr/share/nginx/html;
        # [...]
    }
}

You should refactor the previous configuration like so:

server {
    server_name www.site1.com;
    root /usr/share/nginx/html;
    location / {
        # [...]
      }
    location /somewhere {
        # [...]
    }
}

Monolithic Configuration Files

If you like, you can keep adding your server blocks to the default.conf file. That is perfectly okay if you intend to host only one application on the server. However, that is often not the case. It is therefore strongly advised to keep a separate configuration file for each domain name. This will make your administration tasks much easier and save you a lot of time scrolling up and down looking for the correct server block.
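Splitting files works because the stock nginx.conf shipped with the CentOS package already pulls in every .conf file from the conf.d directory, so per-site files such as site1.com.conf are picked up automatically (the exact path may vary with your installation):

```nginx
# Inside the http block of /etc/nginx/nginx.conf
include /etc/nginx/conf.d/*.conf;
```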

Unnecessary Complications

There are multiple ways to achieve the same result in Nginx. Whenever you have to use if directives or redirections, evaluate whether your approach is correct and whether there is a better way. Ask questions and visit forums if there is any confusion; there may be easier and more efficient solutions that haven't crossed your mind. Quite often a configuration contains unnecessary processing inside a server block that could easily have been avoided by using another server block. For instance, the following configuration contains two server blocks. The primary task of the first is to redirect to the second. You may ask: how is that more efficient?

server {
    server_name www.site1.com;
    return 301 $scheme://site1.com$request_uri;
}
server {
    server_name site1.com;
    # [...]
}

To answer the efficiency question, consider the larger picture of how a request actually reaches your server. Outside visitors are most likely to reach your website through a search engine. While indexing, search engines will recognize that you prefer site1.com over www.site1.com. Thus, most visitors will never hit the first server block at all, which means the redirection code is not executed for their requests!

Contrast this with an alternate configuration in which a single server block handles all requests and the redirect is performed inside it. Now your redirection logic is evaluated EVERY time any request comes in. This leads to unnecessary evaluation on every URI that could easily have been avoided by using two server blocks as shown in this example.
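For contrast, the single-server variant just described might look like the following sketch; the if condition runs on every request, which is exactly the overhead the two-block version avoids:

```nginx
server {
    server_name site1.com www.site1.com;
    # Evaluated for EVERY request -- avoid this pattern.
    if ($host = www.site1.com) {
        return 301 $scheme://site1.com$request_uri;
    }
    # [...]
}
```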

Listening on Hostname

You should never listen on a hostname. Doing so can cause binding issues when the server restarts, because the hostname must be resolved before Nginx can bind to the address. Use IP addresses instead of hostnames.

server {
    # Bad:  listen site1.com:80;
    # Good: listen 127.0.0.1:80;
    # [...]
}

Summary

Nginx is very flexible, and there are often multiple ways of achieving a task. That can be both good and bad, depending on your knowledge of the subject. Luckily, the community is vibrant, and all you have to do, when in doubt, is ask!

In this chapter you have learned the finer nuances of hosting multiple websites on the same Nginx server. You should now be fairly comfortable with the name-based and IP-based hosting options. Hopefully the common tasks and configurations mentioned in this chapter will help you configure your server with ease. Last but not least, you should now be aware of some of the most common configuration mistakes made by web administrators. Knowing the dangers, as they say, is the first step toward avoiding them.
