      CentOS vs Ubuntu: Choosing the Right Linux Distribution for Your Server


      CentOS, Ubuntu, Debian, Fedora, RHEL, OpenSUSE, FreeBSD, Manjaro—the list of Linux distributions goes on and on. In fact, there are literally hundreds of distributions (a.k.a. distros) a Linux fanatic can choose from, and while not all stay active forever, 791 have existed since 2001, according to the DistroWatch database.1

      Despite the multitude of options, there are two distributions we see customers requesting most often for their dedicated servers: CentOS and Ubuntu. This post delves into the similarities, differences, and general IT user sentiment for these popular distros.

Let’s start with a quick look at how these two stack up in terms of known website usage, as reported by w3techs.com.2

It’s a close race: Ubuntu is used by slightly more sites, including more high-traffic sites, with CentOS close behind. We’ll unpack some of the reasons why that might be, but first, here’s an overview of each respective distribution.

      Ubuntu Overview

      Based on the Debian architecture, Ubuntu was used early on for personal computers but has since become a household name in server-class computing and cloud environments. Ubuntu runs on the most popular architectures, including Intel, AMD, and ARM-based machines.

      Oh, and a fun fact: it’s named after the South African philosophy of ubuntu, which translates to “human-ness,” “humanity to others,” or “I am what I am because of who we all are.”3

Ubuntu is known for its frequent release cycle: a new version ships publicly every six months, with free support for nine months after each release. Additionally, starting with Ubuntu 6.06, a major release every two years receives long-term support (LTS) for five years. These LTS releases receive hardware and integration updates for the whole series (i.e., 6.0X).

      Relative to other popular Linux distributions, Ubuntu is incredibly feature rich and friendly to developers looking to stay on the cutting edge. That said, it takes more support to stay up to date with the release cycle than some of the other distros, CentOS included. This can sometimes be seen as a con to going all-in on Ubuntu. More features and more releases can mean more complexity.

Ubuntu uses the Advanced Package Tool (APT) and DEB packages for software management.
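To give a feel for the APT workflow, here are a few read-only queries that work on any Ubuntu system without root (installing and removing packages additionally requires sudo apt update / sudo apt install; the bash package is just an example):

```shell
# APT distributes software as .deb packages; these queries are read-only.
apt-cache policy bash        # show installed and candidate versions of a package
dpkg -s bash                 # metadata recorded for the installed .deb
dpkg -L bash | head -n 5     # first few files the package installed
```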

      Suggested Ubuntu-based alternatives: Linux Mint (desktop), elementary OS (desktop), Zorin OS (desktop), Pinguy OS (desktop), Trisquel GNU/Linux (free software), Bodhi Linux (desktop with Enlightenment) 

      CentOS Overview

A free variant of Red Hat Enterprise Linux (RHEL), CentOS is known for its stability and for the support of its far-reaching community of enthusiasts. This Linux distribution meets enterprise-class needs and provides IT users a reliable way to deliver their applications and services. With a less frequent release cycle than Ubuntu and others, CentOS typically requires less support and development expertise. Major releases happen every 2-3 years, following the RHEL release cycle.

CentOS also comes with 7-10 years of free security updates. The fact that every version can serve for up to 10 years is attractive: you don’t have to worry about major changes that could impact your applications, security, or user experience.

Relative to Ubuntu, CentOS comes with fewer features, but that also makes it lightweight: it consumes less of your compute resources. If your applications are heavy, the operating system is one less resource-hungry component to worry about and factor into your growth model.

CentOS uses the YUM command-line utility and RPM packages for software management.
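The equivalent read-only queries on CentOS look like this (again, the bash package is just an example; installing software would use sudo yum install):

```shell
# YUM works with RPM packages; these queries are read-only.
yum info bash                # package metadata from the repositories
rpm -q bash                  # version of the installed package
rpm -ql bash | head -n 5     # first few files the package installed
```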

      Other RHEL clones and CentOS-based distributions: Scientific Linux, Springdale Linux, SME Server, Rocks Cluster Distribution, Oracle Enterprise Linux (according to distrowatch.com)

      Pros and Cons of Ubuntu and CentOS

      In some cases, a choice to go with Ubuntu over CentOS or vice versa comes down to personal preference. However, there are real pros and cons of each.

      CentOS

Pros: Highly reliable and stable for enterprise workloads; a free variant of the well-trusted Red Hat Enterprise Linux (RHEL); each major version serves for up to 10 years, with free security updates for 7-10 years; less support required; lightweight.

      Cons: Less frequent updates, lacks feature richness compared to other operating systems.

      Ubuntu

Pros: Frequent updates, feature rich, leading edge, developer friendly, stable, five years of support for major LTS releases.

      Cons: Higher resource consumption, less secure out of the box, requires more support to stay up to date.

For a quick comparison, see the side-by-side look from our friends at best-web-hosting.org.4

      Take the Next Step

      As a managed infrastructure and cloud hosting provider, we’re fans of all things Linux (and Windows), and hope you found this article helpful. If you’d like to learn more about these Linux distributions, how you can use them on our platform, or just want to talk shop, drop a question in the comments or schedule a free consultation with one of SingleHop’s server OS experts.

       Links, References, Further Reading:

      1. DistroWatch.com: https://distrowatch.com/dwres.php?resource=major
      2. Web Technology Surveys: https://w3techs.com/technologies/comparison/os-centos,os-ubuntu
      3. TedBlog: https://blog.ted.com/further-reading-on-ubuntu/
4. Best-Web-Hosting: https://best-web-hosting.org/centos-vs-ubuntu-2018/


      Use journalctl to View Your System's Logs


      Updated by Linode

      Written by Linode



      What is journalctl?

      journalctl is a command for viewing logs collected by systemd. The systemd-journald service is responsible for systemd’s log collection, and it retrieves messages from the kernel, systemd services, and other sources.

      These logs are gathered in a central location, which makes them easy to review. The log records in the journal are structured and indexed, and as a result journalctl is able to present your log information in a variety of useful formats.

      Using journalctl for the First Time

      Run the journalctl command without any arguments to view all the logs in your journal:

      journalctl
      

      If you do not see output, try running it with sudo:

      sudo journalctl
      

      If your Linux user does not have sudo privileges, add your user to the sudo group.

      Default Log Format and Ordering

      journalctl will display your logs in a format similar to the traditional syslog format. Each line starts with the date (in the server’s local time), followed by the server’s hostname, the process name, and the message for the log.

        
      Aug 31 12:00:25 debian sshd[15844]: pam_unix(sshd:session): session opened for user example_user by (uid=0)
      
      

      Your logs will be displayed from oldest to newest. To reverse this order and display the newest messages at the top, use the -r flag:

      journalctl -r
      

      Paging through Your Logs

journalctl pipes its output to the less command, which shows your logs one page at a time in your terminal. If a log line exceeds the horizontal width of your terminal window, you can use the left and right arrow keys to scroll horizontally and see the rest of the line.

      Furthermore, your logs can be navigated and searched by using all the same key commands available in less:

Key command: Action
down arrow key, enter, e, or j: Move down one line.
up arrow key, y, or k: Move up one line.
space bar: Move down one page.
b: Move up one page.
right arrow key: Scroll horizontally to the right.
left arrow key: Scroll horizontally to the left.
g: Go to the first line.
G: Go to the last line.
10g: Go to the 10th line. Enter a different number to go to other lines.
50p or 50%: Go to the line half-way through the output. Enter a different number to go to other percentage positions.
/search term: Search forward from the current position for the search term string.
?search term: Search backward from the current position for the search term string.
n: When searching, go to the next occurrence.
N: When searching, go to the previous occurrence.
m<c>: Set a mark, which saves your current position. Enter a single character in place of <c> to label the mark with that character.
'<c>: Return to a mark, where <c> is the single character label for the mark. Note that ' is the single-quote.
q: Quit less.

      View journalctl without Paging

      To send your logs to standard output and avoid paging them, use the --no-pager option:

      journalctl --no-pager
      

      It’s not recommended that you do this without first filtering down the number of logs shown.

      Monitor New Log Messages

      Run journalctl with the -f option to view a live log of new messages as they are collected:

      journalctl -f
      

      The key commands from less are not available while in this mode. Enter Control-C on your keyboard to return to your command prompt from this mode.

      Filter journalctl Output

      In addition to searching your logs with the less key commands, you can invoke journalctl with options that filter your log messages before they are displayed.

      These filters can be used with the normal paged display, and with the --no-pager and -f options. Filters of different types can also be combined together to further narrow the output.

      Show Logs within a Time Range

      Use the --since option to show logs after a specified date and time:

      journalctl --since "2018-08-30 14:10:10"
      

      Use the --until option to show logs up to a specified date and time:

      journalctl --until "2018-09-02 12:05:50"
      

      Combine these to show logs between the two times:

      journalctl --since "2018-08-30 14:10:10" --until "2018-09-02 12:05:50"
      

      Dates and times should be specified in the YYYY-MM-DD HH:MM:SS format. If the time is omitted (i.e. only the YYYY-MM-DD date is specified), then the time is assumed to be 00:00:00.

      journalctl can also accept some alternative terms when specifying dates:

      • The terms yesterday, today, and tomorrow are recognized. When using one of these terms, the time is assumed to be 00:00:00.

      • Terms like 1 day ago or 3 hours ago are recognized.

      • The - and + symbols can be used to specify relative dates. For example, -1h15min specifies 1 hour 15 minutes in the past, and +3h30min specifies 3 hours 30 minutes in the future.

      Show Logs for a Specific Boot

      Use the -b option to show logs for the last boot of your server:

      journalctl -b
      

Specify an integer offset for the -b option to refer to a previous boot. For example, journalctl -b -1 shows logs from the previous boot, journalctl -b -2 shows logs from the boot before the previous boot, and so on.

      List the available boots:

      journalctl --list-boots
      

Each boot listed in the output of the journalctl --list-boots command includes a 32-character boot ID. You can supply a boot ID with the -b option; for example:

      journalctl -b a09dce7b2c1c458d861d7d0f0a7c8c65
      

      If no previous boots are listed, your journald configuration may not be set up to persist log storage. Review the Persist Your Logs section for instructions on how to change this configuration.

      Show Logs for a systemd Service

      Pass the name of a systemd unit with the -u option to show logs for that service:

      journalctl -u ssh
      

      View Kernel Messages

      Supply the -k option to show only kernel messages:

      journalctl -k
      

      Change the Log Output Format

      Because the log records for systemd’s journals are structured, journalctl can show your logs in different formats. Here are a few of the formats available:

Format name: Description
short: The default option; displays logs in the traditional syslog format.
verbose: Displays all information in the log record structure.
json: Displays logs in JSON format, with one log per line.
json-pretty: Displays logs in JSON format across multiple lines for better readability.
cat: Displays only the message from each log without any other metadata.

      Pass the format name with the -o option to display your logs in that format. For example:

      journalctl -o json-pretty
      

      Anatomy of a Log Record

      The following is an example of the structured data of a log record, as displayed by journalctl -o verbose. For more information on this data structure, review the man page for journalctl:

        
      Fri 2018-08-31 12:00:25.543177 EDT [s=0b341b44cf194c9ca45c99101497befa;i=70d5;b=a09dce7b2c1c458d861d7d0f0a7c8c65;m=9fb524664c4;t=57517dfc5f57d;x=97097ca5ede0dfd6]
          _BOOT_ID=a09dce7b2c1c458d861d7d0f0a7c8c65
          _MACHINE_ID=1009f49fff8fe746a5111e1a062f4848
          _HOSTNAME=debian
          _TRANSPORT=syslog
          PRIORITY=6
          SYSLOG_IDENTIFIER=sshd
          _UID=0
          _GID=0
          _COMM=sshd
          _EXE=/usr/sbin/sshd
          _CAP_EFFECTIVE=3fffffffff
          _SYSTEMD_CGROUP=/system.slice/ssh.service
          _SYSTEMD_UNIT=ssh.service
          _SYSTEMD_SLICE=system.slice
          SYSLOG_FACILITY=10
          SYSLOG_PID=15844
          _PID=15844
          _CMDLINE=sshd: example_user [priv
          MESSAGE=pam_unix(sshd:session): session opened for user example_user by (uid=0)
          _AUDIT_SESSION=30791
          _AUDIT_LOGINUID=1000
          _SOURCE_REALTIME_TIMESTAMP=1536120282543177
      
      

      Note

      In addition to the types of filters listed in the previous section, you can also filter logs by specifying values for the variables in the log record structure. For example, journalctl _UID=0 will show logs for user ID 0 (i.e. the root user).
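Field matches can be combined with the option filters from earlier sections to narrow output further; for instance (the unit name and time range here are illustrative):

```shell
# Root-owned entries for the ssh unit since yesterday, newest first, unpaged
journalctl _UID=0 -u ssh --since yesterday -r --no-pager
```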

      Persist Your Logs

      systemd-journald can be configured to persist your systemd logs on disk, and it also provides controls to manage the total size of your archived logs. These settings are defined in /etc/systemd/journald.conf.

      To start persisting your logs, uncomment the Storage line in /etc/systemd/journald.conf and set its value to persistent. Your archived logs will be held in /var/log/journal. If this directory does not already exist in your file system, systemd-journald will create it.

      After updating your journald.conf, load the change:

      sudo systemctl restart systemd-journald
      

      Control the Size of Your Logs’ Disk Usage

      The following settings in journald.conf control how large your logs’ size can grow to when persisted on disk:

Setting: Description
SystemMaxUse: The total maximum disk space that can be used for your logs.
SystemKeepFree: The minimum amount of disk space that should be kept free for uses outside of systemd-journald’s logging functions.
SystemMaxFileSize: The maximum size of an individual journal file.
SystemMaxFiles: The maximum number of journal files that can be kept on disk.

      systemd-journald will respect both SystemMaxUse and SystemKeepFree, and it will set your journals’ disk usage to meet whichever setting results in a smaller size.
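Putting these settings together, the relevant portion of /etc/systemd/journald.conf might look like this (the values below are illustrative, not recommendations):

```
[Journal]
Storage=persistent
SystemMaxUse=500M
SystemKeepFree=1G
SystemMaxFileSize=50M
```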

      To view your default limits, run:

      sudo journalctl -u systemd-journald
      

      You should see a line similar to the following which describes the current limits in place:

        
      Permanent journal is using 32.0M (max allowed 2.3G, trying to leave 3.5G free of 21.2G available → current limit 2.3G).
      
      

      Note

      A parallel group of settings is used when journald.conf is set to only persist the journals in memory (instead of on disk): RuntimeMaxUse, RuntimeKeepFree, RuntimeMaxFileSize, and RuntimeMaxFiles.

      Manually Clean Up Archived Logs

      journalctl offers functions for immediately removing archived journals on disk. Run journalctl with the --vacuum-size option to remove archived journal files until the total size of your journals is less than the specified amount. For example, the following command will reduce the size of your journals to 2GiB:

      journalctl --vacuum-size=2G
      

      Run journalctl with the --vacuum-time option to remove archived journal files with dates older than the specified relative time. For example, the following command will remove journals older than one year:

      journalctl --vacuum-time=1years
      

      Run journalctl with the --vacuum-files option to remove archived journal files until the specified number of files remains. For example, the following command removes all but the 10 most recent journal files:

      journalctl --vacuum-files=10
      



      This guide is published under a CC BY-ND 4.0 license.




      How To Create a Self-Signed SSL Certificate for Nginx on Debian 9


      A previous version of this tutorial was written by Justin Ellingwood

      Introduction

      TLS, or transport layer security, and its predecessor SSL, which stands for secure sockets layer, are web protocols used to wrap normal traffic in a protected, encrypted wrapper.

      Using this technology, servers can send traffic safely between the server and clients without the possibility of the messages being intercepted by outside parties. The certificate system also assists users in verifying the identity of the sites that they are connecting with.

      In this guide, we will show you how to set up a self-signed SSL certificate for use with an Nginx web server on a Debian 9 server.

      Note: A self-signed certificate will encrypt communication between your server and any clients. However, because it is not signed by any of the trusted certificate authorities included with web browsers, users cannot use the certificate to validate the identity of your server automatically.

      A self-signed certificate may be appropriate if you do not have a domain name associated with your server and for instances where the encrypted web interface is not user-facing. If you do have a domain name, in many cases it is better to use a CA-signed certificate. To learn how to set up a free trusted certificate with the Let’s Encrypt project, consult How to Secure Nginx with Let’s Encrypt on Debian 9.

      Prerequisites

      Before you begin, you should have a non-root user configured with sudo privileges. You can learn how to set up such a user account by following our initial server setup for Debian 9.

      You will also need to have the Nginx web server installed. If you would like to install an entire LEMP (Linux, Nginx, MySQL, PHP) stack on your server, you can follow our guide on setting up LEMP on Debian 9.

      If you just want the Nginx web server, you can instead follow our guide on installing Nginx on Debian 9.

      When you have completed the prerequisites, continue below.

      Step 1 — Creating the SSL Certificate

TLS/SSL works by using a combination of a public certificate and a private key. The SSL key is kept secret on the server. It is used to encrypt content sent to clients. The SSL certificate is publicly shared with anyone requesting the content. It can be used to decrypt content encrypted by the associated SSL key.

      We can create a self-signed key and certificate pair with OpenSSL in a single command:

      • sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/nginx-selfsigned.key -out /etc/ssl/certs/nginx-selfsigned.crt

      You will be asked a series of questions. Before we go over that, let’s take a look at what is happening in the command we are issuing:

      • openssl: This is the basic command line tool for creating and managing OpenSSL certificates, keys, and other files.
• req: This subcommand specifies that we want to use X.509 certificate signing request (CSR) management. X.509 is a public key infrastructure standard that SSL and TLS adhere to for key and certificate management. We want to create a new X.509 cert, so we are using this subcommand.
      • -x509: This further modifies the previous subcommand by telling the utility that we want to make a self-signed certificate instead of generating a certificate signing request, as would normally happen.
      • -nodes: This tells OpenSSL to skip the option to secure our certificate with a passphrase. We need Nginx to be able to read the file, without user intervention, when the server starts up. A passphrase would prevent this from happening because we would have to enter it after every restart.
      • -days 365: This option sets the length of time that the certificate will be considered valid. We set it for one year here.
      • -newkey rsa:2048: This specifies that we want to generate a new certificate and a new key at the same time. We did not create the key that is required to sign the certificate in a previous step, so we need to create it along with the certificate. The rsa:2048 portion tells it to make an RSA key that is 2048 bits long.
      • -keyout: This line tells OpenSSL where to place the generated private key file that we are creating.
      • -out: This tells OpenSSL where to place the certificate that we are creating.

      As we stated above, these options will create both a key file and a certificate. We will be asked a few questions about our server in order to embed the information correctly in the certificate.

      Fill out the prompts appropriately. The most important line is the one that requests the Common Name (e.g. server FQDN or YOUR name). You need to enter the domain name associated with your server or, more likely, your server’s public IP address.

      The entirety of the prompts will look something like this:

      Output

Country Name (2 letter code) [AU]:US
State or Province Name (full name) [Some-State]:New York
Locality Name (eg, city) []:New York City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Bouncy Castles, Inc.
Organizational Unit Name (eg, section) []:Ministry of Water Slides
Common Name (e.g. server FQDN or YOUR name) []:server_IP_address
Email Address []:admin@your_domain.com

      Both of the files you created will be placed in the appropriate subdirectories of the /etc/ssl directory.
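If you want to confirm what was embedded in the certificate, you can inspect it with openssl x509. The sketch below generates a throwaway pair in /tmp (so it runs without root) and uses -subj to answer the prompts non-interactively; the subject values are illustrative, and for your real certificate you would point -in at /etc/ssl/certs/nginx-selfsigned.crt instead:

```shell
# Create a disposable self-signed pair, answering the prompts via -subj
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -keyout /tmp/demo-selfsigned.key -out /tmp/demo-selfsigned.crt \
    -subj "/C=US/ST=New York/L=New York City/O=Bouncy Castles, Inc./CN=203.0.113.10"

# Print the subject and validity window of the resulting certificate
openssl x509 -in /tmp/demo-selfsigned.crt -noout -subject -dates
```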

      While we are using OpenSSL, we should also create a strong Diffie-Hellman group, which is used in negotiating Perfect Forward Secrecy with clients.

      We can do this by typing:

      • sudo openssl dhparam -out /etc/nginx/dhparam.pem 4096

      This will take a while, but when it’s done you will have a strong DH group at /etc/nginx/dhparam.pem that we can use in our configuration.

      Step 2 — Configuring Nginx to Use SSL

      We have created our key and certificate files under the /etc/ssl directory. Now we just need to modify our Nginx configuration to take advantage of these.

      We will make a few adjustments to our configuration.

      1. We will create a configuration snippet containing our SSL key and certificate file locations.
      2. We will create a configuration snippet containing strong SSL settings that can be used with any certificates in the future.
      3. We will adjust our Nginx server blocks to handle SSL requests and use the two snippets above.

      This method of configuring Nginx will allow us to keep clean server blocks and put common configuration segments into reusable modules.

      Creating a Configuration Snippet Pointing to the SSL Key and Certificate

      First, let’s create a new Nginx configuration snippet in the /etc/nginx/snippets directory.

      To properly distinguish the purpose of this file, let’s call it self-signed.conf:

      • sudo nano /etc/nginx/snippets/self-signed.conf

      Within this file, we need to set the ssl_certificate directive to our certificate file and the ssl_certificate_key to the associated key. In our case, this will look like this:

      /etc/nginx/snippets/self-signed.conf

      ssl_certificate /etc/ssl/certs/nginx-selfsigned.crt;
      ssl_certificate_key /etc/ssl/private/nginx-selfsigned.key;
      

      When you’ve added those lines, save and close the file.

      Creating a Configuration Snippet with Strong Encryption Settings

      Next, we will create another snippet that will define some SSL settings. This will set Nginx up with a strong SSL cipher suite and enable some advanced features that will help keep our server secure.

      The parameters we will set can be reused in future Nginx configurations, so we will give the file a generic name:

      • sudo nano /etc/nginx/snippets/ssl-params.conf

      To set up Nginx SSL securely, we will be using the recommendations by Remy van Elst on the Cipherli.st site. This site is designed to provide easy-to-consume encryption settings for popular software.

      The suggested settings on the site linked to above offer strong security. Sometimes, this comes at the cost of greater client compatibility. If you need to support older clients, there is an alternative list that can be accessed by clicking the link on the page labelled “Yes, give me a ciphersuite that works with legacy / old software.” That list can be substituted for the items copied below.

      The choice of which config you use will depend largely on what you need to support. They both will provide great security.

      For our purposes, we can copy the provided settings in their entirety. We just need to make a few small modifications.

      First, we will add our preferred DNS resolver for upstream requests. We will use Google’s for this guide.

Second, we will comment out the line that sets the strict transport security header. Before uncommenting this line, you should take a moment to read up on HTTP Strict Transport Security, or HSTS, specifically about the “preload” functionality. Preloading HSTS provides increased security, but can have far-reaching consequences if enabled accidentally or incorrectly.

      Copy the following into your ssl-params.conf snippet file:

      /etc/nginx/snippets/ssl-params.conf

      ssl_protocols TLSv1.2;
      ssl_prefer_server_ciphers on;
      ssl_dhparam /etc/nginx/dhparam.pem;
      ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
      ssl_ecdh_curve secp384r1; # Requires nginx >= 1.1.0
      ssl_session_timeout  10m;
      ssl_session_cache shared:SSL:10m;
      ssl_session_tickets off; # Requires nginx >= 1.5.9
      ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx >= 1.3.7
      resolver 8.8.8.8 8.8.4.4 valid=300s;
      resolver_timeout 5s;
      # Disable strict transport security for now. You can uncomment the following
      # line if you understand the implications.
      # add_header Strict-Transport-Security "max-age=63072000; includeSubDomains; preload";
      add_header X-Frame-Options DENY;
      add_header X-Content-Type-Options nosniff;
      add_header X-XSS-Protection "1; mode=block";
      

      Because we are using a self-signed certificate, SSL stapling will not be used. Nginx will output a warning but continue to operate correctly.

      Save and close the file when you are finished.

      Adjusting the Nginx Configuration to Use SSL

      Now that we have our snippets, we can adjust our Nginx configuration to enable SSL.

      We will assume in this guide that you are using a custom server block configuration file in the /etc/nginx/sites-available directory. We will use /etc/nginx/sites-available/example.com for this example. Substitute your configuration filename as needed.

      Before we go any further, let’s back up our current configuration file:

      • sudo cp /etc/nginx/sites-available/example.com /etc/nginx/sites-available/example.com.bak

      Now, open the configuration file to make adjustments:

      • sudo nano /etc/nginx/sites-available/example.com

      Inside, your server block probably begins similar to this:

      /etc/nginx/sites-available/example.com

      server {
          listen 80;
          listen [::]:80;
      
          server_name example.com www.example.com;
      
          root /var/www/example.com/html;
          index index.html index.htm index.nginx-debian.html;
      
          . . .
      }
      

      Your file may be in a different order, and instead of the root and index directives you may have some location, proxy_pass, or other custom configuration statements. This is ok, as we only need to update the listen directives and include our SSL snippets. We will be modifying this existing server block to serve SSL traffic on port 443, then create a new server block to respond on port 80 and automatically redirect traffic to port 443.

      Note: We will use a 302 redirect until we have verified that everything is working properly. Afterwards, we can change this to a permanent 301 redirect.

      In your existing configuration file, update the two listen statements to use port 443 and SSL, then include the two snippet files we created in previous steps:

      /etc/nginx/sites-available/example.com

      server {
          listen 443 ssl;
          listen [::]:443 ssl;
          include snippets/self-signed.conf;
          include snippets/ssl-params.conf;
      
          server_name example.com www.example.com;
      
          root /var/www/example.com/html;
          index index.html index.htm index.nginx-debian.html;
      
          . . .
      }
      

      Next, paste a second server block into the configuration file, after the closing bracket (}) of the first block:

      /etc/nginx/sites-available/example.com

      . . .
      server {
          listen 80;
          listen [::]:80;
      
          server_name example.com www.example.com;
      
          return 302 https://$server_name$request_uri;
      }
      

      This is a bare-bones configuration that listens on port 80 and performs the redirect to HTTPS. Save and close the file when you are finished editing it.

      Step 3 — Adjusting the Firewall

      If you have the ufw firewall enabled, as recommended by the prerequisite guides, you’ll need to adjust the settings to allow for SSL traffic. Luckily, Nginx registers a few profiles with ufw upon installation.

We can see the available profiles by typing:

• sudo ufw app list

      You should see a list like this:

      Output

Available applications:
. . .
  Nginx Full
  Nginx HTTP
  Nginx HTTPS
. . .

You can see the current setting by typing:

• sudo ufw status

      It will probably look like this, meaning that only HTTP traffic is allowed to the web server:

      Output

Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx HTTP                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx HTTP (v6)            ALLOW       Anywhere (v6)

      To additionally let in HTTPS traffic, we can allow the "Nginx Full" profile and then delete the redundant "Nginx HTTP" profile allowance:

      • sudo ufw allow 'Nginx Full'
      • sudo ufw delete allow 'Nginx HTTP'

      Your status should look like this now:

      Output

Status: active

To                         Action      From
--                         ------      ----
OpenSSH                    ALLOW       Anywhere
Nginx Full                 ALLOW       Anywhere
OpenSSH (v6)               ALLOW       Anywhere (v6)
Nginx Full (v6)            ALLOW       Anywhere (v6)

      Step 4 — Enabling the Changes in Nginx

      Now that we've made our changes and adjusted our firewall, we can restart Nginx to implement our new changes.

First, we should check to make sure that there are no syntax errors in our files. We can do this by typing:

• sudo nginx -t

      If everything is successful, you will get a result that looks like this:

      Output

nginx: [warn] "ssl_stapling" ignored, issuer certificate not found
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful

      Notice the warning in the beginning. As noted earlier, this particular setting throws a warning since our self-signed certificate can't use SSL stapling. This is expected and our server can still encrypt connections correctly.

      If your output matches the above, your configuration file has no syntax errors. We can safely restart Nginx to implement our changes:

      • sudo systemctl restart nginx

      Step 5 — Testing Encryption

      Now, we're ready to test our SSL server.

      Open your web browser and type https:// followed by your server's domain name or IP into the address bar:

      https://server_domain_or_IP
      

Because the certificate we created isn't signed by one of your browser's trusted certificate authorities, you will likely see a scary-looking warning like the one below (the following appears when using Google Chrome):

      Nginx self-signed cert warning

This is expected and normal. We are only interested in the encryption aspect of our certificate, not the third party validation of our host's authenticity. Click "ADVANCED" and then the link provided to proceed to your host anyway:

      Nginx self-signed override

      You should be taken to your site. If you look in the browser address bar, you will see a lock with an "x" over it. In this case, this just means that the certificate cannot be validated. It is still encrypting your connection.

      If you configured Nginx with two server blocks, automatically redirecting HTTP content to HTTPS, you can also check whether the redirect functions correctly:

      http://server_domain_or_IP
      

If this results in the same icon, your redirect worked correctly.
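You can also check the redirect from the command line with curl, which shows the status code and Location header without a browser. As elsewhere in this guide, server_domain_or_IP is a placeholder for your own address, and -k skips certificate validation since the certificate is self-signed:

```shell
# Show the redirect response from the HTTP server block
curl -sSI http://server_domain_or_IP

# Fetch the same page over HTTPS, ignoring the self-signed-certificate warning
curl -skI https://server_domain_or_IP
```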

      Step 6 — Changing to a Permanent Redirect

      If your redirect worked correctly and you are sure you want to allow only encrypted traffic, you should modify the Nginx configuration to make the redirect permanent.

      Open your server block configuration file again:

      • sudo nano /etc/nginx/sites-available/example.com

      Find the return 302 and change it to return 301:

      /etc/nginx/sites-available/example.com

          return 301 https://$server_name$request_uri;
      

      Save and close the file.

      Check your configuration for syntax errors:

      When you're ready, restart Nginx to make the redirect permanent:

      • sudo systemctl restart nginx

      Conclusion

You have configured your Nginx server to use strong encryption for client connections. This will allow you to serve requests securely and will prevent outside parties from reading your traffic.


