
      How To Implement Browser Caching with Nginx’s header Module on CentOS 8



      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      The faster a website loads, the more likely a visitor is to stay. When websites are full of images and interactive content run by scripts loaded in the background, opening a website is not a simple task. It consists of requesting many different files from the server one by one. Minimizing the quantity of these requests is one way to speed up your website.

One method for improving website performance is browser caching. Browser caching tells the browser that it can reuse local versions of downloaded files instead of requesting them from the server again and again. To do this, you must introduce new HTTP response headers that tell the browser how to behave.

      Nginx’s header module can help you accomplish browser caching. You can use this module to add any arbitrary headers to the response, but its major role is to properly set caching headers. In this tutorial, we will use Nginx’s header module to implement browser caching.

      Prerequisites

To follow this tutorial, you will need:

      • One CentOS 8 server set up by following the Initial Server Setup guide, including a sudo non-root user.
      • Nginx installed on your server, which you can set up by following our tutorial How To Install Nginx on CentOS 8.

      Step 1 — Creating Test Files

      In this step, we will create several test files in the default Nginx directory. We’ll use these files later to check Nginx’s default behavior and then to test that browser caching is working.

      To infer what kind of file is served over the network, Nginx does not analyze the file contents; that would be prohibitively slow. Instead, it looks up the file extension to determine the file’s MIME type, which denotes its purpose.

      Because of this behavior, the content of our test files is irrelevant. By naming the files appropriately, we can trick Nginx into thinking that, for example, one entirely empty file is an image and another is a stylesheet.
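If you're curious how Nginx maps extensions to MIME types, you can inspect its mime.types file, which on CentOS 8 typically lives at /etc/nginx/mime.types:

      • grep jpeg /etc/nginx/mime.types

      Assuming a stock installation, this should print a line similar to image/jpeg jpeg jpg;, which maps both the jpeg and jpg extensions to the image/jpeg MIME type.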

      Create a file named test.html in the default Nginx directory using truncate. This extension denotes that it’s an HTML page:

      • sudo truncate -s 1k /usr/share/nginx/html/test.html

      Let’s create a few more test files in the same manner: one jpg image file, one css stylesheet, and one js JavaScript file:

      • sudo truncate -s 1k /usr/share/nginx/html/test.jpg
      • sudo truncate -s 1k /usr/share/nginx/html/test.css
      • sudo truncate -s 1k /usr/share/nginx/html/test.js

      The next step is to check how Nginx behaves with respect to sending caching control headers on a fresh installation with the files we have just created.

      Step 2 — Checking the Default Behavior

By default, all files will have the same caching behavior. To explore this, we'll use the HTML file we created in Step 1, but you can run these tests with any example file.

      So, let’s check if test.html is served with any information regarding how long the browser should cache the response. The following command requests a file from our local Nginx server and shows the response headers:

      • curl -I http://localhost/test.html

      You should see several HTTP response headers:

      Output: Nginx response headers

HTTP/1.1 200 OK
      Server: nginx/1.14.1
      Date: Thu, 04 Feb 2021 18:23:09 GMT
      Content-Type: text/html
      Content-Length: 1024
      Last-Modified: Thu, 04 Feb 2021 18:22:39 GMT
      Connection: keep-alive
      ETag: "601c3b6f-400"
      Accept-Ranges: bytes

      In the second to last line you will find the ETag header, which contains a unique identifier for this particular revision of the requested file. If you execute the previous curl command repeatedly, you will find the exact same ETag value.

      When using a web browser, the ETag value is stored and sent back to the server with the If-None-Match request header when the browser wants to request the same file again — for example, when refreshing the page.

      We can simulate this on the command line with the following command. Make sure you change the ETag value in this command to match the ETag value in your previous output:

      • curl -I -H 'If-None-Match: "601c3b6f-400"' http://localhost/test.html

      The response will now be different:

      Output: Nginx response headers

HTTP/1.1 304 Not Modified
      Server: nginx/1.14.1
      Date: Thu, 04 Feb 2021 18:24:05 GMT
      Last-Modified: Thu, 04 Feb 2021 18:22:39 GMT
      Connection: keep-alive
      ETag: "601c3b6f-400"

      This time, Nginx will respond with 304 Not Modified. It won’t send the file over the network again; instead, it will tell the browser that it can reuse the file it already has downloaded locally.

      This is useful because it reduces network traffic, but it’s not good enough to achieve good caching performance. The problem with ETag is that the browser always sends a request to the server asking if it can reuse its cached file. Even though the server responds with a 304 instead of sending the file again, it still takes time to make the request and receive the response.

In the next step, we will use the headers module to append caching control information. This will make the browser cache some files locally without explicitly asking the server if it's fine to do so.

      Step 3 — Configuring Cache-Control and Expires Headers

In addition to the ETag file validation header, there are two caching control response headers: Cache-Control and Expires. Cache-Control is the newer header, offers more options than Expires, and is generally more useful if you want finer control over your caching behavior.

      If these headers are set, they can tell the browser that the requested file can be kept locally for a certain amount of time (including forever) without requesting it again. If the headers are not set, browsers will always request the file from the server, expecting either 200 OK or 304 Not Modified responses.

      We can use the header module to set these HTTP headers. The header module is a core Nginx module, which means it doesn’t need to be installed separately to be used.

      To add the header module, open the default server block Nginx configuration file in vi (here’s a short introduction to vi) or your favorite text editor:

      • sudo vi /etc/nginx/nginx.conf

      Find the server configuration block:

      /etc/nginx/nginx.conf

      . . .
      server {
          listen 80 default_server;
          listen [::]:80 default_server;
          server_name  _;
          root         /usr/share/nginx/html;
      . . .
      

      Add the following two new sections here: one before the server block, to define how long to cache different file types, and one inside it, to set the caching headers appropriately:

      Modified /etc/nginx/nginx.conf

      . . .
      # Expires map
      map $sent_http_content_type $expires {
          default                    off;
          text/html                  epoch;
          text/css                   max;
          application/javascript     max;
          ~image/                    max;
          ~font/                     max;
      }
      
      server {
          listen 80 default_server;
          listen [::]:80 default_server;
          server_name  _;
          root         /usr/share/nginx/html;
      
          expires $expires;
      . . .
      

      The section before the server block is a new map block that defines the mapping between the file type and how long that kind of file should be cached.

      We’re using several different settings in this map:

• The default value is set to off, which will not add any caching control headers. It's a safe bet for content for which we have no particular requirements on how the cache should work.

      • For text/html, we set the value to epoch. This is a special value that results explicitly in no caching, which forces the browser to always ask if the website itself is up to date.

      • For text/css and application/javascript, which are stylesheets and JavaScript files, we set the value to max. This means the browser will cache these files for as long as possible, reducing the number of requests considerably given that there are typically many of these files.

      • The last two settings are for ~image/ and ~font/, which are regular expressions that will match all file types containing image/ or font/ in their MIME type name (like image/jpg, image/png or font/woff2). Like stylesheets, both pictures and web fonts on websites can be safely cached to speed up page-loading times, so we set this to max as well.

Note: These are just a few examples of the most common MIME types used on websites. You can get acquainted with a more extensive list of such types on the Common MIME types site and add others to the map that you might find useful in your case.

      Inside the server block, the expires directive (a part of the headers module) sets the caching control headers. It uses the value from the $expires variable set in the map. This way, the resulting headers will be different depending on the file type.

      Save and close the file to exit.
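Before restarting, you can optionally verify that the modified configuration is free of syntax errors. The nginx -t switch only tests the configuration files and doesn't apply anything:

      • sudo nginx -t

      If the test reports success, it's safe to restart.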

      To enable the new configuration, restart Nginx:

      • sudo systemctl restart nginx

      Next, let’s make sure our new configuration works.

      Step 4 — Testing Browser Caching

      Execute the same request as before for the test HTML file:

      • curl -I http://localhost/test.html

      This time the response will be different. You will see two additional HTTP response headers:

      Nginx response headers

      HTTP/1.1 200 OK
      Server: nginx/1.14.1
      Date: Thu, 04 Feb 2021 18:28:19 GMT
      Content-Type: text/html
      Content-Length: 1024
      Last-Modified: Thu, 04 Feb 2021 18:22:39 GMT
      Connection: keep-alive
      ETag: "601c3b6f-400"
      Expires: Thu, 01 Jan 1970 00:00:01 GMT
      Cache-Control: no-cache
      Accept-Ranges: bytes
      

The Expires header shows a date in the past and Cache-Control is set to no-cache, which tells the browser to always ask the server if there is a newer version of the file (using the ETag header, like before).

You'll see a different response for the test image file:

      • curl -I http://localhost/test.jpg

      Note the new output:

      Nginx response headers

      HTTP/1.1 200 OK
      Server: nginx/1.14.1
      Date: Thu, 04 Feb 2021 18:29:05 GMT
      Content-Type: image/jpeg
      Content-Length: 1024
      Last-Modified: Thu, 04 Feb 2021 18:22:42 GMT
      Connection: keep-alive
      ETag: "601c3b72-400"
      Expires: Thu, 31 Dec 2037 23:55:55 GMT
      Cache-Control: max-age=315360000
      Accept-Ranges: bytes
      

      In this case, Expires shows the date in the distant future, and Cache-Control contains max-age information, which tells the browser how long it can cache the file in seconds. This tells the browser to cache the downloaded image for as long as it can, so any subsequent appearances of this image will use local cache and not send a request to the server at all.

      The result should be similar for both test.js and test.css, as both JavaScript and stylesheet files are set with caching headers too.
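You can verify this with the same kind of request as before; both responses should include Cache-Control: max-age=315360000:

      • curl -I http://localhost/test.css
      • curl -I http://localhost/test.js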

This means the cache control headers have been configured properly, and your website will benefit from the performance gain and fewer server requests due to browser caching. You should customize the caching settings based on your website's content, but the defaults in this article are a reasonable place to start.

      Conclusion

      The headers module can be used to add any arbitrary headers to the response, but properly setting caching control headers is one of its most useful applications. It increases performance for the website users, especially on networks with higher latency, like mobile carrier networks. It can also lead to better results on search engines that factor speed tests into their results. Setting browser caching headers is a crucial recommendation from Google’s PageSpeed and similar performance testing tools.

      You can find more detailed information about the headers module in Nginx’s official headers module documentation.




      How To Set Up Multi-Factor Authentication for SSH on CentOS 8



      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

SSH uses passwords for authentication by default, and most SSH hardening instructions recommend using an SSH key instead. However, an SSH key is still only a single factor, though a much more secure one. The channel is the terminal on your computer sending the data via an encrypted tunnel to the remote machine. But just as a hacker can guess a password, they can steal an SSH key, and in either case, with that single piece of data, an attacker can gain access to your remote systems.

      In this tutorial, we’ll set up multi-factor authentication to combat that. Multi-factor authentication (MFA) or Two-factor authentication (2FA) requires more than one factor to authenticate or log in. This means a bad actor would have to compromise multiple things, like your computer and your phone, to get in. There are several types of factors used in authentication:

      1. Something you know, like a password or security question
      2. Something you have, like an authenticator app or security token
      3. Something you are, like your fingerprint or voice

      One common factor is an OATH-TOTP app, like Google Authenticator. OATH-TOTP (Open Authentication Time-Based One-Time Password) is an open protocol that generates a one-time use password, commonly a six-digit number recycled every 30 seconds.

      This article will go over how to enable SSH authentication using an OATH-TOTP app in addition to an SSH key. Logging into your server via SSH will require two factors across two channels, thereby making it more secure than a password or SSH key alone. Also, we’ll go over some additional use cases for MFA and some helpful tips and tricks.

      And if you are seeking further guidance on securing SSH connections, check out these tutorials on Hardening OpenSSH and Hardening OpenSSH Client.

      Prerequisites

      To follow this tutorial, you will need:

      • One CentOS 8 server with a sudo non-root user and SSH key, which you can set up by following this Initial Server Setup tutorial.
      • A smartphone or tablet with an OATH-TOTP app installed, like Google Authenticator (iOS, Android).
• Alternatively, you can also use a Linux command line app called 'oathtool' to generate an OATH-TOTP code. It's available in various distribution repos (see the example after this list).
• If you really want to secure your SSH connection, there are several good steps outlined in this SSH Essentials article, such as whitelisting users, disabling root login, and changing which port SSH uses.
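As a quick illustration of the oathtool alternative mentioned in the list above: assuming you already have a base32-encoded secret (the encoding authenticator apps use), with your_base32_secret as a placeholder, the following command prints the current six-digit code. The -b flag tells oathtool that the key is base32-encoded:

      • oathtool --totp -b "your_base32_secret"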

      Step 1 — Installing Google’s PAM

      In this step, we’ll install and configure Google’s PAM.

      PAM, which stands for Pluggable Authentication Module, is an authentication infrastructure used on Linux systems to authenticate a user. Because Google made an OATH-TOTP app, they also made a PAM that generates TOTPs and is fully compatible with any OATH-TOTP app, like Google Authenticator or Authy.

First, add the EPEL (Extra Packages for Enterprise Linux) repo. Start by searching for the repository configuration package:

      • sudo yum search epel

      If your repositories have the EPEL install package, you will see the following output:

      Output

===== Name Matched: epel =====
      epel-release.noarch : Extra Packages for Enterprise Linux repository configuration

      Now install the epel-release package to enable the EPEL repository:

      • sudo yum install epel-release

      However, if you don’t have the package epel-release, then you can install the repository information manually:

      • sudo yum install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm

Next, install the PAM. You might be prompted to accept the EPEL key if this is the first time using the repo. Once accepted, you won't be prompted again to accept the key:

      • sudo yum install google-authenticator qrencode-libs

      With the PAM installed, we’ll use a helper app that comes with the PAM to generate a TOTP key for the user you want to add a second factor to. This key is generated on a user-by-user basis, not system-wide. This means every user that wants to use a TOTP auth app will need to log in and run the helper app to get their own key; you can’t just run it once to enable it for everyone (but there are some tips at the end of this tutorial to set up or require MFA for many users).

Run the following command to initialize the app:

      • google-authenticator -s ~/.ssh/google_authenticator

      Normally, all you need to do is run the google-authenticator command with no arguments, but SELinux doesn’t allow the ssh daemon to write to files outside of the .ssh directory in your home folder. This prevents authentication.

      SELinux is a powerful tool that protects your system from potential attacks, and it’s worth running in Enforcing mode. As such, turning off SELinux is not considered a best practice. Instead, we’ll move the default location of the google_authenticator file into your ~/.ssh directory.
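If you want to confirm which mode SELinux is currently running in, you can check with getenforce, which prints Enforcing, Permissive, or Disabled:

      • getenforce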

      After you run the command, you’ll be asked a few questions. The first one asks if authentication tokens should be time-based:

      Output

      Do you want authentication tokens to be time-based (y/n) y

This PAM allows for time-based or sequential-based tokens. Using sequential-based tokens means the code starts at a certain point and then increments after every use. Using time-based tokens means the code changes after a certain time frame. We'll stick with time-based because that is what apps like Google Authenticator anticipate, so answer y for yes.

      After answering this question, a lot of output will scroll past, including a large QR code. Use your authenticator app on your phone to scan the QR code or manually type in the secret key. If the QR code is too big to scan, you can use the URL above the QR code to get a smaller version. Once it’s added, you’ll see a six-digit code that changes every 30 seconds in your app.

      Note: Make sure you record the secret key, verification code, and the emergency scratch codes in a safe place, like a password manager. The emergency scratch codes are the only way to regain access if you, for example, lose access to your TOTP app.

      The remaining questions inform the PAM on how to function. We’ll go through them one by one:

      Output

      Do you want me to update your "~/.google_authenticator" file (y/n) y

      This writes the key and options to the google_authenticator file. If you say no, the program quits and nothing is written, which means the authenticator won’t work:

      Output

      Do you want to disallow multiple uses of the same authentication token? This restricts you to one login about every 30s, but it increases your chances to notice or even prevent man-in-the-middle attacks (y/n) y

      By answering yes here, you are preventing a replay attack by making each code expire immediately after use. This prevents an attacker from capturing a code you just used and logging in with it:

      Output

      By default, a new token is generated every 30 seconds by the mobile app. In order to compensate for possible time-skew between the client and the server, we allow an extra token before and after the current time. This allows for a time skew of up to 30 seconds between authentication server and client. If you experience problems with poor time synchronization, you can increase the window from its default size of 3 permitted codes (one previous code, the current code, the next code) to 17 permitted codes (the 8 previous codes, the current code, and the 8 next codes). This will permit for a time skew of up to 4 minutes between client and server. Do you want to do so? (y/n) n

Answering yes here allows up to 17 valid codes in a moving four-minute window. By answering no, you limit it to 3 valid codes in a 1:30 minute rolling window. Unless you find issues with the 1:30 minute window, answering no is the more secure choice. If you answer no and later realize you need more time, this setting can be adjusted in the google_authenticator file, which in our setup is stored at ~/.ssh/google_authenticator:

      Output

      If the computer that you are logging into isn't hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. By default, this limits attackers to no more than 3 login attempts every 30s. Do you want to enable rate-limiting (y/n) y

      Rate limiting means a remote attacker can only attempt a certain number of guesses before being forced to wait some time before being able to try again. If you haven’t previously configured rate limiting directly into SSH, doing so now is a great hardening technique.

Once you finish this setup, if you want to back up your secret key, you can copy the ~/.ssh/google_authenticator file to a trusted location. From there, you can deploy it on additional systems or redeploy it after a clean install. Be aware that by using the same key on multiple machines, you are introducing the possibility of a replay attack if an attacker is able to sniff the token for one server and then use it, within the valid window, against a different server. You must weigh the likelihood of that risk against the overhead of having every system use a different token and managing those tokens.
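For example, assuming the same username and an existing ~/.ssh directory on the second machine, and using second_server as a placeholder hostname, copying the key could look like this (remember to restore the SELinux context on the destination, as described below):

      • scp ~/.ssh/google_authenticator sammy@second_server:~/.ssh/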

Note: If you are using an encrypted home folder, which is beyond the scope of this article, you may need to store the google_authenticator file in a directory outside of your home folder. The project README has details on how to do that.

      Since we stored the config file in a non-standard location, we need to restore the SELinux context based on its new location.

Use the following command to do so:

      • restorecon -Rv ~/.ssh/

With those two changes done, we now have the Google Authenticator PAM installed and configured. The next step is to configure SSH to use your TOTP key: we'll need to tell SSH about the PAM and then configure SSH to use it.

      Step 2 — Configuring OpenSSH to Use MFA/2FA

Because we'll be making SSH changes over SSH, it's important to never close your initial SSH connection. Instead, open a second SSH session to do testing. This is to avoid locking yourself out of your server if there is a mistake in your SSH configuration. Once everything works, you can safely close any sessions. Another safety precaution is to create a backup of the system files you will be editing, so if something goes wrong, you can simply revert to the vanilla file and start over with a clean configuration.

      To begin, back up the sshd configuration file and then edit it. Here, we’re using nano, which isn’t installed on CentOS by default. You can install it with sudo yum install nano, or use your favorite alternative text editor.

Back up the file and then open it:

      • sudo cp /etc/pam.d/sshd /etc/pam.d/sshd.bak
      • sudo nano /etc/pam.d/sshd

Add the following lines to the end of the file:

      /etc/pam.d/sshd

      auth       required     pam_google_authenticator.so secret=/home/${USER}/.ssh/google_authenticator nullok
      auth       required     pam_permit.so
      

      Since we had to put the google_authenticator config file in a non-standard location, we have to provide PAM with the path to the config file. The secret option tells PAM where the non-default location of the config file is stored.

The nullok option at the end of the first line tells the PAM that this authentication method is optional. This allows users without an OATH-TOTP token to still log in using just their SSH key. Once all users have an OATH-TOTP token, you can remove nullok from this line to make MFA mandatory.

The second line with pam_permit.so is required to allow authentication if a user doesn't use an MFA token to log in. When logging in, each method needs to return SUCCESS to allow authentication. If a user doesn't use the MFA auth tool, the nullok option makes the Google Authenticator module return IGNORE for the interactive keyboard authentication. The next line then triggers, and pam_permit.so returns SUCCESS, allowing authentication to proceed.

      Save and close the file.

      Next, we’ll configure SSH to support this kind of authentication.

Back up the SSH configuration file, then open it for editing:

      • sudo cp /etc/ssh/sshd_config /etc/ssh/sshd_config.bak
      • sudo nano /etc/ssh/sshd_config

Look for lines beginning with ChallengeResponseAuthentication. Comment out the no line and uncomment the yes line.

Your file will look like this:

      /etc/ssh/sshd_config

      . . .
      # Change to no to disable s/key passwords
      ChallengeResponseAuthentication yes
      #ChallengeResponseAuthentication no
      . . .
      

      Save and close the file and then restart SSH to reload the configuration files. Restarting the sshd service won’t close our current open connections, so you won’t risk locking yourself out with this command:

      • sudo systemctl restart sshd.service

      To test that everything’s working so far, open ANOTHER terminal window and try logging in over SSH. It is very important that you keep your current SSH session open and test with an additional session or you will lock yourself out at some point and will need to use the web console to get yourself back in.

      Note: If you’ve previously created an SSH key and are using it, you’ll notice you didn’t have to type in your user’s password or the MFA verification code. This is because an SSH key overrides all other authentication options by default. Otherwise, you should have gotten a password and verification code prompt.

      Next, to enable an SSH key as one factor and the verification code as a second, we need to tell SSH which factors to use and prevent the SSH key from overriding all other types.

      Step 3 — Making SSH Aware of MFA

      Reopen the sshd configuration file:

      • sudo nano /etc/ssh/sshd_config

Add the following line at the bottom of the file. This tells SSH which authentication methods are required. This line tells SSH we need an SSH key and either a password or a verification code (or all three):

      /etc/ssh/sshd_config

      . . .
      AuthenticationMethods publickey,password publickey,keyboard-interactive
      

      Save and close the file.

      Next, open the PAM sshd configuration file again:

      • sudo nano /etc/pam.d/sshd

      Find the line auth substack password-auth towards the top of the file. Comment it out by adding a # character as the first character on the line. This tells PAM not to prompt for a password:

      /etc/pam.d/sshd

      . . .
      #auth       substack     password-auth
      . . .
      

      Save and close the file, then restart SSH:

      • sudo systemctl restart sshd.service

      Now try logging into the server again with a different terminal session/window. Unlike last time, SSH should ask for your verification code. Upon entering it, you’ll be logged in. Even though you don’t see any indication that your SSH key was used, your login attempt used two factors. If you want to verify, you can add -v (for verbose) after the SSH command:

      Example SSH Output

. . .
      debug1: Authentications that can continue: publickey
      debug1: Next authentication method: publickey
      debug1: Offering RSA public key: /Users/sammy/.ssh/id_rsa
      debug1: Server accepts key: pkalg ssh-rsa blen 279
      Authenticated with partial success.
      debug1: Authentications that can continue: keyboard-interactive
      debug1: Next authentication method: keyboard-interactive
      Verification code:

Towards the end of the output, you'll see where SSH uses your SSH key and then asks for the verification code. You can now log in over SSH with an SSH key and a one-time password. If you want to enforce all three authentication types, you can follow the next step.

      You have now successfully added a second factor when logging in remotely to your server over SSH. If this is what you wanted — to use your SSH key and a TOTP token to enable MFA for SSH (for most people, this is the optimal configuration) — then you’re done.

      What follows are some tips and tricks for recovery, automated usage, and more.

      Step 4 — Adding a Third Factor (Optional)

      In Step 3, we listed the approved types of authentication in the sshd_config file:

      1. publickey (SSH key)
      2. password publickey (password)
      3. keyboard-interactive (verification code)

      Although we listed three different factors, with the options we’ve chosen so far, they only allow for an SSH key and the verification code. If you’d like to have all three factors (SSH key, password, and verification code), one quick change will enable all three.

      Open the PAM sshd configuration file:

      • sudo nano /etc/pam.d/sshd

      Locate the line you commented out previously, #auth substack password-auth, and uncomment the line by removing the # character. Save and close the file. Now once again, restart SSH:

      • sudo systemctl restart sshd.service

By enabling the option auth substack password-auth, PAM will now prompt for a password in addition to checking for an SSH key and asking for a verification code, which we had working previously. Now we can use something we know (the password) and two different types of things we have (an SSH key and a verification code) over two different channels (your computer for the SSH key and your phone for the TOTP token).

      Step 5 — Recovering Access to Google MFA (optional)

      As with any system that you harden and secure, you become responsible for managing that security. In this case, that means not losing your SSH key or your TOTP secret key and making sure you have access to your TOTP app. However, sometimes things happen, and you can lose control of the keys or apps you need to get in.

      Losing Access to the Secret Key

      If you lose your TOTP secret key, recovery can be broken up into a couple of steps. The first is getting back in without knowing the verification code and the second is finding the secret key or regenerating it for normal MFA login. This often can happen if you get a new phone and don’t transfer over your secrets to a new authenticator app.

To get in after losing the TOTP secret key on a DigitalOcean Droplet, you can simply use the virtual console from your dashboard to log in using your username and password. This works because we only protected your user account with MFA for SSH connections. Non-SSH connections, such as a console login, don't use the Google Authenticator PAM module.

      If you’re on a non-Droplet system then you have two options for regaining access:

      1. Console (local/non-ssh) access to the system (typically physical or via something like iDrac)
      2. Have a different user that doesn’t have MFA enabled

The second option is less secure, since the point of using MFA is to harden all SSH connections, but it's one failsafe if you lose access to your MFA authenticator app.

      Once you’re logged in, there are two ways to get the TOTP secret:

      1. Recover the existing key
      2. Generate a new key

In each user's home directory, the secret key and Google Authenticator settings are saved in the file ~/.ssh/google_authenticator. The very first line of this file is the secret key. A quick way to get the key is to execute the following command, which displays the first line of the google_authenticator file (i.e., the secret key). Then, take that secret key and manually type it into a TOTP app:

      • head -n 1 /home/sammy/.ssh/google_authenticator

Once you've recovered your existing key, you can either manually type it into your authenticator app, or you can fill in the relevant details in the URL below and have Google generate a QR code for you to scan. You'll need to add your username, hostname, the secret key from the google_authenticator file, and then any name of your choosing for entry-name-in-auth-app to easily identify this key versus a different TOTP token:

      https://www.google.com/chart?chs=200x200&chld=M|0&cht=qr&chl=otpauth://totp/username@hostname%3Fsecret%3D16-char-secret%26issuer%3Dentry-name-in-auth-app
      

If there is a reason not to use the existing key (for example, being unable to securely share the secret key with the impacted user), you can remove the ~/.ssh/google_authenticator file outright. This will allow the user to log in again using only a single factor, assuming you haven't enforced MFA by removing the nullok option in the /etc/pam.d/sshd file. They can then run google-authenticator to generate a new key.
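For example, removing the file for the user sammy would look like this:

      • sudo rm /home/sammy/.ssh/google_authenticator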

      Losing Access to the TOTP App

      If you need to log in to your server but don’t have access to your TOTP app to get your verification code, you can still log in using the recovery codes that were displayed when you first created your secret key. Note that these recovery codes are one-time use. Once one is used to log in, it cannot be used as a verification code again. Hopefully you saved them in a place that is accessible if you don’t have your TOTP app and yet still secure.

      Step 6 — Changing Authentication Settings (optional)

If you want to change your MFA settings after the initial configuration, instead of generating a new configuration with the updated settings, you can just edit the ~/.ssh/google_authenticator file. This file is laid out in the following manner:

google_authenticator layout

      <secret key>
      <options>
      <recovery codes>
      

Options that are set in this file have a line in the options section; if you answered “no” to a particular option during the initial setup, the corresponding line is excluded from the file. In other words, the file only contains enabled options. If an option isn't present, it is disabled by default.

      Here are the changes you can make to this file:

• To enable sequential codes instead of time-based codes, change the line " TOTP_AUTH to " HOTP_COUNTER 1.
      • To allow multiple uses of a single code, remove the line " DISALLOW_REUSE.
      • To extend the code expiration window to 4 minutes, add the line " WINDOW_SIZE 17.
      • To disable multiple failed logins (rate limiting), remove the line " RATE_LIMIT 3 30.
      • To change the threshold of rate limiting, find the line " RATE_LIMIT 3 30 and adjust the numbers. The 3 in the original indicates the number of attempts over a period of time, and the 30 indicates the period of time in seconds.
• To disable the use of recovery codes, remove the five eight-digit codes at the bottom of the file.
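To make the layout concrete, here is what the file might look like after the setup in Step 1, with a placeholder secret key and placeholder recovery codes (the exact order of the option lines may vary):

      google_authenticator example

      your_secret_key
      " RATE_LIMIT 3 30
      " DISALLOW_REUSE
      " TOTP_AUTH
      11111111
      22222222
      33333333
      44444444
      55555555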

      Step 7 — Avoiding MFA for Some Accounts (optional)

      There may be a situation in which a single user or a few service accounts (i.e. accounts used by applications, not humans) need SSH access without MFA enabled. For example, some applications that use SSH, like some FTP clients, may not support MFA. If an application doesn’t have a way to request the verification code, the request may get stuck until the SSH connection times out.

      As long as a couple of options in /etc/pam.d/sshd are set correctly, you can control which factors are used on a user-by-user basis.

      To allow MFA for some accounts and SSH key only for others, make sure the following settings in /etc/pam.d/sshd are active:

      /etc/pam.d/sshd

      #%PAM-1.0
      auth       required     pam_sepermit.so
      #auth       substack     password-auth
      
      . . .
      
      # Used with polkit to reauthorize users in remote sessions
      -session   optional     pam_reauthorize.so prepare
auth       required      pam_google_authenticator.so secret=/home/${USER}/.ssh/google_authenticator nullok
      

      Here, auth substack password-auth is commented out because passwords need to be disabled. MFA cannot be forced if some accounts are meant to have MFA disabled, so leave the nullok option on the final line.

After setting this configuration, simply run google-authenticator as any user that needs MFA, and don't run it for users where only SSH keys will be used.

      Step 8 — Automating Setup with Configuration Management (optional)

Many system administrators use configuration management tools, like Puppet, Chef, or Ansible, to manage their systems. If you want to use a system like this to set up a secret key when a new user's account is created, there is a method to do that.

      google-authenticator supports command line switches to set all the options in a single, non-interactive command. To see all the options, you can type google-authenticator --help. Below is the command that would set everything up as outlined in Step 1:

      • google-authenticator -t -d -f -r 3 -R 30 -w 3 -s ~/.ssh/google_authenticator

      The options referenced above are as follows:

• -t => Time-based counter
      • -d => Disallow token reuse
      • -f => Force writing the settings to file without prompting the user
      • -r => How many attempts to enter the correct code
      • -R => How long in seconds a user can attempt to enter the correct code
      • -w => How many codes are valid at a time (this references the 1:30 min - 4 min window of valid codes)
      • -s => Path to where the authentication file should be stored

This answers all the questions we answered manually, saves the configuration to a file, and then outputs the secret key, QR code, and recovery codes. (If you add the flag -q, then there won't be any output.) If you do use this command in an automated fashion, make sure to capture the secret key and/or recovery codes and make them available to the user. Also remember to have your automation reset the SELinux context of the .ssh directory (restorecon -Rv ~/.ssh/).

      Step 9 — Forcing MFA for All Users (optional)

If you want to force MFA for all users even on the first login, or if you would prefer not to rely on your users to generate their own keys, there's an easy way to handle this. You can simply use the same google_authenticator file for each user, as there's no user-specific data stored in the file.

      To do this, after the configuration file is initially created, a privileged user needs to copy the file to the .ssh directory of every home directory and change its permissions to the appropriate user. You can also copy the file to /etc/skel/ so it’s automatically copied over to a new user’s home directory upon creation.

      Warning: This can be a security risk because everyone is sharing the same second factor. This means that if it’s leaked, it’s as if every user had only one factor. Take this into consideration if you want to use this approach.

      Another method to force the creation of a user’s secret key is to use a bash script that:

      1. Creates a TOTP token,
      2. Prompts them to download the Google Authenticator app and scan the QR code that will be displayed, and
      3. Runs the google-authenticator application for them after checking if the google-authenticator file already exists.

      To make sure the script runs when a user logs in, you can name it .bash_login and place it at the root of their home directory.
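A minimal sketch of such a script, assuming the key is stored at ~/.ssh/google_authenticator as in Step 1, could look like this:

      .bash_login

      # Sourced by bash at login: generate a TOTP key if this user doesn't have one yet.
      if [ ! -f "$HOME/.ssh/google_authenticator" ]; then
          echo "This server requires MFA for SSH."
          echo "Install an OATH-TOTP app (like Google Authenticator) and scan the QR code below."
          # Same non-interactive options as in Step 8.
          google-authenticator -t -d -f -r 3 -R 30 -w 3 -s "$HOME/.ssh/google_authenticator"
          # Restore the SELinux context so the ssh daemon can read the new file.
          restorecon -Rv "$HOME/.ssh"
      fi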

      Conclusion

      In this tutorial, you added two factors (an SSH key + MFA token) across two channels (your computer + your phone) to your server. You’ve made it very difficult for an outside agent to brute force their way into your machine via SSH and greatly increased the security of your machine.

      And remember, if you are seeking further guidance on securing SSH connections, check out these tutorials on Hardening OpenSSH and Hardening OpenSSH Client.




      How To Secure Nginx with Let’s Encrypt on CentOS 8



      The author selected the COVID-19 Relief Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Let’s Encrypt is a certificate authority (CA) that provides free certificates for Transport Layer Security (TLS) encryption. It simplifies the process of creation, validation, signing, installation, and renewal of certificates by providing a software client—Certbot.

      In this tutorial you’ll set up a TLS/SSL certificate from Let’s Encrypt on a CentOS 8 server running Nginx as a web server. Additionally, you will automate the certificate renewal process using a cron job.

      Prerequisites

      In order to complete this guide, you will need:

      • One CentOS 8 server set up by following the CentOS 8 Initial Server Setup guide, including a non-root user with sudo privileges and a firewall.
      • Nginx installed on the CentOS 8 server with a configured server block. You can learn how to set this up by following our tutorial How To Install Nginx on CentOS 8.
      • A fully registered domain name. This tutorial will use your_domain as an example throughout. You can purchase a domain name on Namecheap, get one for free on Freenom, or use the domain registrar of your choice.
      • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them.
        • An A record with your_domain pointing to your server’s public IP address.
        • An A record with www.your_domain pointing to your server’s public IP address.

      Step 1 — Installing the Certbot Let’s Encrypt Client

First, you need to install the certbot software package. Log in to your CentOS 8 machine as your non-root user:

      • ssh sammy@your_server_ip

      The certbot package is not available through the package manager by default. You will need to enable the EPEL repository to install Certbot.

      To add the CentOS 8 EPEL repository, run the following command:

      • sudo dnf install epel-release

      When asked to confirm the installation, type and enter y.

      Now that you have access to the extra repository, install all of the required packages:

      • sudo dnf install certbot python3-certbot-nginx

      This will install Certbot itself and the Nginx plugin for Certbot, which is needed to run the program.

      The installation process will ask you about importing a GPG key. Confirm it so the installation can complete.

      You have now installed the Let’s Encrypt client, but before obtaining certificates, you need to make sure that all required ports are open. To do this, you will update your firewall settings in the next step.

      Step 2 — Updating the Firewall Rules

      Since your prerequisite setup enables firewalld, you will need to adjust the firewall settings in order to allow external connections on your Nginx web server.

      To check which services are already enabled, run the command:

      • sudo firewall-cmd --permanent --list-all

      You’ll receive output like this:

      Output

public
        target: default
        icmp-block-inversion: no
        interfaces:
        sources:
        services: cockpit dhcpv6-client http ssh
        ports:
        protocols:
        masquerade: no
        forward-ports:
        source-ports:
        icmp-blocks:
        rich rules:

      If you do not see http in the services list, enable it by running:

      • sudo firewall-cmd --permanent --add-service=http

      To allow https traffic, run the following command:

      • sudo firewall-cmd --permanent --add-service=https

      To apply the changes, you’ll need to reload the firewall service:

      • sudo firewall-cmd --reload
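To confirm the change, you can list the enabled services again; https should now appear in the services line:

      • sudo firewall-cmd --permanent --list-all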

      Now that you’ve opened up your server to https traffic, you’re ready to run Certbot and fetch your certificates.

      Step 3 — Obtaining a Certificate

      Now you can request an SSL certificate for your domain.

      When generating the SSL Certificate for Nginx using the certbot Let’s Encrypt client, the client will automatically obtain and install a new SSL certificate that is valid for the domains provided as parameters.

      If you want to install a single certificate that is valid for multiple domains or subdomains, you can pass them as additional parameters to the command. The first domain name in the list of parameters will be the base domain used by Let’s Encrypt to create the certificate, and for that reason you will pass the top-level domain name as first in the list, followed by any additional subdomains or aliases:

      • sudo certbot --nginx -d your_domain -d www.your_domain

      This runs certbot with the --nginx plugin, and the base domain will be your_domain. To execute the interactive installation and obtain a certificate that covers only a single domain, run the certbot command with:

      • sudo certbot --nginx -d your_domain

The certbot utility can also prompt you for domain information during the certificate request procedure. To use this functionality, call certbot without any domains:

      • sudo certbot --nginx

      You will receive a step-by-step guide to customize your certificate options. Certbot will ask you to provide an email address for lost key recovery and notices and to agree to the terms of service. If you did not specify your domains on the command line, Certbot will look for a server_name directive and will give you a list of the domain names found. If your server block files do not specify the domain they serve explicitly using the server_name directive, Certbot will ask you to provide domain names manually.

      For better security, Certbot will automatically configure redirecting all traffic on port 80 to 443.

      When the installation successfully finishes, you will receive a message similar to this:

      Output

IMPORTANT NOTES:
       - Congratulations! Your certificate and chain have been saved at:
         /etc/letsencrypt/live/your_domain/fullchain.pem
         Your key file has been saved at:
         /etc/letsencrypt/live/your_domain/privkey.pem
         Your cert will expire on 2021-02-26. To obtain a new or tweaked
         version of this certificate in the future, simply run certbot again
         with the "certonly" option. To non-interactively renew *all* of
         your certificates, run "certbot renew"
       - If you like Certbot, please consider supporting our work by:
         Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
         Donating to EFF:                    https://eff.org/donate-le

      The generated certificate files will be available within a subdirectory named after your base domain in the /etc/letsencrypt/live directory.
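You can list that subdirectory to confirm the files are in place; a typical Certbot layout looks like this:

      • sudo ls /etc/letsencrypt/live/your_domain

      Output

      cert.pem  chain.pem  fullchain.pem  privkey.pem  README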

      Now that you have finished using Certbot, you can check your SSL certificate status. Verify the status of your SSL certificate by opening the following link in your preferred web browser (don’t forget to replace your_domain with your base domain):

      https://www.ssllabs.com/ssltest/analyze.html?d=your_domain
      

      This site contains an SSL test from SSL Labs, which will start automatically. At the time of this writing, default settings will give an A rating.

      You can now access your website using the https prefix. However, you must renew certificates periodically to keep this setup working. In the next step, you will automate this renewal process.

      Step 4 — Setting Up Auto-Renewal

      Let’s Encrypt certificates are valid for 90 days, but it’s recommended that you renew the certificates every 60 days to allow for a margin of error. The Certbot Let’s Encrypt client has a renew command that automatically checks the currently installed certificates and tries to renew them if they are less than 30 days away from the expiration date.

      You can test automatic renewal for your certificates by running this command:

      • sudo certbot renew --dry-run

      The output will be similar to this:

      Output

Saving debug log to /var/log/letsencrypt/letsencrypt.log

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Processing /etc/letsencrypt/renewal/your_domain.conf
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Cert not due for renewal, but simulating renewal for dry run
      Plugins selected: Authenticator nginx, Installer nginx
      Renewing an existing certificate
      Performing the following challenges:
      http-01 challenge for your_domain
      Waiting for verification...
      Cleaning up challenges
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      new certificate deployed with reload of nginx server; fullchain is
      /etc/letsencrypt/live/your_domain/fullchain.pem
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      **          (The test certificates below have not been saved.)

      Congratulations, all renewals succeeded. The following certs have been renewed:
        /etc/letsencrypt/live/your_domain/fullchain.pem (success)
      ...

      Notice that if you created a bundled certificate with multiple domains, only the base domain name will show in the output, but the renewal will work for all domains included in this certificate.

      A practical way to ensure your certificates will not get outdated is to create a cron job that will periodically execute the automatic renewal command for you. Since the renewal first checks for the expiration date and only executes the renewal if the certificate is less than 30 days away from expiration, it is safe to create a cron job that runs every week, or even every day.

Edit the crontab to create a new job that will run the renewal twice per day. To edit the crontab for the root user, run:

      • sudo crontab -e

      Your text editor will open the default crontab, which is an empty text file at this point. Enter insert mode by pressing i and add in the following line:

      crontab

      0 0,12 * * * python -c 'import random; import time; time.sleep(random.random() * 3600)' && certbot renew --quiet
      

      When you’re finished, press ESC to leave insert mode, then :wq and ENTER to save and exit the file. To learn more about the text editor Vi and its successor Vim, check out our Installing and Using the Vim Text Editor on a Cloud Server tutorial.

      This will create a new cron job that will execute at noon and midnight every day. python -c 'import random; import time; time.sleep(random.random() * 3600)' will select a random minute within the hour for your renewal tasks.

      The renew command for Certbot will check all certificates installed on the system and update any that are set to expire in less than thirty days. --quiet tells Certbot not to output information or wait for user input.

      More detailed information about renewal can be found in the Certbot documentation.

      Conclusion

      In this guide, you installed the Let’s Encrypt client Certbot, downloaded SSL certificates for your domain, and set up automatic certificate renewal. If you have any questions about using Certbot, you can check the official Certbot documentation.

      You can also check the official Let’s Encrypt blog for important updates from time to time.


