
      How To Create a Self-Signed SSL Certificate for Apache on CentOS 8



      Introduction

      TLS, or “transport layer security” — and its predecessor SSL — are protocols used to wrap normal traffic in a protected, encrypted wrapper. Using this technology, servers can safely send information to their clients without their messages being intercepted or read by an outside party.

      In this guide, we will show you how to create and use a self-signed SSL certificate with the Apache web server on a CentOS 8 machine.

      Note: A self-signed certificate will encrypt communication between your server and its clients. However, because it is not signed by any of the trusted certificate authorities included with web browsers and operating systems, users cannot use the certificate to automatically validate the identity of your server. As a result, your users will see a security error when visiting your site.

      Because of this limitation, self-signed certificates are not appropriate for a production environment serving the public. They are typically used for testing, or for securing non-critical services used by a single user or a small group of users that can establish trust in the certificate’s validity through alternate communication channels.

      For a more production-ready certificate solution, check out Let’s Encrypt, a free certificate authority. You can learn how to download and configure a Let’s Encrypt certificate in our How To Secure Apache with Let’s Encrypt on CentOS 8 tutorial.

      Prerequisites

      Before starting this tutorial, you’ll need the following:

      • Access to a CentOS 8 server with a non-root, sudo-enabled user. Our Initial Server Setup with CentOS 8 guide can show you how to create this account.
      • You will also need to have Apache installed. You can install Apache using dnf:
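
        • sudo dnf install httpd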

        Enable Apache and start it using systemctl:

        • sudo systemctl enable httpd
        • sudo systemctl start httpd

        And finally, if you have a firewalld firewall set up, open up the http and https ports:

        • sudo firewall-cmd --permanent --add-service=http
        • sudo firewall-cmd --permanent --add-service=https
        • sudo firewall-cmd --reload

      After these steps are complete, be sure you are logged in as your non-root user and continue with the tutorial.

      Step 1 — Installing mod_ssl

      We first need to install mod_ssl, an Apache module that provides support for SSL encryption.

      Install mod_ssl with the dnf command:
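
      • sudo dnf install mod_ssl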

      Because of a packaging bug, we need to restart Apache once to properly generate the default SSL certificate and key; otherwise, we’ll get an error stating that '/etc/pki/tls/certs/localhost.crt' does not exist or is empty.

      • sudo systemctl restart httpd

      The mod_ssl module is now enabled and ready for use.

      Step 2 — Creating the SSL Certificate

      Now that Apache is ready to use encryption, we can move on to generating a new SSL certificate. The certificate will store some basic information about your site, and will be accompanied by a key file that allows the server to securely handle encrypted data.

      We can create the SSL key and certificate files with the openssl command:

      • sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/pki/tls/private/apache-selfsigned.key -out /etc/pki/tls/certs/apache-selfsigned.crt

      After you enter the command, you will be taken to a prompt where you can enter information about your website. Before we go over that, let’s take a look at what is happening in the command we are issuing:

      • openssl: This is the command line tool for creating and managing OpenSSL certificates, keys, and other files.
      • req -x509: This specifies that we want to use X.509 certificate signing request (CSR) management. X.509 is a public key infrastructure standard that SSL and TLS adhere to for key and certificate management.
      • -nodes: This tells OpenSSL to skip the option to secure our certificate with a passphrase. We need Apache to be able to read the file, without user intervention, when the server starts up. A passphrase would prevent this from happening, since we would have to enter it after every restart.
      • -days 365: This option sets the length of time that the certificate will be considered valid. We set it for one year here. Many modern browsers will reject any certificates that are valid for longer than one year.
      • -newkey rsa:2048: This specifies that we want to generate a new certificate and a new key at the same time. We did not create the key that is required to sign the certificate in a previous step, so we need to create it along with the certificate. The rsa:2048 portion tells it to make an RSA key that is 2048 bits long.
      • -keyout: This line tells OpenSSL where to place the generated private key file that we are creating.
      • -out: This tells OpenSSL where to place the certificate that we are creating.

      Fill out the prompts appropriately. The most important line is the one that requests the Common Name. You need to enter either the hostname you’ll use to access the server or the server’s public IP address. It’s important that this field matches whatever you’ll put into your browser’s address bar to access the site, as a mismatch will cause more security errors.

      The full list of prompts will look something like this:

      Country Name (2 letter code) [XX]:US
      State or Province Name (full name) []:Example
      Locality Name (eg, city) [Default City]:Example 
      Organization Name (eg, company) [Default Company Ltd]:Example Inc
      Organizational Unit Name (eg, section) []:Example Dept
      Common Name (eg, your name or your server's hostname) []:your_domain_or_ip
      Email Address []:[email protected]
      

      Both of the files you created will be placed in the appropriate subdirectories of the /etc/pki/tls directory. This is a standard directory provided by CentOS for this purpose.
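
      If you’d like to double-check the information in the certificate you just generated, you can optionally inspect it with openssl. This is a read-only check, and it assumes the output path used in the command above:

      • sudo openssl x509 -in /etc/pki/tls/certs/apache-selfsigned.crt -noout -text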

      Next we will update our Apache configuration to use the new certificate and key.

      Step 3 — Configuring Apache to Use SSL

      Now that we have our self-signed certificate and key available, we need to update our Apache configuration to use them. On CentOS, you can place new Apache configuration files (they must end in .conf) into /etc/httpd/conf.d and they will be loaded the next time the Apache process is reloaded or restarted.

      For this tutorial we will create a new minimal configuration file. If you already have an Apache <VirtualHost> set up and just need to add SSL to it, you will likely need to copy over the configuration lines that start with SSL, and switch the VirtualHost port from 80 to 443. We will take care of port 80 in the next step.

      Open a new file in the /etc/httpd/conf.d directory:

      • sudo vi /etc/httpd/conf.d/your_domain_or_ip.conf

      Paste in the following minimal VirtualHost configuration:

      /etc/httpd/conf.d/your_domain_or_ip.conf

      <VirtualHost *:443>
          ServerName your_domain_or_ip
          DocumentRoot /var/www/ssl-test
          SSLEngine on
          SSLCertificateFile /etc/pki/tls/certs/apache-selfsigned.crt
          SSLCertificateKeyFile /etc/pki/tls/private/apache-selfsigned.key
      </VirtualHost>
      

      Be sure to update the ServerName line to however you intend to address your server. This can be a hostname, full domain name, or an IP address. Make sure whatever you choose matches the Common Name you chose when making the certificate.

      The remaining lines specify a DocumentRoot directory to serve files from, and the SSL options needed to point Apache to our newly-created certificate and key.

      Now let’s create our DocumentRoot and put an HTML file in it just for testing purposes:

      • sudo mkdir /var/www/ssl-test

      Open a new index.html file with your text editor:

      • sudo vi /var/www/ssl-test/index.html

      Paste the following into the blank file:

      /var/www/ssl-test/index.html

      <h1>it worked!</h1>
      

      This is not a full HTML file, of course, but browsers are lenient and it will be enough to verify our configuration.

      Save and close the file, then check your Apache configuration for syntax errors by typing:

      • sudo apachectl configtest

      You may see some warnings, but as long as the output ends with Syntax OK, you are safe to continue. If this is not part of your output, check the syntax of your files and try again.

      When all is well, reload Apache to pick up the configuration changes:

      • sudo systemctl reload httpd

      Now load your site in a browser, being sure to use https:// at the beginning.

      You should see an error. This is normal for a self-signed certificate! The browser is warning you that it can’t verify the identity of the server, because our certificate is not signed by any of the browser’s known certificate authorities. For testing purposes and personal use this can be fine. You should be able to click through to advanced or more information and choose to proceed.

      After you do so, your browser will load the it worked! message.

      Note: If your browser doesn’t connect at all to the server, make sure your connection isn’t being blocked by a firewall. If you are using firewalld, the following commands will open ports 80 and 443:

      • sudo firewall-cmd --permanent --add-service=http
      • sudo firewall-cmd --permanent --add-service=https
      • sudo firewall-cmd --reload

      Next we will add another VirtualHost section to our configuration to serve plain HTTP requests and redirect them to HTTPS.

      Step 4 — Redirecting HTTP to HTTPS

      Currently, our configuration will only respond to HTTPS requests on port 443. It is good practice to also respond on port 80, even if you want to force all traffic to be encrypted. Let’s set up a VirtualHost to respond to these unencrypted requests and redirect them to HTTPS.

      Open the same Apache configuration file we started in previous steps:

      • sudo vi /etc/httpd/conf.d/your_domain_or_ip.conf

      At the bottom, create another VirtualHost block to match requests on port 80. Use the ServerName directive to again match your domain name or IP address. Then, use Redirect to match any requests and send them to the SSL VirtualHost. Make sure to include the trailing slash:

      /etc/httpd/conf.d/your_domain_or_ip.conf

      <VirtualHost *:80>
          ServerName your_domain_or_ip
          Redirect / https://your_domain_or_ip/
      </VirtualHost>
      

      Save and close this file when you are finished, then test your configuration syntax again, and reload Apache:

      • sudo apachectl configtest
      • sudo systemctl reload httpd

      You can test the new redirect functionality by visiting your site with plain http:// in front of the address. You should be redirected to https:// automatically.

      Conclusion

      You have now configured Apache to serve encrypted requests using a self-signed SSL certificate, and to redirect unencrypted HTTP requests to HTTPS.

      If you are planning on using SSL for a public website, you should look into purchasing a domain name and using a widely supported certificate authority such as Let’s Encrypt.

      For more information on using Let’s Encrypt with Apache, please read our How To Secure Apache with Let’s Encrypt on CentOS 8 tutorial.




      How To Install and Configure Zabbix to Securely Monitor Remote Servers on Ubuntu 20.04



      The author selected the Computer History Museum to receive a donation as part of the Write for DOnations program.

      Introduction

      Zabbix is open-source monitoring software for networks and applications. It offers real-time monitoring of thousands of metrics collected from servers, virtual machines, network devices, and web applications. These metrics can help you determine the current health of your IT infrastructure and detect problems with hardware or software components before customers complain. Useful information is stored in a database so you can analyze data over time and improve the quality of provided services or plan upgrades of your equipment.

      Zabbix uses several options for collecting metrics, including agentless monitoring of user services and client-server architecture. To collect server metrics, it uses a small agent on the monitored client to gather data and send it to the Zabbix server. Zabbix supports encrypted communication between the server and connected clients, so your data is protected while it travels over insecure networks.

      The Zabbix server stores its data in a relational database powered by MySQL or PostgreSQL. You can also store historical data in NoSQL databases like Elasticsearch and TimescaleDB. Zabbix provides a web interface so you can view data and configure system settings.

      In this tutorial, you will configure Zabbix on two Ubuntu 20.04 machines. One will be configured as the Zabbix server, and the other as a client that you’ll monitor. The Zabbix server will use a MySQL database to record monitoring data and use Nginx to serve the web interface.

      Prerequisites

      To follow this tutorial, you will need:

      • Two Ubuntu 20.04 servers set up by following the Initial Server Setup Guide for Ubuntu 20.04, including a non-root user with sudo privileges and a firewall configured with ufw. On one server, you will install Zabbix; this tutorial will refer to this as the Zabbix server. It will monitor your second server; this second server will be referred to as the second Ubuntu server.

      • The server that will run the Zabbix server needs Nginx, MySQL, and PHP installed. Follow Steps 1–3 of our Ubuntu 20.04 LEMP Stack guide to configure those on your Zabbix server.

      • A registered domain name. This tutorial will use your_domain throughout. You can purchase a domain name from Namecheap, get one for free with Freenom, or use the domain registrar of your choice.

      • Both of the following DNS records set up for your Zabbix server. If you are using DigitalOcean, please see our DNS documentation for details on how to add them.

        • An A record with your_domain pointing to your Zabbix server’s public IP address.
        • An A record with www.your_domain pointing to your Zabbix server’s public IP address.

      Additionally, because the Zabbix Server is used to access valuable information about your infrastructure that you would not want unauthorized users to access, it’s important that you keep your server secure by installing a TLS/SSL certificate. This is optional but strongly encouraged. If you would like to secure your server, follow the Let’s Encrypt on Ubuntu 20.04 guide after Step 3 of this tutorial.

      Step 1 — Installing the Zabbix Server

      First, you need to install Zabbix on the server where you installed MySQL, Nginx, and PHP. Log in to this machine as your non-root user:

      • ssh sammy@zabbix_server_ip_address

      Zabbix is available in Ubuntu’s package manager, but it’s outdated, so use the official Zabbix repository to install the latest stable version. Download and install the repository configuration package:

      • wget https://repo.zabbix.com/zabbix/5.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_5.0-1+focal_all.deb
      • sudo dpkg -i zabbix-release_5.0-1+focal_all.deb

      You will see the following output:

      Output

      Selecting previously unselected package zabbix-release.
      (Reading database ... 64058 files and directories currently installed.)
      Preparing to unpack zabbix-release_5.0-1+focal_all.deb ...
      Unpacking zabbix-release (1:5.0-1+focal) ...
      Setting up zabbix-release (1:5.0-1+focal) ...

      Update the package index so the new repository is included:
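
      • sudo apt update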

      Then install the Zabbix server and web frontend with MySQL database support:

      • sudo apt install zabbix-server-mysql zabbix-frontend-php

      Also, install the Zabbix agent, which will let you collect data about the Zabbix server status itself.

      • sudo apt install zabbix-agent

      Before you can use Zabbix, you have to set up a database to hold the data that the Zabbix server will collect from its agents. You can do this in the next step.

      Step 2 — Configuring the MySQL Database for Zabbix

      You need to create a new MySQL database and populate it with some basic information in order to make it suitable for Zabbix. You’ll also create a specific user for this database so Zabbix isn’t logging in to MySQL with the root account.

      Log in to MySQL as the root user:
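
      • sudo mysql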

      Create the Zabbix database with UTF-8 character support:

      • create database zabbix character set utf8 collate utf8_bin;

      Then create a user that the Zabbix server will use, give it access to the new database, and set the password for the user:

      • create user zabbix@localhost identified by 'your_zabbix_mysql_password';
      • grant all privileges on zabbix.* to zabbix@localhost;

      That takes care of the user and the database. Exit out of the database console.
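
      • quit;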

      Next you have to import the initial schema and data. The Zabbix installation provided you with a file that sets this up.

      Run the following command to set up the schema and import the data into the zabbix database. Use zcat since the data in the file is compressed:

      • zcat /usr/share/doc/zabbix-server-mysql*/create.sql.gz | mysql -uzabbix -p zabbix

      Enter the password for the zabbix MySQL user that you configured when prompted.

      This command may take a minute or two to execute. If you see the error ERROR 1045 (28000): Access denied for user 'zabbix'@'localhost' (using password: YES), then make sure you used the right password for the zabbix user.

      In order for the Zabbix server to use this database, you need to set the database password in the Zabbix server configuration file. Open the configuration file in your preferred text editor. This tutorial will use nano:

      • sudo nano /etc/zabbix/zabbix_server.conf

      Look for the following section of the file:

      /etc/zabbix/zabbix_server.conf

      ...
      ### Option: DBPassword                           
      #       Database password. Ignored for SQLite.   
      #       Comment this line if no password is used.
      #                                                
      # Mandatory: no                                  
      # Default:                                       
      # DBPassword=
      ...
      

      These comments in the file explain how to connect to the database. You need to set the DBPassword value in the file to the password for your database user. Add this line after those comments to configure the database:

      /etc/zabbix/zabbix_server.conf

      ...
      DBPassword=your_zabbix_mysql_password
      ...
      

      Save and close zabbix_server.conf by pressing CTRL+X, followed by Y and then ENTER if you’re using nano.

      You’ve now configured the Zabbix server to connect to the database. Next, you will configure the Nginx web server to serve the Zabbix frontend.

      Step 3 — Configuring Nginx for Zabbix

      To configure Nginx automatically, install the automatic configuration package:

      • sudo apt install zabbix-nginx-conf

      As a result, you will get the configuration file /etc/zabbix/nginx.conf, as well as a link to it in the Nginx configuration directory /etc/nginx/conf.d/zabbix.conf.

      Next, you need to make changes to this file. Open the configuration file:

      • sudo nano /etc/zabbix/nginx.conf

      The file contains an automatically generated Nginx server block configuration. It contains two lines that determine the server name and what port it is listening on:

      /etc/zabbix/nginx.conf

      server {
      #        listen          80;
      #        server_name     example.com;
      ...
      

      Uncomment the two lines, and replace example.com with your domain name. Your settings will look like this:

      /etc/zabbix/nginx.conf

      server {
              listen          80;
              server_name     your_domain;
      ...
      

      Save and close the file. Next, test to make sure that there are no syntax errors in any of your Nginx files and reload the configuration:

      • sudo nginx -t
      • sudo nginx -s reload

      Now that Nginx is set up to serve the Zabbix frontend, you will make some modifications to your PHP setup in order for the Zabbix web interface to work properly.

      Note: As mentioned in the Prerequisites section, it is recommended that you enable SSL/TLS on your server. If you would like to do this, follow our Ubuntu 20.04 Let’s Encrypt tutorial before you move on to Step 4 to obtain a free SSL certificate for Nginx. This process will automatically detect your Zabbix server block and configure it for HTTPS. After obtaining your SSL/TLS certificates, you can come back and complete this tutorial.

      Step 4 — Configuring PHP for Zabbix

      The Zabbix web interface is written in PHP and requires some special PHP server settings. The Zabbix installation process created a PHP-FPM configuration file that contains these settings. It is located in the directory /etc/zabbix and is loaded automatically by PHP-FPM. You need to make a small change to this file, so open it up with the following:

      • sudo nano /etc/zabbix/php-fpm.conf

      The file contains PHP settings that meet the necessary requirements for the Zabbix web interface. However, the timezone setting is commented out by default. To make sure that Zabbix uses the correct time, you need to set the appropriate timezone:

      /etc/zabbix/php-fpm.conf

      ...
      php_value[max_execution_time] = 300
      php_value[memory_limit] = 128M
      php_value[post_max_size] = 16M
      php_value[upload_max_filesize] = 2M
      php_value[max_input_time] = 300
      php_value[max_input_vars] = 10000
      ; php_value[date.timezone] = Europe/Riga
      

      Uncomment the timezone line highlighted in the preceding code block and change it to your timezone. You can use this list of supported time zones to find the right one for you. Then save and close the file.

      Now restart PHP-FPM to apply these new settings:

      • sudo systemctl restart php7.4-fpm.service

      You can now start the Zabbix server:

      • sudo systemctl start zabbix-server

      Then check whether the Zabbix server is running properly:

      • sudo systemctl status zabbix-server

      You will see the following status:

      Output

      ● zabbix-server.service - Zabbix Server
           Loaded: loaded (/lib/systemd/system/zabbix-server.service; disabled; vendor preset: enabled)
           Active: active (running) since Fri 2020-06-12 05:59:32 UTC; 36s ago
          Process: 27026 ExecStart=/usr/sbin/zabbix_server -c $CONFFILE (code=exited, status=0/SUCCESS)
      ...

      Finally, enable the server to start at boot time:

      • sudo systemctl enable zabbix-server

      The server is set up and connected to the database. Next, set up the web frontend.

      Step 5 — Configuring Settings for the Zabbix Web Interface

      The web interface lets you see reports and add hosts that you want to monitor, but it needs some initial setup before you can use it. Launch your browser and go to the address http://zabbix_server_name or https://zabbix_server_name if you set up Let’s Encrypt. On the first screen, you will see a welcome message. Click Next step to continue.

      On the next screen, you will see the table that lists all of the prerequisites to run Zabbix.

      Prerequisites

      All of the values in this table must be OK, so verify that they are. Be sure to scroll down and look at all of the prerequisites. Once you’ve verified that everything is ready to go, click Next step to proceed.

      The next screen asks for database connection information.

      DB Connection

      You told the Zabbix server about your database, but the Zabbix web interface also needs access to the database to manage hosts and read data. Therefore enter the MySQL credentials you configured in Step 2. Click Next step to proceed.

      On the next screen, you can leave the options at their default values.

      Zabbix Server Details

      The Name is optional; it is used in the web interface to distinguish one server from another in case you have several monitoring servers. Click Next step to proceed.

      The next screen will show the pre-installation summary so you can confirm everything is correct.

      Summary

      Click Next step to proceed to the final screen.

      The web interface setup is now complete. This process creates the configuration file /usr/share/zabbix/conf/zabbix.conf.php, which you could back up and use in the future. Click Finish to proceed to the login screen. The default user is Admin and the password is zabbix.

      Before you log in, set up the Zabbix agent on your second Ubuntu server.

      Step 6 — Installing and Configuring the Zabbix Agent

      Now you need to configure the agent software that will send monitoring data to the Zabbix server.

      Log in to the second Ubuntu server:

      • ssh sammy@second_ubuntu_server_ip_address

      Just like on the Zabbix server, run the following commands to install the repository configuration package:

      • wget https://repo.zabbix.com/zabbix/5.0/ubuntu/pool/main/z/zabbix-release/zabbix-release_5.0-1+focal_all.deb
      • sudo dpkg -i zabbix-release_5.0-1+focal_all.deb

      Next, update the package index:
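
      • sudo apt update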

      Then install the Zabbix agent:

      • sudo apt install zabbix-agent

      While Zabbix supports certificate-based encryption, setting up a certificate authority is beyond the scope of this tutorial. But you can use pre-shared keys (PSK) to secure the connection between the server and agent.

      First, generate a PSK:

      • sudo sh -c "openssl rand -hex 32 > /etc/zabbix/zabbix_agentd.psk"

      Show the key by using cat so you can copy it somewhere:

      • cat /etc/zabbix/zabbix_agentd.psk

      The key will look something like this:

      Output

      75ad6cb5e17d244ac8c00c96a1b074d0550b8e7b15d0ab3cde60cd79af280fca

      Save this for later; you will need it to configure the host.

      Now edit the Zabbix agent settings to set up its secure connection to the Zabbix server. Open the agent configuration file in your text editor:

      • sudo nano /etc/zabbix/zabbix_agentd.conf

      Each setting within this file is documented via informative comments throughout the file, but you only need to edit some of them.

      First you have to edit the IP address of the Zabbix server. Find the following section:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: Server
      #       List of comma delimited IP addresses, optionally in CIDR notation, or DNS names of Zabbix servers and Zabbix proxies.
      #       Incoming connections will be accepted only from the hosts listed here.
      #       If IPv6 support is enabled then '127.0.0.1', '::127.0.0.1', '::ffff:127.0.0.1' are treated equally
      #       and '::/0' will allow any IPv4 or IPv6 address.
      #       '0.0.0.0/0' can be used to allow any IPv4 address.
      #       Example: Server=127.0.0.1,192.168.1.0/24,::1,2001:db8::/32,zabbix.example.com
      #
      # Mandatory: yes, if StartAgents is not explicitly set to 0
      # Default:
      # Server=
      
      Server=127.0.0.1
      ...
      

      Change the default value to the IP of your Zabbix server:

      /etc/zabbix/zabbix_agentd.conf

      ...
      Server=zabbix_server_ip_address
      ...
      

      By default, Zabbix server connects to the agent. But for some checks (for example, monitoring the logs), a reverse connection is required. For correct operation, you need to specify the Zabbix server address and a unique host name.

      Find the section that configures the active checks and change the default values:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ##### Active checks related
      
      ### Option: ServerActive
      #       List of comma delimited IP:port (or DNS name:port) pairs of Zabbix servers and Zabbix proxies for active checks.
      #       If port is not specified, default port is used.
      #       IPv6 addresses must be enclosed in square brackets if port for that host is specified.
      #       If port is not specified, square brackets for IPv6 addresses are optional.
      #       If this parameter is not specified, active checks are disabled.
      #       Example: ServerActive=127.0.0.1:20051,zabbix.domain,[::1]:30051,::1,[12fc::1]
      #
      # Mandatory: no
      # Default:
      # ServerActive=
      
      ServerActive=zabbix_server_ip_address
      
      ### Option: Hostname
      #       Unique, case sensitive hostname.
      #       Required for active checks and must match hostname as configured on the server.
      #       Value is acquired from HostnameItem if undefined.
      #
      # Mandatory: no
      # Default:
      # Hostname=
      
      Hostname=Second Ubuntu Server
      ...
      

      Next, find the section that configures the secure connection to the Zabbix server and enable pre-shared key support. Find the TLSConnect section, which looks like this:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSConnect
      #       How the agent should connect to server or proxy. Used for active checks.
      #       Only one value can be specified:
      #               unencrypted - connect without encryption
      #               psk         - connect using TLS and a pre-shared key
      #               cert        - connect using TLS and a certificate
      #
      # Mandatory: yes, if TLS certificate or PSK parameters are defined (even for 'unencrypted' connection)
      # Default:
      # TLSConnect=unencrypted
      ...
      

      Then add this line to configure pre-shared key support:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSConnect=psk
      ...
      

      Next, locate the TLSAccept section, which looks like this:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSAccept
      #       What incoming connections to accept.
      #       Multiple values can be specified, separated by comma:
      #               unencrypted - accept connections without encryption
      #               psk         - accept connections secured with TLS and a pre-shared key
      #               cert        - accept connections secured with TLS and a certificate
      #
      # Mandatory: yes, if TLS certificate or PSK parameters are defined (even for 'unencrypted' connection)
      # Default:
      # TLSAccept=unencrypted
      ...
      

      Configure incoming connections to support pre-shared keys by adding this line:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSAccept=psk
      ...
      

      Next, find the TLSPSKIdentity section, which looks like this:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSPSKIdentity
      #       Unique, case sensitive string used to identify the pre-shared key.
      #
      # Mandatory: no
      # Default:
      # TLSPSKIdentity=
      ...
      

      Choose a unique name to identify your pre-shared key by adding this line:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSPSKIdentity=PSK 001
      ...
      

      You’ll use this as the PSK ID when you add your host through the Zabbix web interface.

      Then set the option that points to your previously created pre-shared key. Locate the TLSPSKFile option:

      /etc/zabbix/zabbix_agentd.conf

      ...
      ### Option: TLSPSKFile
      #       Full pathname of a file containing the pre-shared key.
      #
      # Mandatory: no
      # Default:
      # TLSPSKFile=
      ...
      

      Add this line to point the Zabbix agent to your PSK file you created:

      /etc/zabbix/zabbix_agentd.conf

      ...
      TLSPSKFile=/etc/zabbix/zabbix_agentd.psk
      ...
      

      Save and close the file. Now you can restart the Zabbix agent and set it to start at boot time:

      • sudo systemctl restart zabbix-agent
      • sudo systemctl enable zabbix-agent

      For good measure, check that the Zabbix agent is running properly:

      • sudo systemctl status zabbix-agent

      You will see the following status, indicating the agent is running:

      Output

      ● zabbix-agent.service - Zabbix Agent
           Loaded: loaded (/lib/systemd/system/zabbix-agent.service; enabled; vendor preset: enabled)
           Active: active (running) since Fri 2020-06-12 08:19:54 UTC; 25s ago
      ...

      The agent will listen on port 10050 for connections from the server. Configure UFW to allow connections to this port:
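
      • sudo ufw allow 10050/tcp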

      You can learn more about UFW in How To Set Up a Firewall with UFW on Ubuntu 20.04.

      Your agent is now ready to send data to the Zabbix server. But in order to use it, you have to link to it from the server’s web console. In the next step, you will complete the configuration.

      Step 7 — Adding the New Host to the Zabbix Server

      Installing an agent on a server you want to monitor is only half of the process. Each host you want to monitor needs to be registered on the Zabbix server, which you can do through the web interface.

      Log in to the Zabbix Server web interface by navigating to the address http://zabbix_server_name or https://zabbix_server_name:

      The Zabbix login screen

      When you have logged in, click on Configuration and then Hosts in the left navigation bar. Then click the Create host button in the top right corner of the screen. This will open the host configuration page.

      Creating a host

      Adjust the Host name and IP address to reflect the host name and IP address of your second Ubuntu server, then add the host to a group. You can select an existing group, for example Linux servers, or create your own group. The host can be in multiple groups. To do this, enter the name of an existing or new group in the Groups field and select the desired value from the proposed list.

      Before adding the group, click the Templates tab.

      Adding a template to the host

      Type Template OS Linux by Zabbix agent in the Search field and then select it from the list to add this template to the host.

      Next, navigate to the Encryption tab. Select PSK for both Connections to host and Connections from host. Then set PSK identity to PSK 001, which is the value of the TLSPSKIdentity setting of the Zabbix agent you configured previously. Then set PSK value to the key you generated for the Zabbix agent. It’s the one stored in the file /etc/zabbix/zabbix_agentd.psk on the agent machine.

      Setting up the encryption

      Finally, click the Add button at the bottom of the form to create the host.

      You will see your new host in the list. Wait for a minute and reload the page to see green labels indicating that everything is working fine and the connection is encrypted.

      Zabbix shows your new host

      If you have additional servers you need to monitor, log in to each host, install the Zabbix agent, generate a PSK, configure the agent, and add the host to the web interface following the same steps you followed to add your first host.

      The Zabbix server is now monitoring your second Ubuntu server. Now, set up email notifications to be notified about problems.

      Step 8 — Configuring Email Notifications

      Zabbix automatically supports many types of notifications: email, OTRS, Slack, Telegram, SMS, etc. You can see the full list of integrations at the Zabbix website.

      As an example, this tutorial will configure notifications for the Email media type.

      Click on Administration, and then Media types in the left navigation bar. You will see the list of all media types. There are two preconfigured options for email: one for plain text notifications and one for HTML notifications. In this tutorial you will use plain text notifications. Click on Email.

      Adjust the SMTP options according to the settings provided by your email service. This tutorial uses Gmail’s SMTP capabilities to set up email notifications; if you would like more information about setting this up, see How To Use Google’s SMTP Server.


      Note: If you use 2-Step Verification with Gmail, you need to generate an App Password for Zabbix. You’ll only have to enter an App password once during setup. You will find instructions on how to generate this password in the Google Help Center.

      If you are using Gmail, type in smtp.gmail.com for the SMTP server field, 465 for the SMTP server port field, gmail.com for SMTP helo, and your email for SMTP email. Then choose SSL/TLS for Connection security and Username and password for Authentication. Enter your Gmail address as the Username, and the App Password you generated from your Google account as the Password.

      Setting up email media type

      On the Message templates tab you can see the list of predefined messages for various types of notifications. Finally, click the Update button at the bottom of the form to update the email parameters.

      Now you can test sending notifications. To do this, click the underlined Test link in the corresponding row.

      You will see a pop-up window. Enter your email address in the Send to field and click the Test button. You will see a message about the successful sending and you will receive a test message.

      Testing email

      Close the pop-up by clicking the Cancel button.

      Now, create a new user. Click on Administration, and then Users in the left navigation bar. You will see the list of users. Then click the Create user button in the top right corner of the screen. This will open the user configuration page:

      Creating a user

      Enter the new username in the Alias field and set up a new password. Next, add the user to the administrator’s group. Type Zabbix administrators in the Groups field and select it from the proposed list.

      Once you’ve added the group, click the Media tab and click on the underlined Add link (not the Add button below it). You will see a pop-up window.

      Adding an email

      Select the Email option from the Type drop down. Enter your email address in the Send to field. You can leave the rest of the options at the default values. Click the Add button at the bottom to submit.

      Now navigate to the Permissions tab. Select Zabbix Super Admin from the User type drop-down menu.

      Finally, click the Add button at the bottom of the form to create the user.

      Note: Using the default password is not safe. To change the password of the built-in user Admin, click on the alias in the list of users. Then click Change password, enter a new password, and confirm the changes by clicking the Update button.

      Now you need to enable notifications. Click on the Configuration tab and then Actions in the left navigation bar. You will see a pre-configured action, which is responsible for sending notifications to all Zabbix administrators. You can review and change the settings by clicking on its name. For the purposes of this tutorial, use the default parameters. To enable the action, click on the red Disabled link in the Status column.

      Now you are ready to receive alerts. In the next step, you will generate one to test your notification setup.

      Step 9 — Generating a Test Alert

      In this step, you will generate a test alert to ensure everything is connected. By default, Zabbix keeps track of the amount of free disk space on your server. It automatically detects all disk mounts and adds the corresponding checks. This discovery is executed every hour, so you need to wait a while for the notification to be triggered.

      Create a temporary file that’s large enough to trigger Zabbix’s file system usage alert. To do this, log in to your second Ubuntu server if you’re not already connected:

      • ssh sammy@second_ubuntu_server_ip_address

      Next, determine how much free space you have on the server. You can use the df command to find out:
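
      • df -h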

      The df command reports the disk space usage of your file system, and the -h flag makes the output human-readable. You’ll see output like the following:

      Output

      Filesystem      Size  Used Avail Use% Mounted on
      /dev/vda1        78G  1.4G   77G   2% /

      In this case, the free space is 77G. Your free space may differ.

      Use the fallocate command, which allows you to pre-allocate or de-allocate space to a file, to create a file that takes up more than 80% of the available disk space. This will be enough to trigger the alert:

      • fallocate -l 70G /tmp/temp.img

      After around an hour, Zabbix will trigger an alert about the amount of free disk space and will run the action you configured, sending the notification message. You can check your inbox for the message from the Zabbix server. You will see a message like:

      Problem started at 09:49:08 on 2020.06.12
      Problem name: /: Disk space is low (used > 80%)
      Host: Second Ubuntu Server
      Severity: Warning
      Operational data: Space used: 71.34 GB of 77.36 GB (92.23 %)
      Original problem ID: 106
      

      You can also navigate to the Monitoring tab and then Dashboard to see the notification and its details.

      Main dashboard

      Now that you know the alerts are working, delete the temporary file you created so you can reclaim your disk space:
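
      • rm -f /tmp/temp.img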

      After a minute Zabbix will send the recovery message and the alert will disappear from the main dashboard.

      Conclusion

      In this tutorial, you learned how to set up a simple and secure monitoring solution that will help you monitor the state of your servers. It can now warn you of problems, and you have the opportunity to analyze the processes occurring in your IT infrastructure.

      To learn more about setting up monitoring infrastructure, check out our Monitoring topic page.




      How To Build a Node.js Application with Docker on Ubuntu 20.04


      Introduction

      The Docker platform allows developers to package and run applications as containers. A container is an isolated process that runs on a shared operating system, offering a lighter weight alternative to virtual machines. Though containers are not new, they offer benefits — including process isolation and environment standardization — that are growing in importance as more developers use distributed application architectures.

      When building and scaling an application with Docker, the starting point is typically creating an image for your application, which you can then run in a container. The image includes your application code, libraries, configuration files, environment variables, and runtime. Using an image ensures that the environment in your container is standardized and contains only what is necessary to build and run your application.

      In this tutorial, you will create an application image for a static website that uses the Express framework and Bootstrap. You will then build a container using that image and push it to Docker Hub for future use. Finally, you will pull the stored image from your Docker Hub repository and build another container, demonstrating how you can recreate and scale your application.

      Prerequisites

      To follow this tutorial, you will need:

      • One Ubuntu 20.04 server set up by following an initial server setup guide, including a non-root user with sudo privileges and a firewall configured with ufw.
      • Docker installed on your server.
      • Node.js and npm installed on your server.
      • A Docker Hub account so that you can push your application image and pull it for reuse.

      Step 1 — Installing Your Application Dependencies

      To create your image, you will first need to make your application files, which you can then copy to your container. These files will include your application’s static content, code, and dependencies.

      First, create a directory for your project in your non-root user’s home directory. We will call ours node_project, but you should feel free to replace this with something else:
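
      • mkdir node_project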

      Navigate to this directory:
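
      • cd node_project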

      This will be the root directory of the project.

      Next, create a package.json file with your project’s dependencies and other identifying information. Open the file with nano or your favorite editor:
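
      • nano package.json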

      Add the following information about the project, including its name, author, license, entrypoint, and dependencies. Be sure to replace the author information with your own name and contact details:

      ~/node_project/package.json

      {
        "name": "nodejs-image-demo",
        "version": "1.0.0",
        "description": "nodejs image demo",
        "author": "Sammy the Shark <[email protected]>",
        "license": "MIT",
        "main": "app.js",
        "keywords": [
          "nodejs",
          "bootstrap",
          "express"
        ],
        "dependencies": {
          "express": "^4.16.4"
        }
      }
      

      This file includes the project name, author, and license under which it is being shared. Npm recommends making your project name short and descriptive, and avoiding duplicates in the npm registry. We’ve listed the MIT license in the license field, permitting the free use and distribution of the application code.

      Additionally, the file specifies:

      • "main": The entrypoint for the application, app.js. You will create this file next.
      • "dependencies": The project dependencies — in this case, Express 4.16.4 or above.

      Though this file does not list a repository, you can add one by following these guidelines on adding a repository to your package.json file. This is a good addition if you are versioning your application.

      Save and close the file when you’ve finished making changes.

      To install your project’s dependencies, run the following command:
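
      • npm install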

      This will install the packages you’ve listed in your package.json file in your project directory.

      We can now move on to building the application files.

      Step 2 — Creating the Application Files

      We will create a website that offers users information about sharks. Our application will have a main entrypoint, app.js, and a views directory that will include the project’s static assets. The landing page, index.html, will offer users some preliminary information and a link to a page with more detailed shark information, sharks.html. In the views directory, we will create both the landing page and sharks.html.

      First, open app.js in the main project directory to define the project’s routes:
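
      • nano app.js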

      The first part of the file will create the Express application and Router objects, and define the base directory and port as constants:

      ~/node_project/app.js

      const express = require('express');
      const app = express();
      const router = express.Router();
      
      const path = __dirname + '/views/';
      const port = 8080;
      

      The require function loads the express module, which we then use to create the app and router objects. The router object will perform the routing function of the application, and as we define HTTP method routes we will add them to this object to define how our application will handle requests.

      This section of the file also sets a couple of constants, path and port:

      • path: Defines the base directory, which will be the views subdirectory within the current project directory.
      • port: Tells the app to listen on and bind to port 8080.

      Next, set the routes for the application using the router object:

      ~/node_project/app.js

      ...
      
      router.use(function (req,res,next) {
        console.log("/" + req.method);
        next();
      });
      
      router.get("/", function(req,res){
        res.sendFile(path + 'index.html');
      });
      
      router.get("/sharks", function(req,res){
        res.sendFile(path + 'sharks.html');
      });
      

      The router.use function loads a middleware function that will log the router’s requests and pass them on to the application’s routes. These are defined in the subsequent functions, which specify that a GET request to the base project URL should return the index.html page, while a GET request to the /sharks route should return sharks.html.

      Finally, mount the router middleware and the application’s static assets and tell the app to listen on port 8080:

      ~/node_project/app.js

      ...
      
      app.use(express.static(path));
      app.use("/", router);
      
      app.listen(port, function () {
        console.log('Example app listening on port 8080!')
      })
      

      The finished app.js file will look like this:

      ~/node_project/app.js

      const express = require('express');
      const app = express();
      const router = express.Router();
      
      const path = __dirname + '/views/';
      const port = 8080;
      
      router.use(function (req,res,next) {
        console.log("/" + req.method);
        next();
      });
      
      router.get("/", function(req,res){
        res.sendFile(path + 'index.html');
      });
      
      router.get("/sharks", function(req,res){
        res.sendFile(path + 'sharks.html');
      });
      
      app.use(express.static(path));
      app.use("/", router);
      
      app.listen(port, function () {
        console.log('Example app listening on port 8080!')
      })
      

      Save and close the file when you are finished.

      Next, let’s add some static content to the application. Start by creating the views directory:
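
      • mkdir views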

      Open the landing page file, index.html:
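
      • nano views/index.html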

      Add the following code to the file, which will import Bootstrap and create a jumbotron component with a link to the more detailed sharks.html info page:

      ~/node_project/views/index.html

      <!DOCTYPE html>
      <html lang="en">
      
      <head>
          <title>About Sharks</title>
          <meta charset="utf-8">
          <meta name="viewport" content="width=device-width, initial-scale=1">
          <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
          <link href="css/styles.css" rel="stylesheet">
          <link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
      </head>
      
      <body>
          <nav class="navbar navbar-dark bg-dark navbar-static-top navbar-expand-md">
              <div class="container">
                  <button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false"> <span class="sr-only">Toggle navigation</span>
                  </button> <a class="navbar-brand" href="#">Everything Sharks</a>
                  <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
                      <ul class="nav navbar-nav mr-auto">
                          <li class="active nav-item"><a href="/" class="nav-link">Home</a>
                          </li>
                          <li class="nav-item"><a href="/sharks" class="nav-link">Sharks</a>
                          </li>
                      </ul>
                  </div>
              </div>
          </nav>
          <div class="jumbotron">
              <div class="container">
                  <h1>Want to Learn About Sharks?</h1>
                  <p>Are you ready to learn about sharks?</p>
                  <br>
                  <p><a class="btn btn-primary btn-lg" href="/sharks" role="button">Get Shark Info</a>
                  </p>
              </div>
          </div>
          <div class="container">
              <div class="row">
                  <div class="col-lg-6">
                      <h3>Not all sharks are alike</h3>
                      <p>Though some are dangerous, sharks generally do not attack humans. Out of the 500 species known to researchers, only 30 have been known to attack humans.
                      </p>
                  </div>
                  <div class="col-lg-6">
                      <h3>Sharks are ancient</h3>
                      <p>There is evidence to suggest that sharks lived up to 400 million years ago.
                      </p>
                  </div>
              </div>
          </div>
      </body>
      
      </html>
      

      The top-level navbar here allows users to toggle between the Home and Sharks pages. In the navbar-nav subcomponent, we are using Bootstrap’s active class to indicate the current page to the user. We’ve also specified the routes to our static pages, which match the routes we defined in app.js:

      ~/node_project/views/index.html

      ...
      <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
         <ul class="nav navbar-nav mr-auto">
            <li class="active nav-item"><a href="/" class="nav-link">Home</a>
            </li>
            <li class="nav-item"><a href="/sharks" class="nav-link">Sharks</a>
            </li>
         </ul>
      </div>
      ...
      

      Additionally, we’ve created a link to our shark information page in our jumbotron’s button:

      ~/node_project/views/index.html

      ...
      <div class="jumbotron">
         <div class="container">
            <h1>Want to Learn About Sharks?</h1>
            <p>Are you ready to learn about sharks?</p>
            <br>
            <p><a class="btn btn-primary btn-lg" href="/sharks" role="button">Get Shark Info</a>
            </p>
         </div>
      </div>
      ...
      

      There is also a link to a custom style sheet in the header:

      ~/node_project/views/index.html

      ...
      <link href="css/styles.css" rel="stylesheet">
      ...
      

      We will create this style sheet at the end of this step.

      Save and close the file when you are finished.

      With the application landing page in place, we can create our shark information page, sharks.html, which will offer interested users more information about sharks.

      Open the file:
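
      • nano views/sharks.html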

      Add the following code, which imports Bootstrap and the custom style sheet and offers users detailed information about certain sharks:

      ~/node_project/views/sharks.html

      <!DOCTYPE html>
      <html lang="en">
      
      <head>
          <title>About Sharks</title>
          <meta charset="utf-8">
          <meta name="viewport" content="width=device-width, initial-scale=1">
          <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
          <link href="css/styles.css" rel="stylesheet">
          <link href="https://fonts.googleapis.com/css?family=Merriweather:400,700" rel="stylesheet" type="text/css">
      </head>
      <nav class="navbar navbar-dark bg-dark navbar-static-top navbar-expand-md">
          <div class="container">
              <button type="button" class="navbar-toggler collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false"> <span class="sr-only">Toggle navigation</span>
              </button> <a class="navbar-brand" href="/">Everything Sharks</a>
              <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
                  <ul class="nav navbar-nav mr-auto">
                      <li class="nav-item"><a href="/" class="nav-link">Home</a>
                      </li>
                      <li class="active nav-item"><a href="/sharks" class="nav-link">Sharks</a>
                      </li>
                  </ul>
              </div>
          </div>
      </nav>
      <div class="jumbotron text-center">
          <h1>Shark Info</h1>
      </div>
      <div class="container">
          <div class="row">
              <div class="col-lg-6">
                  <p>
                      <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
                      </div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
                  </p>
              </div>
              <div class="col-lg-6">
                  <p>
                      <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
                  </p>
              </div>
          </div>
      </div>
      
      </html>
      

      Note that in this file, we again use the active class to indicate the current page.

      Save and close the file when you are finished.

      Finally, create the custom CSS style sheet that you’ve linked to in index.html and sharks.html by first creating a css folder in the views directory:

      • mkdir views/css

      Open the style sheet:

      • nano views/css/styles.css

      Add the following code, which will set the desired color and font for our pages:

      ~/node_project/views/css/styles.css

      .navbar {
          margin-bottom: 0;
      }
      
      body {
          background: #020A1B;
          color: #ffffff;
          font-family: 'Merriweather', sans-serif;
      }
      
      h1,
      h2 {
          font-weight: bold;
      }
      
      p {
          font-size: 16px;
          color: #ffffff;
      }
      
      .jumbotron {
          background: #0048CD;
          color: white;
          text-align: center;
      }
      
      .jumbotron p {
          color: white;
          font-size: 26px;
      }
      
      .btn-primary {
          color: #fff;
          border-color: white;
          margin-bottom: 5px;
      }
      
      img,
      video,
      audio {
          margin-top: 20px;
          max-width: 80%;
      }
      
      div.caption {
          float: left;
          clear: both;
      }
      

      In addition to setting font and color, this file also limits the size of the images by specifying a max-width of 80%. This will prevent them from taking up more room than we would like on the page.

      Save and close the file when you are finished.

      With the application files in place and the project dependencies installed, you are ready to start the application.

      If you followed the initial server setup tutorial in the prerequisites, you will have an active firewall permitting only SSH traffic. To permit traffic to port 8080, run:

      • sudo ufw allow 8080

      To start the application, make sure that you are in your project’s root directory:

      • cd ~/node_project

      Start the application with node app.js:

      • node app.js

      Navigate your browser to http://your_server_ip:8080. You will see the following landing page:

      Application Landing Page

      Click on the Get Shark Info button. The following information page will load:

      Shark Info Page

      You now have an application up and running. When you are ready, quit the server by typing CTRL+C. We can now move on to creating the Dockerfile that will allow us to recreate and scale this application as desired.

      Step 3 — Writing the Dockerfile

      Your Dockerfile specifies what will be included in your application container when it is executed. Using a Dockerfile allows you to define your container environment and avoid discrepancies with dependencies or runtime versions.

      Following these guidelines on building optimized containers, we will make our image as efficient as possible by minimizing the number of image layers and restricting the image’s function to a single purpose — recreating our application files and static content.

      In your project’s root directory, create the Dockerfile:

      • nano Dockerfile

      Docker images are created using a succession of layered images that build on one another. Our first step will be to add the base image for our application that will form the starting point of the application build.

      Let’s use the node:10-alpine image, since at the time of writing this is the recommended LTS version of Node.js. The alpine image is derived from the Alpine Linux project, and will help us keep our image size down. For more information about whether or not the alpine image is the right choice for your project, please review the full discussion under the Image Variants section of the Docker Hub Node image page.
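
      If you would like to compare sizes yourself before settling on a variant, you can pull a couple of tags and list them locally (an optional check; exact sizes change over time, but the alpine variant is consistently much smaller):

      • sudo docker pull node:10-alpine
      • sudo docker pull node:10
      • sudo docker images node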

      Add the following FROM instruction to set the application’s base image:

      ~/node_project/Dockerfile

      FROM node:10-alpine
      

      This image includes Node.js and npm. Each Dockerfile must begin with a FROM instruction.

      By default, the Docker Node image includes a non-root node user that you can use to avoid running your application container as root. It is a recommended security practice to avoid running containers as root and to restrict capabilities within the container to only those required to run its processes. We will therefore use the node user’s home directory as the working directory for our application and set them as our user inside the container. For more information about best practices when working with the Docker Node image, check out this best practices guide.
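
      If you are curious, you can confirm that this user exists in the base image by running a throwaway container (a quick optional check; the exact uid and gid values may vary between image versions):

      • sudo docker run --rm node:10-alpine id node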

      To fine-tune the permissions on our application code in the container, let’s create the node_modules subdirectory in /home/node along with the app directory. Creating these directories will ensure that they have the permissions we want, which will be important when we create local node modules in the container with npm install. In addition to creating these directories, we will set ownership on them to our node user:

      ~/node_project/Dockerfile

      ...
      RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
      

      For more information on the utility of consolidating RUN instructions, read through this discussion of how to manage container layers.
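
      As a rough sketch of why chaining matters (for comparison only, not something to add to your Dockerfile), splitting the command above into separate instructions would produce two layers, while chaining with && produces one:

      # Two separate instructions would create two layers:
      #   RUN mkdir -p /home/node/app/node_modules
      #   RUN chown -R node:node /home/node/app
      #
      # Chaining them with && creates a single layer:
      RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app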

      Next, set the working directory of the application to /home/node/app:

      ~/node_project/Dockerfile

      ...
      WORKDIR /home/node/app
      

      If a WORKDIR isn’t set, the working directory is inherited from the base image, falling back to the root directory, so it’s a good idea to set it explicitly.

      Next, copy the package.json and package-lock.json (for npm 5+) files:

      ~/node_project/Dockerfile

      ...
      COPY package*.json ./
      

      Adding this COPY instruction before running npm install or copying the application code allows us to take advantage of Docker’s caching mechanism. At each stage in the build, Docker will check whether it has a layer cached for that particular instruction. If we change package.json, this layer will be rebuilt, but if we don’t, this instruction will allow Docker to use the existing image layer and skip reinstalling our node modules.
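
      As a sketch of the ordering this enables (for illustration only; it is not part of the Dockerfile you are writing), compare a cache-friendly layout with one that copies everything up front:

      # Cache-friendly: npm install reruns only when package*.json changes
      COPY package*.json ./
      RUN npm install
      COPY . .

      # Less cache-friendly: any change to the source code reruns npm install
      # COPY . .
      # RUN npm install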

      To ensure that all of the application files are owned by the non-root node user, including the contents of the node_modules directory, switch the user to node before running npm install:

      ~/node_project/Dockerfile

      ...
      USER node
      

      After copying the project dependencies and switching our user, we can run npm install:

      ~/node_project/Dockerfile

      ...
      RUN npm install
      

      Next, copy your application code with the appropriate permissions to the application directory on the container:

      ~/node_project/Dockerfile

      ...
      COPY --chown=node:node . .
      

      This will ensure that the application files are owned by the non-root node user.
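
      If you would like to verify this later, once you have built the image at the end of this step you can override the default command with ls to inspect ownership inside a throwaway container (an optional check, using the image tag introduced below):

      • sudo docker run --rm your_dockerhub_username/nodejs-image-demo ls -l /home/node/app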

      Finally, expose port 8080 on the container and start the application:

      ~/node_project/Dockerfile

      ...
      EXPOSE 8080
      
      CMD [ "node", "app.js" ]
      

      EXPOSE does not publish the port, but instead functions as a way of documenting which ports on the container will be published at runtime. CMD runs the command to start the application — in this case, node app.js. Note that there should only be one CMD instruction in each Dockerfile. If you include more than one, only the last will take effect.
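
      For example, if a Dockerfile accidentally contained two CMD instructions, only the final one would run when the container starts (a sketch of the pitfall, not something to add to your file):

      # This CMD would be ignored, because another CMD follows it:
      # CMD [ "node", "--version" ]

      # Only this final CMD runs when the container starts:
      CMD [ "node", "app.js" ]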

      There are many things you can do with the Dockerfile. For a complete list of instructions, please refer to Docker’s Dockerfile reference documentation.

      The complete Dockerfile looks like this:

      ~/node_project/Dockerfile

      
      FROM node:10-alpine
      
      RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
      
      WORKDIR /home/node/app
      
      COPY package*.json ./
      
      USER node
      
      RUN npm install
      
      COPY --chown=node:node . .
      
      EXPOSE 8080
      
      CMD [ "node", "app.js" ]
      

      Save and close the file when you are finished editing.

      Before building the application image, let’s add a .dockerignore file. Working in a similar way to a .gitignore file, .dockerignore specifies which files and directories in your project directory should not be copied over to your container.

      Open the .dockerignore file:

      • nano .dockerignore

      Inside the file, add your local node modules, npm logs, Dockerfile, and .dockerignore file:

      ~/node_project/.dockerignore

      node_modules
      npm-debug.log
      Dockerfile
      .dockerignore
      

      If you are working with Git then you will also want to add your .git directory and .gitignore file.
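
      In that case, the full file might look like this (an extended example assuming a standard Git layout):

      node_modules
      npm-debug.log
      Dockerfile
      .dockerignore
      .git
      .gitignore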

      Save and close the file when you are finished.

      You are now ready to build the application image using the docker build command. Using the -t flag with docker build will allow you to tag the image with a memorable name. Because we are going to push the image to Docker Hub, let’s include our Docker Hub username in the tag. We will tag the image as nodejs-image-demo, but feel free to replace this with a name of your own choosing. Remember to also replace your_dockerhub_username with your own Docker Hub username:

      • sudo docker build -t your_dockerhub_username/nodejs-image-demo .

      The . specifies that the build context is the current directory.
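
      If you want to version your builds explicitly, you can also append a tag after the image name; without one, as above, Docker applies the latest tag (an optional variation, with v1.0 as an example tag):

      • sudo docker build -t your_dockerhub_username/nodejs-image-demo:v1.0 .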

      It will take a minute or two to build the image. Once it is complete, check your images:

      • sudo docker images

      You will receive the following output:

      Output

      REPOSITORY                                  TAG         IMAGE ID       CREATED         SIZE
      your_dockerhub_username/nodejs-image-demo   latest      1c723fb2ef12   8 seconds ago   73MB
      node                                        10-alpine   f09e7c96b6de   3 weeks ago     70.7MB

      It is now possible to create a container with this image using docker run. We will include three flags with this command:

      • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, review this discussion in the Docker docs on port binding.
      • -d: This runs the container in the background.
      • --name: This allows us to give the container a memorable name.

      Run the following command to create and start the container:

      • sudo docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo

      Once your container is up and running, you can inspect a list of your running containers with docker ps:

      • sudo docker ps

      You will receive the following output:

      Output

      CONTAINER ID   IMAGE                                       COMMAND         CREATED         STATUS         PORTS                  NAMES
      e50ad27074a7   your_dockerhub_username/nodejs-image-demo   "node app.js"   8 seconds ago   Up 7 seconds   0.0.0.0:80->8080/tcp   nodejs-image-demo

      With your container running, you can now visit your application by navigating your browser to your server IP without the port:

      http://your_server_ip
      

      Your application landing page will load once again.

      Application Landing Page
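
      If you are working over SSH and would rather verify from the command line first, a quick request with curl (assuming curl is installed on your machine) should return an HTTP 200 response from the container:

      • curl -I http://your_server_ip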

      Now that you have created an image for your application, you can push it to Docker Hub for future use.

      Step 4 — Using a Repository to Work with Images

      By pushing your application image to a registry like Docker Hub, you make it available for subsequent use as you build and scale your containers. We will demonstrate how this works by pushing the application image to a repository and then using the image to recreate our container.

      The first step to pushing the image is to log in to the Docker Hub account you created in the prerequisites:

      • sudo docker login -u your_dockerhub_username

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your user’s home directory with your Docker Hub credentials.
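
      If you like, you can confirm that the file was written; depending on your setup, the credentials inside may be base64-encoded or handed off to a credential helper:

      • cat ~/.docker/config.json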

      You can now push the application image to Docker Hub using the tag you created earlier, your_dockerhub_username/nodejs-image-demo:

      • sudo docker push your_dockerhub_username/nodejs-image-demo

      Let’s test the utility of the image registry by destroying our current application container and image and rebuilding them with the image in our repository.

      First, list your running containers:

      • sudo docker ps

      You will get the following output:

      Output

      CONTAINER ID   IMAGE                                       COMMAND         CREATED         STATUS         PORTS                  NAMES
      e50ad27074a7   your_dockerhub_username/nodejs-image-demo   "node app.js"   3 minutes ago   Up 3 minutes   0.0.0.0:80->8080/tcp   nodejs-image-demo

      Using the CONTAINER ID listed in your output, stop the running application container. Be sure to replace the ID below with your own CONTAINER ID:

      • sudo docker stop e50ad27074a7

      List all of your images with the -a flag:

      • sudo docker images -a

      You will receive the following output with the name of your image, your_dockerhub_username/nodejs-image-demo, along with the node image and the other images from your build:

      Output

      REPOSITORY                                  TAG         IMAGE ID       CREATED          SIZE
      your_dockerhub_username/nodejs-image-demo   latest      1c723fb2ef12   7 minutes ago    73MB
      <none>                                      <none>      2e3267d9ac02   4 minutes ago    72.9MB
      <none>                                      <none>      8352b41730b9   4 minutes ago    73MB
      <none>                                      <none>      5d58b92823cb   4 minutes ago    73MB
      <none>                                      <none>      3f1e35d7062a   4 minutes ago    73MB
      <none>                                      <none>      02176311e4d0   4 minutes ago    73MB
      <none>                                      <none>      8e84b33edcda   4 minutes ago    70.7MB
      <none>                                      <none>      6a5ed70f86f2   4 minutes ago    70.7MB
      <none>                                      <none>      776b2637d3c1   4 minutes ago    70.7MB
      node                                        10-alpine   f09e7c96b6de   3 weeks ago      70.7MB

      Remove the stopped container and all of the images, including unused or dangling images, with the following command:

      • sudo docker system prune -a

      Type y when prompted in the output to confirm that you would like to remove the stopped container and images. Be advised that this will also remove your build cache.
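
      If you would rather remove only this specific container and image instead of pruning everything, you could use docker rm and docker rmi with the values from your own output (the container ID and image tag shown above are placeholders):

      • sudo docker rm e50ad27074a7
      • sudo docker rmi your_dockerhub_username/nodejs-image-demo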

      You have now removed both the container running your application image and the image itself. For more information on removing Docker containers, images, and volumes, please review How To Remove Docker Images, Containers, and Volumes.

      With all of your images and containers deleted, you can now pull the application image from Docker Hub:

      • docker pull your_dockerhub_username/nodejs-image-demo

      List your images once again:

      • docker images

      Your output will have your application image:

      Output

      REPOSITORY                                  TAG      IMAGE ID       CREATED          SIZE
      your_dockerhub_username/nodejs-image-demo   latest   1c723fb2ef12   11 minutes ago   73MB

      You can now rebuild your container using the command from Step 3:

      • docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo

      List your running containers:

      • docker ps

      Output

      CONTAINER ID   IMAGE                                       COMMAND         CREATED         STATUS         PORTS                  NAMES
      f6bc2f50dff6   your_dockerhub_username/nodejs-image-demo   "node app.js"   4 seconds ago   Up 3 seconds   0.0.0.0:80->8080/tcp   nodejs-image-demo

      Visit http://your_server_ip once again to view your running application.

      Conclusion

      In this tutorial you created a static web application with Express and Bootstrap, as well as a Docker image for this application. You used this image to create a container and pushed the image to Docker Hub. From there, you were able to destroy your image and container and recreate them using your Docker Hub repository.

      If you are interested in learning more about how to work with tools like Docker Compose and Docker Machine to create multi-container setups, in general tips on working with container data, or in other Docker-related topics, please see our complete library of Docker tutorials.


