      How To Install WordPress With Docker Compose


      Introduction

      WordPress is a free and open-source Content Management System (CMS) built on a MySQL database with PHP processing. Thanks to its extensible plugin architecture and templating system, and the fact that most of its administration can be done through the web interface, WordPress is a popular choice when creating different types of websites, from blogs to product pages to eCommerce sites.

      Running WordPress typically involves installing a LAMP (Linux, Apache, MySQL, and PHP) or LEMP (Linux, Nginx, MySQL, and PHP) stack, which can be time-consuming. However, by using tools like Docker and Docker Compose, you can simplify the process of setting up your preferred stack and installing WordPress. Instead of installing individual components by hand, you can use images, which standardize things like libraries, configuration files, and environment variables, and run these images in containers, isolated processes that run on a shared operating system. Additionally, by using Compose, you can coordinate multiple containers — for example, an application and database — to communicate with one another.

      In this tutorial, you will build a multi-container WordPress installation. Your containers will include a MySQL database, an Nginx web server, and WordPress itself. You will also secure your installation by obtaining TLS/SSL certificates with Let’s Encrypt for the domain you want associated with your site. Finally, you will set up a cron job to renew your certificates so that your domain remains secure.

      Prerequisites

      To follow this tutorial, you will need:

      • A server running Ubuntu 18.04, along with a non-root user with sudo privileges and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.
      • Docker installed on your server, following Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04.
      • Docker Compose installed on your server, following Step 1 of How To Install Docker Compose on Ubuntu 18.04.
      • A registered domain name. This tutorial will use example.com throughout. You can get one for free at Freenom, or use the domain registrar of your choice.
      • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them to a DigitalOcean account, if that’s what you’re using:

        • An A record with example.com pointing to your server’s public IP address.
        • An A record with www.example.com pointing to your server’s public IP address.

      Step 1 — Defining the Web Server Configuration

      Before running any containers, our first step will be to define the configuration for our Nginx web server. Our configuration file will include some WordPress-specific location blocks, along with a location block to direct Let’s Encrypt verification requests to the Certbot client for automated certificate renewals.

      First, create a project directory for your WordPress setup called wordpress and navigate to it:

      • mkdir wordpress && cd wordpress

Next, make a directory for the configuration file:

      • mkdir nginx-conf

      Open the file with nano or your favorite editor:

      • nano nginx-conf/nginx.conf

In this file, we will add a server block with directives for our server name and document root, and location blocks to direct the Certbot client's requests for certificates, PHP processing, and static asset requests.

      Paste the following code into the file. Be sure to replace example.com with your own domain name:

      ~/wordpress/nginx-conf/nginx.conf

server {
              listen 80;
              listen [::]:80;
      
              server_name example.com www.example.com;
      
              index index.php index.html index.htm;
      
              root /var/www/html;
      
              location ~ /.well-known/acme-challenge {
                      allow all;
                      root /var/www/html;
              }
      
              location / {
                      try_files $uri $uri/ /index.php$is_args$args;
              }
      
              location ~ \.php$ {
                      try_files $uri =404;
                      fastcgi_split_path_info ^(.+\.php)(/.+)$;
                      fastcgi_pass wordpress:9000;
                      fastcgi_index index.php;
                      include fastcgi_params;
                      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                      fastcgi_param PATH_INFO $fastcgi_path_info;
              }
      
              location ~ /\.ht {
                      deny all;
              }
      
              location = /favicon.ico {
                      log_not_found off; access_log off;
              }
              location = /robots.txt {
                      log_not_found off; access_log off; allow all;
              }
              location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
                      expires max;
                      log_not_found off;
              }
      }
      

      Our server block includes the following information:

      Directives:

      • listen: This tells Nginx to listen on port 80, which will allow us to use Certbot's webroot plugin for our certificate requests. Note that we are not including port 443 yet — we will update our configuration to include SSL once we have successfully obtained our certificates.
      • server_name: This defines your server name and the server block that should be used for requests to your server. Be sure to replace example.com in this line with your own domain name.
      • index: The index directive defines the files that will be used as indexes when processing requests to your server. We've modified the default order of priority here, moving index.php in front of index.html so that Nginx prioritizes files called index.php when possible.
• root: Our root directive names the root directory for requests to our server. This directory, /var/www/html, is created as a mount point at build time by instructions in the WordPress image's Dockerfile. These Dockerfile instructions also ensure that the files from the WordPress release are mounted to this volume.

      Location Blocks:

      • location ~ /.well-known/acme-challenge: This location block will handle requests to the .well-known directory, where Certbot will place a temporary file to validate that the DNS for our domain resolves to our server. With this configuration in place, we will be able to use Certbot's webroot plugin to obtain certificates for our domain.
      • location /: In this location block, we'll use a try_files directive to check for files that match individual URI requests. Instead of returning a 404 Not Found status as a default, however, we'll pass control to WordPress's index.php file with the request arguments.
• location ~ \.php$: This location block will handle PHP processing and proxy these requests to our wordpress container. Because our WordPress Docker image will be based on the php:fpm image, we will also include configuration options that are specific to the FastCGI protocol in this block. Nginx requires an independent PHP processor for PHP requests: in our case, these requests will be handled by the php-fpm processor that's included with the php:fpm image.
        Additionally, this location block includes FastCGI-specific directives, variables, and options that will proxy requests to the WordPress application running in our wordpress container, set the preferred index for the parsed request URI, and parse URI requests.
• location ~ /\.ht: This block will handle .htaccess files since Nginx won't serve them. The deny all directive ensures that .htaccess files will never be served to users.
      • location = /favicon.ico, location = /robots.txt: These blocks ensure that requests to /favicon.ico and /robots.txt will not be logged.
• location ~* \.(css|gif|ico|jpeg|jpg|js|png)$: This block turns off logging for static asset requests and ensures that these assets are highly cacheable, as they are typically expensive to serve.

      For more information about FastCGI proxying, see Understanding and Implementing FastCGI Proxying in Nginx. For information about server and location blocks, see Understanding Nginx Server and Location Block Selection Algorithms.

      Save and close the file when you are finished editing. If you used nano, do so by pressing CTRL+X, Y, then ENTER.

      With your Nginx configuration in place, you can move on to creating environment variables to pass to your application and database containers at runtime.

      Step 2 — Defining Environment Variables

      Your database and WordPress application containers will need access to certain environment variables at runtime in order for your application data to persist and be accessible to your application. These variables include both sensitive and non-sensitive information: sensitive values for your MySQL root password and application database user and password, and non-sensitive information for your application database name and host.

      Rather than setting all of these values in our Docker Compose file — the main file that contains information about how our containers will run — we can set the sensitive values in an .env file and restrict its circulation. This will prevent these values from copying over to our project repositories and being exposed publicly.

In your main project directory, ~/wordpress, open a file called .env:

      • nano .env

      The confidential values that we will set in this file include a password for our MySQL root user, and a username and password that WordPress will use to access the database.

      Add the following variable names and values to the file. Remember to supply your own values here for each variable:

      ~/wordpress/.env

      MYSQL_ROOT_PASSWORD=your_root_password
      MYSQL_USER=your_wordpress_database_user
      MYSQL_PASSWORD=your_wordpress_database_password
      

      We have included a password for the root administrative account, as well as our preferred username and password for our application database.
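If you don't already have a preferred way to generate strong credentials, one option (a suggestion, not a requirement of this setup) is the openssl rand command, which is available on Ubuntu by default. Run it once per value and paste each result into the file:

      • openssl rand -hex 16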

      Save and close the file when you are finished editing.

      Because your .env file contains sensitive information, you will want to ensure that it is included in your project's .gitignore and .dockerignore files, which tell Git and Docker what files not to copy to your Git repositories and Docker images, respectively.

If you plan to work with Git for version control, initialize your current working directory as a repository with git init:

      • git init

Then open a .gitignore file:

      • nano .gitignore

      Add .env to the file:

      ~/wordpress/.gitignore

      .env
      

      Save and close the file when you are finished editing.

      Likewise, it's a good precaution to add .env to a .dockerignore file, so that it doesn't end up on your containers when you are using this directory as your build context.

Open the file:

      • nano .dockerignore

      Add .env to the file:

      ~/wordpress/.dockerignore

      .env
      

      Below this, you can optionally add files and directories associated with your application's development:

      ~/wordpress/.dockerignore

      .env
      .git
      docker-compose.yml
      .dockerignore
      

      Save and close the file when you are finished.

      With your sensitive information in place, you can now move on to defining your services in a docker-compose.yml file.

      Step 3 — Defining Services with Docker Compose

      Your docker-compose.yml file will contain the service definitions for your setup. A service in Compose is a running container, and service definitions specify information about how each container will run.

      Using Compose, you can define different services in order to run multi-container applications, since Compose allows you to link these services together with shared networks and volumes. This will be helpful for our current setup since we will create different containers for our database, WordPress application, and web server. We will also create a container to run the Certbot client in order to obtain certificates for our webserver.

To begin, open the docker-compose.yml file:

      • nano docker-compose.yml

      Add the following code to define your Compose file version and db database service:

      ~/wordpress/docker-compose.yml

      version: '3'
      
      services:
        db:
          image: mysql:8.0
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MYSQL_DATABASE=wordpress
          volumes: 
            - dbdata:/var/lib/mysql
          command: '--default-authentication-plugin=mysql_native_password'
          networks:
            - app-network
      

      The db service definition contains the following options:

      • image: This tells Compose what image to pull to create the container. We are pinning the mysql:8.0 image here to avoid future conflicts as the mysql:latest image continues to be updated. For more information about version pinning and avoiding dependency conflicts, see the Docker documentation on Dockerfile best practices.
      • container_name: This specifies a name for the container.
      • restart: This defines the container restart policy. The default is no, but we have set the container to restart unless it is stopped manually.
      • env_file: This option tells Compose that we would like to add environment variables from a file called .env, located in our build context. In this case, the build context is our current directory.
      • environment: This option allows you to add additional environment variables, beyond those defined in your .env file. We will set the MYSQL_DATABASE variable equal to wordpress to provide a name for our application database. Because this is non-sensitive information, we can include it directly in the docker-compose.yml file.
      • volumes: Here, we're mounting a named volume called dbdata to the /var/lib/mysql directory on the container. This is the standard data directory for MySQL on most distributions.
• command: This option specifies a command to override the default CMD instruction for the image. In our case, we will add an option to the Docker image's standard mysqld command, which starts the MySQL server on the container. This option, --default-authentication-plugin=mysql_native_password, sets the --default-authentication-plugin system variable to mysql_native_password, specifying which authentication mechanism should govern new authentication requests to the server. Since PHP and therefore our WordPress image won't support MySQL's newer authentication default, we must make this adjustment in order to authenticate our application database user. You can verify this setting once your containers are running, as shown in the example after this list.
      • networks: This specifies that our application service will join the app-network network, which we will define at the bottom of the file.
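Once your containers are up and running (you will start them in Step 4), you can confirm that the authentication plugin setting took effect by querying the MySQL server inside the db container. This is an optional check; run it from the project directory after the services are up, and enter the MYSQL_ROOT_PASSWORD value from your .env file when prompted:

      • docker-compose exec db mysql -u root -p -e "SELECT user, plugin FROM mysql.user;"

      Users created by the container's entrypoint, including your application database user, should list mysql_native_password in the plugin column.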

      Next, below your db service definition, add the definition for your wordpress application service:

      ~/wordpress/docker-compose.yml

      ...
        wordpress:
          depends_on: 
            - db
          image: wordpress:5.1.1-fpm-alpine
          container_name: wordpress
          restart: unless-stopped
          env_file: .env
          environment:
            - WORDPRESS_DB_HOST=db:3306
            - WORDPRESS_DB_USER=$MYSQL_USER
            - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
            - WORDPRESS_DB_NAME=wordpress
          volumes:
            - wordpress:/var/www/html
          networks:
            - app-network
      

      In this service definition, we are naming our container and defining a restart policy, as we did with the db service. We're also adding some options specific to this container:

      • depends_on: This option ensures that our containers will start in order of dependency, with the wordpress container starting after the db container. Our WordPress application relies on the existence of our application database and user, so expressing this order of dependency will enable our application to start properly.
      • image: For this setup, we are using the 5.1.1-fpm-alpine WordPress image. As discussed in Step 1, using this image ensures that our application will have the php-fpm processor that Nginx requires to handle PHP processing. This is also an alpine image, derived from the Alpine Linux project, which will help keep our overall image size down. For more information about the benefits and drawbacks of using alpine images and whether or not this makes sense for your application, see the full discussion under the Image Variants section of the Docker Hub WordPress image page.
      • env_file: Again, we specify that we want to pull values from our .env file, since this is where we defined our application database user and password.
      • environment: Here, we're using the values we defined in our .env file, but we're assigning them to the variable names that the WordPress image expects: WORDPRESS_DB_USER and WORDPRESS_DB_PASSWORD. We're also defining a WORDPRESS_DB_HOST, which will be the MySQL server running on the db container that's accessible on MySQL's default port, 3306. Our WORDPRESS_DB_NAME will be the same value we specified in the MySQL service definition for our MYSQL_DATABASE: wordpress.
      • volumes: We are mounting a named volume called wordpress to the /var/www/html mountpoint created by the WordPress image. Using a named volume in this way will allow us to share our application code with other containers.
      • networks: We're also adding the wordpress container to the app-network network.

      Next, below the wordpress application service definition, add the following definition for your webserver Nginx service:

      ~/wordpress/docker-compose.yml

      ...
        webserver:
          depends_on:
            - wordpress
          image: nginx:1.15.12-alpine
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
          volumes:
            - wordpress:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
          networks:
            - app-network
      

      Again, we're naming our container and making it dependent on the wordpress container in order of starting. We're also using an alpine image — the 1.15.12-alpine Nginx image.

      This service definition also includes the following options:

      • ports: This exposes port 80 to enable the configuration options we defined in our nginx.conf file in Step 1.
      • volumes: Here, we are defining a combination of named volumes and bind mounts:
        • wordpress:/var/www/html: This will mount our WordPress application code to the /var/www/html directory, the directory we set as the root in our Nginx server block.
        • ./nginx-conf:/etc/nginx/conf.d: This will bind mount the Nginx configuration directory on the host to the relevant directory on the container, ensuring that any changes we make to files on the host will be reflected in the container.
        • certbot-etc:/etc/letsencrypt: This will mount the relevant Let's Encrypt certificates and keys for our domain to the appropriate directory on the container.

      And again, we've added this container to the app-network network.

      Finally, below your webserver definition, add your last service definition for the certbot service. Be sure to replace the email address and domain names listed here with your own information:

      ~/wordpress/docker-compose.yml

        certbot:
          depends_on:
            - webserver
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - wordpress:/var/www/html
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com
      

      This definition tells Compose to pull the certbot/certbot image from Docker Hub. It also uses named volumes to share resources with the Nginx container, including the domain certificates and key in certbot-etc and the application code in wordpress.

      Again, we've used depends_on to specify that the certbot container should be started once the webserver service is running.

      We've also included a command option that specifies a subcommand to run with the container's default certbot command. The certonly subcommand will obtain a certificate with the following options:

      • --webroot: This tells Certbot to use the webroot plugin to place files in the webroot folder for authentication. This plugin depends on the HTTP-01 validation method, which uses an HTTP request to prove that Certbot can access resources from a server that responds to a given domain name.
      • --webroot-path: This specifies the path of the webroot directory.
      • --email: Your preferred email for registration and recovery.
      • --agree-tos: This specifies that you agree to ACME's Subscriber Agreement.
      • --no-eff-email: This tells Certbot that you do not wish to share your email with the Electronic Frontier Foundation (EFF). Feel free to omit this if you would prefer.
      • --staging: This tells Certbot that you would like to use Let's Encrypt's staging environment to obtain test certificates. Using this option allows you to test your configuration options and avoid possible domain request limits. For more information about these limits, please see Let's Encrypt's rate limits documentation.
      • -d: This allows you to specify domain names you would like to apply to your request. In this case, we've included example.com and www.example.com. Be sure to replace these with your own domain.

      Below the certbot service definition, add your network and volume definitions:

      ~/wordpress/docker-compose.yml

      ...
      volumes:
        certbot-etc:
        wordpress:
        dbdata:
      
      networks:
        app-network:
          driver: bridge  
      

      Our top-level volumes key defines the volumes certbot-etc, wordpress, and dbdata. When Docker creates volumes, the contents of the volume are stored in a directory on the host filesystem, /var/lib/docker/volumes/, that's managed by Docker. The contents of each volume then get mounted from this directory to any container that uses the volume. In this way, it's possible to share code and data between containers.

      The user-defined bridge network app-network enables communication between our containers since they are on the same Docker daemon host. This streamlines traffic and communication within the application, as it opens all ports between containers on the same bridge network without exposing any ports to the outside world. Thus, our db, wordpress, and webserver containers can communicate with each other, and we only need to expose port 80 for front-end access to the application.
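Once the containers exist (you will create them in the next step), you can inspect these resources yourself. The network name below assumes Compose's default behavior of prefixing volume and network names with the project directory name, wordpress; adjust it if your directory is named differently:

      • docker volume ls
      • docker network inspect wordpress_app-network

      The first command lists the named volumes that Docker manages, and the second shows the containers attached to the bridge network, along with their internal IP addresses.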

      The finished docker-compose.yml file will look like this:

      ~/wordpress/docker-compose.yml

      version: '3'
      
      services:
        db:
          image: mysql:8.0
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MYSQL_DATABASE=wordpress
          volumes: 
            - dbdata:/var/lib/mysql
          command: '--default-authentication-plugin=mysql_native_password'
          networks:
            - app-network
      
        wordpress:
          depends_on: 
            - db
          image: wordpress:5.1.1-fpm-alpine
          container_name: wordpress
          restart: unless-stopped
          env_file: .env
          environment:
            - WORDPRESS_DB_HOST=db:3306
            - WORDPRESS_DB_USER=$MYSQL_USER
            - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
            - WORDPRESS_DB_NAME=wordpress
          volumes:
            - wordpress:/var/www/html
          networks:
            - app-network
      
        webserver:
          depends_on:
            - wordpress
          image: nginx:1.15.12-alpine
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
          volumes:
            - wordpress:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
          networks:
            - app-network
      
        certbot:
          depends_on:
            - webserver
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - wordpress:/var/www/html
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com -d www.example.com
      
      volumes:
        certbot-etc:
        wordpress:
        dbdata:
      
      networks:
        app-network:
          driver: bridge  
      

      Save and close the file when you are finished editing.
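Before starting the containers, you can optionally check the file for syntax errors with Compose's built-in config command, which validates docker-compose.yml and prints the resolved configuration. Note that the printed output includes values substituted from your .env file, so treat it as sensitive:

      • docker-compose config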

      With your service definitions in place, you are ready to start the containers and test your certificate requests.

      Step 4 — Obtaining SSL Certificates and Credentials

      We can start our containers with the docker-compose up command, which will create and run our containers in the order we have specified. If our domain requests are successful, we will see the correct exit status in our output and the right certificates mounted in the /etc/letsencrypt/live folder on the webserver container.

Create the containers with docker-compose up and the -d flag, which will run the db, wordpress, and webserver containers in the background:

      • docker-compose up -d

      You will see output confirming that your services have been created:

      Output

Creating db ... done
      Creating wordpress ... done
      Creating webserver ... done
      Creating certbot   ... done

Using docker-compose ps, check the status of your services:

      • docker-compose ps

      If everything was successful, your db, wordpress, and webserver services will be Up and the certbot container will have exited with a 0 status message:

      Output

Name                 Command               State           Ports
      -------------------------------------------------------------------------
      certbot     certbot certonly --webroot ...   Exit 0
      db          docker-entrypoint.sh --def ...   Up       3306/tcp, 33060/tcp
      webserver   nginx -g daemon off;             Up       0.0.0.0:80->80/tcp
      wordpress   docker-entrypoint.sh php-fpm     Up       9000/tcp

      If you see anything other than Up in the State column for the db, wordpress, or webserver services, or an exit status other than 0 for the certbot container, be sure to check the service logs with the docker-compose logs command:

      • docker-compose logs service_name

      You can now check that your certificates have been mounted to the webserver container with docker-compose exec:

      • docker-compose exec webserver ls -la /etc/letsencrypt/live

      If your certificate requests were successful, you will see output like this:

      Output

total 16
      drwx------ 3 root root 4096 May 10 15:45 .
      drwxr-xr-x 9 root root 4096 May 10 15:45 ..
      -rw-r--r-- 1 root root  740 May 10 15:45 README
      drwxr-xr-x 2 root root 4096 May 10 15:45 example.com

      Now that you know your request will be successful, you can edit the certbot service definition to remove the --staging flag.

Open docker-compose.yml:

      • nano docker-compose.yml

      Find the section of the file with the certbot service definition, and replace the --staging flag in the command option with the --force-renewal flag, which will tell Certbot that you want to request a new certificate with the same domains as an existing certificate. The certbot service definition will now look like this:

      ~/wordpress/docker-compose.yml

      ...
        certbot:
          depends_on:
            - webserver
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - wordpress:/var/www/html
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
      ...
      

You can now run docker-compose up with the --force-recreate option to recreate the certbot container. We will also include the --no-deps option to tell Compose that it can skip starting the webserver service, since it is already running:

      • docker-compose up --force-recreate --no-deps certbot

      You will see output indicating that your certificate request was successful:

      Output

Recreating certbot ... done
      Attaching to certbot
      certbot      | Saving debug log to /var/log/letsencrypt/letsencrypt.log
      certbot      | Plugins selected: Authenticator webroot, Installer None
      certbot      | Renewing an existing certificate
      certbot      | Performing the following challenges:
      certbot      | http-01 challenge for example.com
      certbot      | http-01 challenge for www.example.com
      certbot      | Using the webroot path /var/www/html for all unmatched domains.
      certbot      | Waiting for verification...
      certbot      | Cleaning up challenges
      certbot      | IMPORTANT NOTES:
      certbot      |  - Congratulations! Your certificate and chain have been saved at:
      certbot      |    /etc/letsencrypt/live/example.com/fullchain.pem
      certbot      |    Your key file has been saved at:
      certbot      |    /etc/letsencrypt/live/example.com/privkey.pem
      certbot      |    Your cert will expire on 2019-08-08. To obtain a new or tweaked
      certbot      |    version of this certificate in the future, simply run certbot
      certbot      |    again. To non-interactively renew *all* of your certificates, run
      certbot      |    "certbot renew"
      certbot      |  - Your account credentials have been saved in your Certbot
      certbot      |    configuration directory at /etc/letsencrypt. You should make a
      certbot      |    secure backup of this folder now. This configuration directory will
      certbot      |    also contain certificates and private keys obtained by Certbot so
      certbot      |    making regular backups of this folder is ideal.
      certbot      |  - If you like Certbot, please consider supporting our work by:
      certbot      |
      certbot      |    Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
      certbot      |    Donating to EFF:                    https://eff.org/donate-le
      certbot      |
      certbot exited with code 0

      With your certificates in place, you can move on to modifying your Nginx configuration to include SSL.

      Step 5 — Modifying the Web Server Configuration and Service Definition

      Enabling SSL in our Nginx configuration will involve adding an HTTP redirect to HTTPS, specifying our SSL certificate and key locations, and adding security parameters and headers.

      Since you are going to recreate the webserver service to include these additions, you can stop it now:

      • docker-compose stop webserver

      Before we modify the configuration file itself, let's first get the recommended Nginx security parameters from Certbot using curl:

      • curl -sSLo nginx-conf/options-ssl-nginx.conf https://raw.githubusercontent.com/certbot/certbot/master/certbot-nginx/certbot_nginx/options-ssl-nginx.conf

      This command will save these parameters in a file called options-ssl-nginx.conf, located in the nginx-conf directory.

Next, remove the Nginx configuration file you created earlier:

      • rm nginx-conf/nginx.conf

      Open another version of the file:

      • nano nginx-conf/nginx.conf

      Add the following code to the file to redirect HTTP to HTTPS and to add SSL credentials, protocols, and security headers. Remember to replace example.com with your own domain:

      ~/wordpress/nginx-conf/nginx.conf

      server {
              listen 80;
              listen [::]:80;
      
              server_name example.com www.example.com;
      
              location ~ /.well-known/acme-challenge {
                      allow all;
                      root /var/www/html;
              }
      
              location / {
                      rewrite ^ https://$host$request_uri? permanent;
              }
      }
      
      server {
              listen 443 ssl http2;
              listen [::]:443 ssl http2;
              server_name example.com www.example.com;
      
              index index.php index.html index.htm;
      
              root /var/www/html;
      
              server_tokens off;
      
              ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
              ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
      
              include /etc/nginx/conf.d/options-ssl-nginx.conf;
      
              add_header X-Frame-Options "SAMEORIGIN" always;
              add_header X-XSS-Protection "1; mode=block" always;
              add_header X-Content-Type-Options "nosniff" always;
              add_header Referrer-Policy "no-referrer-when-downgrade" always;
              add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
              # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
              # enable strict transport security only if you understand the implications
      
              location / {
                      try_files $uri $uri/ /index.php$is_args$args;
              }
      
        location ~ \.php$ {
                      try_files $uri =404;
                      fastcgi_split_path_info ^(.+\.php)(/.+)$;
                      fastcgi_pass wordpress:9000;
                      fastcgi_index index.php;
                      include fastcgi_params;
                      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                      fastcgi_param PATH_INFO $fastcgi_path_info;
              }
      
        location ~ /\.ht {
                      deny all;
              }
      
              location = /favicon.ico { 
                      log_not_found off; access_log off; 
              }
              location = /robots.txt { 
                      log_not_found off; access_log off; allow all; 
              }
        location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
                      expires max;
                      log_not_found off;
              }
      }
      

The HTTP server block specifies the webroot for Certbot renewal requests to the .well-known/acme-challenge directory. It also includes a rewrite directive that redirects HTTP requests to HTTPS.

      The HTTPS server block enables ssl and http2. To read more about how HTTP/2 iterates on HTTP protocols and the benefits it can have for website performance, please see the introduction to How To Set Up Nginx with HTTP/2 Support on Ubuntu 18.04.

      This block also includes our SSL certificate and key locations, along with the recommended Certbot security parameters that we saved to nginx-conf/options-ssl-nginx.conf.

Additionally, we've included some security headers that will enable us to get A ratings on things like the SSL Labs and Security Headers server test sites. These headers include X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Content-Security-Policy, and X-XSS-Protection. The HTTP Strict Transport Security (HSTS) header is commented out; enable this only if you understand the implications and have assessed its "preload" functionality.

      Our root and index directives are also located in this block, as are the rest of the WordPress-specific location blocks discussed in Step 1.

      Once you have finished editing, save and close the file.

      Before recreating the webserver service, you will need to add a 443 port mapping to your webserver service definition.

Open your docker-compose.yml file:

      • nano docker-compose.yml

      In the webserver service definition, add the following port mapping:

      ~/wordpress/docker-compose.yml

      ...
        webserver:
          depends_on:
            - wordpress
          image: nginx:1.15.12-alpine
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - wordpress:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
          networks:
            - app-network
      

      The docker-compose.yml file will look like this when finished:

      ~/wordpress/docker-compose.yml

      version: '3'
      
      services:
        db:
          image: mysql:8.0
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MYSQL_DATABASE=wordpress
          volumes: 
            - dbdata:/var/lib/mysql
          command: '--default-authentication-plugin=mysql_native_password'
          networks:
            - app-network
      
        wordpress:
          depends_on: 
            - db
          image: wordpress:5.1.1-fpm-alpine
          container_name: wordpress
          restart: unless-stopped
          env_file: .env
          environment:
            - WORDPRESS_DB_HOST=db:3306
            - WORDPRESS_DB_USER=$MYSQL_USER
            - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
            - WORDPRESS_DB_NAME=wordpress
          volumes:
            - wordpress:/var/www/html
          networks:
            - app-network
      
        webserver:
          depends_on:
            - wordpress
          image: nginx:1.15.12-alpine
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - wordpress:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
          networks:
            - app-network
      
        certbot:
          depends_on:
            - webserver
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - wordpress:/var/www/html
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
      
      volumes:
        certbot-etc:
        wordpress:
        dbdata:
      
      networks:
        app-network:
          driver: bridge  
      

      Save and close the file when you are finished editing.

      Recreate the webserver service:

      • docker-compose up -d --force-recreate --no-deps webserver

Check your services with docker-compose ps:

      • docker-compose ps

      You should see output indicating that your db, wordpress, and webserver services are running:

      Output

Name                 Command               State                     Ports
      ----------------------------------------------------------------------------------------------
      certbot     certbot certonly --webroot ...   Exit 0
      db          docker-entrypoint.sh --def ...   Up       3306/tcp, 33060/tcp
      webserver   nginx -g daemon off;             Up       0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp
      wordpress   docker-entrypoint.sh php-fpm     Up       9000/tcp
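If you would like to confirm that your redirect and security headers are working, you can inspect the response headers with curl. Remember to replace example.com with your own domain:

      • curl -I https://example.com

      The output should include the add_header values you set in this step, such as X-Frame-Options: SAMEORIGIN.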

      With your containers running, you can now complete your WordPress installation through the web interface.

      Step 6 — Completing the Installation Through the Web Interface

      With our containers running, we can finish the installation through the WordPress web interface.

      In your web browser, navigate to your server's domain. Remember to substitute example.com here with your own domain name:

      https://example.com
      

      Select the language you would like to use:

      WordPress Language Selector

      After clicking Continue, you will land on the main setup page, where you will need to pick a name for your site and a username. It's a good idea to choose a memorable username here (rather than "admin") and a strong password. You can use the password that WordPress generates automatically or create your own.

      Finally, you will need to enter your email address and decide whether or not you want to discourage search engines from indexing your site:

      WordPress Main Setup Page

      Clicking on Install WordPress at the bottom of the page will take you to a login prompt:

      WordPress Login Screen

      Once logged in, you will have access to the WordPress administration dashboard:

      WordPress Main Admin Dashboard

      With your WordPress installation complete, you can now take steps to ensure that your SSL certificates will renew automatically.

      Step 7 — Renewing Certificates

      Let's Encrypt certificates are valid for 90 days, so you will want to set up an automated renewal process to ensure that they do not lapse. One way to do this is to create a job with the cron scheduling utility. In this case, we will create a cron job to periodically run a script that will renew our certificates and reload our Nginx configuration.

First, open a script called ssl_renew.sh:

      • nano ssl_renew.sh

      Add the following code to the script to renew your certificates and reload your web server configuration. Remember to replace the example username here with your own non-root username:

      ~/wordpress/ssl_renew.sh

      #!/bin/bash
      
      COMPOSE="/usr/local/bin/docker-compose --no-ansi"
      
      cd /home/sammy/wordpress/
      $COMPOSE run certbot renew --dry-run && $COMPOSE kill -s SIGHUP webserver
      

      This script first assigns the docker-compose binary to a variable called COMPOSE, and specifies the --no-ansi option, which will run docker-compose commands without ANSI control characters. It then changes to the ~/wordpress project directory and runs the following docker-compose commands:

      • docker-compose run: This will start a certbot container and override the command provided in our certbot service definition. Instead of using the certonly subcommand, we're using the renew subcommand here, which will renew certificates that are close to expiring. We've included the --dry-run option here to test our script.
      • docker-compose kill: This will send a SIGHUP signal to the webserver container to reload the Nginx configuration. For more information on using this process to reload your Nginx configuration, please see this Docker blog post on deploying the official Nginx image with Docker.

Close the file when you are finished editing. Make it executable:

      • chmod +x ssl_renew.sh
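Optionally, you can run the script once by hand before scheduling it; because it still includes the --dry-run option, this will only simulate a renewal. This assumes your user can run Docker commands, per the prerequisites; otherwise, prefix the command with sudo:

      • ./ssl_renew.sh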

Next, open your root crontab file to run the renewal script at a specified interval:

      • sudo crontab -e

      If this is your first time editing this file, you will be asked to choose an editor:

      Output

no crontab for root - using an empty one

      Select an editor.  To change later, run 'select-editor'.
        1. /bin/nano        <---- easiest
        2. /usr/bin/vim.basic
        3. /usr/bin/vim.tiny
        4. /bin/ed

      Choose 1-4 [1]:
      ...

      At the bottom of the file, add the following line:

      crontab

      ...
      */5 * * * * /home/sammy/wordpress/ssl_renew.sh >> /var/log/cron.log 2>&1
      

      This will set the job interval to every five minutes, so you can test whether or not your renewal request has worked as intended. We have also created a log file, cron.log, to record relevant output from the job.

      After five minutes, check cron.log to see whether or not the renewal request has succeeded:

      • tail -f /var/log/cron.log

      You should see output confirming a successful renewal:

      Output

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      **          (The test certificates below have not been saved.)

      Congratulations, all renewals succeeded. The following certs have been renewed:
        /etc/letsencrypt/live/example.com/fullchain.pem (success)
      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      **          (The test certificates above have not been saved.)
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

      You can now modify the crontab file to set a daily interval. To run the script every day at noon, for example, you would modify the last line of the file to look like this:

      crontab

      ...
      0 12 * * * /home/sammy/wordpress/ssl_renew.sh >> /var/log/cron.log 2>&1
      

      You will also want to remove the --dry-run option from your ssl_renew.sh script:

      ~/wordpress/ssl_renew.sh

      #!/bin/bash
      
      COMPOSE="/usr/local/bin/docker-compose --no-ansi"
      
      cd /home/sammy/wordpress/
      $COMPOSE run certbot renew && $COMPOSE kill -s SIGHUP webserver
      

      Your cron job will ensure that your Let's Encrypt certificates don't lapse by renewing them when they are eligible. You can also set up log rotation with the Logrotate utility to rotate and compress your log files.
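As a minimal sketch of what that Logrotate setup could look like, assuming a standard Ubuntu Logrotate installation and a drop-in file name of your choosing, you could create a file like the following to rotate the renewal log weekly and keep four compressed copies:

      /etc/logrotate.d/cron_ssl_renew

      /var/log/cron.log {
              weekly
              rotate 4
              compress
              missingok
              notifempty
      }

      All of these are standard Logrotate directives; see man logrotate for the full list.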

      Conclusion

      In this tutorial, you used Docker Compose to create a WordPress installation with an Nginx web server. As part of this workflow, you obtained TLS/SSL certificates for the domain you want associated with your WordPress site. Additionally, you created a cron job to renew these certificates when necessary.

As additional steps to improve site performance and redundancy, you can consult articles on delivering and backing up WordPress assets.

If you are interested in exploring a containerized workflow with Kubernetes, you can continue with the following tutorial.




      How To Migrate a Docker Compose Workflow to Kubernetes


      Introduction

      When building modern, stateless applications, containerizing your application’s components is the first step in deploying and scaling on distributed platforms. If you have used Docker Compose in development, you will have modernized and containerized your application by:

      • Extracting necessary configuration information from your code.
      • Offloading your application’s state.
      • Packaging your application for repeated use.

      You will also have written service definitions that specify how your container images should run.

      To run your services on a distributed platform like Kubernetes, you will need to translate your Compose service definitions to Kubernetes objects. This will allow you to scale your application with resiliency. One tool that can speed up the translation process to Kubernetes is kompose, a conversion tool that helps developers move Compose workflows to container orchestrators like Kubernetes or OpenShift.

      In this tutorial, you will translate Compose services to Kubernetes objects using kompose. You will use the object definitions that kompose provides as a starting point and make adjustments to ensure that your setup will use Secrets, Services, and PersistentVolumeClaims in the way that Kubernetes expects. By the end of the tutorial, you will have a single-instance Node.js application with a MongoDB database running on a Kubernetes cluster. This setup will mirror the functionality of the code described in Containerizing a Node.js Application with Docker Compose and will be a good starting point to build out a production-ready solution that will scale with your needs.

Prerequisites

      To follow this tutorial, you will need:

      • A Kubernetes cluster, along with the kubectl command-line tool installed and configured to connect to it.
      • Docker installed on your local machine or development server.
      • A Docker Hub account, which you will use to push your application image in Step 2.

      Step 1 — Installing kompose

      To begin using kompose, navigate to the project’s GitHub Releases page, and copy the link to the current release (version 1.18.0 as of this writing). Paste this link into the following curl command to download the latest version of kompose:

      • curl -L https://github.com/kubernetes/kompose/releases/download/v1.18.0/kompose-linux-amd64 -o kompose

      For details about installing on non-Linux systems, please refer to the installation instructions.

Make the binary executable:

      • chmod +x kompose

      Move it to your PATH:

      • sudo mv ./kompose /usr/local/bin/kompose

To verify that it has been installed properly, you can do a version check:

      • kompose version

      If the installation was successful, you will see output like the following:

      Output

      1.18.0 (06a2e56)

      With kompose installed and ready to use, you can now clone the Node.js project code that you will be translating to Kubernetes.

      Step 2 — Cloning and Packaging the Application

      To use our application with Kubernetes, we will need to clone the project code and package the application so that the kubelet service can pull the image.

      Our first step will be to clone the node-mongo-docker-dev repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Node.js Application for Development With Docker Compose, which uses a demo Node.js application to demonstrate how to set up a development environment using Docker Compose. You can find more information about the application itself in the series From Containers to Kubernetes with Node.js.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/node-mongo-docker-dev.git node_project

Navigate to the node_project directory:

      • cd node_project

      The node_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application's state has been offloaded to a MongoDB database.

      For more information about designing modern, stateless applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

      The project directory includes a Dockerfile with instructions for building the application image. Let's build the image now so that you can push it to your Docker Hub account and use it in your Kubernetes setup.

      Using the docker build command, build the image with the -t flag, which allows you to tag it with a memorable name. In this case, tag the image with your Docker Hub username and name it node-kubernetes or a name of your own choosing:

      • docker build -t your_dockerhub_username/node-kubernetes .

      The . in the command specifies that the build context is the current directory.

It will take a minute or two to build the image. Once it is complete, check your images:

      • docker images

      You will see the following output:

      Output

REPOSITORY                                TAG         IMAGE ID       CREATED         SIZE
      your_dockerhub_username/node-kubernetes   latest      9c6f897e1fbc   3 seconds ago   90MB
      node                                      10-alpine   94f3c8956482   12 days ago     71MB

      Next, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_username

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your user's home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_username with your own Docker Hub username:

      • docker push your_dockerhub_username/node-kubernetes

      You now have an application image that you can pull to run your application with Kubernetes. The next step will be to translate your application service definitions to Kubernetes objects.

      Step 3 — Translating Compose Services to Kubernetes Objects with kompose

      Our Docker Compose file, here called docker-compose.yaml, lays out the definitions that will run our services with Compose. A service in Compose is a running container, and service definitions contain information about how each container image will run. In this step, we will translate these definitions to Kubernetes objects by using kompose to create yaml files. These files will contain specs for the Kubernetes objects that describe their desired state.

      We will use these files to create different types of objects: Services, which will ensure that the Pods running our containers remain accessible; Deployments, which will contain information about the desired state of our Pods; a PersistentVolumeClaim to provision storage for our database data; a ConfigMap for environment variables injected at runtime; and a Secret for our application's database user and password. Some of these definitions will be in the files kompose will create for us, and others we will need to create ourselves.

      First, we will need to modify some of the definitions in our docker-compose.yaml file to work with Kubernetes. We will include a reference to our newly-built application image in our nodejs service definition and remove the bind mounts, volumes, and additional commands that we used to run the application container in development with Compose. Additionally, we'll redefine both containers' restart policies to be in line with the behavior Kubernetes expects.

Open the file with nano or your favorite editor:

      • nano docker-compose.yaml

      The current definition for the nodejs application service looks like this:

      ~/node_project/docker-compose.yaml

      ...
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB 
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
      ...
      

      Make the following edits to your service definition:

      • Use your node-kubernetes image instead of the local Dockerfile.
      • Change the container restart policy from unless-stopped to always.
      • Remove the volumes list and the command instruction.

      The finished service definition will now look like this:

      ~/node_project/docker-compose.yaml

      ...
      services:
        nodejs:
          image: your_dockerhub_username/node-kubernetes
          container_name: nodejs
          restart: always
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB 
          ports:
            - "80:8080"
          networks:
            - app-network
      ...
      

Next, scroll down to the db service definition. Here, change the restart policy for the service to always and remove the env_file option. Instead of using values from the .env file, we will pass the values for our MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD to the database container using the Secret we will create in Step 4.

      The db service definition will now look like this:

      ~/node_project/docker-compose.yaml

      ...
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: always
          environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:  
            - dbdata:/data/db   
          networks:
            - app-network
      ...  
      

Finally, at the bottom of the file, remove the node_modules volume from the top-level volumes key. The key will now look like this:

      ~/node_project/docker-compose.yaml

      ...
      volumes:
        dbdata:
      

      Save and close the file when you are finished editing.

      Before translating our service definitions, we will need to write the .env file that kompose will use to create the ConfigMap with our non-sensitive information. Please see Step 2 of Containerizing a Node.js Application for Development With Docker Compose for a longer explanation of this file.

      In that tutorial, we added .env to our .gitignore file to ensure that it would not copy to version control. This means that it did not copy over when we cloned the node-mongo-docker-dev repository in Step 2 of this tutorial. We will therefore need to recreate it now.

Create the file:

      • nano .env

      kompose will use this file to create a ConfigMap for our application. However, instead of assigning all of the variables from the nodejs service definition in our Compose file, we will add only the MONGO_DB database name and the MONGO_PORT. We will assign the database username and password separately when we manually create a Secret object in Step 4.

      Add the following port and database name information to the .env file. Feel free to rename your database if you would like:

      ~/node_project/.env

      MONGO_PORT=27017
      MONGO_DB=sharkinfo
      

      Save and close the file when you are finished editing.

      You are now ready to create the files with your object specs. kompose offers multiple options for translating your resources. You can:

      • Create yaml files based on the service definitions in your docker-compose.yaml file with kompose convert.
      • Create Kubernetes objects directly with kompose up.
      • Create a Helm chart with kompose convert -c.

      For now, we will convert our service definitions to yaml files and then add to and revise the files kompose creates.

      Convert your service definitions to yaml files with the following command:

      • kompose convert

      You can also name specific or multiple Compose files using the -f flag.
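
      For example, to convert a single, explicitly named Compose file, you could run:

      • kompose convert -f docker-compose.yaml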

      After you run this command, kompose will output information about the files it has created:

      Output

      INFO Kubernetes file "nodejs-service.yaml" created
      INFO Kubernetes file "db-deployment.yaml" created
      INFO Kubernetes file "dbdata-persistentvolumeclaim.yaml" created
      INFO Kubernetes file "nodejs-deployment.yaml" created
      INFO Kubernetes file "nodejs-env-configmap.yaml" created

      These include yaml files with specs for the Node application Service, Deployment, and ConfigMap, as well as for the dbdata PersistentVolumeClaim and MongoDB database Deployment.

      These files are a good starting point, but in order for our application's functionality to match the setup described in Containerizing a Node.js Application for Development With Docker Compose, we will need to make a few additions and changes to the files kompose has generated.

      Step 4 — Creating Kubernetes Secrets

      In order for our application to function in the way we expect, we will need to make a few modifications to the files that kompose has created. The first of these changes will be generating a Secret for our database user and password and adding it to our application and database Deployments. Kubernetes offers two ways of working with environment variables: ConfigMaps and Secrets. kompose has already created a ConfigMap with the non-confidential information we included in our .env file, so we will now create a Secret with our confidential information: our database username and password.

      The first step in manually creating a Secret will be to convert your username and password to base64, an encoding scheme that allows you to uniformly transmit data, including binary data.

      Convert your database username:

      • echo -n 'your_database_username' | base64

      Note down the value you see in the output.

      Next, convert your password:

      • echo -n 'your_database_password' | base64

      Take note of the value in the output here as well.
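
      Keep in mind that base64 is an encoding, not encryption: anyone who can read the Secret can recover the original values by decoding them. As a quick sanity check, you can reverse the encoding with the --decode flag; for instance, the username sammy encodes to c2FtbXk=:

      • echo 'c2FtbXk=' | base64 --decode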

      Open a file for the Secret:

      • nano secret.yaml

      Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your yaml files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

      • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

      In general, it is a good idea to validate your syntax before creating resources with kubectl.

      Add the following code to the file to create a Secret that will define your MONGO_USERNAME and MONGO_PASSWORD using the encoded values you just created. Be sure to replace the dummy values here with your encoded username and password:

      ~/node_project/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: mongo-secret
      data:
        MONGO_USERNAME: your_encoded_username
        MONGO_PASSWORD: your_encoded_password
      

      We have named the Secret object mongo-secret, but you are free to name it anything you would like.

      Save and close this file when you are finished editing. As you did with your .env file, be sure to add secret.yaml to your .gitignore file to keep it out of version control.

      With secret.yaml written, our next step will be to ensure that our application and database Pods both use the values we added to the file. Let's start by adding references to the Secret to our application Deployment.

      Open the file called nodejs-deployment.yaml:

      • nano nodejs-deployment.yaml

      The file's container specifications include the following environment variables defined under the env key:

      ~/node_project/nodejs-deployment.yaml

      apiVersion: extensions/v1beta1
      kind: Deployment
      ...
          spec:
            containers:
            - env:
              - name: MONGO_DB
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_DB
                    name: nodejs-env
              - name: MONGO_HOSTNAME
                value: db
              - name: MONGO_PASSWORD
              - name: MONGO_PORT
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_PORT
                    name: nodejs-env
              - name: MONGO_USERNAME
      

      We will need to add references to our Secret to the MONGO_USERNAME and MONGO_PASSWORD variables listed here, so that our application will have access to those values. Instead of including a configMapKeyRef key to point to our nodejs-env ConfigMap, as is the case with the values for MONGO_DB and MONGO_PORT, we'll include a secretKeyRef key to point to the values in our mongo-secret secret.

      Add the following Secret references to the MONGO_USERNAME and MONGO_PASSWORD variables:

      ~/node_project/nodejs-deployment.yaml

      apiVersion: extensions/v1beta1
      kind: Deployment
      ...
          spec:
            containers:
            - env:
              - name: MONGO_DB
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_DB
                    name: nodejs-env
              - name: MONGO_HOSTNAME
                value: db
              - name: MONGO_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: mongo-secret
                    key: MONGO_PASSWORD
              - name: MONGO_PORT
                valueFrom:
                  configMapKeyRef:
                    key: MONGO_PORT
                    name: nodejs-env
              - name: MONGO_USERNAME
                valueFrom:
                  secretKeyRef:
                    name: mongo-secret
                    key: MONGO_USERNAME
      

      Save and close the file when you are finished editing.

      Next, we'll add the same values to the db-deployment.yaml file.

      Open the file for editing:

      • nano db-deployment.yaml

      In this file, we will add references to our Secret for the following variable keys: MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD. The mongo image makes these variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the database container starts.

      Using the values we set in our Secret ensures that we will have an application user with root privileges on the database instance, with access to all of the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.
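
      As a rough, hedged sketch of what creating such a user might look like, you could run the following in the mongo shell once the database is up; the shark_app username and example_password value are hypothetical placeholders:

      // shark_app and example_password are hypothetical placeholders
      use sharkinfo
      db.createUser({
        user: "shark_app",
        pwd: "example_password",
        roles: [ { role: "readWrite", db: "sharkinfo" } ]
      })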

      Under the MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD variables, add references to the Secret values:

      ~/node_project/db-deployment.yaml

      apiVersion: extensions/v1beta1
      kind: Deployment
      ...
          spec:
            containers:
            - env:
              - name: MONGO_INITDB_ROOT_PASSWORD
                valueFrom:
                  secretKeyRef:
                    name: mongo-secret
                    key: MONGO_PASSWORD        
              - name: MONGO_INITDB_ROOT_USERNAME
                valueFrom:
                  secretKeyRef:
                    name: mongo-secret
                    key: MONGO_USERNAME
              image: mongo:4.1.8-xenial
      ...
      

      Save and close the file when you are finished editing.

      With your Secret in place, you can move on to creating your database Service and ensuring that your application container only attempts to connect to the database once it is fully set up and initialized.

      Step 5 — Creating the Database Service and an Application Init Container

      Now that we have our Secret, we can move on to creating our database Service and an Init Container that will poll this Service to ensure that our application only attempts to connect to the database once the database startup tasks, including creating the MONGO_INITDB user and password, are complete.

      For a discussion of how to implement this functionality in Compose, please see Step 4 of Containerizing a Node.js Application for Development with Docker Compose.

      Open a file to define the specs for the database Service:

      • nano db-service.yaml

      Add the following code to the file to define the Service:

      ~/node_project/db-service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        annotations: 
          kompose.cmd: kompose convert
          kompose.version: 1.18.0 (06a2e56)
        creationTimestamp: null
        labels:
          io.kompose.service: db
        name: db
      spec:
        ports:
        - port: 27017
          targetPort: 27017
        selector:
          io.kompose.service: db
      status:
        loadBalancer: {}
      

      The selector that we have included here will match this Service object with our database Pods, which have been defined with the label io.kompose.service: db by kompose in the db-deployment.yaml file. We've also named this service db.

      Save and close the file when you are finished editing.

      Next, let's add an initContainers field to the Pod spec in nodejs-deployment.yaml. This will create an Init Container that we can use to delay our application container from starting until the db Service has been created with a Pod that is reachable. This is one of the possible uses for Init Containers; to learn more about other use cases, please see the official documentation.

      Open the nodejs-deployment.yaml file:

      • nano nodejs-deployment.yaml

      Within the Pod spec and alongside the containers array, we are going to add an initContainers field with a container that will poll the db Service.

      Add the following code below the ports and resources fields of the nodejs container and above the Pod-level restartPolicy:

      ~/node_project/nodejs-deployment.yaml

      apiVersion: extensions/v1beta1
      kind: Deployment
      ...
          spec:
            containers:
            ...
              name: nodejs
              ports:
              - containerPort: 8080
              resources: {}
            initContainers:
            - name: init-db
              image: busybox
              command: ['sh', '-c', 'until nc -z db:27017; do echo waiting for db; sleep 2; done;']
            restartPolicy: Always
      ...               
      

      This Init Container uses the BusyBox image, a lightweight image that includes many UNIX utilities. In this case, we'll use the netcat utility to poll whether or not the Pod associated with the db Service is accepting TCP connections on port 27017.

      This container command replicates the functionality of the wait-for script that we removed from our docker-compose.yaml file in Step 3. For a longer discussion of how and why our application used the wait-for script when working with Compose, please see Step 4 of Containerizing a Node.js Application for Development with Docker Compose.

      Init Containers run to completion; in our case, this means that our Node application container will not start until the database container is running and accepting connections on port 27017. The db Service definition allows us to guarantee this functionality regardless of the exact location of the database container, which is mutable.
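
      If you would like to observe this startup sequence live after creating your objects in Step 7, you can stream Pod status changes as the Init Container completes and the application container starts:

      • kubectl get pods --watch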

      Save and close the file when you are finished editing.

      With your database Service created and your Init Container in place to control the startup order of your containers, you can move on to checking the storage requirements in your PersistentVolumeClaim and exposing your application service using a LoadBalancer.

      Step 6 — Modifying the PersistentVolumeClaim and Exposing the Application Frontend

      Before running our application, we will make two final changes to ensure that our database storage will be provisioned properly and that we can expose our application frontend using a LoadBalancer.

      First, let's modify the storage resource defined in the PersistentVolumeClaim that kompose created for us. This Claim allows us to dynamically provision storage to manage our application's state.

      To work with PersistentVolumeClaims, you must have a StorageClass created and configured to provision storage resources. In our case, because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com — DigitalOcean Block Storage.

      We can check this by typing:

      • kubectl get storageclass

      If you are working with a DigitalOcean cluster, you will see the following output:

      Output

      NAME                         PROVISIONER                 AGE
      do-block-storage (default)   dobs.csi.digitalocean.com   76m

      If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.
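
      As a rough sketch only, a StorageClass manifest generally takes the following shape; this example assumes the AWS EBS provisioner, so replace the provisioner and parameters with values appropriate to your environment:

      # Hypothetical example; adjust the provisioner and parameters for your cluster
      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: standard
      provisioner: kubernetes.io/aws-ebs
      parameters:
        type: gp2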

      When kompose created dbdata-persistentvolumeclaim.yaml, it set the storage resource to a size that does not meet the minimum size requirements of our provisioner. We will therefore need to modify our PersistentVolumeClaim to use the minimum viable DigitalOcean Block Storage unit: 1GB. Please feel free to modify this to meet your storage requirements.

      Open dbdata-persistentvolumeclaim.yaml:

      • nano dbdata-persistentvolumeclaim.yaml

      Replace the storage value with 1Gi:

      ~/node_project/dbdata-persistentvolumeclaim.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: dbdata
        name: dbdata
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
      status: {}
      

      Also note that the ReadWriteOnce access mode means that the volume provisioned as a result of this Claim can be mounted read-write by a single node only. Please see the documentation for more information about different access modes.

      Save and close the file when you are finished.

      Next, open nodejs-service.yaml:

      • nano nodejs-service.yaml

      We are going to expose this Service externally using a DigitalOcean Load Balancer. If you are not using a DigitalOcean cluster, please consult the relevant documentation from your cloud provider for information about their load balancers. Alternatively, you can follow the official Kubernetes documentation on setting up a highly available cluster with kubeadm, but in this case you will not be able to use PersistentVolumeClaims to provision storage.

      Within the Service spec, specify LoadBalancer as the Service type:

      ~/node_project/nodejs-service.yaml

      apiVersion: v1
      kind: Service
      ...
      spec:
        type: LoadBalancer
        ports:
      ...
      

      When we create the nodejs Service, a load balancer will be automatically created, providing us with an external IP where we can access our application.

      Save and close the file when you are finished editing.

      With all of our files in place, we are ready to start and test the application.

      Step 7 — Starting and Accessing the Application

      It's time to create our Kubernetes objects and test that our application is working as expected.

      To create the objects we've defined, we'll use kubectl create with the -f flag, which will allow us to specify the files that kompose created for us, along with the files we wrote. Run the following command to create the Node application and MongoDB database Services and Deployments, along with your Secret, ConfigMap, and PersistentVolumeClaim:

      • kubectl create -f nodejs-service.yaml,nodejs-deployment.yaml,nodejs-env-configmap.yaml,db-service.yaml,db-deployment.yaml,dbdata-persistentvolumeclaim.yaml,secret.yaml

      You will see the following output indicating that the objects have been created:

      Output

      service/nodejs created deployment.extensions/nodejs created configmap/nodejs-env created service/db created deployment.extensions/db created persistentvolumeclaim/dbdata created secret/mongo-secret created

      To check that your Pods are running, type:

      • kubectl get pods

      You don't need to specify a Namespace here, since we have created our objects in the default Namespace. If you are working with multiple Namespaces, be sure to include the -n flag when running this command, along with the name of your Namespace.

      You will see the following output while your db container is starting and your application Init Container is running:

      Output

      NAME                      READY   STATUS              RESTARTS   AGE
      db-679d658576-kfpsl       0/1     ContainerCreating   0          10s
      nodejs-6b9585dc8b-pnsws   0/1     Init:0/1            0          10s

      Once that container has run and your application and database containers have started, you will see this output:

      Output

      NAME                      READY   STATUS    RESTARTS   AGE
      db-679d658576-kfpsl       1/1     Running   0          54s
      nodejs-6b9585dc8b-pnsws   1/1     Running   0          54s

      The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod
      • kubectl logs your_pod

      With your containers running, you can now access the application. To get the IP for the LoadBalancer, type:

      • kubectl get svc

      You will see the following output:

      Output

      NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
      db           ClusterIP      10.245.189.250   <none>        27017/TCP      93s
      kubernetes   ClusterIP      10.245.0.1       <none>        443/TCP        25m12s
      nodejs       LoadBalancer   10.245.15.56     your_lb_ip    80:30729/TCP   93s

      The EXTERNAL-IP associated with the nodejs service is the IP address where you can access the application. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.
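
      Rather than re-running the command manually, you can watch the Service until the external IP appears:

      • kubectl get svc nodejs --watch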

      Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip.

      You should see the following landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

      Shark Info Form

      In the form, add a shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      You now have a single instance setup of a Node.js application with a MongoDB database running on a Kubernetes cluster.

      Conclusion

      The files you have created in this tutorial are a good starting point to build from as you move toward production. As you develop your application, you can revise and extend these manifests to match your production requirements.




      Containerizing a Node.js Application for Development With Docker Compose


      Introduction

      If you are actively developing an application, using Docker can simplify your workflow and the process of deploying your application to production. Working with containers in development offers the following benefits:

      • Environments are consistent, meaning that you can choose the languages and dependencies you want for your project without worrying about system conflicts.
      • Environments are isolated, making it easier to troubleshoot issues and onboard new team members.
      • Environments are portable, allowing you to package and share your code with others.

      This tutorial will show you how to set up a development environment for a Node.js application using Docker. You will create two containers — one for the Node application and another for the MongoDB database — with Docker Compose. Because this application works with Node and MongoDB, our setup will do the following:

      • Synchronize the application code on the host with the code in the container to facilitate changes during development.
      • Ensure that changes to the application code work without a restart.
      • Create a user and password-protected database for the application’s data.
      • Persist this data.

      At the end of this tutorial, you will have a working shark information application running on Docker containers:

      Complete Shark Collection

      Prerequisites

      To follow this tutorial, you will need:

      • Docker installed on your development server or local machine.
      • Docker Compose installed on your development server or local machine.

      Step 1 — Cloning the Project and Modifying Dependencies

      The first step in building this setup will be cloning the project code and modifying its package.json file, which includes the project’s dependencies. We will add nodemon to the project’s devDependencies, specifying that we will be using it during development. Running the application with nodemon ensures that it will be automatically restarted whenever you make changes to your code.

      First, clone the nodejs-mongo-mongoose repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Integrate MongoDB with Your Node Application, which explains how to integrate a MongoDB database with an existing Node application using Mongoose.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/nodejs-mongo-mongoose.git node_project

      Navigate to the node_project directory:

      • cd node_project

      Open the project's package.json file using nano or your favorite editor:

      • nano package.json

      Beneath the project dependencies and above the closing curly brace, create a new devDependencies object that includes nodemon:

      ~/node_project/package.json

      ...
      "dependencies": {
          "ejs": "^2.6.1",
          "express": "^4.16.4",
          "mongoose": "^5.4.10"
        },
        "devDependencies": {
          "nodemon": "^1.18.10"
        }    
      }
      

      Save and close the file when you are finished editing.

      With the project code in place and its dependencies modified, you can move on to refactoring the code for a containerized workflow.

      Step 2 — Configuring Your Application to Work with Containers

      Modifying our application for a containerized workflow means making our code more modular. Containers offer portability between environments, and our code should reflect that by remaining as decoupled from the underlying operating system as possible. To achieve this, we will refactor our code to make greater use of Node's process.env property, which returns an object with information about your user environment at runtime. We can use this object in our code to dynamically assign configuration information at runtime with environment variables.

      Let's begin with app.js, our main application entrypoint. Open the file:

      • nano app.js

      Inside, you will see a definition for a port constant, as well as a listen function that uses this constant to specify the port the application will listen on:

      ~/node_project/app.js

      ...
      const port = 8080;
      ...
      app.listen(port, function () {
        console.log('Example app listening on port 8080!');
      });
      

      Let's redefine the port constant to allow for dynamic assignment at runtime using the process.env object. Make the following changes to the constant definition and listen function:

      ~/node_project/app.js

      ...
      const port = process.env.PORT || 8080;
      ...
      app.listen(port, function () {
        console.log(`Example app listening on ${port}!`);
      });
      

      Our new constant definition assigns port dynamically using the value passed in at runtime or 8080. Similarly, we've rewritten the listen function to use a template literal, which will interpolate the port value when listening for connections. Because we will be mapping our ports elsewhere, these revisions will prevent our having to continuously revise this file as our environment changes.
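
      For example, if you were to run the application directly on your host, you could override the default port at runtime; the port 3000 here is an arbitrary illustration:

      • PORT=3000 node app.js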

      When you are finished editing, save and close the file.

      Next, we will modify our database connection information to remove any configuration credentials. Open the db.js file, which contains this information:

      • nano db.js

      Currently, the file does the following things:

      • Imports Mongoose, the Object Document Mapper (ODM) that we're using to create schemas and models for our application data.
      • Sets the database credentials as constants, including the username and password.
      • Connects to the database using the mongoose.connect method.

      For more information about the file, please see Step 3 of How To Integrate MongoDB with Your Node Application.

      Our first step in modifying the file will be redefining the constants that include sensitive information. Currently, these constants look like this:

      ~/node_project/db.js

      ...
      const MONGO_USERNAME = 'sammy';
      const MONGO_PASSWORD = 'your_password';
      const MONGO_HOSTNAME = '127.0.0.1';
      const MONGO_PORT = '27017';
      const MONGO_DB = 'sharkinfo';
      ...
      

      Instead of hardcoding this information, you can use the process.env object to capture the runtime values for these constants. Modify the block to look like this:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      ...
      

      Save and close the file when you are finished editing.

      At this point, you have modified db.js to work with your application's environment variables, but you still need a way to pass these variables to your application. Let's create an .env file with values that you can pass to your application at runtime.

      Open the file:

      • nano .env

      This file will include the information that you removed from db.js: the username and password for your application's database, as well as the port setting and database name. Remember to update the username, password, and database name listed here with your own information:

      ~/node_project/.env

      MONGO_USERNAME=sammy
      MONGO_PASSWORD=your_password
      MONGO_PORT=27017
      MONGO_DB=sharkinfo
      

      Note that we have removed the host setting that originally appeared in db.js. We will now define our host at the level of the Docker Compose file, along with other information about our services and containers.

      Save and close this file when you are finished editing.

      Because your .env file contains sensitive information, you will want to ensure that it is included in your project's .dockerignore and .gitignore files so that it is not copied to version control or into your containers.

      Open your .dockerignore file:

      • nano .dockerignore

      Add the following line to the bottom of the file:

      ~/node_project/.dockerignore

      ...
      .gitignore
      .env
      

      Save and close the file when you are finished editing.

      The .gitignore file in this repository already includes .env, but feel free to check that it is there:

      • nano .gitignore

      ~/node_project/.gitignore

      ...
      .env
      ...
      

      At this point, you have successfully extracted sensitive information from your project code and taken measures to control how and where this information gets copied. Now you can add more robustness to your database connection code to optimize it for a containerized workflow.

      Step 3 — Modifying Database Connection Settings

      Our next step will be to make our database connection method more robust by adding code that handles cases where our application fails to connect to our database. Introducing this level of resilience to your application code is a recommended practice when working with containers using Compose.

      Open db.js for editing:

      • nano db.js

      You will see the code that we added earlier, along with the url constant for Mongo's connection URI and the Mongoose connect method:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, {useNewUrlParser: true});
      

      Currently, our connect method accepts an option that tells Mongoose to use Mongo's new URL parser. Let's add a few more options to this method to define parameters for reconnection attempts. We can do this by creating an options constant that includes the relevant information, in addition to the new URL parser option. Below your Mongo constants, add the following definition for an options constant:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500, 
        connectTimeoutMS: 10000,
      };
      ...
      

      The reconnectTries option tells Mongoose to continue trying to connect indefinitely, while reconnectInterval defines the period between connection attempts in milliseconds. connectTimeoutMS defines 10 seconds as the period that the Mongo driver will wait before failing the connection attempt.

      We can now use the new options constant in the Mongoose connect method to fine tune our Mongoose connection settings. We will also add a promise to handle potential connection errors.

      Currently, the Mongoose connect method looks like this:

      ~/node_project/db.js

      ...
      mongoose.connect(url, {useNewUrlParser: true});
      

      Delete the existing connect method and replace it with the following code, which includes the options constant and a promise:

      ~/node_project/db.js

      ...
      mongoose.connect(url, options).then( function() {
        console.log('MongoDB is connected');
      })
        .catch( function(err) {
        console.log(err);
      });
      

      In the case of a successful connection, our function logs an appropriate message; otherwise it will catch and log the error, allowing us to troubleshoot.

      The finished file will look like this:

      ~/node_project/db.js

      const mongoose = require('mongoose');
      
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500,
        connectTimeoutMS: 10000,
      };
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, options).then( function() {
        console.log('MongoDB is connected');
      })
        .catch( function(err) {
        console.log(err);
      });
      

      Save and close the file when you have finished editing.

      You have now added resiliency to your application code to handle cases where your application might fail to connect to your database. With this code in place, you can move on to defining your services with Compose.

      Step 4 — Defining Services with Docker Compose

      With your code refactored, you are ready to write the docker-compose.yml file with your service definitions. A service in Compose is a running container, and service definitions — which you will include in your docker-compose.yml file — contain information about how each container image will run. The Compose tool allows you to define multiple services to build multi-container applications.

      Before defining our services, however, we will add a tool to our project called wait-for to ensure that our application only attempts to connect to our database once the database startup tasks are complete. This wrapper script uses netcat to poll whether or not a specific host and port are accepting TCP connections. Using it allows you to control your application's attempts to connect to your database by testing whether or not the database is ready to accept connections.

      Though Compose allows you to specify dependencies between services using the depends_on option, this order is based on whether or not the container is running rather than its readiness. Using depends_on won't be optimal for our setup, since we want our application to connect only when the database startup tasks, including adding a user and password to the admin authentication database, are complete. For more information on using wait-for and other tools to control startup order, please see the relevant recommendations in the Compose documentation.
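
      For reference, a depends_on declaration is only a minimal addition to a service definition; a rough sketch looks like this:

      ...
      services:
        nodejs:
          depends_on:
            - db   # waits only for the db container to start, not for MongoDB readiness
      ...

      Again, this only guarantees that the db container has started, not that the MongoDB instance inside it is ready to accept connections, which is why we will use wait-for instead.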

      Open a file called wait-for.sh:

      • nano wait-for.sh

      Paste the following code into the file to create the polling function:

      ~/node_project/wait-for.sh

      #!/bin/sh
      
      # original script: https://github.com/eficode/wait-for/blob/master/wait-for
      
      TIMEOUT=15
      QUIET=0
      # cmdname is referenced in the usage message below
      cmdname=$(basename "$0")
      
      echoerr() {
        if [ "$QUIET" -ne 1 ]; then printf "%sn" "$*" 1>&2; fi
      }
      
      usage() {
        exitcode="$1"
        cat << USAGE >&2
      Usage:
        $cmdname host:port [-t timeout] [-- command args]
        -q | --quiet                        Do not output any status messages
        -t TIMEOUT | --timeout=timeout      Timeout in seconds, zero for no timeout
        -- COMMAND ARGS                     Execute command with args after the test finishes
      USAGE
        exit "$exitcode"
      }
      
      wait_for() {
        for i in `seq $TIMEOUT` ; do
          nc -z "$HOST" "$PORT" > /dev/null 2>&1
      
          result=$?
          if [ $result -eq 0 ] ; then
            if [ $# -gt 0 ] ; then
              exec "$@"
            fi
            exit 0
          fi
          sleep 1
        done
        echo "Operation timed out" >&2
        exit 1
      }
      
      while [ $# -gt 0 ]
      do
        case "$1" in
          *:* )
          HOST=$(printf "%s\n" "$1"| cut -d : -f 1)
          PORT=$(printf "%s\n" "$1"| cut -d : -f 2)
          shift 1
          ;;
          -q | --quiet)
          QUIET=1
          shift 1
          ;;
          -t)
          TIMEOUT="$2"
          if [ "$TIMEOUT" = "" ]; then break; fi
          shift 2
          ;;
          --timeout=*)
          TIMEOUT="${1#*=}"
          shift 1
          ;;
          --)
          shift
          break
          ;;
          --help)
          usage 0
          ;;
          *)
          echoerr "Unknown argument: $1"
          usage 1
          ;;
        esac
      done
      
      if [ "$HOST" = "" -o "$PORT" = "" ]; then
        echoerr "Error: you need to provide a host and port to test."
        usage 2
      fi
      
      wait_for "$@"
      

      Save and close the file when you are finished adding the code.

      Make the script executable:

      • chmod +x wait-for.sh

      Next, open the docker-compose.yml file:

      • nano docker-compose.yml

      First, define the nodejs application service by adding the following code to the file:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB 
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
      

      The nodejs service definition includes the following options:

      • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
      • context: This defines the build context for the image build — in this case, the current project directory.
      • dockerfile: This specifies the Dockerfile in your current project directory as the file Compose will use to build the application image. For more information about this file, please see How To Build a Node.js Application with Docker.
      • image, container_name: These apply names to the image and container.
      • restart: This defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.
      • env_file: This tells Compose that we would like to add environment variables from a file called .env, located in our build context.
      • environment: Using this option allows you to add the Mongo connection settings you defined in the .env file. Note that we are not setting NODE_ENV to development, since this is Express's default behavior if NODE_ENV is not set. When moving to production, you can set this to production to enable view caching and less verbose error messages.
        Also note that we have specified the db database container as the host, as discussed in Step 2.
      • ports: This maps port 80 on the host to port 8080 on the container.
      • volumes: We are including two types of mounts here:

        • The first is a bind mount that mounts our application code on the host to the /home/node/app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be populated immediately in the container.
        • The second is a named volume, node_modules. When Docker runs the npm install instruction listed in the application Dockerfile, npm will create a new node_modules directory on the container that includes the packages required to run the application. The bind mount we just created will hide this newly created node_modules directory, however. Since node_modules on the host is empty, the bind will map an empty directory to the container, overriding the new node_modules directory and preventing our application from starting. The named node_modules volume solves this problem by persisting the contents of the /home/node/app/node_modules directory and mounting it to the container, hiding the bind. You can verify this behavior with the command shown after this list.

        Keep the following points in mind when using this approach:

        • Your bind will mount the contents of the node_modules directory on the container to the host and this directory will be owned by root, since the named volume was created by Docker.
        • If you have a pre-existing node_modules directory on the host, it will override the node_modules directory created on the container. The setup that we're building in this tutorial assumes that you do not have a pre-existing node_modules directory and that you won't be working with npm on your host. This is in keeping with a twelve-factor approach to application development, which minimizes dependencies between execution environments.
      • networks: This specifies that our application service will join the app-network network, which we will define at the bottom of the file.

      • command: This option lets you set the command that should be executed when Compose runs the image. Note that this will override the CMD instruction that we set in our application Dockerfile. Here, we are running the application using the wait-for script, which will poll the db service on port 27017 to test whether or not the database service is ready. Once the readiness test succeeds, the script will execute the command we have set, /home/node/app/node_modules/.bin/nodemon app.js, to start the application with nodemon. This will ensure that any future changes we make to our code are reloaded without our having to restart the application.
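
      Once your services are running in Step 5, you can verify that the named volume is doing its job by listing the packages inside the running container. If the volume is mounted correctly, this directory will be populated even though node_modules on your host is empty:

      • docker-compose exec nodejs ls /home/node/app/node_modules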

      Next, create the db service by adding the following code below the application service definition:

      ~/node_project/docker-compose.yml

      ...
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:  
            - dbdata:/data/db   
          networks:
            - app-network  
      

      Some of the settings we defined for the nodejs service remain the same, but we've also made the following changes to the image, environment, and volumes definitions:

      • image: To create this service, Compose will pull the 4.1.8-xenial Mongo image from Docker Hub. We are pinning a particular version to avoid possible future conflicts as the Mongo image changes. For more information about version pinning, please see the Docker documentation on Dockerfile best practices.
      • MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD: The mongo image makes these environment variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the container starts. We have set MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD using the values from our .env file, which we pass to the db service using the env_file option. Doing this means that our sammy application user will be a root user on the database instance, with access to all of the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.

        Note: Keep in mind that these variables will not take effect if you start the container with an existing data directory in place.
      • dbdata:/data/db: The named volume dbdata will persist the data stored in Mongo's default data directory, /data/db. This will ensure that you don't lose data in cases where you stop or remove containers.

      We've also added the db service to the app-network network with the networks option.

      As a final step, add the volume and network definitions to the bottom of the file:

      ~/node_project/docker-compose.yml

      ...
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      The user-defined bridge network app-network enables communication between our containers since they are on the same Docker daemon host. This streamlines traffic and communication within the application, as it opens all ports between containers on the same bridge network, while exposing no ports to the outside world. Thus, our db and nodejs containers can communicate with each other, and we only need to expose port 80 for front-end access to the application.

      Our top-level volumes key defines the volumes dbdata and node_modules. When Docker creates volumes, the contents of the volume are stored in a part of the host filesystem, /var/lib/docker/volumes/, that's managed by Docker. The contents of each volume are stored in a directory under /var/lib/docker/volumes/ and get mounted to any container that uses the volume. In this way, the shark information data that our users will create will persist in the dbdata volume even if we remove and recreate the db container.
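
      If you would like to confirm this once your containers are running, you can inspect a volume directly. Compose prefixes volume names with the project directory name, so assuming a project directory called node_project, you could run:

      • docker volume inspect node_project_dbdata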

      The finished docker-compose.yml file will look like this:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js 
      
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: unless-stopped
          environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:     
            - dbdata:/data/db
          networks:
            - app-network  
      
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      Save and close the file when you are finished editing.

      With your service definitions in place, you are ready to start the application.

      Step 5 — Testing the Application

      With your docker-compose.yml file in place, you can create your services with the docker-compose up command. You can also test that your data will persist by stopping and removing your containers with docker-compose down.

      First, build the container images and create the services by running docker-compose up with the -d flag, which will run the nodejs and db containers in the background:

      • docker-compose up -d

      You will see output confirming that your services have been created:

      Output

      ...
      Creating db ... done
      Creating nodejs ... done

      You can also get more detailed information about the startup processes by displaying the log output from the services:

      • docker-compose logs

      You will see something like this if everything has started correctly:

      Output

      ...
      nodejs | [nodemon] starting `node app.js`
      nodejs | Example app listening on 8080!
      nodejs | MongoDB is connected
      ...
      db | 2019-02-22T17:26:27.329+0000 I ACCESS [conn2] Successfully authenticated as principal sammy on admin

      You can also check the status of your containers with docker-compose ps:

      • docker-compose ps

      You will see output indicating that your containers are running:

      Output

       Name                Command               State           Ports
      ----------------------------------------------------------------------
      db       docker-entrypoint.sh mongod       Up      27017/tcp
      nodejs   ./wait-for.sh db:27017 -- ...     Up      0.0.0.0:80->8080/tcp

      With your services running, you can visit http://your_server_ip in the browser. You will see a landing page that looks like this:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

      Shark Info Form

      In the form, add a shark of your choosing. For the purpose of this demonstration, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      As a final step, we can test that the data you've just entered will persist if you remove your database container.

      Back at your terminal, type the following command to stop and remove your containers and network:

      • docker-compose down

      Note that we are not including the --volumes option; hence, our dbdata volume is not removed.

      The following output confirms that your containers and network have been removed:

      Output

      Stopping nodejs ... done
      Stopping db     ... done
      Removing nodejs ... done
      Removing db     ... done
      Removing network node_project_app-network

      Recreate the containers:

      • docker-compose up -d

      Now head back to the shark information form:

      Shark Info Form

      Enter a new shark of your choosing. We'll go with Whale Shark and Large:

      Enter New Shark

      Once you click Submit, you will see that the new shark has been added to the shark collection in your database without the loss of the data you've already entered:

      Complete Shark Collection

      Your application is now running on Docker containers with data persistence and code synchronization enabled.

      Conclusion

      By following this tutorial, you have created a development setup for your Node application using Docker containers. You've made your project more modular and portable by extracting sensitive information and decoupling your application's state from your application code. You have also configured a boilerplate docker-compose.yml file that you can revise as your development needs and requirements change.

      As you develop, you may be interested in learning more about designing applications for containerized and Cloud Native workflows. Please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes for more information on these topics.

      To learn more about the code used in this tutorial, please see How To Build a Node.js Application with Docker and How To Integrate MongoDB with Your Node Application. For information about deploying a Node application with an Nginx reverse proxy using containers, please see How To Secure a Containerized Node.js Application with Nginx, Let's Encrypt, and Docker Compose.


