
      How To Secure a Containerized Node.js Application with Nginx, Let’s Encrypt, and Docker Compose


      Introduction

      There are multiple ways to enhance the flexibility and security of your Node.js application. Using a reverse proxy like Nginx offers you the ability to load balance requests, cache static content, and implement Transport Layer Security (TLS). Enabling encrypted HTTPS on your server ensures that communication to and from your application remains secure.

      Implementing a reverse proxy with TLS/SSL on containers involves a different set of procedures from working directly on a host operating system. For example, if you were obtaining certificates from Let’s Encrypt for an application running on a server, you would install the required software directly on your host. Containers allow you to take a different approach. Using Docker Compose, you can create containers for your application, your web server, and the Certbot client that will enable you to obtain your certificates. By following these steps, you can take advantage of the modularity and portability of a containerized workflow.

      In this tutorial, you will deploy a Node.js application with an Nginx reverse proxy using Docker Compose. You will obtain TLS/SSL certificates for the domain associated with your application and ensure that it receives a high security rating from SSL Labs. Finally, you will set up a cron job to renew your certificates so that your domain remains secure.

      Prerequisites

      To follow this tutorial, you will need:

      • An Ubuntu 18.04 server, a non-root user with sudo privileges, and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.
      • Docker and Docker Compose installed on your server. For guidance on installing Docker, follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04. For guidance on installing Compose, follow Step 1 of How To Install Docker Compose on Ubuntu 18.04.
      • A registered domain name. This tutorial will use example.com throughout. You can get one for free at Freenom, or use the domain registrar of your choice.
      • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them to a DigitalOcean account, if that’s what you’re using:

        • An A record with example.com pointing to your server’s public IP address.
        • An A record with www.example.com pointing to your server’s public IP address.

      Step 1 — Cloning and Testing the Node Application

      As a first step, we will clone the repository with the Node application code, which includes the Dockerfile that we will use to build our application image with Compose. We can first test the application by building and running it with the docker run command, without a reverse proxy or SSL.

      In your non-root user’s home directory, clone the nodejs-image-demo repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Build a Node.js Application with Docker.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/nodejs-image-demo.git node_project

      Change to the node_project directory:
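• cd node_project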

      In this directory, there is a Dockerfile that contains instructions for building a Node application using the Docker node:10 image and the contents of your current project directory. You can look at the contents of the Dockerfile by typing:
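• cat Dockerfile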

      Output

FROM node:10

RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app

WORKDIR /home/node/app

COPY package*.json ./

RUN npm install

COPY --chown=node:node . .

USER node

EXPOSE 8080

CMD [ "node", "app.js" ]

      These instructions build a Node image by copying the project code from the current directory to the container and installing dependencies with npm install. They also take advantage of Docker's caching and image layering by separating the copy of package.json and package-lock.json, containing the project's listed dependencies, from the copy of the rest of the application code. Finally, the instructions specify that the container will be run as the non-root node user with the appropriate permissions set on the application code and node_modules directories.

      For more information about this Dockerfile and Node image best practices, please see the complete discussion in Step 3 of How To Build a Node.js Application with Docker.

      To test the application without SSL, you can build and tag the image using docker build and the -t flag. We will call the image node-demo, but you are free to name it something else:

      • docker build -t node-demo .

      Once the build process is complete, you can list your images with docker images:
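• docker images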

      You will see the following output, confirming the application image build:

      Output

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
node-demo           latest              23961524051d        7 seconds ago       896MB
node                10                  8a752d5af4ce        10 days ago         894MB

      Next, create the container with docker run. We will include three flags with this command:

      • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, see this discussion in the Docker docs on port binding.
      • -d: This runs the container in the background.
      • --name: This allows us to give the container a memorable name.

Run the following command to create and start the container:

      • docker run --name node-demo -p 80:8080 -d node-demo

      Inspect your running containers with docker ps:
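• docker ps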

      You will see output confirming that your application container is running:

      Output

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                  NAMES
4133b72391da        node-demo           "node app.js"       17 seconds ago      Up 16 seconds       0.0.0.0:80->8080/tcp   node-demo

      You can now visit your domain to test your setup: http://example.com. Remember to replace example.com with your own domain name. Your application will display the following landing page:

      Application Landing Page

      Now that you have tested the application, you can stop the container and remove the images. Use docker ps again to get your CONTAINER ID:
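• docker ps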

      Output

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS                  NAMES
4133b72391da        node-demo           "node app.js"       17 seconds ago      Up 16 seconds       0.0.0.0:80->8080/tcp   node-demo

      Stop the container with docker stop. Be sure to replace the CONTAINER ID listed here with your own application CONTAINER ID:
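• docker stop 4133b72391da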

      You can now remove the stopped container and all of the images, including unused and dangling images, with docker system prune and the -a flag:
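• docker system prune -a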

      Type y when prompted in the output to confirm that you would like to remove the stopped container and images. Be advised that this will also remove your build cache.

      With your application image tested, you can move on to building the rest of your setup with Docker Compose.

      Step 2 — Defining the Web Server Configuration

      With our application Dockerfile in place, we can create a configuration file to run our Nginx container. We will start with a minimal configuration that will include our domain name, document root, proxy information, and a location block to direct Certbot's requests to the .well-known directory, where it will place a temporary file to validate that the DNS for our domain resolves to our server.

      First, create a directory in the current project directory for the configuration file:
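• mkdir nginx-conf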

      Open the file with nano or your favorite editor:

      • nano nginx-conf/nginx.conf

      Add the following server block to proxy user requests to your Node application container and to direct Certbot's requests to the .well-known directory. Be sure to replace example.com with your own domain name:

      ~/node_project/nginx-conf/nginx.conf

      server {
              listen 80;
              listen [::]:80;
      
              root /var/www/html;
              index index.html index.htm index.nginx-debian.html;
      
              server_name example.com www.example.com;
      
              location / {
                      proxy_pass http://nodejs:8080;
              }
      
              location ~ /.well-known/acme-challenge {
                      allow all;
                      root /var/www/html;
              }
      }
      

      This server block will allow us to start the Nginx container as a reverse proxy, which will pass requests to our Node application container. It will also allow us to use Certbot's webroot plugin to obtain certificates for our domain. This plugin depends on the HTTP-01 validation method, which uses an HTTP request to prove that Certbot can access resources from a server that responds to a given domain name.

      Once you have finished editing, save and close the file. To learn more about Nginx server and location block algorithms, please refer to this article on Understanding Nginx Server and Location Block Selection Algorithms.

      With the web server configuration details in place, we can move on to creating our docker-compose.yml file, which will allow us to create our application services and the Certbot container we will use to obtain our certificates.

      Step 3 — Creating the Docker Compose File

The docker-compose.yml file will define our services, including the Node application and web server. It will specify details like named volumes, which will be critical to sharing SSL credentials between containers, as well as network and port information. It will also allow us to define commands to run when our containers are created. This file is the central resource that defines how our services will work together.

      Open the file in your current directory:
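• nano docker-compose.yml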

      First, define the application service:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
      

      The nodejs service definition includes the following:

      • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
      • context: This defines the build context for the application image build. In this case, it's the current project directory.
      • dockerfile: This specifies the Dockerfile that Compose will use for the build — the Dockerfile you looked at in Step 1.
      • image, container_name: These apply names to the image and container.
      • restart: This defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.

      Note that we are not including bind mounts with this service, since our setup is focused on deployment rather than development. For more information, please see the Docker documentation on bind mounts and volumes.

      To enable communication between the application and web server containers, we will also add a bridge network called app-network below the restart definition:

      ~/node_project/docker-compose.yml

      services:
        nodejs:
      ...
          networks:
            - app-network
      

      A user-defined bridge network like this enables communication between containers on the same Docker daemon host. This streamlines traffic and communication within your application, since it opens all ports between containers on the same bridge network, while exposing no ports to the outside world. Thus, you can be selective about opening only the ports you need to expose your frontend services.
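If you would like to see this in practice later, once your containers are running at the end of Step 4 you can list your Docker networks and inspect the one Compose creates for this project. This is an optional check rather than part of the setup; note that Compose prefixes the network name with a project name derived from your project directory, so look for the entry containing app-network:

• docker network ls

You can then run docker network inspect on that entry to confirm which containers are attached to it and that no ports are published to the host.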

      Next, define the webserver service:

      ~/node_project/docker-compose.yml

      ...
  webserver:
          image: nginx:latest
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
          volumes:
            - web-root:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
          depends_on:
            - nodejs
          networks:
            - app-network
      

      Some of the settings we defined for the nodejs service remain the same, but we've also made the following changes:

      • image: This tells Compose to pull the latest Nginx image from Docker Hub.
      • ports: This exposes port 80 to enable the configuration options we've defined in our Nginx configuration.

      We have also specified the following named volumes and bind mounts:

• web-root:/var/www/html: This will add our site's static assets, copied to a volume called web-root, to the /var/www/html directory on the container.
      • ./nginx-conf:/etc/nginx/conf.d: This will bind mount the Nginx configuration directory on the host to the relevant directory on the container, ensuring that any changes we make to files on the host will be reflected in the container.
      • certbot-etc:/etc/letsencrypt: This will mount the relevant Let's Encrypt certificates and keys for our domain to the appropriate directory on the container.
      • certbot-var:/var/lib/letsencrypt: This mounts Let's Encrypt's default working directory to the appropriate directory on the container.

      Next, add the configuration options for the certbot container. Be sure to replace the domain and email information with your own domain name and contact email:

      ~/node_project/docker-compose.yml

      ...
        certbot:
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - web-root:/var/www/html
          depends_on:
            - webserver
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com  -d www.example.com 
      

      This definition tells Compose to pull the certbot/certbot image from Docker Hub. It also uses named volumes to share resources with the Nginx container, including the domain certificates and key in certbot-etc, the Let's Encrypt working directory in certbot-var, and the application code in web-root.

      Again, we've used depends_on to specify that the certbot container should be started once the webserver service is running.

      We've also included a command option that specifies the command to run when the container is started. It includes the certonly subcommand with the following options:

      • --webroot: This tells Certbot to use the webroot plugin to place files in the webroot folder for authentication.
      • --webroot-path: This specifies the path of the webroot directory.
      • --email: Your preferred email for registration and recovery.
      • --agree-tos: This specifies that you agree to ACME's Subscriber Agreement.
      • --no-eff-email: This tells Certbot that you do not wish to share your email with the Electronic Frontier Foundation (EFF). Feel free to omit this if you would prefer.
      • --staging: This tells Certbot that you would like to use Let's Encrypt's staging environment to obtain test certificates. Using this option allows you to test your configuration options and avoid possible domain request limits. For more information about these limits, please see Let's Encrypt's rate limits documentation.
      • -d: This allows you to specify domain names you would like to apply to your request. In this case, we've included example.com and www.example.com. Be sure to replace these with your own domain preferences.

      As a final step, add the volume and network definitions. Be sure to replace the username here with your own non-root user:

      ~/node_project/docker-compose.yml

      ...
      volumes:
        certbot-etc:
        certbot-var:
        web-root:
          driver: local
          driver_opts:
            type: none
            device: /home/sammy/node_project/views/
            o: bind
      
      networks:
        app-network:
          driver: bridge
      

Our named volumes include our Certbot certificate and working directory volumes, and the volume for our site's static assets, web-root. The default driver for Docker volumes is the local driver, which on Linux accepts options similar to those of the mount command. Thanks to this, we are able to specify a list of driver options with driver_opts that mount the views directory on the host, which contains our application's static assets, to the volume at runtime. The directory contents can then be shared between containers. For more information about the contents of the views directory, please see Step 2 of How To Build a Node.js Application with Docker.

      The docker-compose.yml file will look like this when finished:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          networks:
            - app-network
      
        webserver:
          image: nginx:latest
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
          volumes:
            - web-root:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
          depends_on:
            - nodejs
          networks:
            - app-network
      
        certbot:
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - web-root:/var/www/html
          depends_on:
            - webserver
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com  -d www.example.com 
      
      volumes:
        certbot-etc:
        certbot-var:
        web-root:
          driver: local
          driver_opts:
            type: none
            device: /home/sammy/node_project/views/
            o: bind
      
      networks:
        app-network:
          driver: bridge  
      

      With the service definitions in place, you are ready to start the containers and test your certificate requests.

      Step 4 — Obtaining SSL Certificates and Credentials

      We can start our containers with docker-compose up, which will create and run our containers and services in the order we have specified. If our domain requests are successful, we will see the correct exit status in our output and the right certificates mounted in the /etc/letsencrypt/live folder on the webserver container.

      Create the services with docker-compose up and the -d flag, which will run the nodejs and webserver containers in the background:
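• docker-compose up -d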

      You will see output confirming that your services have been created:

      Output

Creating nodejs    ... done
Creating webserver ... done
Creating certbot   ... done

      Using docker-compose ps, check the status of your services:
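• docker-compose ps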

      If everything was successful, your nodejs and webserver services should be Up and the certbot container will have exited with a 0 status message:

      Output

  Name                 Command               State          Ports
------------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 0
nodejs      node app.js                      Up       8080/tcp
webserver   nginx -g daemon off;             Up       0.0.0.0:80->80/tcp

      If you see anything other than Up in the State column for the nodejs and webserver services, or an exit status other than 0 for the certbot container, be sure to check the service logs with the docker-compose logs command:

      • docker-compose logs service_name

      You can now check that your credentials have been mounted to the webserver container with docker-compose exec:

      • docker-compose exec webserver ls -la /etc/letsencrypt/live

      If your request was successful, you will see output like this:

      Output

total 16
drwx------ 3 root root 4096 Dec 23 16:48 .
drwxr-xr-x 9 root root 4096 Dec 23 16:48 ..
-rw-r--r-- 1 root root  740 Dec 23 16:48 README
drwxr-xr-x 2 root root 4096 Dec 23 16:48 example.com

      Now that you know your request will be successful, you can edit the certbot service definition to remove the --staging flag.

      Open docker-compose.yml:
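• nano docker-compose.yml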

      Find the section of the file with the certbot service definition, and replace the --staging flag in the command option with the --force-renewal flag, which will tell Certbot that you want to request a new certificate with the same domains as an existing certificate. The certbot service definition should now look like this:

      ~/node_project/docker-compose.yml

      ...
        certbot:
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - web-root:/var/www/html
          depends_on:
            - webserver
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
      ...
      

      You can now run docker-compose up to recreate the certbot container and its relevant volumes. We will also include the --no-deps option to tell Compose that it can skip starting the webserver service, since it is already running:

      • docker-compose up --force-recreate --no-deps certbot

      You will see output indicating that your certificate request was successful:

      Output

certbot      | IMPORTANT NOTES:
certbot      |  - Congratulations! Your certificate and chain have been saved at:
certbot      |    /etc/letsencrypt/live/example.com/fullchain.pem
certbot      |    Your key file has been saved at:
certbot      |    /etc/letsencrypt/live/example.com/privkey.pem
certbot      |    Your cert will expire on 2019-03-26. To obtain a new or tweaked
certbot      |    version of this certificate in the future, simply run certbot
certbot      |    again. To non-interactively renew *all* of your certificates, run
certbot      |    "certbot renew"
certbot      |  - Your account credentials have been saved in your Certbot
certbot      |    configuration directory at /etc/letsencrypt. You should make a
certbot      |    secure backup of this folder now. This configuration directory will
certbot      |    also contain certificates and private keys obtained by Certbot so
certbot      |    making regular backups of this folder is ideal.
certbot      |  - If you like Certbot, please consider supporting our work by:
certbot      |
certbot      |    Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
certbot      |    Donating to EFF:                    https://eff.org/donate-le
certbot      |
certbot exited with code 0

      With your certificates in place, you can move on to modifying your Nginx configuration to include SSL.

      Step 5 — Modifying the Web Server Configuration and Service Definition

      Enabling SSL in our Nginx configuration will involve adding an HTTP redirect to HTTPS and specifying our SSL certificate and key locations. It will also involve specifying our Diffie-Hellman group, which we will use for Perfect Forward Secrecy.

      Since you are going to recreate the webserver service to include these additions, you can stop it now:

      • docker-compose stop webserver

      Next, create a directory in your current project directory for your Diffie-Hellman key:
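• mkdir dhparam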

      Generate your key with the openssl command:

      • sudo openssl dhparam -out /home/sammy/node_project/dhparam/dhparam-2048.pem 2048

      It will take a few moments to generate the key.

      To add the relevant Diffie-Hellman and SSL information to your Nginx configuration, first remove the Nginx configuration file you created earlier:
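• rm nginx-conf/nginx.conf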

      Open another version of the file:

      • nano nginx-conf/nginx.conf

      Add the following code to the file to redirect HTTP to HTTPS and to add SSL credentials, protocols, and security headers. Remember to replace example.com with your own domain:

      ~/node_project/nginx-conf/nginx.conf

      
      server {
              listen 80;
              listen [::]:80;
              server_name example.com www.example.com;
      
              location ~ /.well-known/acme-challenge {
                allow all;
                root /var/www/html;
              }
      
              location / {
                      rewrite ^ https://$host$request_uri? permanent;
              }
      }
      
      server {
              listen 443 ssl http2;
              listen [::]:443 ssl http2;
              server_name example.com www.example.com;
      
              server_tokens off;
      
              ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
              ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
      
              ssl_buffer_size 8k;
      
              ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
      
        ssl_protocols TLSv1.2;
              ssl_prefer_server_ciphers on;
      
              ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
      
              ssl_ecdh_curve secp384r1;
              ssl_session_tickets off;
      
              ssl_stapling on;
              ssl_stapling_verify on;
              resolver 8.8.8.8;
      
              location / {
                      try_files $uri @nodejs;
              }
      
              location @nodejs {
                      proxy_pass http://nodejs:8080;
                      add_header X-Frame-Options "SAMEORIGIN" always;
                      add_header X-XSS-Protection "1; mode=block" always;
                      add_header X-Content-Type-Options "nosniff" always;
                      add_header Referrer-Policy "no-referrer-when-downgrade" always;
                      add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
                      # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
                      # enable strict transport security only if you understand the implications
              }
      
              root /var/www/html;
              index index.html index.htm index.nginx-debian.html;
      }
      

      The HTTP server block specifies the webroot for Certbot renewal requests to the .well-known/acme-challenge directory. It also includes a rewrite directive that directs HTTP requests to the root directory to HTTPS.

The HTTPS server block enables ssl and http2. To read more about how HTTP/2 iterates on HTTP protocols and the benefits it can have for website performance, please see the introduction to How To Set Up Nginx with HTTP/2 Support on Ubuntu 18.04. This block also includes a series of options to ensure that you are using the most up-to-date SSL protocols and ciphers and that OCSP stapling is turned on. OCSP stapling allows you to offer a time-stamped response from your certificate authority during the initial TLS handshake, which can speed up the authentication process.

      The block also specifies your SSL and Diffie-Hellman credentials and key locations.

Finally, we've moved the proxy pass information to this block, including a location block with a try_files directive, pointing requests to our aliased Node.js application container, and a location block for that alias, which includes security headers that will enable us to get A ratings on tools like the SSL Labs and Security Headers server test sites. These headers include X-Frame-Options, X-Content-Type-Options, Referrer-Policy, Content-Security-Policy, and X-XSS-Protection. The HTTP Strict Transport Security (HSTS) header is commented out; enable this only if you understand the implications and have assessed its "preload" functionality.

      Once you have finished editing, save and close the file.

      Before recreating the webserver service, you will need to add a few things to the service definition in your docker-compose.yml file, including relevant port information for HTTPS and a Diffie-Hellman volume definition.

      Open the file:
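• nano docker-compose.yml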

      In the webserver service definition, add the following port mapping and the dhparam named volume:

      ~/node_project/docker-compose.yml

      ...
  webserver:
          image: nginx:latest
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - web-root:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - dhparam:/etc/ssl/certs
          depends_on:
            - nodejs
          networks:
            - app-network
      

      Next, add the dhparam volume to your volumes definitions:

      ~/node_project/docker-compose.yml

      ...
      volumes:
        ...
        dhparam:
          driver: local
          driver_opts:
            type: none
            device: /home/sammy/node_project/dhparam/
            o: bind
      

      Similarly to the web-root volume, the dhparam volume will mount the Diffie-Hellman key stored on the host to the webserver container.

      Save and close the file when you are finished editing.

      Recreate the webserver service:

      • docker-compose up -d --force-recreate --no-deps webserver

      Check your services with docker-compose ps:
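• docker-compose ps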

      You should see output indicating that your nodejs and webserver services are running:

      Output

  Name                 Command               State                     Ports
----------------------------------------------------------------------------------------------
certbot     certbot certonly --webroot ...   Exit 0
nodejs      node app.js                      Up       8080/tcp
webserver   nginx -g daemon off;             Up       0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp

      Finally, you can visit your domain to ensure that everything is working as expected. Navigate your browser to https://example.com, making sure to substitute example.com with your own domain name. You will see the following landing page:

      Application Landing Page

      You should also see the lock icon in your browser's security indicator. If you would like, you can navigate to the SSL Labs Server Test landing page or the Security Headers server test landing page. The configuration options we've included should earn your site an A rating on both.
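For a quick check from your server's command line, you can also use curl, which ships with Ubuntu 18.04, to inspect the headers your site returns. This step is optional, and your output will vary:

• curl -I https://example.com

The response should arrive over HTTPS and include the security headers you added to the Nginx configuration, such as X-Frame-Options and Content-Security-Policy.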

      Step 6 — Renewing Certificates

      Let's Encrypt certificates are valid for 90 days, so you will want to set up an automated renewal process to ensure that they do not lapse. One way to do this is to create a job with the cron scheduling utility. In this case, we will schedule a cron job using a script that will renew our certificates and reload our Nginx configuration.

      Open a script called ssl_renew.sh in your project directory:
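• nano ssl_renew.sh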

      Add the following code to the script to renew your certificates and reload your web server configuration:

      ~/node_project/ssl_renew.sh

      #!/bin/bash
      
/usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml run certbot renew --dry-run \
&& /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml kill -s SIGHUP webserver
      

      In addition to specifying the location of our docker-compose binary, we also specify the location of our docker-compose.yml file in order to run docker-compose commands. In this case, we are using docker-compose run to start a certbot container and to override the command provided in our service definition with another: the renew subcommand, which will renew certificates that are close to expiring. We've included the --dry-run option here to test our script.

      The script then uses docker-compose kill to send a SIGHUP signal to the webserver container to reload the Nginx configuration. For more information on using this process to reload your Nginx configuration, please see this Docker blog post on deploying the official Nginx image with Docker.

      Close the file when you are finished editing. Make it executable:
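• chmod +x ssl_renew.sh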

      Next, open your root crontab file to run the renewal script at a specified interval:
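• sudo crontab -e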

      If this is your first time editing this file, you will be asked to choose an editor:

      crontab

      no crontab for root - using an empty one
      Select an editor.  To change later, run 'select-editor'.
        1. /bin/ed
        2. /bin/nano        <---- easiest
        3. /usr/bin/vim.basic
        4. /usr/bin/vim.tiny
      Choose 1-4 [2]: 
      ...
      

      At the bottom of the file, add the following line:

      crontab

      ...
      */5 * * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1
      

      This will set the job interval to every five minutes, so you can test whether or not your renewal request has worked as intended. We have also created a log file, cron.log, to record relevant output from the job.

      After five minutes, check cron.log to see whether or not the renewal request has succeeded:

      • tail -f /var/log/cron.log

      You should see output confirming a successful renewal:

      Output

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates below have not been saved.)

Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/example.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
**          (The test certificates above have not been saved.)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Killing webserver ... done

      You can now modify the crontab file to set a daily interval. To run the script every day at noon, for example, you would modify the last line of the file to look like this:

      crontab

      ...
      0 12 * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1
      

      You will also want to remove the --dry-run option from your ssl_renew.sh script:

      ~/node_project/ssl_renew.sh

      #!/bin/bash
      
/usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml run certbot renew \
&& /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml kill -s SIGHUP webserver
      

      Your cron job will ensure that your Let's Encrypt certificates don't lapse by renewing them when they are eligible.

      Conclusion

      You have used containers to set up and run a Node application with an Nginx reverse proxy. You have also secured SSL certificates for your application's domain and set up a cron job to renew these certificates when necessary.

      If you are interested in learning more about Let's Encrypt plugins, please see our articles on using the Nginx plugin or the standalone plugin.

You can also learn more about Docker Compose and multi-container applications by looking at the Compose documentation.




      How To Set Up Laravel, Nginx, and MySQL with Docker Compose


      The author selected The FreeBSD Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Over the past few years, Docker has become a frequently used solution for deploying applications thanks to how it simplifies running and deploying applications in ephemeral containers. When using a LEMP application stack, for example, with PHP, Nginx, MySQL and the Laravel framework, Docker can significantly streamline the setup process.

      Docker Compose has further simplified the development process by allowing developers to define their infrastructure, including application services, networks, and volumes, in a single file. Docker Compose offers an efficient alternative to running multiple docker container create and docker container run commands.

      In this tutorial, you will build a web application using the Laravel framework, with Nginx as the web server and MySQL as the database, all inside Docker containers. You will define the entire stack configuration in a docker-compose file, along with configuration files for PHP, MySQL, and Nginx.

      Prerequisites

      Before you start, you will need:

      Step 1 — Downloading Laravel and Installing Dependencies

      As a first step, we will get the latest version of Laravel and install the dependencies for the project, including Composer, the application-level package manager for PHP. We will install these dependencies with Docker to avoid having to install Composer globally.

      First, check that you are in your home directory and clone the latest Laravel release to a directory called laravel-app:

      • cd ~
      • git clone https://github.com/laravel/laravel.git laravel-app

      Move into the laravel-app directory:
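• cd ~/laravel-app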

      Next, use Docker's composer image to mount the directories that you will need for your Laravel project and avoid the overhead of installing Composer globally:

      • docker run --rm -v $(pwd):/app composer install

Using the -v and --rm flags with docker run creates an ephemeral container that is removed once it exits. The -v flag bind-mounts your current directory into the container, so Composer works directly against the contents of your ~/laravel-app directory, and the vendor folder it creates inside the container is written to your current directory.

      As a final step, set permissions on the project directory so that it is owned by your non-root user:

      • sudo chown -R $USER:$USER ~/laravel-app

      This will be important when you write the Dockerfile for your application image in Step 4, as it will allow you to work with your application code and run processes in your container as a non-root user.

      With your application code in place, you can move on to defining your services with Docker Compose.

      Step 2 — Creating the Docker Compose File

      Building your applications with Docker Compose simplifies the process of setting up and versioning your infrastructure. To set up our Laravel application, we will write a docker-compose file that defines our web server, database, and application services.

      Open the file:

      • nano ~/laravel-app/docker-compose.yml

      In the docker-compose file, you will define three services: app, webserver, and db. Add the following code to the file, being sure to replace the root password for MYSQL_ROOT_PASSWORD, defined as an environment variable under the db service, with a strong password of your choice:

      ~/laravel-app/docker-compose.yml

      version: '3'
      services:
      
        #PHP Service
        app:
          build:
            context: .
            dockerfile: Dockerfile
          image: digitalocean.com/php
          container_name: app
          restart: unless-stopped
          tty: true
          environment:
            SERVICE_NAME: app
            SERVICE_TAGS: dev
          working_dir: /var/www
          networks:
            - app-network
      
        #Nginx Service
        webserver:
          image: nginx:alpine
          container_name: webserver
          restart: unless-stopped
          tty: true
          ports:
            - "80:80"
            - "443:443"
          networks:
            - app-network
      
        #MySQL Service
        db:
          image: mysql:5.7.22
          container_name: db
          restart: unless-stopped
          tty: true
          ports:
            - "3306:3306"
          environment:
            MYSQL_DATABASE: laravel
            MYSQL_ROOT_PASSWORD: your_mysql_root_password
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          networks:
            - app-network
      
      #Docker Networks
      networks:
        app-network:
          driver: bridge
      

      The services defined here include:

      • app: This service definition contains the Laravel application and runs a custom Docker image, digitalocean.com/php, that you will define in Step 4. It also sets the working_dir in the container to /var/www.
      • webserver: This service definition pulls the nginx:alpine image from Docker and exposes ports 80 and 443.
      • db: This service definition pulls the mysql:5.7.22 image from Docker and defines a few environmental variables, including a database called laravel for your application and the root password for the database. You are free to name the database whatever you would like, and you should replace your_mysql_root_password with your own strong password. This service definition also maps port 3306 on the host to port 3306 on the container.

Each container_name property defines a name for the container, which corresponds to the name of the service. If you don't define this property, Docker will generate a name for each container by combining a random adjective with the surname of a notable historical figure, separated by an underscore.

      To facilitate communication between containers, the services are connected to a bridge network called app-network. A bridge network uses a software bridge that allows containers connected to the same bridge network to communicate with each other. The bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other. This creates a greater level of security for applications, ensuring that only related services can communicate with one another. It also means that you can define multiple networks and services connecting to related functions: front-end application services can use a frontend network, for example, and back-end services can use a backend network.

      Let's look at how to add volumes and bind mounts to your service definitions to persist your application data.

      Step 3 — Persisting Data

Docker has powerful and convenient features for persisting data. In our application, we will make use of volumes and bind mounts to persist the database as well as application and configuration files. Volumes offer flexibility for backups and persistence beyond a container's lifecycle, while bind mounts facilitate code changes during development, making changes to your host files or directories immediately available in your containers. Our setup will make use of both.

      Warning: By using bind mounts, you make it possible to change the host filesystem through processes running in a container, including creating, modifying, or deleting important system files or directories. This is a powerful ability with security implications, and could impact non-Docker processes on the host system. Use bind mounts with care.

      In the docker-compose file, define a volume called dbdata under the db service definition to persist the MySQL database:

      ~/laravel-app/docker-compose.yml

      ...
      #MySQL Service
      db:
        ...
          volumes:
            - dbdata:/var/lib/mysql
          networks:
            - app-network
        ...
      

      The named volume dbdata persists the contents of the /var/lib/mysql folder present inside the container. This allows you to stop and restart the db service without losing data.

      At the bottom of the file, add the definition for the dbdata volume:

      ~/laravel-app/docker-compose.yml

      ...
      #Volumes
      volumes:
        dbdata:
          driver: local
      

      With this definition in place, you will be able to use this volume across services.
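If you would like to verify this persistence later, once your containers are running in Step 8 you can list your Docker volumes. This is an optional check; the dbdata volume will appear with a project-name prefix that Compose derives from your project directory:

• docker volume ls

Running docker volume inspect on the dbdata entry in that list will show its mountpoint, where MySQL's data files live on the host.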

      Next, add a bind mount to the db service for the MySQL configuration files you will create in Step 7:

      ~/laravel-app/docker-compose.yml

      ...
      #MySQL Service
      db:
        ...
          volumes:
            - dbdata:/var/lib/mysql
            - ./mysql/my.cnf:/etc/mysql/my.cnf
        ...
      

      This bind mount binds ~/laravel-app/mysql/my.cnf to /etc/mysql/my.cnf in the container.

      Next, add bind mounts to the webserver service. There will be two: one for your application code and another for the Nginx configuration definition that you will create in Step 6:

      ~/laravel-app/docker-compose.yml

      #Nginx Service
      webserver:
        ...
        volumes:
            - ./:/var/www
            - ./nginx/conf.d/:/etc/nginx/conf.d/
        networks:
            - app-network
      

      The first bind mount binds the application code in the ~/laravel-app directory to the /var/www directory inside the container. The configuration file that you will add to ~/laravel-app/nginx/conf.d/ will also be mounted to /etc/nginx/conf.d/ in the container, allowing you to add or modify the configuration directory's contents as needed.

      Finally, add the following bind mounts to the app service for the application code and configuration files:

      ~/laravel-app/docker-compose.yml

      #PHP Service
      app:
        ...
        volumes:
             - ./:/var/www
             - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
        networks:
            - app-network
      

      The app service is bind-mounting the ~/laravel-app folder, which contains the application code, to the /var/www folder in the container. This will speed up the development process, since any changes made to your local application directory will be instantly reflected inside the container. You are also binding your PHP configuration file, ~/laravel-app/php/local.ini, to /usr/local/etc/php/conf.d/local.ini inside the container. You will create the local PHP configuration file in Step 5.

      Your docker-compose file will now look like this:

      ~/laravel-app/docker-compose.yml

      version: '3'
      services:
      
        #PHP Service
        app:
          build:
            context: .
            dockerfile: Dockerfile
          image: digitalocean.com/php
          container_name: app
          restart: unless-stopped
          tty: true
          environment:
            SERVICE_NAME: app
            SERVICE_TAGS: dev
          working_dir: /var/www
          volumes:
            - ./:/var/www
            - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
          networks:
            - app-network
      
        #Nginx Service
        webserver:
          image: nginx:alpine
          container_name: webserver
          restart: unless-stopped
          tty: true
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - ./:/var/www
            - ./nginx/conf.d/:/etc/nginx/conf.d/
          networks:
            - app-network
      
        #MySQL Service
        db:
          image: mysql:5.7.22
          container_name: db
          restart: unless-stopped
          tty: true
          ports:
            - "3306:3306"
          environment:
            MYSQL_DATABASE: laravel
            MYSQL_ROOT_PASSWORD: your_mysql_root_password
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          volumes:
            - dbdata:/var/lib/mysql/
            - ./mysql/my.cnf:/etc/mysql/my.cnf
          networks:
            - app-network
      
      #Docker Networks
      networks:
        app-network:
          driver: bridge
      #Volumes
      volumes:
        dbdata:
          driver: local
      

      Save the file and exit your editor when you are finished making changes.

      With your docker-compose file written, you can now build the custom image for your application.

      Step 4 — Creating the Dockerfile

      Docker allows you to specify the environment inside of individual containers with a Dockerfile. A Dockerfile enables you to create custom images that you can use to install the software required by your application and configure settings based on your requirements. You can push the custom images you create to Docker Hub or any private registry.

      Our Dockerfile will be located in our ~/laravel-app directory. Create the file:

      • nano ~/laravel-app/Dockerfile

      This Dockerfile will set the base image and specify the necessary commands and instructions to build the Laravel application image. Add the following code to the file:

~/laravel-app/Dockerfile

      FROM php:7.2-fpm
      
      # Copy composer.lock and composer.json
      COPY composer.lock composer.json /var/www/
      
      # Set working directory
      WORKDIR /var/www
      
      # Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    mysql-client \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl
      
      # Clear cache
      RUN apt-get clean && rm -rf /var/lib/apt/lists/*
      
      # Install extensions
      RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
      RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
      RUN docker-php-ext-install gd
      
      # Install composer
      RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
      
      # Add user for laravel application
      RUN groupadd -g 1000 www
      RUN useradd -u 1000 -ms /bin/bash -g www www
      
      # Copy existing application directory contents
      COPY . /var/www
      
      # Copy existing application directory permissions
      COPY --chown=www:www . /var/www
      
      # Change current user to www
      USER www
      
      # Expose port 9000 and start php-fpm server
      EXPOSE 9000
      CMD ["php-fpm"]
      

First, the Dockerfile creates an image on top of the php:7.2-fpm Docker image. This is a Debian-based image that has the PHP FastCGI implementation PHP-FPM installed. The file also installs Composer and the prerequisite PHP extensions for Laravel, including pdo_mysql, mbstring, and gd.

      The RUN directive specifies the commands to update, install, and configure settings inside the container, including creating a dedicated user and group called www. The WORKDIR instruction specifies the /var/www directory as the working directory for the application.

      Creating a dedicated user and group with restricted permissions mitigates the inherent vulnerability when running Docker containers, which run by default as root. Instead of running this container as root, we've created the www user, who has read/write access to the /var/www folder thanks to the COPY instruction that we are using with the --chown flag to copy the application folder's permissions.

      Finally, the EXPOSE command exposes a port in the container, 9000, for the php-fpm server. CMD specifies the command that should run once the container is created. Here, CMD specifies "php-fpm", which will start the server.

      Save the file and exit your editor when you are finished making changes.

      You can now move on to defining your PHP configuration.

      Step 5 — Configuring PHP

      Now that you have defined your infrastructure in the docker-compose file, you can configure the PHP service to act as a PHP processor for incoming requests from Nginx.

      To configure PHP, you will create the local.ini file inside the php folder. This is the file that you bind-mounted to /usr/local/etc/php/conf.d/local.ini inside the container in Step 2. Creating this file will allow you to override the default php.ini file that PHP reads when it starts.

      Create the php directory:
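• mkdir ~/laravel-app/php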

      Next, open the local.ini file:

      • nano ~/laravel-app/php/local.ini

      To demonstrate how to configure PHP, we'll add the following code to set size limitations for uploaded files:

      ~/laravel-app/php/local.ini

      upload_max_filesize=40M
      post_max_size=40M
      

      The upload_max_filesize and post_max_size directives set the maximum allowed size for uploaded files, and demonstrate how you can set php.ini configurations from your local.ini file. You can put any PHP-specific configuration that you want to override in the local.ini file.

      Save the file and exit your editor.

      With your PHP local.ini file in place, you can move on to configuring Nginx.

      Step 6 — Configuring Nginx

      With the PHP service configured, you can modify the Nginx service to use PHP-FPM as the FastCGI server to serve dynamic content. The FastCGI server is based on a binary protocol for interfacing interactive programs with a web server. For more information, please refer to this article on Understanding and Implementing FastCGI Proxying in Nginx.

      To configure Nginx, you will create an app.conf file with the service configuration in the ~/laravel-app/nginx/conf.d/ folder.

      First, create the nginx/conf.d/ directory:

      • mkdir -p ~/laravel-app/nginx/conf.d

      Next, create the app.conf configuration file:

      • nano ~/laravel-app/nginx/conf.d/app.conf

      Add the following code to the file to specify your Nginx configuration:

      ~/laravel-app/nginx/conf.d/app.conf

      server {
          listen 80;
          index index.php index.html;
          error_log  /var/log/nginx/error.log;
          access_log /var/log/nginx/access.log;
          root /var/www/public;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_pass app:9000;
              fastcgi_index index.php;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_path_info;
          }
          location / {
              try_files $uri $uri/ /index.php?$query_string;
              gzip_static on;
          }
      }
      

      The server block defines the configuration for the Nginx web server with the following directives:

      • listen: This directive defines the port on which the server will listen to incoming requests.
      • error_log and access_log: These directives define the files for writing logs.
      • root: This directive sets the root folder path, forming the complete path to any requested file on the local file system.

In the php location block, the fastcgi_pass directive specifies that the app service is listening on a TCP socket on port 9000. This makes the PHP-FPM server listen over the network rather than on a Unix socket. Though a Unix socket has a slight advantage in speed over a TCP socket, since it skips the network stack, it only works between processes that share a filesystem. A TCP socket, by contrast, lets you connect to distributed services. Because our app container runs separately from our webserver container, a TCP socket makes the most sense for our configuration.

      Save the file and exit your editor when you are finished making changes.

      Thanks to the bind mount you created in Step 2, any changes you make inside the nginx/conf.d/ folder will be directly reflected inside the webserver container.

      Next, let's look at our MySQL settings.

      Step 7 — Configuring MySQL

      With PHP and Nginx configured, you can enable MySQL to act as the database for your application.

      To configure MySQL, you will create the my.cnf file in the mysql folder. This is the file that you bind-mounted to /etc/mysql/my.cnf inside the container in Step 2. This bind mount allows you to override the my.cnf settings as and when required.

      To demonstrate how this works, we'll add settings to the my.cnf file that enable the general query log and specify the log file.

      First, create the mysql directory:

      • mkdir ~/laravel-app/mysql

      Next, make the my.cnf file:

      • nano ~/laravel-app/mysql/my.cnf

      In the file, add the following code to enable the query log and set the log file location:

      ~/laravel-app/mysql/my.cnf

      [mysqld]
      general_log = 1
      general_log_file = /var/lib/mysql/general.log
      

      This my.cnf file enables the general query log by setting general_log to 1, while general_log_file specifies where the log entries will be written.
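
      Because /var/lib/mysql is the MySQL data directory, that is where the general log will appear. Once the containers are up in Step 8, you can follow it from the host; this assumes a standard tail binary in the MySQL image:

      • docker-compose exec db tail -f /var/lib/mysql/general.log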

      Save the file and exit your editor.

      Our next step will be to start the containers.

      Step 8 — Running the Containers and Modifying Environment Settings

      Now that you have defined all of your services in your docker-compose file and created the configuration files for these services, you can start the containers. Before you do, though, make a copy of the .env.example file that Laravel includes by default and name the copy .env, which is the file Laravel expects to use to define its environment:

      • cp .env.example .env

      We will configure the specific details of our setup in this file once we have started the containers.

      With all of your services defined in your docker-compose file, you just need to issue a single command to start all of the containers, create the volumes, and set up and connect the networks:

      • docker-compose up -d

      When you run docker-compose up for the first time, it will download all of the necessary Docker images, which might take a while. Once the images are downloaded and stored on your local machine, Compose will create your containers. The -d flag daemonizes the process, running your containers in the background.

      Once the process is complete, use the following command to list all of the running containers:

      • docker ps

      You will see the following output with details about your app, webserver, and db containers:

      Output

      CONTAINER ID        NAMES               IMAGE                   STATUS              PORTS
      c31b7b3251e0        db                  mysql:5.7.22            Up 2 seconds        0.0.0.0:3306->3306/tcp
      ed5a69704580        app                 digitalocean.com/php    Up 2 seconds        9000/tcp
      5ce4ee31d7c0        webserver           nginx:alpine            Up 2 seconds        0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp

      The CONTAINER ID in this output is a unique identifier for each container, while NAMES lists the service name associated with each. You can use both of these identifiers to access the containers. IMAGE defines the image name for each container, while STATUS provides information about the container's state: whether it's running, restarting, or stopped.
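
      If you would like a narrower view of this output, docker ps accepts a --format flag that takes a Go template. For example, to print only each container's name and state:

      • docker ps --format 'table {{.Names}}\t{{.Status}}'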

      You can now modify the .env file on the app container to include specific details about your setup.

      Open the file using docker-compose exec, which allows you to run specific commands in containers. In this case, you are opening the file for editing:

      • docker-compose exec app nano .env

      Find the block that specifies DB_CONNECTION and update it to reflect the specifics of your setup. You will modify the following fields:

      • DB_HOST will be your db database container.
      • DB_DATABASE will be the laravel database.
      • DB_USERNAME will be the username you will use for your database. In this case, we will use laraveluser.
      • DB_PASSWORD will be the secure password you would like to use for this user account.

      /var/www/.env

      DB_CONNECTION=mysql
      DB_HOST=db
      DB_PORT=3306
      DB_DATABASE=laravel
      DB_USERNAME=laraveluser
      DB_PASSWORD=your_laravel_db_password
      

      Save your changes and exit your editor.

      Next, set the application key for the Laravel application with the php artisan key:generate command. This command will generate a key and copy it to your .env file, ensuring that your user sessions and encrypted data remain secure:

      • docker-compose exec app php artisan key:generate
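
      The generated key is written to the APP_KEY variable in your .env file. If you would like to confirm that it was set, one way is to print that line from the file; this assumes the grep utility is available in the app image:

      • docker-compose exec app grep APP_KEY .env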

      You now have the environment settings required to run your application. To cache these settings into a file, which will boost your application's load speed, run:

      • docker-compose exec app php artisan config:cache

      Your configuration settings will be loaded into /var/www/bootstrap/cache/config.php on the container.
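
      Keep in mind that once the configuration is cached, later changes to .env will not take effect until the cache is refreshed. Laravel's artisan includes a command to clear it, after which you can run config:cache again:

      • docker-compose exec app php artisan config:clear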

      As a final step, visit http://your_server_ip in the browser. You will see the following home page for your Laravel application:

      Laravel Home Page

      With your containers running and your configuration information in place, you can move on to configuring your user information for the laravel database on the db container.

      Step 9 — Creating a User for MySQL

      The default MySQL installation only creates the root administrative account, which has unlimited privileges on the database server. In general, it's better to avoid using the root administrative account when interacting with the database. Instead, let's create a dedicated database user for our application's Laravel database.

      To create a new user, execute an interactive bash shell on the db container with docker-compose exec:

      • docker-compose exec db bash

      Inside the container, log into the MySQL root administrative account:

      • mysql -u root -p

      You will be prompted for the password you set for the MySQL root account during installation in your docker-compose file.

      Start by checking for the database called laravel, which you defined in your docker-compose file. Run the show databases command to check for existing databases:

      • show databases;

      You will see the laravel database listed in the output:

      Output

      +--------------------+
      | Database           |
      +--------------------+
      | information_schema |
      | laravel            |
      | mysql              |
      | performance_schema |
      | sys                |
      +--------------------+
      5 rows in set (0.00 sec)

      Next, create the user account that will be allowed to access this database. Our username will be laraveluser, though you can replace this with another name if you'd prefer. Just be sure that your username and password here match the details you set in your .env file in the previous step:

      • GRANT ALL ON laravel.* TO 'laraveluser'@'%' IDENTIFIED BY 'your_laravel_db_password';

      Flush the privileges to notify the MySQL server of the changes:

      • FLUSH PRIVILEGES;
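
      Optionally, you can confirm the new user's privileges before moving on; SHOW GRANTS is a standard MySQL statement:

      • SHOW GRANTS FOR 'laraveluser'@'%';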

      Exit MySQL:

      • EXIT;

      Finally, exit the container:

      • exit

      You have configured the user account for your Laravel application database and are ready to migrate your data and work with the Tinker console.

      Step 10 — Migrating Data and Working with the Tinker Console

      With your application running, you can migrate your data and experiment with the tinker command, which will initiate a PsySH console with Laravel preloaded. PsySH is a runtime developer console and interactive debugger for PHP, and Tinker is a REPL specifically for Laravel. Using the tinker command will allow you to interact with your Laravel application from the command line in an interactive shell.

      First, test the connection to MySQL by running the Laravel artisan migrate command, which creates a migrations table in the database from inside the container:

      • docker-compose exec app php artisan migrate

      This command will migrate the default Laravel tables. The output confirming the migration will look like this:

      Output

      Migration table created successfully.
      Migrating: 2014_10_12_000000_create_users_table
      Migrated:  2014_10_12_000000_create_users_table
      Migrating: 2014_10_12_100000_create_password_resets_table
      Migrated:  2014_10_12_100000_create_password_resets_table

      Once the migration is complete, you can run a query to check if you are properly connected to the database using the tinker command:

      • docker-compose exec app php artisan tinker

      Test the MySQL connection by getting the data you just migrated:

      • DB::table('migrations')->get();

      You will see output that looks like this:

      Output

      => Illuminate\Support\Collection {#2856
           all: [
             {#2862
               +"id": 1,
               +"migration": "2014_10_12_000000_create_users_table",
               +"batch": 1,
             },
             {#2865
               +"id": 2,
               +"migration": "2014_10_12_100000_create_password_resets_table",
               +"batch": 1,
             },
           ],
         }

      You can use tinker to interact with your databases and to experiment with services and models.
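
      For example, still inside the tinker session, you could count the rows you just inspected with the same query builder. This is a small illustrative query, not a required step:

      • DB::table('migrations')->count();

      When you are finished experimenting, type exit to leave the tinker console.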

      With your Laravel application in place, you are ready for further development and experimentation.

      Conclusion

      You now have a LEMP stack application running on your server, which you've tested by accessing the Laravel welcome page and creating MySQL database migrations.

      Key to the simplicity of this installation is Docker Compose, which allows you to create a group of Docker containers, defined in a single file, with a single command. If you would like to learn more about how to set up continuous integration with Docker Compose, take a look at How To Configure a Continuous Integration Testing Environment with Docker and Docker Compose on Ubuntu 16.04. If you want to streamline your Laravel application deployment process, then How to Automatically Deploy Laravel Applications with Deployer on Ubuntu 16.04 will be a relevant resource.




      How To Install Docker Compose on Debian 9


      Introduction

      Docker is a great tool for automating the deployment of Linux applications inside software containers, but to take full advantage of its potential each component of an application should run in its own individual container. For complex applications with a lot of components, orchestrating all the containers to start up, communicate, and shut down together can quickly become unwieldy.

      The Docker community came up with a popular solution called Fig, which allowed you to use a single YAML file to orchestrate all of your Docker containers and configurations. This became so popular that the Docker team built Docker Compose on the Fig source, and Fig itself is now deprecated. Docker Compose makes it easier for users to orchestrate the processes of Docker containers, including starting up, shutting down, and setting up intra-container linking and volumes.

      In this tutorial, we’ll show you how to install the latest version of Docker Compose to help you manage multi-container applications on a Debian 9 server.

      Prerequisites

      To follow this article, you will need:

      • A Debian 9 server and a non-root user with sudo privileges, set up by following an initial server setup guide.
      • Docker installed on your server, following Steps 1 and 2 of How To Install and Use Docker on Debian 9.

      Note: Even though the Prerequisites give instructions for installing Docker on Debian 9, the docker commands in this article should work on other operating systems as long as Docker is installed.

      Step 1 — Installing Docker Compose

      Although we can install Docker Compose from the official Debian repositories, it is several minor versions behind the latest release, so we’ll install it from Docker’s GitHub repository. The command below is slightly different from the one you’ll find on the Releases page. By using the -o flag to specify the output file first rather than redirecting the output, this syntax avoids running into a permission denied error caused when using sudo.
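
      To illustrate the error this avoids: in the redirect form, your shell performs the redirection before sudo runs, so the write to /usr/local/bin happens without root privileges. The line below is a counterexample only; do not run it:

      # Fails with "Permission denied": the redirect runs in the non-root shell
      sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-`uname -s`-`uname -m` > /usr/local/bin/docker-compose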

      Check the current release on the Releases page and, if necessary, update the version number in the command below:

      • sudo curl -L https://github.com/docker/compose/releases/download/1.22.0/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose

      Next we’ll set the permissions:

      • sudo chmod +x /usr/local/bin/docker-compose

      Then we’ll verify that the installation was successful by checking the version:

      • docker-compose --version

      This will print out the version we installed:

      Output

      docker-compose version 1.22.0, build f46880fe

      Now that we have Docker Compose installed, we're ready to run a "Hello World" example.

      Step 2 — Running a Container with Docker Compose

      The public Docker registry, Docker Hub, includes a Hello World image for demonstration and testing. It illustrates the minimal configuration required to run a container using Docker Compose: a YAML file that calls a single image. We'll create this minimal configuration to run our hello-world container.

      First, we'll create a directory for the YAML file and move into it:

      • mkdir hello-world
      • cd hello-world

      Then, we'll create the YAML file:

      • nano docker-compose.yml

      Put the following contents into the file, save the file, and exit the text editor:

      docker-compose.yml

      my-test:
       image: hello-world
      

      The first line in the YAML file is used as part of the container name. The second line specifies which image to use to create the container. When we run the docker-compose up command, it will look for a local image by the name we specified, hello-world.
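
      As an aside, this file uses the original version 1 Compose file format. If you prefer the newer format, an equivalent file nests the service under a top-level services key; this alternative is optional and not required for this tutorial:

      docker-compose.yml

      version: '3'
      services:
        my-test:
          image: hello-world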

      We can look manually at images on our system with the docker images command:

      • docker images

      When there are no local images at all, only the column headings display:

      Output

      REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE

      Now, while still in the ~/hello-world directory, we'll execute the following command:

      • docker-compose up

      The first time we run the command, if there's no local image named hello-world, Docker Compose will pull it from the Docker Hub public repository:

      Output

      Pulling my-test (hello-world:)...
      latest: Pulling from library/hello-world
      9db2ca6ccae0: Pull complete
      Digest: sha256:4b8ff392a12ed9ea17784bd3c9a8b1fa3299cac44aca35a85c90c5e3c7afacdc
      Status: Downloaded newer image for hello-world:latest
      . . .

      After pulling the image, docker-compose creates a container, attaches to it, and runs the hello program, which in turn confirms that the installation appears to be working:

      Output

      . . .
      Creating helloworld_my-test_1...
      Attaching to helloworld_my-test_1
      my-test_1  |
      my-test_1  | Hello from Docker.
      my-test_1  | This message shows that your installation appears to be working correctly.
      my-test_1  |
      . . .

      Then it prints an explanation of what it did:

      Output

      To generate this message, Docker took the following steps:
      my-test_1  |  1. The Docker client contacted the Docker daemon.
      my-test_1  |  2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
      my-test_1  |     (amd64)
      my-test_1  |  3. The Docker daemon created a new container from that image which runs the
      my-test_1  |     executable that produces the output you are currently reading.
      my-test_1  |  4. The Docker daemon streamed that output to the Docker client, which sent it
      my-test_1  |     to your terminal.

      Docker containers only run as long as the command is active, so once hello finished running, the container stopped. Consequently, when we look at active processes, the column headers will appear, but the hello-world container won't be listed because it's not running:

      • docker ps

      Output

      CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

      We can see the container information, which we'll need in the next step, by using the -a flag. This shows all containers, not just active ones:

      • docker ps -a

      Output

      CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
      06069fd5ca23        hello-world         "/hello"            35 minutes ago      Exited (0) 35 minutes ago                       hello-world_my-test_1

      This displays the information we'll need to remove the container when we're done with it.

      Step 3 — Removing the Image (Optional)

      To avoid using unnecessary disk space, we'll remove the local image. To do so, we'll need to delete all the containers that reference the image using the docker rm command, followed by either the CONTAINER ID or the NAME. Below, we're using the CONTAINER ID from the docker ps -a command we just ran. Be sure to substitute the ID of your container:

      • docker rm 06069fd5ca23

      Once all containers that reference the image have been removed, we can remove the image:

      • docker rmi hello-world

      Conclusion

      We've now installed Docker Compose, tested our installation by running a Hello World example, and removed the test image and container.

      While the Hello World example confirmed our installation, the simple configuration does not show one of the main benefits of Docker Compose — being able to bring a group of Docker containers up and down all at the same time. To see the power of Docker Compose in action, you might like to check out this practical example, How To Configure a Continuous Integration Testing Environment with Docker and Docker Compose on Ubuntu 16.04.


