      Containerizing a Laravel 6 Application for Development with Docker Compose on Ubuntu 18.04


      Introduction

      Containerizing an application refers to the process of adapting an application and its components so that it can run in lightweight environments known as containers. Such environments are isolated and disposable, and can be leveraged for developing, testing, and deploying applications to production.

      In this guide, we’ll use Docker Compose to containerize a Laravel 6 application for development. When you’re finished, you’ll have a demo Laravel application running on three separate service containers:

      • An app service running PHP 7.4-FPM;
      • A db service running MySQL 5.7;
      • An nginx service that uses the app service to parse PHP code before serving the Laravel application to the final user.

      To allow for a streamlined development process and facilitate application debugging, we’ll keep application files in sync by using shared volumes. We’ll also see how to use docker-compose exec commands to run Composer and Artisan on the app container.

      Prerequisites

      Step 1 — Obtaining the Demo Application

      To get started, we’ll fetch the demo Laravel application from its GitHub repository. We’re interested in the tutorial-01 branch, which contains the basic Laravel application we created in the first guide of this series.

      To obtain the application code that is compatible with this tutorial, download release tutorial-1.0.1 to your home directory with:

      • cd ~
      • curl -L https://github.com/do-community/travellist-laravel-demo/archive/tutorial-1.0.1.zip -o travellist.zip

      We’ll need the unzip command to unpack the application code. In case you haven’t installed this package before, do so now with:

      • sudo apt update
      • sudo apt install unzip

      Now, unzip the contents of the application and rename the unpacked directory for easier access:

      • unzip travellist.zip
      • mv travellist-laravel-demo-tutorial-1.0.1 travellist-demo

      Navigate to the travellist-demo directory:

      • cd travellist-demo

      In the next step, we’ll create a .env configuration file to set up the application.

      Step 2 — Setting Up the Application’s .env File

      The Laravel configuration files are located in a directory called config, inside the application’s root directory. Additionally, a .env file is used to set up environment-dependent configuration, such as credentials and any information that might vary between deploys. This file is not included in revision control.

      Warning: The environment configuration file contains sensitive information about your server, including database credentials and security keys. For that reason, you should never share this file publicly.

      The values contained in the .env file will take precedence over the values set in regular configuration files located in the config directory. Each installation on a new environment requires a tailored environment file to define settings such as database connection details, debug options, and the application URL, among other items that may vary depending on the environment the application is running in.

      We’ll now create a new .env file to customize the configuration options for the development environment we’re setting up. Laravel comes with an example .env file, named .env.example, that we can copy to create our own:

      • cp .env.example .env

      Open this file using nano or your text editor of choice:

      • nano .env

      The current .env file from the travellist demo application contains settings to use a local MySQL database, with 127.0.0.1 as database host. We need to update the DB_HOST variable so that it points to the database service we will create in our Docker environment. In this guide, we’ll call our database service db. Go ahead and replace the listed value of DB_HOST with the database service name:

      .env

      APP_NAME=Travellist
      APP_ENV=dev
      APP_KEY=
      APP_DEBUG=true
      APP_URL=http://localhost:8000
      
      LOG_CHANNEL=stack
      
      DB_CONNECTION=mysql
      DB_HOST=db
      DB_PORT=3306
      DB_DATABASE=travellist
      DB_USERNAME=travellist_user
      DB_PASSWORD=password
      ...
      

      Feel free to also change the database name, username, and password, if you wish. These variables will be leveraged in a later step where we’ll set up the docker-compose.yml file to configure our services.

      Save the file when you’re done editing. If you used nano, you can do that by pressing Ctrl+x, then Y and Enter to confirm.

      Step 3 — Setting Up the Application’s Dockerfile

      Although both our MySQL and Nginx services will be based on default images obtained from the Docker Hub, we still need to build a custom image for the application container. We’ll create a new Dockerfile for that.

      Our travellist image will be based on the php:7.4-fpm official PHP image from Docker Hub. On top of that basic PHP-FPM environment, we’ll install a few extra PHP modules and the Composer dependency management tool.

      We’ll also create a new system user; this is necessary to execute artisan and composer commands while developing the application. The uid setting ensures that the user inside the container has the same uid as your system user on your host machine, where you’re running Docker. This way, any files created by these commands are replicated in the host with the correct permissions. This also means that you’ll be able to use your code editor of choice in the host machine to develop the application that is running inside containers.
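
      If you're not sure which values to use for these settings later on, you can look up your username and uid on the host. On a default Ubuntu installation the first user typically has uid 1000, but it's worth verifying:

      • id -un
      • id -u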

      Create a new Dockerfile with:

      Copy the following contents to your Dockerfile:

      Dockerfile

      FROM php:7.4-fpm
      
      # Arguments defined in docker-compose.yml
      ARG user
      ARG uid
      
      # Install system dependencies
      RUN apt-get update && apt-get install -y \
          git \
          curl \
          libpng-dev \
          libonig-dev \
          libxml2-dev \
          zip \
          unzip
      
      # Clear cache
      RUN apt-get clean && rm -rf /var/lib/apt/lists/*
      
      # Install PHP extensions
      RUN docker-php-ext-install pdo_mysql mbstring exif pcntl bcmath gd
      
      # Get latest Composer
      COPY --from=composer:latest /usr/bin/composer /usr/bin/composer
      
      # Create system user to run Composer and Artisan Commands
      RUN useradd -G www-data,root -u $uid -d /home/$user $user
      RUN mkdir -p /home/$user/.composer && \
          chown -R $user:$user /home/$user
      
      # Set working directory
      WORKDIR /var/www
      
      USER $user
      
      

      Don’t forget to save the file when you’re done.

      Our Dockerfile starts by defining the base image we’re using: php:7.4-fpm.

      After installing system packages and PHP extensions, we install Composer by copying the composer executable from its latest official image to our own application image.

      A new system user is then created and set up using the user and uid arguments that were declared at the beginning of the Dockerfile. These values will be injected by Docker Compose at build time.

      Finally, we set the default working directory to /var/www and switch to the newly created user. This makes sure you’re connecting as a regular user, and that you’re in the right directory, when running composer and artisan commands on the application container.
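
      Although Docker Compose will build this image automatically in a later step, you could also build it by hand with docker build to validate the Dockerfile on its own. A sketch, assuming the username sammy and uid 1000 that we'll use in the Compose file:

      • docker build -t travellist --build-arg user=sammy --build-arg uid=1000 .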

      Step 4 — Setting Up Nginx Configuration and Database Dump Files

      When creating development environments with Docker Compose, it is often necessary to share configuration or initialization files with service containers, in order to set up or bootstrap those services. This practice facilitates making changes to configuration files to fine-tune your environment while you’re developing the application.

      We’ll now set up a folder with files that will be used to configure and initialize our service containers.

      To set up Nginx, we’ll share a travellist.conf file that will configure how the application is served. Create the docker-compose/nginx folder with:

      • mkdir -p docker-compose/nginx

      Open a new file named travellist.conf within that directory:

      • nano docker-compose/nginx/travellist.conf

      Copy the following Nginx configuration to that file:

      docker-compose/nginx/travellist.conf

      
      server {
          listen 80;
          index index.php index.html;
          error_log  /var/log/nginx/error.log;
          access_log /var/log/nginx/access.log;
          root /var/www/public;
          location ~ \.php$ {
              try_files $uri =404;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_pass app:9000;
              fastcgi_index index.php;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_path_info;
          }
          location / {
              try_files $uri $uri/ /index.php?$query_string;
              gzip_static on;
          }
      }
      

      This file will configure Nginx to listen on port 80 and use index.php as the default index page. It will set the document root to /var/www/public, and then configure Nginx to use the app service on port 9000 to process *.php files.

      Save and close the file when you’re done editing.
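
      If you tweak this configuration later, you can validate the syntax from inside the running container before reloading Nginx. A quick sketch, assuming the nginx service defined later in this guide is up:

      • docker-compose exec nginx nginx -t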

      To set up the MySQL database, we’ll share a database dump that will be imported when the container is initialized. This is a feature provided by the MySQL 5.7 image we’ll be using on that container.

      Create a new folder for your MySQL initialization files inside the docker-compose folder:

      • mkdir docker-compose/mysql

      Open a new .sql file:

      • nano docker-compose/mysql/init_db.sql

      The following MySQL dump is based on the database we’ve set up in our Laravel on LEMP guide. It will create a new table named places. Then, it will populate the table with a set of sample places.

      Add the following code to the file:

      docker-compose/mysql/init_db.sql

      DROP TABLE IF EXISTS `places`;
      
      CREATE TABLE `places` (
        `id` bigint(20) unsigned NOT NULL AUTO_INCREMENT,
        `name` varchar(255) COLLATE utf8mb4_unicode_ci NOT NULL,
        `visited` tinyint(1) NOT NULL DEFAULT '0',
        PRIMARY KEY (`id`)
      ) ENGINE=InnoDB AUTO_INCREMENT=12 DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_unicode_ci;
      
      INSERT INTO `places` (name, visited) VALUES ('Berlin',0),('Budapest',0),('Cincinnati',1),('Denver',0),('Helsinki',0),('Lisbon',0),('Moscow',1),('Nairobi',0),('Oslo',1),('Rio',0),('Tokyo',0);
      

      The places table contains three fields: id, name, and visited. The visited field is a flag that marks which places have already been visited. Feel free to change the sample places or include new ones. Save and close the file when you’re done.
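
      Once the environment is up and running in a later step, you can confirm that this dump was imported by querying the database through the db service. A minimal sketch, assuming the default credentials from the .env file in Step 2 (user travellist_user, password password, database travellist):

      • docker-compose exec db mysql -u travellist_user -ppassword travellist -e "SELECT COUNT(*) FROM places;"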

      We’ve finished setting up the application’s Dockerfile and the service configuration files. Next, we’ll set up Docker Compose to use these files when creating our services.

      Step 5 — Creating a Multi-Container Environment with Docker Compose

      Docker Compose enables you to create multi-container environments for applications running on Docker. It uses service definitions to build fully customizable environments with multiple containers that can share networks and data volumes. This allows for a seamless integration between application components.

      To set up our service definitions, we’ll create a new file called docker-compose.yml. Typically, this file is located at the root of the application folder, and it defines your containerized environment, including the base images you will use to build your containers, and how your services will interact.

      We’ll define three different services in our docker-compose.yml file: app, db, and nginx.

      The app service will build an image called travellist, based on the Dockerfile we’ve previously created. The container defined by this service will run a php-fpm server to parse PHP code and send the results back to the nginx service, which will be running on a separate container. The db service defines a container running a MySQL 5.7 server. Our services will share a bridge network named travellist.

      The application files will be synchronized on both the app and the nginx services via bind mounts. Bind mounts are useful in development environments because they allow for a performant two-way sync between host machine and containers.

      Create a new docker-compose.yml file at the root of the application folder:

      • nano docker-compose.yml

      A typical docker-compose.yml file starts with a version definition, followed by a services node, under which all services are defined. Shared networks are usually defined at the bottom of that file.

      To get started, copy this boilerplate code into your docker-compose.yml file:

      docker-compose.yml

      version: "3.7"
      services:
      
      
      networks:
        travellist:
          driver: bridge
      

      We’ll now edit the services node to include the app, db and nginx services.

      The app Service

      The app service will set up a container named travellist-app. It builds a new Docker image based on a Dockerfile located in the same path as the docker-compose.yml file. The new image will be saved locally under the name travellist.

      Even though the application’s document root is served by the nginx container, we also need the application files somewhere inside the app container, so we’re able to execute command line tasks with the Laravel Artisan tool.

      Copy the following service definition under your services node, inside the docker-compose.yml file:

      docker-compose.yml

        app:
          build:
            args:
              user: sammy
              uid: 1000
            context: ./
            dockerfile: Dockerfile
          image: travellist
          container_name: travellist-app
          restart: unless-stopped
          working_dir: /var/www/
          volumes:
            - ./:/var/www
          networks:
            - travellist
      

      These settings do the following:

      • build: This configuration tells Docker Compose to build a local image for the app service, using the specified path (context) and Dockerfile for instructions. The arguments user and uid are injected into the Dockerfile to customize user creation commands at build time.
      • image: The name that will be used for the image being built.
      • container_name: Sets up the container name for this service.
      • restart: Always restart, unless the service is stopped.
      • working_dir: Sets the default directory for this service as /var/www.
      • volumes: Creates a shared volume that will synchronize contents from the current directory to /var/www inside the container. Notice that this is not your document root, since that will live in the nginx container.
      • networks: Sets up this service to use a network named travellist.

      The db Service

      The db service uses a pre-built MySQL 5.7 image from Docker Hub. Because Docker Compose automatically loads .env variable files located in the same directory as the docker-compose.yml file, we can obtain our database settings from the Laravel .env file we created in a previous step.
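
      If you want to double-check how these variables will be substituted, you can render the fully resolved configuration once the docker-compose.yml file is complete. A sketch:

      • docker-compose config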

      Include the following service definition in your services node, right after the app service:

      docker-compose.yml

        db:
          image: mysql:5.7
          container_name: travellist-db
          restart: unless-stopped
          environment:
            MYSQL_DATABASE: ${DB_DATABASE}
            MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
            MYSQL_PASSWORD: ${DB_PASSWORD}
            MYSQL_USER: ${DB_USERNAME}
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          volumes:
            - ./docker-compose/mysql:/docker-entrypoint-initdb.d
          networks:
            - travellist
      

      These settings do the following:

      • image: Defines the Docker image that should be used for this container. In this case, we’re using a MySQL 5.7 image from Docker Hub.
      • container_name: Sets up the container name for this service: travellist-db.
      • restart: Always restart this service, unless it is explicitly stopped.
      • environment: Defines environment variables in the new container. We’re using values obtained from the Laravel .env file to set up our MySQL service, which will automatically create a new database and user based on the provided environment variables.
      • volumes: Creates a volume to share a .sql database dump that will be used to initialize the application database. The MySQL image will automatically import .sql files placed in the /docker-entrypoint-initdb.d directory inside the container.
      • networks: Sets up this service to use a network named travellist.

      The nginx Service

      The nginx service uses a pre-built Nginx image on top of Alpine, a lightweight Linux distribution. It creates a container named travellist-nginx, and it uses the ports definition to create a redirection from port 8000 on the host system to port 80 inside the container.

      Include the following service definition in your services node, right after the db service:

      docker-compose.yml

        nginx:
          image: nginx:1.17-alpine
          container_name: travellist-nginx
          restart: unless-stopped
          ports:
            - 8000:80
          volumes:
            - ./:/var/www
            - ./docker-compose/nginx:/etc/nginx/conf.d
          networks:
            - travellist
      

      These settings do the following:

      • image: Defines the Docker image that should be used for this container. In this case, we’re using the Alpine Nginx 1.17 image.
      • container_name: Sets up the container name for this service: travellist-nginx.
      • restart: Always restart this service, unless it is explicitly stopped.
      • ports: Sets up a port redirection that will allow external access via port 8000 to the web server running on port 80 inside the container. A quick way to test this mapping follows this list.
      • volumes: Creates two shared volumes. The first one will synchronize contents from the current directory to /var/www inside the container. This way, when you make local changes to the application files, they will be quickly reflected in the application being served by Nginx inside the container. The second volume will make sure our Nginx configuration file, located at docker-compose/nginx/travellist.conf, is copied to the container’s Nginx configuration folder.
      • networks: Sets up this service to use a network named travellist.
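
      Once the environment is running in Step 6, you can verify the port mapping from the host with a quick HTTP request. A sketch using curl:

      • curl -I http://localhost:8000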

      Finished docker-compose.yml File

      This is how our finished docker-compose.yml file looks:

      docker-compose.yml

      version: "3.7"
      services:
        app:
          build:
            args:
              user: sammy
              uid: 1000
            context: ./
            dockerfile: Dockerfile
          image: travellist
          container_name: travellist-app
          restart: unless-stopped
          working_dir: /var/www/
          volumes:
            - ./:/var/www
          networks:
            - travellist
      
        db:
          image: mysql:5.7
          container_name: travellist-db
          restart: unless-stopped
          environment:
            MYSQL_DATABASE: ${DB_DATABASE}
            MYSQL_ROOT_PASSWORD: ${DB_PASSWORD}
            MYSQL_PASSWORD: ${DB_PASSWORD}
            MYSQL_USER: ${DB_USERNAME}
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          volumes:
            - ./docker-compose/mysql:/docker-entrypoint-initdb.d
          networks:
            - travellist
      
        nginx:
          image: nginx:1.17-alpine
          container_name: travellist-nginx
          restart: unless-stopped
          ports:
            - 8000:80
          volumes:
            - ./:/var/www
            - ./docker-compose/nginx:/etc/nginx/conf.d/
          networks:
            - travellist
      
      networks:
        travellist:
          driver: bridge
      

      Make sure you save the file when you’re done.

      Step 6 — Running the Application with Docker Compose

      We’ll now use docker-compose commands to build the application image and run the services we specified in our setup.

      Build the app image with the following command:

      • docker-compose build app

      This command might take a few minutes to complete. You’ll see output similar to this:

      Output

      Building app Step 1/11 : FROM php:7.4-fpm ---> fa37bd6db22a Step 2/11 : ARG user ---> Running in f71eb33b7459 Removing intermediate container f71eb33b7459 ---> 533c30216f34 Step 3/11 : ARG uid ---> Running in 60d2d2a84cda Removing intermediate container 60d2d2a84cda ---> 497fbf904605 Step 4/11 : RUN apt-get update && apt-get install -y git curl libpng-dev libonig-dev ... Step 7/11 : COPY --from=composer:latest /usr/bin/composer /usr/bin/composer ---> e499f74896e3 Step 8/11 : RUN useradd -G www-data,root -u $uid -d /home/$user $user ---> Running in 232ef9c7dbd1 Removing intermediate container 232ef9c7dbd1 ---> 870fa3220ffa Step 9/11 : RUN mkdir -p /home/$user/.composer && chown -R $user:$user /home/$user ---> Running in 7ca8c0cb7f09 Removing intermediate container 7ca8c0cb7f09 ---> 3d2ef9519a8e Step 10/11 : WORKDIR /var/www ---> Running in 4a964f91edfa Removing intermediate container 4a964f91edfa ---> 00ada639da21 Step 11/11 : USER $user ---> Running in 9f8e874fede9 Removing intermediate container 9f8e874fede9 ---> fe176ff4702b Successfully built fe176ff4702b Successfully tagged travellist:latest

      When the build is finished, you can run the environment in background mode with:

      • docker-compose up -d

      Output

      Creating travellist-db ... done Creating travellist-app ... done Creating travellist-nginx ... done

      This will run your containers in the background. To show information about the state of your active services, run:

      • docker-compose ps

      You’ll see output like this:

      Output

      Name Command State Ports ------------------------------------------------------------------------------- travellist-app docker-php-entrypoint php-fpm Up 9000/tcp travellist-db docker-entrypoint.sh mysqld Up 3306/tcp, 33060/tcp travellist-nginx nginx -g daemon off; Up 0.0.0.0:8000->80/tcp

      Your environment is now up and running, but we still need to execute a couple of commands to finish setting up the application. You can use the docker-compose exec command to execute commands in the service containers, such as an ls -l to show detailed information about files in the application directory:

      • docker-compose exec app ls -l

      Output

      total 256 -rw-rw-r-- 1 sammy 1001 738 Jan 15 16:46 Dockerfile -rw-rw-r-- 1 sammy 1001 101 Jan 7 08:05 README.md drwxrwxr-x 6 sammy 1001 4096 Jan 7 08:05 app -rwxr-xr-x 1 sammy 1001 1686 Jan 7 08:05 artisan drwxrwxr-x 3 sammy 1001 4096 Jan 7 08:05 bootstrap -rw-rw-r-- 1 sammy 1001 1501 Jan 7 08:05 composer.json -rw-rw-r-- 1 sammy 1001 179071 Jan 7 08:05 composer.lock drwxrwxr-x 2 sammy 1001 4096 Jan 7 08:05 config drwxrwxr-x 5 sammy 1001 4096 Jan 7 08:05 database drwxrwxr-x 4 sammy 1001 4096 Jan 15 16:46 docker-compose -rw-rw-r-- 1 sammy 1001 1015 Jan 15 16:45 docker-compose.yml -rw-rw-r-- 1 sammy 1001 1013 Jan 7 08:05 package.json -rw-rw-r-- 1 sammy 1001 1405 Jan 7 08:05 phpunit.xml drwxrwxr-x 2 sammy 1001 4096 Jan 7 08:05 public -rw-rw-r-- 1 sammy 1001 273 Jan 7 08:05 readme.md drwxrwxr-x 6 sammy 1001 4096 Jan 7 08:05 resources drwxrwxr-x 2 sammy 1001 4096 Jan 7 08:05 routes -rw-rw-r-- 1 sammy 1001 563 Jan 7 08:05 server.php drwxrwxr-x 5 sammy 1001 4096 Jan 7 08:05 storage drwxrwxr-x 4 sammy 1001 4096 Jan 7 08:05 tests -rw-rw-r-- 1 sammy 1001 538 Jan 7 08:05 webpack.mix.js

      We’ll now run composer install to install the application dependencies:

      • docker-compose exec app composer install

      You’ll see output like this:

      Output

      Loading composer repositories with package information Installing dependencies (including require-dev) from lock file Package operations: 85 installs, 0 updates, 0 removals - Installing doctrine/inflector (1.3.1): Downloading (100%) - Installing doctrine/lexer (1.2.0): Downloading (100%) - Installing dragonmantank/cron-expression (v2.3.0): Downloading (100%) - Installing erusev/parsedown (1.7.4): Downloading (100%) - Installing symfony/polyfill-ctype (v1.13.1): Downloading (100%) - Installing phpoption/phpoption (1.7.2): Downloading (100%) - Installing vlucas/phpdotenv (v3.6.0): Downloading (100%) - Installing symfony/css-selector (v5.0.2): Downloading (100%) … Generating optimized autoload files > IlluminateFoundationComposerScripts::postAutoloadDump > @php artisan package:discover --ansi Discovered Package: facade/ignition Discovered Package: fideloper/proxy Discovered Package: laravel/tinker Discovered Package: nesbot/carbon Discovered Package: nunomaduro/collision Package manifest generated successfully.

      The last thing we need to do before testing the application is to generate a unique application key with the artisan Laravel command-line tool. This key is used to encrypt user sessions and other sensitive data:

      • docker-compose exec app php artisan key:generate

      Output

      Application key set successfully.

      Now go to your browser and access your server’s domain name or IP address on port 8000:

      http://server_domain_or_IP:8000
      

      You’ll see a page like this:

      Demo Laravel Application

      You can use the logs command to check the logs generated by your services:

      • docker-compose logs nginx
      Attaching to travellist-nginx
      travellist-nginx | 192.168.160.1 - - [23/Jan/2020:13:57:25 +0000] "GET / HTTP/1.1" 200 626 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36"
      travellist-nginx | 192.168.160.1 - - [23/Jan/2020:13:57:26 +0000] "GET /favicon.ico HTTP/1.1" 200 0 "http://localhost:8000/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36"
      travellist-nginx | 192.168.160.1 - - [23/Jan/2020:13:57:42 +0000] "GET / HTTP/1.1" 200 626 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.88 Safari/537.36"
      …
      

      If you want to pause your Docker Compose environment while keeping the state of all its services, run:

      • docker-compose pause

      Output

      Pausing travellist-db ... done Pausing travellist-nginx ... done Pausing travellist-app ... done

      You can then resume your services with:

      • docker-compose unpause

      Output

      Unpausing travellist-app ... done Unpausing travellist-nginx ... done Unpausing travellist-db ... done

      To shut down your Docker Compose environment and remove all of its containers and networks, run:

      • docker-compose down

      Output

      Stopping travellist-nginx ... done Stopping travellist-db ... done Stopping travellist-app ... done Removing travellist-nginx ... done Removing travellist-db ... done Removing travellist-app ... done Removing network travellist-laravel-demo_travellist

      For an overview of all Docker Compose commands, please check the Docker Compose command-line reference.
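
      As a quick recap, these are the docker-compose commands used throughout this guide (the bash invocation is just one example of running an arbitrary command on a service container):

      docker-compose build app      # build the image for the app service
      docker-compose up -d          # start all services in the background
      docker-compose ps             # list active services and their state
      docker-compose exec app bash  # run a command on the app container
      docker-compose logs nginx     # inspect the logs of a service
      docker-compose pause          # pause the environment
      docker-compose unpause        # resume the environment
      docker-compose down           # stop and remove containers and networks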

      Conclusion

      In this guide, we’ve set up a Docker environment with three containers using Docker Compose to define our infrastructure in a YAML file.

      From this point on, you can work on your Laravel application without needing to install and set up a local web server for development and testing. Moreover, you’ll be working with a disposable environment that can be easily replicated and distributed, which can be helpful while developing your application and also when moving towards a production environment.




      Containerizing a Node.js Application for Development With Docker Compose


      Introduction

      If you are actively developing an application, using Docker can simplify your workflow and the process of deploying your application to production. Working with containers in development offers the following benefits:

      • Environments are consistent, meaning that you can choose the languages and dependencies you want for your project without worrying about system conflicts.
      • Environments are isolated, making it easier to troubleshoot issues and onboard new team members.
      • Environments are portable, allowing you to package and share your code with others.

      This tutorial will show you how to set up a development environment for a Node.js application using Docker. You will create two containers, one for the Node application and another for the MongoDB database, with Docker Compose. Because this application works with Node and MongoDB, our setup will do the following:

      • Synchronize the application code on the host with the code in the container to facilitate changes during development.
      • Ensure that changes to the application code work without a restart.
      • Create a user and a password-protected database for the application's data.
      • Persist this data.

      At the end of this tutorial, you will have a working shark information application running in Docker containers:

      Complete Shark Collection

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Cloning the Project and Modifying Dependencies

      The first step in building this setup will be cloning the project code and modifying its package.json file, which includes the project's dependencies. We will add nodemon to the project's devDependencies, specifying that we will use it during development. Running the application with nodemon ensures that it will restart automatically whenever you make changes to your code.

      First, clone the nodejs-mongo-mongoose repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Integrate MongoDB with Your Node Application, which explains how to integrate a MongoDB database with an existing Node application using Mongoose.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/nodejs-mongo-mongoose.git node_project

      Navigate to the node_project directory:

      • cd node_project

      Open the project's package.json file using nano or your favorite editor:

      • nano package.json

      Beneath the project dependencies and above the closing brace, create a new devDependencies object that includes nodemon:

      ~/node_project/package.json

      ...
      "dependencies": {
          "ejs": "^2.6.1",
          "express": "^4.16.4",
          "mongoose": "^5.4.10"
        },
        "devDependencies": {
          "nodemon": "^1.18.10"
        }    
      }
      

      Save and close the file when you are finished editing.

      With the project code in place and its dependencies modified, you can move on to refactoring the code for a containerized workflow.

      Step 2 — Configuring Your Application to Work with Containers

      Modifying our application for a containerized workflow means making our code more modular. Containers offer portability between environments, and our code should reflect that by remaining as decoupled from the underlying operating system as possible. To achieve this, we will refactor our code to make greater use of Node's process.env property, which returns an object with information about your user environment at runtime. We can use this object in our code to dynamically assign configuration information at runtime with environment variables.

      Let's begin with app.js, our main application entry point. Open the file:

      • nano app.js

      Inside, you will see a constant definition for a port, as well as a listen function that uses this constant to specify the port the application will listen on:

      ~/node_project/app.js

      ...
      const port = 8080;
      ...
      app.listen(port, function () {
        console.log('Example app listening on port 8080!');
      });
      

      Let's redefine the port constant to allow for dynamic assignment at runtime using the process.env object. Make the following changes to the constant definition and the listen function:

      ~/node_project/app.js

      ...
      const port = process.env.PORT || 8080;
      ...
      app.listen(port, function () {
        console.log(`Example app listening on ${port}!`);
      });
      

      Our new constant definition assigns port dynamically using the value passed in at runtime, or 8080. Similarly, we have rewritten the listen function to use a template literal, which will interpolate the port value when listening for connections. Because we will be mapping our ports elsewhere, these revisions keep us from having to continually revise this file as our environment changes.
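
      To see this dynamic assignment in action outside of Docker, you could run the application locally with a custom port set through an environment variable. A sketch, assuming Node.js and the project dependencies are installed on your host:

      • PORT=3000 node app.js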

      When you are finished editing, save and close the file.

      Next, let's modify our database connection information to remove any configuration credentials. Open the db.js file, which contains this information:

      • nano db.js

      Currently, the file does the following things:

      • Imports Mongoose, the Object Document Mapper (ODM) we are using to create schemas and models for our application data.
      • Sets the database credentials as constants, including the username and password.
      • Connects to the database using the mongoose.connect method.

      For more information about the file, please see Step 3 of How To Integrate MongoDB with Your Node Application.

      Our first step in modifying the file will be redefining the constants that include sensitive information. Currently, these constants look like this:

      ~/node_project/db.js

      ...
      const MONGO_USERNAME = 'sammy';
      const MONGO_PASSWORD = 'your_password';
      const MONGO_HOSTNAME = '127.0.0.1';
      const MONGO_PORT = '27017';
      const MONGO_DB = 'sharkinfo';
      ...
      

      Instead of hardcoding this information, you can use the process.env object to capture the runtime values for these constants. Modify the block to look like this:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      ...
      

      Save and close the file when you are finished editing.

      At this point, you have modified db.js to work with your application's environment variables, but you still need a way to pass these variables to your application. Let's create a .env file with values that you can pass to your application at runtime.

      Open the file:

      • nano .env

      This file will include the information that you removed from db.js: the username and password for your application's database, as well as the port setting and database name. Remember to update the username, password, and database name listed here with your own information:

      ~/node_project/.env

      MONGO_USERNAME=sammy
      MONGO_PASSWORD=your_password
      MONGO_PORT=27017
      MONGO_DB=sharkinfo
      

      Note that we have removed the host setting that originally appeared in db.js. We will now define our host at the level of the Docker Compose file, along with other information about our services and containers.

      Save and close this file when you are finished editing.

      Because your .env file contains sensitive information, you will want to ensure that it is included in your project's .dockerignore and .gitignore files so that it is not copied to your version control or containers.

      Open your .dockerignore file:

      • nano .dockerignore

      Add the following line to the end of the file:

      ~/node_project/.dockerignore

      ...
      .gitignore
      .env
      

      Save and close the file when you are finished editing.

      The .gitignore file in this repository already includes .env, but feel free to check that it is there:

      ~/node_project/.gitignore

      ...
      .env
      ...
      

      At this point, you have successfully extracted sensitive information from your project code and taken measures to control how and where this information gets copied. Now you can add more robustness to your database connection code to optimize it for a containerized workflow.

      Step 3 — Modifying Database Connection Settings

      Our next step will be to make our database connection method more robust by adding code that handles cases where our application fails to connect to our database. Introducing this level of resilience to your application code is a recommended practice when working with containers using Compose.

      Open db.js for editing:

      • nano db.js

      You will see the code that we added earlier, along with the url constant for Mongo's connection URI and the Mongoose connect method:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, {useNewUrlParser: true});
      

      Currently, our connect method accepts an option that tells Mongoose to use Mongo's new URL parser. Let's add a few more options to this method to define parameters for reconnection attempts. We can do this by creating an options constant that includes the relevant information, in addition to the new URL parser option. Below your Mongo constants, add the following definition for an options constant:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500,
        connectTimeoutMS: 10000,
      };
      ...
      

      The reconnectTries option tells Mongoose to keep trying to connect indefinitely, while reconnectInterval defines the period between connection attempts in milliseconds. connectTimeoutMS defines 10 seconds as the period the Mongo driver will wait before failing the connection attempt.

      We can now use the new options constant in the Mongoose connect method to fine-tune our Mongoose connection settings. We will also add a promise to handle potential connection errors.

      Currently, the Mongoose connect method looks like this:

      ~/node_project/db.js

      ...
      mongoose.connect(url, {useNewUrlParser: true});
      

      Delete the existing connect method and replace it with the following code, which includes the options constant and a promise:

      ~/node_project/db.js

      ...
      mongoose.connect(url, options).then( function() {
        console.log('MongoDB is connected');
      })
        .catch( function(err) {
        console.log(err);
      });
      

      In the case of a successful connection, our function logs an appropriate message; otherwise it will catch the error and log it, allowing us to troubleshoot the problem.

      The final file will look like this:

      ~/node_project/db.js

      const mongoose = require('mongoose');
      
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500,
        connectTimeoutMS: 10000,
      };
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, options).then( function() {
        console.log('MongoDB is connected');
      })
        .catch( function(err) {
        console.log(err);
      });
      

      Save and close the file when you have finished editing.

      You have now added resilience to your application code to handle cases where your application might fail to connect to your database. With this code in place, you can move on to defining your services with Compose.

      Step 4 — Defining Services with Docker Compose

      With your code refactored, you are ready to write the docker-compose.yml file with your service definitions. A service in Compose is a running container, and service definitions, which you will include in your docker-compose.yml file, contain information about how each container image will run. The Compose tool allows you to define multiple services to build multi-container applications.

      Before defining our services, however, we will add a tool to our project called wait-for to ensure that our application only attempts to connect to our database once the database startup tasks are complete. This wrapper script uses netcat to poll whether or not a specific host and port are accepting TCP connections. Using it allows you to control your application's attempts to connect to your database by testing whether or not the database is ready to accept connections.

      Though Compose allows you to specify dependencies between services using the depends_on option, this order is based on whether or not the container is running, rather than its readiness. Using depends_on won't be optimal for our setup, since we want our application to connect only when the database startup tasks, including adding a user and password to the admin authentication database, are complete. For more information on using wait-for and other tools to control startup order, please see the relevant recommendations in the Compose documentation.
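
      At its core, the script automates a netcat probe that you could also run by hand. For instance, the following sketch (assuming nc is installed on your host) tests whether anything is accepting TCP connections on port 27017 of localhost, which is the kind of check wait-for repeats until it succeeds or times out:

      • nc -z localhost 27017 && echo "open" || echo "closed"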

      Open a file called wait-for.sh:

      • nano wait-for.sh

      Paste the following code into the file to create the polling functionality:

      ~/node_project/wait-for.sh

      #!/bin/sh
      
      # original script: https://github.com/eficode/wait-for/blob/master/wait-for
      
      TIMEOUT=15
      QUIET=0
      
      echoerr() {
        if [ "$QUIET" -ne 1 ]; then printf "%sn" "$*" 1>&2; fi
      }
      
      usage() {
        exitcode="$1"
        cat << USAGE >&2
      Usage:
        $cmdname host:port [-t timeout] [-- command args]
        -q | --quiet                        Do not output any status messages
        -t TIMEOUT | --timeout=timeout      Timeout in seconds, zero for no timeout
        -- COMMAND ARGS                     Execute command with args after the test finishes
      USAGE
        exit "$exitcode"
      }
      
      wait_for() {
        for i in `seq $TIMEOUT` ; do
          nc -z "$HOST" "$PORT" > /dev/null 2>&1
      
          result=$?
          if [ $result -eq 0 ] ; then
            if [ $# -gt 0 ] ; then
              exec "$@"
            fi
            exit 0
          fi
          sleep 1
        done
        echo "Operation timed out" >&2
        exit 1
      }
      
      while [ $# -gt 0 ]
      do
        case "$1" in
          *:* )
          HOST=$(printf "%s\n" "$1"| cut -d : -f 1)
          PORT=$(printf "%s\n" "$1"| cut -d : -f 2)
          shift 1
          ;;
          -q | --quiet)
          QUIET=1
          shift 1
          ;;
          -t)
          TIMEOUT="$2"
          if [ "$TIMEOUT" = "" ]; then break; fi
          shift 2
          ;;
          --timeout=*)
          TIMEOUT="${1#*=}"
          shift 1
          ;;
          --)
          shift
          break
          ;;
          --help)
          usage 0
          ;;
          *)
          echoerr "Unknown argument: $1"
          usage 1
          ;;
        esac
      done
      
      if [ "$HOST" = "" -o "$PORT" = "" ]; then
        echoerr "Error: you need to provide a host and port to test."
        usage 2
      fi
      
      wait_for "$@"
      

      Save and close the file when you are finished adding the code.
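
      As a usage sketch, the script takes a host:port pair, an optional timeout, and an optional command to run once the port accepts connections. The following hypothetical invocation, shown only to illustrate the argument format, would wait up to 30 seconds for the db service and then print a message:

      • ./wait-for.sh db:27017 -t 30 -- echo "database is ready"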

      Make the script executable:

      • chmod +x wait-for.sh

      Next, open the docker-compose.yml file:

      • nano docker-compose.yml

      First, define the nodejs application service by adding the following code to the file:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
      

      The nodejs service definition includes the following options:

      • build: defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
      • context: defines the build context for the image build, in this case the current project directory.
      • dockerfile: specifies the Dockerfile in your current project directory as the file Compose will use to build the application image. For more information about this file, please see How To Build a Node.js Application with Docker.
      • image, container_name: apply names to the image and container.
      • restart: defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.
      • env_file: tells Compose that we would like to add environment variables from a file called .env, located in our build context.
      • environment: using this option allows you to add the Mongo connection settings you defined in the .env file. Note that we are not setting NODE_ENV to development, since this is Express's default behavior if NODE_ENV is not set. When moving to production, you can set this to production to enable view caching and less verbose error messages. Also note that we have specified the db database container as the host, as discussed in Step 2.
      • ports: maps port 80 on the host to port 8080 on the container.
      • volumes: we are including two types of mounts here:
        • The first is a bind mount that mounts our application code on the host to the /home/node/app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be populated immediately in the container.
        • The second is a named volume, node_modules. When Docker runs the npm install instruction listed in the application Dockerfile, npm creates a new node_modules directory on the container that includes the packages required to run the application. The bind mount we just created will hide this newly created node_modules directory, however. Since node_modules on the host is empty, the bind would map an empty directory to the container, overriding the new node_modules directory and preventing our application from starting. The named node_modules volume solves this problem by persisting the contents of the /home/node/app/node_modules directory and mounting it to the container, hiding the bind.
      • networks: specifies that our application service will join the app-network network, which we will define at the bottom of the file.
      • command: this option lets you set the command that should be executed when Compose runs the image. Note that this will override the CMD instruction that we set in our application Dockerfile. Here, we are running the application using the wait-for script, which will poll the db service on port 27017 to test whether or not the database service is ready. Once the readiness test succeeds, the script will execute the command we have set, /home/node/app/node_modules/.bin/nodemon app.js, to start the application with nodemon. This will ensure that any future changes we make to our code are reloaded without our having to restart the application.

      Keep the following in mind when using this approach:

      • Your bind will mount the contents of the node_modules directory in the container to the host, and this directory will be owned by root, since the named volume was created by Docker. You can inspect this behavior with the sketch that follows this list.
      • If you have a pre-existing node_modules directory on the host, it will override the node_modules directory created in the container. The setup we are building in this tutorial assumes that you do not have a pre-existing node_modules directory and that you won't be working with npm on your host. This is in keeping with a twelve-factor approach to application development, which minimizes dependencies between execution environments.
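
      Once the containers are running (Step 5), you can verify this ownership behavior by listing the node_modules mount from inside the container. A sketch, assuming the nodejs service name and the /home/node/app working directory used in this guide:

      • docker-compose exec nodejs ls -ld node_modules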

      Next, create the db service by adding the following code below the application service definition:

      ~/node_project/docker-compose.yml

      ...
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:  
            - dbdata:/data/db   
          networks:
            - app-network  
      

      Some of the settings we defined for the nodejs service remain the same, but we have also made the following changes to the image, environment, and volumes definitions:

      • image: to create this service, Compose will pull the 4.1.8-xenial Mongo image from Docker Hub. We are pinning a particular version to avoid possible future conflicts as the Mongo image changes. For more information about version pinning, please see the Docker documentation on Dockerfile best practices.
      • MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD: the mongo image makes these environment variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the container starts. We have set MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD using the values from our .env file, which we pass to the db service using the env_file option. Doing this means that our sammy application user will be a root user on the database instance, with access to all of the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.

        Note: keep in mind that these variables will not take effect if you start the container with an existing data directory in place.

      • dbdata:/data/db: the named volume dbdata will persist the data stored in Mongo's default data directory, /data/db. This will ensure that you don't lose data in cases where you stop or remove containers.

      We have also added the db service to the app-network network with the networks option.

      As a final step, add the volume and network definitions to the bottom of the file:

      ~/node_project/docker-compose.yml

      ...
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      The user-defined bridge network app-network enables communication between our containers, since they are on the same Docker daemon host. This streamlines traffic and communication within the application, as it opens all ports between containers on the same bridge network while exposing no ports to the outside world. Thus, our db and nodejs containers can communicate with each other, and we only need to expose port 80 for front-end access to the application.

      Our top-level volumes key defines the volumes dbdata and node_modules. When Docker creates volumes, the contents of the volume are stored in a part of the host filesystem, /var/lib/docker/volumes/, that is managed by Docker. The contents of each volume are stored in a directory under /var/lib/docker/volumes/ and get mounted to any container that uses the volume. In this way, the shark information data that our users will create will persist in the dbdata volume even if we remove and recreate the db container.
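
      You can see where Docker stores a named volume on the host with the docker volume commands. A sketch, assuming Compose prefixes volume names with the project directory name (node_project):

      • docker volume ls
      • docker volume inspect node_project_dbdata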

      The final docker-compose.yml file will look like this:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
      
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:     
            - dbdata:/data/db
          networks:
            - app-network  
      
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      Save and close the file when you are finished editing.

      With your service definitions in place, you are ready to start the application.

      Step 5 — Testing the Application

      With your docker-compose.yml file in place, you can create your services with the docker-compose up command. You can also test that your data will persist by stopping and removing your containers with docker-compose down.

      First, build the container images and create the services by running docker-compose up with the -d flag, which will then run the nodejs and db containers in the background:

      • docker-compose up -d

      You will see output confirming that your services have been created:

      Output

      ... Creating db ... done Creating nodejs ... done

      You can also get more detailed information about the startup processes by displaying the log output from the services:

      • docker-compose logs

      You will see something like this if everything has started correctly:

      Output

      ... nodejs | [nodemon] starting `node app.js` nodejs | Example app listening on 8080! nodejs | MongoDB is connected ... db | 2019-02-22T17:26:27.329+0000 I ACCESS [conn2] Successfully authenticated as principal sammy on admin

      You can also check the status of your containers with docker-compose ps:

      • docker-compose ps

      You will see output indicating that your containers are running:

      Output

      Name Command State Ports ---------------------------------------------------------------------- db docker-entrypoint.sh mongod Up 27017/tcp nodejs ./wait-for.sh db:27017 -- ... Up 0.0.0.0:80->8080/tcp

      With your services running, visit http://your_server_ip in your browser. You will see a landing page that looks like this:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark’s general character:

      Shark Info Form

      In the form, add a shark of your choosing. For the purpose of this demonstration, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      As a final step, we can test whether the data you just entered will persist if you remove your database container.

      Back at your terminal, type the following command to stop and remove your containers and network:

      • docker-compose down

      Note that we are not including the --volumes option; hence, our dbdata volume is not removed.

      The following output confirms that your containers and network have been removed:

      Output

      Stopping nodejs ... done
      Stopping db     ... done
      Removing nodejs ... done
      Removing db     ... done
      Removing network node_project_app-network

      Recreate the containers:

      • docker-compose up -d

      Now, go back to the shark information form:

      Shark Info Form

      Enter a new shark of your choosing. We will go with Whale Shark and Large:

      Enter New Shark

      Once you click Submit, you will see that the new shark has been added to the shark collection in your database without the loss of the data you have already entered:

      Complete Shark Collection

      Your application is now running on Docker containers with data persistence and code synchronization enabled.

      Conclusion

      By following this tutorial, you have created a development setup for your Node application using Docker containers. You’ve made your project more modular and portable by extracting sensitive information and decoupling your application’s state from your code. You have also configured a boilerplate docker-compose.yml file that you can revise as your development needs and requirements change.

      As you develop, you may be interested in learning more about designing applications for containerized and Cloud Native workflows. Please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes for more information on these topics.

      To learn more about the code used in this tutorial, please see How To Build a Node.js Application with Docker and How To Integrate MongoDB with Your Node Application. For information about deploying a Node application with an Nginx reverse proxy using containers, please see How To Secure a Containerized Node.js Application with Nginx, Let’s Encrypt, and Docker Compose.




      Containerizing a Ruby on Rails Application for Development with Docker Compose


      Introduction

      If you are actively developing an application, using Docker can simplify your workflow and the process of deploying your application to production. Working with containers in development offers the following benefits:

      • Environments are consistent, meaning that you can choose the languages and dependencies you want for your project without worrying about system conflicts.
      • Environments are isolated, making it easier to troubleshoot issues and onboard new team members.
      • Environments are portable, allowing you to package and share your code with others.

      This tutorial will show you how to set up a development environment for a Ruby on Rails application using Docker. You will create multiple containers – for the application itself, the PostgreSQL database, Redis, and a Sidekiq service – with Docker Compose. The setup will do the following:

      • Synchronize the application code on the host with the code in the container to facilitate changes during development.
      • Persist application data between container restarts.
      • Configure Sidekiq workers to process jobs as expected.

      At the end of this tutorial, you will have a working shark information application running on Docker containers:

      Sidekiq App Home

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Cloning the Project and Adding Dependencies

      Our first step will be to clone the rails-sidekiq repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Add Sidekiq and Redis to a Ruby on Rails Application, which explains how to add Sidekiq to an existing Rails 5 project.

      Clone the repository into a directory called rails-docker:

      • git clone https://github.com/do-community/rails-sidekiq.git rails-docker

      Navigate to the rails-docker directory:

      • cd rails-docker

      In this tutorial, we will use PostgreSQL as our database. In order to work with PostgreSQL instead of SQLite 3, you will need to add the pg gem to the project’s dependencies, which are listed in its Gemfile. Open that file for editing using nano or your favorite editor:

      • nano Gemfile

      Add the gem anywhere in the main project dependencies (above development dependencies):

      ~/rails-docker/Gemfile

      . . . 
      # Reduces boot times through caching; required in config/boot.rb
      gem 'bootsnap', '>= 1.1.0', require: false
      gem 'sidekiq', '~>6.0.0'
      gem 'pg', '~>1.1.3'
      
      group :development, :test do
      . . .
      

      We can also comment out the sqlite gem, since we won’t be using it anymore:

      ~/rails-docker/Gemfile

      . . . 
      # Use sqlite3 as the database for Active Record
      # gem 'sqlite3'
      . . .
      

      Finally, comment out the spring-watcher-listen gem under development:

      ~/rails-docker/Gemfile

      . . . 
      gem 'spring'
      # gem 'spring-watcher-listen', '~> 2.0.0'
      . . .
      

      If we do not disable this gem, we will see persistent error messages when accessing the Rails console. These error messages derive from the fact that this gem has Rails use listen to watch for changes in development, rather than polling the filesystem for changes. Because this gem watches the root of the project, including the node_modules directory, it will throw error messages about which directories are being watched, cluttering the console. If you are concerned about conserving CPU resources, however, disabling this gem may not work for you. In this case, it may be a good idea to upgrade your Rails application to Rails 6.

      Save and close the file when you are finished editing.

      With your project repository in place, the pg gem added to your Gemfile, and the spring-watcher-listen gem commented out, you are ready to configure your application to work with PostgreSQL.

      Step 2 — Configuring the Application to Work with PostgreSQL and Redis

      To work with PostgreSQL and Redis in development, we will want to do the following:

      • Configure the application to work with PostgreSQL as the default adapter.
      • Add an .env file to the project with our database username and password and Redis host.
      • Create an init.sql script to create a sammy user for the database.
      • Add an initializer for Sidekiq so that it can work with our containerized redis service.
      • Add the .env file and other relevant files to the project’s gitignore and dockerignore files.
      • Create database seeds so that our application has some records for us to work with when we start it up.

      First, open your database configuration file, located at config/database.yml:

      • nano config/database.yml

      Currently, the file includes the following default settings, which are applied in the absence of other settings:

      ~/rails-docker/config/database.yml

      default: &default
        adapter: sqlite3
        pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
        timeout: 5000
      

      We need to change these to reflect the fact that we will use the postgresql adapter, since we will be creating a PostgreSQL service with Docker Compose to persist our application data.

      Delete the code that sets SQLite as the adapter and replace it with the following settings, which will set the adapter appropriately and the other variables necessary to connect:

      ~/rails-docker/config/database.yml

      default: &default
        adapter: postgresql
        encoding: unicode
        database: <%= ENV['DATABASE_NAME'] %>
        username: <%= ENV['DATABASE_USER'] %>
        password: <%= ENV['DATABASE_PASSWORD'] %>
        port: <%= ENV['DATABASE_PORT'] || '5432' %>
        host: <%= ENV['DATABASE_HOST'] %>
        pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
        timeout: 5000
      . . .
      

      Next, we’ll modify the setting for the development environment, since this is the environment we’re using in this setup.

      Delete the existing SQLite database configuration so that section looks like this:

      ~/rails-docker/config/database.yml

      . . . 
      development:
        <<: *default
      . . .
      

      Finally, delete the database settings for the production and test environments as well:

      ~/rails-docker/config/database.yml

      . . . 
      test:
        <<: *default
      
      production:
        <<: *default
      . . . 
      

      These modifications to our default database settings will allow us to set our database information dynamically using environment variables defined in .env files, which will not be committed to version control.

      Save and close the file when you are finished editing.

      Note that if you are creating a Rails project from scratch, you can set the adapter with the rails new command, as described in Step 3 of How To Use PostgreSQL with Your Ruby on Rails Application on Ubuntu 18.04. This will set your adapter in config/database.yml and automatically add the pg gem to the project.

      Now that we have referenced our environment variables, we can create a file for them with our preferred settings. Extracting configuration settings in this way is part of the 12 Factor approach to application development, which defines best practices for application resiliency in distributed environments. Now, when we are setting up our production and test environments in the future, configuring our database settings will involve creating additional .env files and referencing the appropriate file in our Docker Compose files.
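
      For instance, when you add a production environment later, the Compose file could point the app service at a dedicated file. The following is only a hypothetical sketch; the .env.production filename is a placeholder, not something this tutorial creates:

      services:
        app:
          env_file: .env.production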

      Open an .env file:

      • nano .env

      Add the following values to the file:

      ~/rails-docker/.env

      DATABASE_NAME=rails_development
      DATABASE_USER=sammy
      DATABASE_PASSWORD=shark
      DATABASE_HOST=database
      REDIS_HOST=redis
      

      In addition to setting our database name, user, and password, we’ve also set a value for DATABASE_HOST. The value, database, refers to the PostgreSQL service named database that we will create using Docker Compose. We’ve also set a REDIS_HOST to specify our redis service.

      Save and close the file when you are finished editing.
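
      Once the containers are up and running in Step 5, you can confirm that these values reach the app container and that the database hostname resolves to the service. These commands are an optional check, assuming the service names used in this tutorial:

      • docker-compose exec app env | grep DATABASE
      • docker-compose exec app ping -c 1 database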

      To create the sammy database user, we can write an init.sql script that we can then mount to the database container when it starts.

      Open the script file:

      • nano init.sql

      Add the following code to create a sammy user with administrative privileges:

      ~/rails-docker/init.sql

      CREATE USER sammy;
      ALTER USER sammy WITH SUPERUSER;
      

      This script will create the appropriate user on the database and grant this user administrative privileges.

      Set appropriate permissions on the script:

      • chmod +x init.sql

      Next, we’ll configure Sidekiq to work with our containerized redis service. We can add an initializer that sets a value for the Redis host to the config/initializers directory, where Rails looks for configuration settings once frameworks and plugins are loaded.

      Open a sidekiq.rb file to specify these settings:

      • nano config/initializers/sidekiq.rb

      Add the following code to the file to specify values for a REDIS_HOST and REDIS_PORT:

      ~/rails-docker/config/initializers/sidekiq.rb

      Sidekiq.configure_server do |config|
        config.redis = {
          host: ENV['REDIS_HOST'],
          port: ENV['REDIS_PORT'] || '6379'
        }
      end
      
      Sidekiq.configure_client do |config|
        config.redis = {
          host: ENV['REDIS_HOST'],
          port: ENV['REDIS_PORT'] || '6379'
        }
      end
      

      Much like our database configuration settings, these settings give us the ability to set our host and port parameters dynamically, allowing us to substitute the appropriate values at runtime without having to modify the application code itself. In addition to a REDIS_HOST, we have a default value set for REDIS_PORT in case it is not set elsewhere.

      Save and close the file when you are finished editing.
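
      If you would like to confirm these settings once the stack from Step 5 is running, one option is to ping Redis through Sidekiq’s connection pool from a Rails console. This check is a suggestion rather than part of the original setup:

      • docker-compose exec app bundle exec rails console

      Then, at the console prompt:

      Sidekiq.redis { |conn| conn.ping }

      If the initializer’s host and port settings are correct, this should return "PONG".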

      Next, to ensure that our application’s sensitive data is not copied to version control, we can add .env to our project’s .gitignore file, which tells Git which files to ignore in our project. Open the file for editing:

      • nano .gitignore

      At the bottom of the file, add an entry for .env:

      ~/rails-docker/.gitignore

      yarn-debug.log*
      .yarn-integrity
      .env
      

      Save and close the file when you are finished editing.

      Next, we’ll create a .dockerignore file to set what should not be copied to our containers. Open the file for editing:

      • nano .dockerignore

      Add the following code to the file, which tells Docker to ignore some of the things we don’t need copied to our containers:

      ~/rails-docker/.dockerignore

      .DS_Store
      .bin
      .git
      .gitignore
      .bundleignore
      .bundle
      .byebug_history
      .rspec
      tmp
      log
      test
      config/deploy
      public/packs
      public/packs-test
      node_modules
      yarn-error.log
      coverage/
      

      Add .env to the bottom of this file as well:

      ~/rails-docker/.dockerignore

      . . .
      yarn-error.log
      coverage/
      .env
      

      Save and close the file when you are finished editing.

      As a final step, we will create some seed data so that our application has a few records when we start it up.

      Open a file for the seed data in the db directory:

      • nano db/seeds.rb

      Add the following code to the file to create four demo sharks and one sample post:

      ~/rails-docker/db/seeds.rb

      # Adding demo sharks
      sharks = Shark.create([{ name: 'Great White', facts: 'Scary' }, { name: 'Megalodon', facts: 'Ancient' }, { name: 'Hammerhead', facts: 'Hammer-like' }, { name: 'Speartooth', facts: 'Endangered' }])
      Post.create(body: 'These sharks are misunderstood', shark: sharks.first)
      

      This seed data will create four sharks and one post that is associated with the first shark.

      Save and close the file when you are finished editing.

      With your application configured to work with PostgreSQL and your environment variables created, you are ready to write your application Dockerfile.

      Step 3 — Writing the Dockerfile and Entrypoint Scripts

      Your Dockerfile specifies what will be included in your application container when it is created. Using a Dockerfile allows you to define your container environment and avoid discrepancies with dependencies or runtime versions.

      Following these guidelines on building optimized containers, we will make our image as efficient as possible by using an Alpine base and keeping our image layers to a minimum.

      Open a Dockerfile in your current directory:

      • nano Dockerfile

      Docker images are created using a succession of layered images that build on one another. Our first step will be to add the base image for our application, which will form the starting point of the application build.

      Add the following code to the file to add the Ruby alpine image as a base:

      ~/rails-docker/Dockerfile

      FROM ruby:2.5.1-alpine
      

      The alpine image is derived from the Alpine Linux project, and will help us keep our image size down. For more information about whether or not the alpine image is the right choice for your project, please see the full discussion under the Image Variants section of the Docker Hub Ruby image page.

      Some factors to consider when using alpine in development:

      • Keeping image size down will decrease page and resource load times, particularly if you also keep volumes to a minimum. This helps keep your user experience in development quick and closer to what it would be if you were working locally in a non-containerized environment.
      • Having parity between development and production images facilitates successful deployments. Since teams often opt to use Alpine images in production for speed benefits, developing with an Alpine base helps offset issues when moving to production.

      Next, set an environment variable to specify the Bundler version:

      ~/rails-docker/Dockerfile

      . . .
      ENV BUNDLER_VERSION=2.0.2
      

      This is one of the steps we will take to avoid version conflicts between the default bundler version available in our environment and our application code, which requires Bundler 2.0.2.

      Next, add the packages that you need to work with the application to the Dockerfile:

      ~/rails-docker/Dockerfile

      . . . 
      RUN apk add --update --no-cache \
            binutils-gold \
            build-base \
            curl \
            file \
            g++ \
            gcc \
            git \
            less \
            libstdc++ \
            libffi-dev \
            libc-dev \
            linux-headers \
            libxml2-dev \
            libxslt-dev \
            libgcrypt-dev \
            make \
            netcat-openbsd \
            nodejs \
            openssl \
            pkgconfig \
            postgresql-dev \
            python \
            tzdata \
            yarn
      

      These packages include nodejs and yarn, among others. Since our application serves assets with webpack, we need to include Node.js and Yarn for the application to work as expected.

      Keep in mind that the alpine image is extremely minimal: the packages listed here are not exhaustive of what you might want or need in development when you are containerizing your own application.

      Next, install the appropriate bundler version:

      ~/rails-docker/Dockerfile

      . . . 
      RUN gem install bundler -v 2.0.2
      

      This step will guarantee parity between our containerized environment and the specifications in this project’s Gemfile.lock file.

      Now set the working directory for the application on the container:

      ~/rails-docker/Dockerfile

      . . .
      WORKDIR /app
      

      Copy over your Gemfile and Gemfile.lock:

      ~/rails-docker/Dockerfile

      . . .
      COPY Gemfile Gemfile.lock ./
      

      Copying these files as an independent step, followed by bundle install, means that the project gems do not need to be rebuilt every time you make changes to your application code. This will work in conjunction with the gem volume that we will include in our Compose file, which will mount gems to your application container in cases where the service is recreated but project gems remain the same.
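
      In practice, this means that a rebuild after editing only your application code reuses the cached gem layer, while a change to the Gemfile invalidates it. As a hypothetical illustration of the two cases:

      • docker-compose build app
      • docker-compose build --no-cache app

      The first command lets Docker reuse any unchanged layers; the second forces every instruction, including bundle install, to run again.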

      Next, set the configuration options for the nokogiri gem build:

      ~/rails-docker/Dockerfile

      . . . 
      RUN bundle config build.nokogiri --use-system-libraries
      . . .
      

      This step builds nokogiri with the libxml2 and libxslt library versions that we added to the application container in the RUN apk add… step above.

      Next, install the project gems:

      ~/rails-docker/Dockerfile

      . . . 
      RUN bundle check || bundle install
      

      This instruction checks that the gems are not already installed before installing them.

      Next, we’ll repeat the same procedure that we used with gems with our JavaScript packages and dependencies. First we’ll copy package metadata, then we’ll install dependencies, and finally we’ll copy the application code into the container image.

      To get started with the JavaScript section of our Dockerfile, copy package.json and yarn.lock from your current project directory on the host to the container:

      ~/rails-docker/Dockerfile

      . . . 
      COPY package.json yarn.lock ./
      

      Then install the required packages with yarn install:

      ~/rails-docker/Dockerfile

      . . . 
      RUN yarn install --check-files
      

      This instruction includes a --check-files flag with the yarn command, a feature that makes sure any previously installed files have not been removed. As in the case of our gems, we will manage the persistence of the packages in the node_modules directory with a volume when we write our Compose file.
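
      If you do change the project’s Node dependencies later, one way to reset this state is to remove the named volume before rebuilding. A sketch, assuming the rails-docker project directory name that Compose uses to prefix volume names:

      • docker-compose down
      • docker volume rm rails-docker_node_modules
      • docker-compose up -d --build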

      Finally, copy over the rest of the application code and start the application with an entrypoint script:

      ~/rails-docker/Dockerfile

      . . . 
      COPY . ./ 
      
      ENTRYPOINT ["./entrypoints/docker-entrypoint.sh"]
      

      Using an entrypoint script allows us to run the container as an executable.

      The final Dockerfile will look like this:

      ~/rails-docker/Dockerfile

      FROM ruby:2.5.1-alpine
      
      ENV BUNDLER_VERSION=2.0.2
      
      RUN apk add --update --no-cache \
            binutils-gold \
            build-base \
            curl \
            file \
            g++ \
            gcc \
            git \
            less \
            libstdc++ \
            libffi-dev \
            libc-dev \
            linux-headers \
            libxml2-dev \
            libxslt-dev \
            libgcrypt-dev \
            make \
            netcat-openbsd \
            nodejs \
            openssl \
            pkgconfig \
            postgresql-dev \
            python \
            tzdata \
            yarn
      
      RUN gem install bundler -v 2.0.2
      
      WORKDIR /app
      
      COPY Gemfile Gemfile.lock ./
      
      RUN bundle config build.nokogiri --use-system-libraries
      
      RUN bundle check || bundle install 
      
      COPY package.json yarn.lock ./
      
      RUN yarn install --check-files
      
      COPY . ./ 
      
      ENTRYPOINT ["./entrypoints/docker-entrypoint.sh"]
      

      Save and close the file when you are finished editing.

      Next, create a directory called entrypoints for the entrypoint scripts:

      • mkdir entrypoints

      This directory will include our main entrypoint script and a script for our Sidekiq service.

      Open the file for the application entrypoint script:

      • nano entrypoints/docker-entrypoint.sh

      Add the following code to the file:

      ~/rails-docker/entrypoints/docker-entrypoint.sh

      #!/bin/sh
      
      set -e
      
      if [ -f tmp/pids/server.pid ]; then
        rm tmp/pids/server.pid
      fi
      
      bundle exec rails s -b 0.0.0.0
      

      The first important line is set -e, which tells the /bin/sh shell that runs the script to fail fast if there are any problems later in the script. Next, the script removes tmp/pids/server.pid if it is present, ensuring that a stale PID file from a previous run won’t prevent the server from starting. Finally, the script starts the Rails server with the bundle exec rails s command. We use the -b option with this command to bind the server to all IP addresses rather than to the default, localhost, so that the server accepts requests arriving at the container’s IP rather than only those addressed to localhost.

      Save and close the file when you are finished editing.

      Make the script executable:

      • chmod +x entrypoints/docker-entrypoint.sh

      Next, we will create a script to start our sidekiq service, which will process our Sidekiq jobs. For more information about how this application uses Sidekiq, please see How To Add Sidekiq and Redis to a Ruby on Rails Application.

      Open a file for the Sidekiq entrypoint script:

      • nano entrypoints/sidekiq-entrypoint.sh

      Add the following code to the file to start Sidekiq:

      ~/rails-docker/entrypoints/sidekiq-entrypoint.sh

      #!/bin/sh
      
      set -e
      
      if [ -f tmp/pids/server.pid ]; then
        rm tmp/pids/server.pid
      fi
      
      bundle exec sidekiq
      

      This script starts Sidekiq in the context of our application bundle.

      Save and close the file when you are finished editing. Make it executable:

      • chmod +x entrypoints/sidekiq-entrypoint.sh

      With your entrypoint scripts and Dockerfile in place, you are ready to define your services in your Compose file.

      Step 4 — Defining Services with Docker Compose

      Using Docker Compose, we will be able to run the multiple containers required for our setup. We will define our Compose services in our main docker-compose.yml file. A service in Compose is a running container, and service definitions — which you will include in your docker-compose.yml file — contain information about how each container image will run. The Compose tool allows you to define multiple services to build multi-container applications.

      Our application setup will include the following services:

      • The application itself
      • The PostgreSQL database
      • Redis
      • Sidekiq

      We will also include a bind mount as part of our setup, so that any code changes we make during development will be immediately synchronized with the containers that need access to this code.

      Note that we are not defining a test service, since testing is outside of the scope of this tutorial and series, but you could do so by following the precedent we are using here for the sidekiq service.

      Open the docker-compose.yml file:

      • nano docker-compose.yml

      First, add the application service definition:

      ~/rails-docker/docker-compose.yml

      version: '3.4'
      
      services:
        app: 
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:
            - database
            - redis
          ports: 
            - "3000:3000"
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
      

      The app service definition includes the following options:

      • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
      • context: This defines the build context for the image build — in this case, the current project directory.
      • dockerfile: This specifies the Dockerfile in your current project directory as the file Compose will use to build the application image.
      • depends_on: This tells Compose to start the database and redis containers before app. Note that depends_on controls startup order only; it does not wait for those services to be ready to accept connections.
      • ports: This maps port 3000 on the host to port 3000 on the container.
      • volumes: We are including two types of mounts here:
        • The first is a bind mount that mounts our application code on the host to the /app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be populated immediately in the container.
        • The second is a named volume, gem_cache. When the bundle install instruction runs in the container, it will install the project gems. Adding this volume means that if you recreate the container, the gems will be mounted to the new container. This mount presumes that there haven’t been any changes to the project, so if you do make changes to your project gems in development, you will need to remember to delete this volume before recreating your application service.
        • The third volume is a named volume for the node_modules directory. Rather than having node_modules mounted to the host, which can lead to package discrepancies and permissions conflicts in development, this volume will ensure that the packages in this directory are persisted and reflect the current state of the project. Again, if you modify the project’s Node dependencies, you will need to remove and recreate this volume.
      • env_file: This tells Compose that we would like to add environment variables from a file called .env located in the build context.
      • environment: Using this option allows us to set a non-sensitive environment variable, passing information about the Rails environment to the container.
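
      At any point while editing, you can have Compose validate the file and print the fully resolved configuration, including the values substituted from .env:

      • docker-compose config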

      Next, below the app service definition, add the following code to define your database service:

      ~/rails-docker/docker-compose.yml

      . . .
        database:
          image: postgres:12.1
          volumes:
            - db_data:/var/lib/postgresql/data
            - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      

      Unlike the app service, the database service pulls a postgres image directly from Docker Hub. Note that we’re also pinning the version here, rather than setting it to latest or not specifying it (which defaults to latest). This way, we can ensure that this setup works with the versions specified here and avoid unexpected surprises with breaking code changes to the image.

      We are also including a db_data volume here, which will persist our application data in between container starts. Additionally, we’ve mounted our init.sql startup script to the appropriate directory, docker-entrypoint-initdb.d/, on the container in order to create our sammy database user. After the image entrypoint creates the default postgres user and database, it will run any scripts found in the docker-entrypoint-initdb.d/ directory, which you can use for necessary initialization tasks. For more details, look at the Initialization scripts section of the PostgreSQL image documentation.
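
      Once the database service is running in Step 5, you can confirm that the script created the sammy user. This verification is optional and uses the default postgres superuser that the image creates:

      • docker-compose exec database psql -U postgres -c '\du'

      The role list in the output should include sammy with the Superuser attribute.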

      Next, add the redis service definition:

      ~/rails-docker/docker-compose.yml

      . . .
        redis:
          image: redis:5.0.7
      

      Like the database service, the redis service uses an image from Docker Hub. In this case, we are not persisting the Sidekiq job cache.

      Finally, add the sidekiq service definition:

      ~/rails-docker/docker-compose.yml

      . . .
        sidekiq:
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:
            - app      
            - database
            - redis
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      

      Our sidekiq service resembles our app service in a few respects: it uses the same build context and image, environment variables, and volumes. However, it is dependent on the app, redis, and database services, and so will be the last to start. Additionally, it uses an entrypoint that will override the entrypoint set in the Dockerfile. This entrypoint setting points to entrypoints/sidekiq-entrypoint.sh, which includes the appropriate command to start the sidekiq service.

      As a final step, add the volume definitions below the sidekiq service definition:

      ~/rails-docker/docker-compose.yml

      . . .
      volumes:
        gem_cache:
        db_data:
        node_modules:
      

      Our top-level volumes key defines the volumes gem_cache, db_data, and node_modules. When Docker creates volumes, the contents of the volume are stored in a part of the host filesystem, /var/lib/docker/volumes/, that’s managed by Docker. The contents of each volume are stored in a directory under /var/lib/docker/volumes/ and get mounted to any container that uses the volume. In this way, the shark information data that our users will create will persist in the db_data volume even if we remove and recreate the database service.

      The finished file will look like this:

      ~/rails-docker/docker-compose.yml

      version: '3.4'
      
      services:
        app: 
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:     
            - database
            - redis
          ports: 
            - "3000:3000"
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
      
        database:
          image: postgres:12.1
          volumes:
            - db_data:/var/lib/postgresql/data
            - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      
        redis:
          image: redis:5.0.7
      
        sidekiq:
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:
            - app      
            - database
            - redis
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      
      volumes:
        gem_cache:
        db_data:
        node_modules:     
      

      Save and close the file when you are finished editing.

      With your service definitions written, you are ready to start the application.

      Step 5 — Testing the Application

      With your docker-compose.yml file in place, you can create your services with the docker-compose up command and seed your database. You can also test that your data will persist by stopping and removing your containers with docker-compose down and recreating them.

      First, build the container images and create the services by running docker-compose up with the -d flag, which will run the containers in the background:

      • docker-compose up -d

      You will see output that your services have been created:

      Output

      Creating rails-docker_database_1 ... done
      Creating rails-docker_redis_1    ... done
      Creating rails-docker_app_1      ... done
      Creating rails-docker_sidekiq_1  ... done

      You can also get more detailed information about the startup processes by displaying the log output from the services:

      • docker-compose logs

      You will see something like this if everything has started correctly:

      Output

      sidekiq_1   | 2019-12-19T15:05:26.365Z pid=6 tid=grk7r6xly INFO: Booting Sidekiq 6.0.3 with redis options {:host=>"redis", :port=>"6379", :id=>"Sidekiq-server-PID-6", :url=>nil}
      sidekiq_1   | 2019-12-19T15:05:31.097Z pid=6 tid=grk7r6xly INFO: Running in ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux-musl]
      sidekiq_1   | 2019-12-19T15:05:31.097Z pid=6 tid=grk7r6xly INFO: See LICENSE and the LGPL-3.0 for licensing details.
      sidekiq_1   | 2019-12-19T15:05:31.097Z pid=6 tid=grk7r6xly INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org
      app_1       | => Booting Puma
      app_1       | => Rails 5.2.3 application starting in development
      app_1       | => Run `rails server -h` for more startup options
      app_1       | Puma starting in single mode...
      app_1       | * Version 3.12.1 (ruby 2.5.1-p57), codename: Llamas in Pajamas
      app_1       | * Min threads: 5, max threads: 5
      app_1       | * Environment: development
      app_1       | * Listening on tcp://0.0.0.0:3000
      app_1       | Use Ctrl-C to stop
      . . .
      database_1  | PostgreSQL init process complete; ready for start up.
      database_1  |
      database_1  | 2019-12-19 15:05:20.160 UTC [1] LOG: starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
      database_1  | 2019-12-19 15:05:20.160 UTC [1] LOG: listening on IPv4 address "0.0.0.0", port 5432
      database_1  | 2019-12-19 15:05:20.160 UTC [1] LOG: listening on IPv6 address "::", port 5432
      database_1  | 2019-12-19 15:05:20.163 UTC [1] LOG: listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
      database_1  | 2019-12-19 15:05:20.182 UTC [63] LOG: database system was shut down at 2019-12-19 15:05:20 UTC
      database_1  | 2019-12-19 15:05:20.187 UTC [1] LOG: database system is ready to accept connections
      . . .
      redis_1     | 1:M 19 Dec 2019 15:05:18.822 * Ready to accept connections

      You can also check the status of your containers with docker-compose ps:

      • docker-compose ps

      You will see output indicating that your containers are running:

      Output

      Name                      Command                          State   Ports
      -----------------------------------------------------------------------------------------
      rails-docker_app_1        ./entrypoints/docker-resta ...   Up      0.0.0.0:3000->3000/tcp
      rails-docker_database_1   docker-entrypoint.sh postgres    Up      5432/tcp
      rails-docker_redis_1      docker-entrypoint.sh redis ...   Up      6379/tcp
      rails-docker_sidekiq_1    ./entrypoints/sidekiq-entr ...   Up

      Next, create and seed your database and run migrations on it with the following docker-compose exec command:

      • docker-compose exec app bundle exec rake db:setup db:migrate

      The docker-compose exec command allows you to run commands in your services; we are using it here to run rake db:setup and db:migrate in the context of our application bundle to create and seed the database and run migrations. As you work in development, docker-compose exec will prove useful to you when you want to run migrations against your development database.

      You will see the following output after running this command:

      Output

      Created database 'rails_development'
      Database 'rails_development' already exists
      -- enable_extension("plpgsql")
         -> 0.0140s
      -- create_table("endangereds", {:force=>:cascade})
         -> 0.0097s
      -- create_table("posts", {:force=>:cascade})
         -> 0.0108s
      -- create_table("sharks", {:force=>:cascade})
         -> 0.0050s
      -- enable_extension("plpgsql")
         -> 0.0173s
      -- create_table("endangereds", {:force=>:cascade})
         -> 0.0088s
      -- create_table("posts", {:force=>:cascade})
         -> 0.0128s
      -- create_table("sharks", {:force=>:cascade})
         -> 0.0072s
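
      The same docker-compose exec pattern works for other one-off tasks in the app container. For example, these optional commands check migration status and list the application’s routes:

      • docker-compose exec app bundle exec rake db:migrate:status
      • docker-compose exec app bundle exec rails routes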

      With your services running, you can visit localhost:3000 or http://your_server_ip:3000 in the browser. You will see a landing page that looks like this:

      Sidekiq App Home

      We can now test data persistence. Create a new shark by clicking on the Get Shark Info button, which will take you to the sharks/index route:

      Sharks Index Page with Seeded Data

      To verify that the application is working, we can add some demo information to it. Click on New Shark. You will be prompted for a username (sammy) and password (shark), thanks to the project’s authentication settings.

      On the New Shark page, input “Mako” into the Name field and “Fast” into the Facts field.

      Click on the Create Shark button to create the shark. Once you have created the shark, click Home on the site’s navbar to get back to the main application landing page. We can now test that Sidekiq is working.

      Click on the Which Sharks Are in Danger? button. Since you have not uploaded any endangered sharks, this will take you to the endangered index view:

      Endangered Index View

      Click on Import Endangered Sharks to import the sharks. You will see a status message telling you that the sharks have been imported:

      Begin Import

      You will also see the beginning of the import. Refresh your page to see the entire table:

      Refresh Table

      Thanks to Sidekiq, our large batch upload of endangered sharks has succeeded without locking up the browser or interfering with other application functionality.

      Click on the Home button at the bottom of the page, which will bring you back to the application main page:

      Sidekiq App Home

      From here, click on Which Sharks Are in Danger? again. You will see the uploaded sharks once again.

      Now that we know our application is working properly, we can test our data persistence.

      Back at your terminal, type the following command to stop and remove your containers:

      • docker-compose down

      Note that we are not including the --volumes option; hence, our db_data volume is not removed.

      The following output confirms that your containers and network have been removed:

      Output

      Stopping rails-docker_sidekiq_1  ... done
      Stopping rails-docker_app_1      ... done
      Stopping rails-docker_database_1 ... done
      Stopping rails-docker_redis_1    ... done
      Removing rails-docker_sidekiq_1  ... done
      Removing rails-docker_app_1      ... done
      Removing rails-docker_database_1 ... done
      Removing rails-docker_redis_1    ... done
      Removing network rails-docker_default

      Recreate the containers:

      • docker-compose up -d

      Open the Rails console on the app container with docker-compose exec and bundle exec rails console:

      • docker-compose exec app bundle exec rails console

      At the prompt, inspect the last Shark record in the database:

      • Shark.last

      You will see the record you just created:

      IRB session

      Shark Load (1.0ms)  SELECT "sharks".* FROM "sharks" ORDER BY "sharks"."id" DESC LIMIT $1  [["LIMIT", 1]]
      => #<Shark id: 5, name: "Mako", facts: "Fast", created_at: "2019-12-20 14:03:28", updated_at: "2019-12-20 14:03:28">

      You can then check to see that your Endangered sharks have been persisted with the following command:

      • Endangered.all.count

      IRB session

      (0.8ms)  SELECT COUNT(*) FROM "endangereds"
      => 73

      Your db_data volume was successfully mounted to the recreated database service, making it possible for your app service to access the saved data. If you navigate directly to the shark index page by visiting localhost:3000/sharks or http://your_server_ip:3000/sharks, you will also see that record displayed:

      Sharks Index Page with Mako

      Your endangered sharks will also be at the localhost:3000/endangered/data or http://your_server_ip:3000/endangered/data view:

      Refresh Table

      Your application is now running on Docker containers with data persistence and code synchronization enabled. You can go ahead and test out local code changes on your host, which will be synchronized to your container thanks to the bind mount we defined as part of the app service.

      Conclusion

      By following this tutorial, you have created a development setup for your Rails application using Docker containers. You’ve made your project more modular and portable by extracting sensitive information and decoupling your application’s state from your code. You have also configured a boilerplate docker-compose.yml file that you can revise as your development needs and requirements change.

      As you develop, you may be interested in learning more about designing applications for containerized and Cloud Native workflows. Please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes for more information on these topics. Or, if you would like to invest in a Kubernetes learning sequence, please have a look at our Kubernetes for Full-Stack Developers curriculum.

      To learn more about the application code itself, please see the other tutorials in this series.


