      Containerizing a Node.js Application for Development With Docker Compose


      Introduction

      If you are actively developing an application, using Docker can simplify your workflow and the process of deploying your application to production. Working with containers in development offers the following benefits:

      • Environments are consistent, meaning that you can choose the languages and dependencies you want for your project without worrying about system conflicts.
      • Environments are isolated, making it easier to troubleshoot issues and onboard new team members.
      • Environments are portable, allowing you to package and share your code with others.

      This tutorial will show you how to set up a development environment for a Node.js application using Docker. You will create two containers — one for the Node application and another for the MongoDB database — with Docker Compose. Because this application works with Node and MongoDB, our setup will do the following:

      • Synchronize the application code on the host with the code in the container to facilitate changes during development.
      • Ensure that changes to the application code work without a restart.
      • Create a user and password-protected database for the application's data.
      • Persist this data.

      At the end of this tutorial, you will have a working shark information application running on Docker containers:

      Complete Shark Collection

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Cloning the Project and Modifying Dependencies

      The first step in building this setup will be to clone the project code and modify its package.json file, which includes the project's dependencies. We will add nodemon to the project's devDependencies, specifying that we will be using it during development. Running the application with nodemon ensures that it will be automatically restarted whenever you make changes to your code.

      First, clone the nodejs-mongo-mongoose repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Integrate MongoDB with Your Node Application, which explains how to integrate a MongoDB database with an existing Node application using Mongoose.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/nodejs-mongo-mongoose.git node_project

      Navigate to the node_project directory:

      • cd node_project

      Open the project's package.json file using nano or your favorite editor:

      Beneath the project dependencies and above the closing curly brace, create a new devDependencies object that includes nodemon:

      ~/node_project/package.json

      ...
      "dependencies": {
          "ejs": "^2.6.1",
          "express": "^4.16.4",
          "mongoose": "^5.4.10"
        },
        "devDependencies": {
          "nodemon": "^1.18.10"
        }    
      }
      

      Save and close the file when you are finished editing.
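
      If you would like to see nodemon in action before containerizing anything — an optional sketch, assuming you have run npm install and still have a local MongoDB instance available from the previous tutorial in this series — you can start the application through the locally installed binary:

      • ./node_modules/.bin/nodemon app.js

      Saving a change to any project file will then trigger an automatic restart of the server.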

      With your project code in place and its dependencies modified, you can move on to refactoring the code for a containerized workflow.

      Step 2 — Configuring Your Application to Work with Containers

      Modifying our application for a containerized workflow means making our code more modular. Containers offer portability between environments, and our code should reflect that by remaining as decoupled from the underlying operating system as possible. To achieve this, we will refactor our code to make greater use of Node's process.env property, which returns an object with information about your user environment at runtime. We can use this object in our code to dynamically assign configuration information at runtime with environment variables.

      Let's begin with app.js, our main application entry point. Open the file:

      Inside, you will see a definition for a port constant, as well as a listen function that uses this constant to specify the port the application will listen on:

      ~/node_project/app.js

      ...
      const port = 8080;
      ...
      app.listen(port, function () {
        console.log('Example app listening on port 8080!');
      });
      

      Let's redefine the port constant to allow for dynamic assignment at runtime using the process.env object. Make the following changes to the constant definition and listen function:

      ~/node_project/app.js

      ...
      const port = process.env.PORT || 8080;
      ...
      app.listen(port, function () {
        console.log(`Example app listening on ${port}!`);
      });
      

      Our new constant definition assigns port dynamically using the value passed in at runtime or 8080. Similarly, we have rewritten the listen function to use a template literal, which will interpolate the port value when listening for connections. Because we will be mapping our ports elsewhere, these revisions will prevent our having to continuously revise this file as our environment changes.
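
      To see the new constant in action — an optional check, assuming you have run npm install and a local MongoDB instance is still available — you can override PORT on the command line when starting the application:

      • PORT=3000 node app.js

      With this override, the listen callback logs Example app listening on 3000! instead of falling back to the default 8080.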

      When you are finished editing, save and close the file.

      Next, we will modify our database connection information to remove any configuration credentials. Open the db.js file, which contains this information:

      Currently, the file does the following things:

      • Imports Mongoose, the Object Document Mapper (ODM) that we're using to create schemas and models for our application data.
      • Sets the database credentials as constants, including the username and password.
      • Connects to the database using the mongoose.connect method.

      For more information about the file, please see Step 3 of How To Integrate MongoDB with Your Node Application.

      Our first step in modifying the file will be to redefine the constants that include sensitive information. Currently, these constants look like this:

      ~/node_project/db.js

      ...
      const MONGO_USERNAME = 'sammy';
      const MONGO_PASSWORD = 'your_password';
      const MONGO_HOSTNAME = '127.0.0.1';
      const MONGO_PORT = '27017';
      const MONGO_DB = 'sharkinfo';
      ...
      

      Instead of hardcoding this information, you can use the process.env object to capture the runtime values for these constants. Modify the block to look like this:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      ...
      

      Save and close the file when you are finished editing.

      At this point, you have modified db.js to work with your application's environment variables, but you still need a way to pass these variables to your application. Let's create an .env file with values that you can pass to your application at runtime.

      Open the file:

      This file will include the information that you removed from db.js: the username and password for your application's database, as well as the port setting and database name. Remember to update the username, password, and database name listed here with your own information:

      ~/node_project/.env

      MONGO_USERNAME=sammy
      MONGO_PASSWORD=your_password
      MONGO_PORT=27017
      MONGO_DB=sharkinfo
      

      Note that we have removed the host setting that originally appeared in db.js. We will now define our host at the level of the Docker Compose file, along with other information about our services and containers.

      Save and close this file when you are finished editing.

      Because your .env file contains sensitive information, you will want to ensure that it is included in your project's .dockerignore and .gitignore files so that it does not copy to your version control or containers.

      Open your .dockerignore file:

      Add the following line to the bottom of the file:

      ~/node_project/.dockerignore

      ...
      .gitignore
      .env
      

      Save and close the file when you are finished editing.

      The .gitignore file in this repository already includes .env, but feel free to check that it is there:

      ~/node_project/.gitignore

      ...
      .env
      ...
      

      At this point, you have successfully extracted sensitive information from your project code and taken measures to control how and where this information gets copied. Now you can add more robustness to your database connection code to optimize it for a containerized workflow.

      Step 3 — Modifying Database Connection Settings

      Our next step will be to make our database connection method more robust by adding code that handles cases where our application fails to connect to our database. Introducing this level of resilience into your application code is a recommended practice when working with containers using Compose.

      Open db.js for editing:

      You will see the code that we added earlier, along with the url constant for Mongo's connection URI and the Mongoose connect method:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, {useNewUrlParser: true});
      

      Currently, our connect method accepts an option that tells Mongoose to use Mongo's new URL parser. Let's add a few more options to this method to define parameters for reconnection attempts. We can do this by creating an options constant that includes the relevant information, in addition to the new URL parser option. Below your Mongo constants, add the following definition for an options constant:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500,
        connectTimeoutMS: 10000,
      };
      ...
      

      The reconnectTries option tells Mongoose to continue trying to connect indefinitely, while reconnectInterval defines the period between connection attempts in milliseconds. connectTimeoutMS defines 10 seconds as the period the Mongo driver will wait before failing the connection attempt.

      We can now use the new options constant in the Mongoose connect method to fine-tune our Mongoose connection settings. We will also add a promise to handle potential connection errors.

      Currently, the Mongoose connect method looks like this:

      ~/node_project/db.js

      ...
      mongoose.connect(url, {useNewUrlParser: true});
      

      Delete the existing connect method and replace it with the following code, which includes the options constant and a promise:

      ~/node_project/db.js

      ...
      mongoose.connect(url, options).then( function() {
        console.log('MongoDB is connected');
      })
        .catch( function(err) {
        console.log(err);
      });
      

      In the case of a successful connection, our function logs an appropriate message; otherwise it will catch and log the error, allowing us to troubleshoot.

      The final file will look like this:

      ~/node_project/db.js

      const mongoose = require('mongoose');
      
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500,
        connectTimeoutMS: 10000,
      };
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, options).then( function() {
        console.log('MongoDB is connected');
      })
        .catch( function(err) {
        console.log(err);
      });
      

      Save and close the file when you are finished editing.

      You have now added resilience to your application code to handle cases where your application might fail to connect to your database. With this code in place, you can move on to defining your services with Compose.

      Step 4 — Defining Services with Docker Compose

      With your code refactored, you are ready to write the docker-compose.yml file with your service definitions. A service in Compose is a running container, and service definitions — which you will include in your docker-compose.yml file — contain information about how each container image will run. The Compose tool allows you to define multiple services to build multi-container applications.

      Before defining our services, however, we will add a tool to our project called wait-for to ensure that our application only attempts to connect to our database once the database startup tasks are complete. This wrapper script uses netcat to poll whether or not a specific host and port are accepting TCP connections. Using it allows you to control your application's attempts to connect to your database by testing whether or not the database is ready to accept connections.
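
      As a quick illustration of the underlying check — a hypothetical one-off test, assuming netcat is installed on your host and a MongoDB instance is listening locally — nc's -z flag reports whether a port is accepting connections without sending any data:

      • nc -z 127.0.0.1 27017 && echo "accepting connections"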

      Though Compose allows you to specify dependencies between services using the depends_on option, this order is based on whether or not the container is running rather than its readiness. Using depends_on won't be optimal for our setup, since we want our application to connect only when the database startup tasks, including adding a user and password to the admin authentication database, are complete. For more information on using wait-for and other tools to control startup order, please see the relevant recommendations in the Compose documentation.

      Open a file called wait-for.sh:

      Paste the following code into the file to create the polling function:

      ~/node_project/wait-for.sh

      #!/bin/sh
      
      # original script: https://github.com/eficode/wait-for/blob/master/wait-for
      
      TIMEOUT=15
      QUIET=0
      
      echoerr() {
        if [ "$QUIET" -ne 1 ]; then printf "%sn" "$*" 1>&2; fi
      }
      
      usage() {
        exitcode="$1"
        cat << USAGE >&2
      Usage:
        $cmdname host:port [-t timeout] [-- command args]
        -q | --quiet                        Do not output any status messages
        -t TIMEOUT | --timeout=timeout      Timeout in seconds, zero for no timeout
        -- COMMAND ARGS                     Execute command with args after the test finishes
      USAGE
        exit "$exitcode"
      }
      
      wait_for() {
        for i in `seq $TIMEOUT` ; do
          nc -z "$HOST" "$PORT" > /dev/null 2>&1
      
          result=$?
          if [ $result -eq 0 ] ; then
            if [ $# -gt 0 ] ; then
              exec "$@"
            fi
            exit 0
          fi
          sleep 1
        done
        echo "Operation timed out" >&2
        exit 1
      }
      
      while [ $# -gt 0 ]
      do
        case "$1" in
          *:* )
          HOST=$(printf "%s\n" "$1"| cut -d : -f 1)
          PORT=$(printf "%s\n" "$1"| cut -d : -f 2)
          shift 1
          ;;
          -q | --quiet)
          QUIET=1
          shift 1
          ;;
          -t)
          TIMEOUT="$2"
          if [ "$TIMEOUT" = "" ]; then break; fi
          shift 2
          ;;
          --timeout=*)
          TIMEOUT="${1#*=}"
          shift 1
          ;;
          --)
          shift
          break
          ;;
          --help)
          usage 0
          ;;
          *)
          echoerr "Unknown argument: $1"
          usage 1
          ;;
        esac
      done
      
      if [ "$HOST" = "" -o "$PORT" = "" ]; then
        echoerr "Error: you need to provide a host and port to test."
        usage 2
      fi
      
      wait_for "$@"
      

      Save and close the file when you are finished adding the code.

      Make the script executable:

      • chmod +x wait-for.sh

      Next, open the docker-compose.yml file:

      First, define the nodejs application service by adding the following code to the file:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
      

      The nodejs service definition includes the following options:

      • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
      • context: This defines the build context for the image build — in this case, the current project directory.
      • dockerfile: This specifies the Dockerfile in your current project directory as the file Compose will use to build the application image. For more information about this file, please see How To Build a Node.js Application with Docker.
      • image, container_name: These apply names to the image and container.
      • restart: This defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.
      • env_file: This tells Compose that we would like to add environment variables from a file called .env, located in our build context.
      • environment: Using this option allows you to add the Mongo connection settings you defined in the .env file. Note that we are not setting NODE_ENV to development, since this is Express's default behavior if NODE_ENV is not set. When moving to production, you can set this to production to enable view caching and less verbose error messages. Also note that we have specified the db database container as the host, as discussed in Step 2.
      • ports: This maps port 80 on the host to port 8080 on the container.
      • volumes: We are including two types of mounts here:
        • The first is a bind mount that mounts our application code on the host to the /home/node/app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be populated immediately in the container.
        • The second is a named volume, node_modules. When Docker runs the npm install instruction listed in the application Dockerfile, npm creates a new node_modules directory on the container that includes the packages required to run the application. The bind mount we just created will hide this newly created node_modules directory, however. Since node_modules on the host is empty, the bind will map an empty directory to the container, overriding the new node_modules directory and preventing our application from starting. The named node_modules volume solves this problem by persisting the contents of the /home/node/app/node_modules directory and mounting it to the container, hiding the bind.

      Keep the following in mind when using this approach:

      • Your bind will mount the contents of the node_modules directory on the container to the host, and this directory will be owned by root, since the named volume was created by Docker.
      • If you have a pre-existing node_modules directory on the host, it will override the node_modules directory created on the container. The setup that we're building in this tutorial assumes that you do not have a pre-existing node_modules directory and that you won't be working with npm on your host. This is in keeping with a twelve-factor approach to application development, which minimizes dependencies between execution environments.

      • networks: This specifies that our application service will join the app-network network, which we will define at the bottom of the file.
      • command: This option lets you set the command that should be executed when Compose runs the image. Note that this will override the CMD instruction that we set in our application Dockerfile. Here, we are running the application using the wait-for script, which will poll the db service on port 27017 to test whether or not the database service is ready. Once the readiness test succeeds, the script will execute the command we have set, /home/node/app/node_modules/.bin/nodemon app.js, to start the application with nodemon. This will ensure that any future changes we make to our code are reloaded without our having to restart the application.
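
      To get a feel for the script on its own — a hypothetical standalone test, assuming a MongoDB instance is already accepting connections on localhost — you could run it directly and have it execute a trivial command once the port is ready:

      • ./wait-for.sh 127.0.0.1:27017 -- echo "MongoDB is accepting connections"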

      Next, create the db service by adding the following code below the application service definition:

      ~/node_project/docker-compose.yml

      ...
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:  
            - dbdata:/data/db   
          networks:
            - app-network  
      

      Some of the settings we defined for the nodejs service remain the same, but we've also made the following changes to the image, environment, and volumes definitions:

      • image: To create this service, Compose will pull the 4.1.8-xenial Mongo image from Docker Hub. We are pinning a specific version to avoid possible future conflicts as the Mongo image changes. For more information about version pinning, please see the Docker documentation on Dockerfile best practices.
      • MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD: The mongo image makes these environment variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the container starts. We have set MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD using the values from our .env file, which we pass to the db service using the env_file option. Doing this means that our sammy application user will be a root user on the database instance, with access to all of the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.

        Note: Keep in mind that these variables will not take effect if you start the container with an existing data directory in place.

      • dbdata:/data/db: The named volume dbdata will persist the data stored in Mongo's default data directory, /data/db. This will ensure that you don't lose data in cases where you stop or remove containers.

      We've also added the db service to the app-network network with the networks option.
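
      Note that if you have already started the stack once and want the initialization variables to be re-applied, the existing data directory has to be removed first. As a sketch — destructive, since it deletes the project's named volumes along with the containers — you could take the stack down together with its volumes before recreating it:

      • docker-compose down --volumes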

      As a final step, add the volume and network definitions to the bottom of the file:

      ~/node_project/docker-compose.yml

      ...
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      The user-defined bridge network app-network enables communication between our containers, since they are on the same Docker daemon host. This streamlines traffic and communication within the application, as it opens all ports between containers on the same bridge network while exposing no ports to the outside world. Thus, our db and nodejs containers can communicate with each other, and we only need to expose port 80 for front-end access to the application.
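
      Once the services are running (see Step 5), you can confirm this wiring yourself. As an optional check — the node_project_app-network name below assumes Compose's default project prefix, which matches the network name shown in the output later in this tutorial — inspecting the network lists both attached containers:

      • docker network inspect node_project_app-network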

      Our top-level volumes key defines the volumes dbdata and node_modules. When Docker creates volumes, the contents of the volume are stored in a part of the host filesystem, /var/lib/docker/volumes/, that is managed by Docker. The contents of each volume are stored in a directory under /var/lib/docker/volumes/ and get mounted to any container that uses the volume. In this way, the shark information data that our users will create will persist in the dbdata volume even if we remove and recreate the db container.
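
      Similarly, once the volumes exist you can see where they live on the host. As another optional check — again assuming the default node_project_ prefix; run docker volume ls to confirm the exact name on your system — docker volume inspect reports the volume's Mountpoint under /var/lib/docker/volumes/:

      • docker volume inspect node_project_dbdata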

      The finished docker-compose.yml file will look like this:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
      
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:     
            - dbdata:/data/db
          networks:
            - app-network  
      
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      Save and close the file when you are finished editing.

      With your service definitions in place, you are ready to start the application.

      Step 5 — Testing the Application

      With your docker-compose.yml file in place, you can create your services with the docker-compose up command. You can also test that your data will persist by stopping and removing your containers with docker-compose down.

      First, build the container images and create the services by running docker-compose up with the -d flag, which will then run the nodejs and db containers in the background:

      • docker-compose up -d

      You will see output confirming that your services have been created:

      Output

      ...
      Creating db ... done
      Creating nodejs ... done

      You can also get more detailed information about the startup processes by displaying the log output from the services:

      • docker-compose logs

      You will see something like this if everything has started correctly:

      Output

      ...
      nodejs | [nodemon] starting `node app.js`
      nodejs | Example app listening on 8080!
      nodejs | MongoDB is connected
      ...
      db | 2019-02-22T17:26:27.329+0000 I ACCESS [conn2] Successfully authenticated as principal sammy on admin

      You can also check the status of your containers with docker-compose ps:

      • docker-compose ps

      You will see output indicating that your containers are running:

      Output

        Name               Command               State            Ports
      ----------------------------------------------------------------------
      db       docker-entrypoint.sh mongod     Up      27017/tcp
      nodejs   ./wait-for.sh db:27017 -- ...   Up      0.0.0.0:80->8080/tcp

      With your services running, visit http://your_server_ip in the browser. You will see a landing page that looks like this:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

      Shark Info Form

      In the form, add a shark of your choosing. For the purpose of this demonstration, we will add Megalodon Shark to the Shark Name field and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      As a final step, we can test that the data you have just entered will persist if you remove your database container.

      Back at your terminal, type the following command to stop and remove your containers and network:

      • docker-compose down

      Note that we are not including the --volumes option; hence, our dbdata volume is not removed.

      The following output confirms that your containers and network have been removed:

      Output

      Stopping nodejs ... done
      Stopping db     ... done
      Removing nodejs ... done
      Removing db     ... done
      Removing network node_project_app-network

      Recreate the containers:

      • docker-compose up -d

      Now head back to the shark information form:

      Shark Info Form

      Enter a new shark of your choosing. We will go with Whale Shark and Large:

      Enter New Shark

      Once you click Submit, you will see that the new shark has been added to the shark collection in your database without the loss of the data you have already entered:

      Complete Shark Collection

      Your application is now running on Docker containers with data persistence and code synchronization enabled.

      Conclusion

      By following this tutorial, you have created a development setup for your Node application using Docker containers. You have made your project more modular and portable by extracting sensitive information and decoupling your application's state from your code. You have also configured a boilerplate docker-compose.yml file that you can revise as your development needs and requirements change.

      As you develop, you may be interested in learning more about designing applications for containerized and Cloud Native workflows. Please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes for more information on these topics.

      To learn more about the code used in this tutorial, please see How To Build a Node.js Application with Docker and How To Integrate MongoDB with Your Node Application. For information about deploying a Node application with an Nginx reverse proxy using containers, please see How To Secure a Containerized Node.js Application with Nginx, Let's Encrypt, and Docker Compose.




      Containerizing a Ruby on Rails Application for Development with Docker Compose


      Introduction

      If you are actively developing an application, using Docker can simplify your workflow and the process of deploying your application to production. Working with containers in development offers the following benefits:

      • Environments are consistent, meaning that you can choose the languages and dependencies you want for your project without worrying about system conflicts.
      • Environments are isolated, making it easier to troubleshoot issues and onboard new team members.
      • Environments are portable, allowing you to package and share your code with others.

      This tutorial will show you how to set up a development environment for a Ruby on Rails application using Docker. You will create multiple containers – for the application itself, the PostgreSQL database, Redis, and a Sidekiq service – with Docker Compose. The setup will do the following:

      • Synchronize the application code on the host with the code in the container to facilitate changes during development.
      • Persist application data between container restarts.
      • Configure Sidekiq workers to process jobs as expected.

      At the end of this tutorial, you will have a working shark information application running on Docker containers:

      Sidekiq App Home

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Cloning the Project and Adding Dependencies

      Our first step will be to clone the rails-sidekiq repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Add Sidekiq and Redis to a Ruby on Rails Application, which explains how to add Sidekiq to an existing Rails 5 project.

      Clone the repository into a directory called rails-docker:

      • git clone https://github.com/do-community/rails-sidekiq.git rails-docker

      Navigate to the rails-docker directory:

      • cd rails-docker

      In this tutorial we will use PostgreSQL as a database. In order to work with PostgreSQL instead of SQLite 3, you will need to add the pg gem to the project’s dependencies, which are listed in its Gemfile. Open that file for editing using nano or your favorite editor:

      Add the gem anywhere in the main project dependencies (above development dependencies):

      ~/rails-docker/Gemfile

      . . . 
      # Reduces boot times through caching; required in config/boot.rb
      gem 'bootsnap', '>= 1.1.0', require: false
      gem 'sidekiq', '~>6.0.0'
      gem 'pg', '~>1.1.3'
      
      group :development, :test do
      . . .
      

      We can also comment out the sqlite gem, since we won’t be using it anymore:

      ~/rails-docker/Gemfile

      . . . 
      # Use sqlite3 as the database for Active Record
      # gem 'sqlite3'
      . . .
      

      Finally, comment out the spring-watcher-listen gem under development:

      ~/rails-docker/Gemfile

      . . . 
      gem 'spring'
      # gem 'spring-watcher-listen', '~> 2.0.0'
      . . .
      

      If we do not disable this gem, we will see persistent error messages when accessing the Rails console. These error messages derive from the fact that this gem has Rails use listen to watch for changes in development, rather than polling the filesystem for changes. Because this gem watches the root of the project, including the node_modules directory, it will throw error messages about which directories are being watched, cluttering the console. If you are concerned about conserving CPU resources, however, disabling this gem may not work for you. In this case, it may be a good idea to upgrade your Rails application to Rails 6.

      Save and close the file when you are finished editing.

      With your project repository in place, the pg gem added to your Gemfile, and the spring-watcher-listen gem commented out, you are ready to configure your application to work with PostgreSQL.

      Step 2 — Configuring the Application to Work with PostgreSQL and Redis

      To work with PostgreSQL and Redis in development, we will want to do the following:

      • Configure the application to work with PostgreSQL as the default adapter.
      • Add an .env file to the project with our database username and password and Redis host.
      • Create an init.sql script to create a sammy user for the database.
      • Add an initializer for Sidekiq so that it can work with our containerized redis service.
      • Add the .env file and other relevant files to the project’s gitignore and dockerignore files.
      • Create database seeds so that our application has some records for us to work with when we start it up.

      First, open your database configuration file, located at config/database.yml:

      Currently, the file includes the following default settings, which are applied in the absence of other settings:

      ~/rails-docker/config/database.yml

      default: &default
        adapter: sqlite3
        pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
        timeout: 5000
      

      We need to change these to reflect the fact that we will use the postgresql adapter, since we will be creating a PostgreSQL service with Docker Compose to persist our application data.

      Delete the code that sets SQLite as the adapter and replace it with the following settings, which will set the adapter appropriately and the other variables necessary to connect:

      ~/rails-docker/config/database.yml

      default: &default
        adapter: postgresql
        encoding: unicode
        database: <%= ENV['DATABASE_NAME'] %>
        username: <%= ENV['DATABASE_USER'] %>
        password: <%= ENV['DATABASE_PASSWORD'] %>
        port: <%= ENV['DATABASE_PORT'] || '5432' %>
        host: <%= ENV['DATABASE_HOST'] %>
        pool: <%= ENV.fetch("RAILS_MAX_THREADS") { 5 } %>
        timeout: 5000
      . . .
      

      Next, we’ll modify the setting for the development environment, since this is the environment we’re using in this setup.

      Delete the existing SQLite database configuration so that the section looks like this:

      ~/rails-docker/config/database.yml

      . . . 
      development:
        <<: *default
      . . .
      

      Finally, delete the additional database settings for the production and test environments as well, so that only the default settings remain:

      ~/rails-docker/config/database.yml

      . . . 
      test:
        <<: *default
      
      production:
        <<: *default
      . . . 
      

      These modifications to our default database settings will allow us to set our database information dynamically using environment variables defined in .env files, which will not be committed to version control.

      Save and close the file when you are finished editing.

      Note that if you are creating a Rails project from scratch, you can set the adapter with the rails new command, as described in Step 3 of How To Use PostgreSQL with Your Ruby on Rails Application on Ubuntu 18.04. This will set your adapter in config/database.yml and automatically add the pg gem to the project.

      Now that we have referenced our environment variables, we can create a file for them with our preferred settings. Extracting configuration settings in this way is part of the 12 Factor approach to application development, which defines best practices for application resiliency in distributed environments. Now, when we are setting up our production and test environments in the future, configuring our database settings will involve creating additional .env files and referencing the appropriate file in our Docker Compose files.

      Open an .env file:

      Add the following values to the file:

      ~/rails-docker/.env

      DATABASE_NAME=rails_development
      DATABASE_USER=sammy
      DATABASE_PASSWORD=shark
      DATABASE_HOST=database
      REDIS_HOST=redis
      

      In addition to setting our database name, user, and password, we’ve also set a value for the DATABASE_HOST. The value, database, refers to the database PostgreSQL service we will create using Docker Compose. We’ve also set a REDIS_HOST to specify our redis service.
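
      Once the services defined in Step 4 are running, you can confirm that Compose delivers these values into a container. As an optional check (a hypothetical invocation; the app service name is the one defined later in this tutorial), print the relevant variables from inside the container:

      • docker-compose exec app env | grep -E 'DATABASE|REDIS'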

      Save and close the file when you are finished editing.

      To create the sammy database user, we can write an init.sql script that we can then mount to the database container when it starts.

      Open the script file:

      Add the following code to create a sammy user with administrative privileges:

      ~/rails-docker/init.sql

      CREATE USER sammy;
      ALTER USER sammy WITH SUPERUSER;
      

      This script will create the appropriate user on the database and grant this user administrative privileges.

      Set appropriate permissions on the script:

      Next, we’ll configure Sidekiq to work with our containerized redis service. We can add an initializer to the config/initializers directory, where Rails looks for configuration settings once frameworks and plugins are loaded, that sets a value for a Redis host.

      Open a sidekiq.rb file to specify these settings:

      • nano config/initializers/sidekiq.rb

      Add the following code to the file to specify values for a REDIS_HOST and REDIS_PORT:

      ~/rails-docker/config/initializers/sidekiq.rb

      Sidekiq.configure_server do |config|
        config.redis = {
          host: ENV['REDIS_HOST'],
          port: ENV['REDIS_PORT'] || '6379'
        }
      end
      
      Sidekiq.configure_client do |config|
        config.redis = {
          host: ENV['REDIS_HOST'],
          port: ENV['REDIS_PORT'] || '6379'
        }
      end
      

      Much like our database configuration settings, these settings give us the ability to set our host and port parameters dynamically, allowing us to substitute the appropriate values at runtime without having to modify the application code itself. In addition to a REDIS_HOST, we have a default value set for REDIS_PORT in case it is not set elsewhere.

      Save and close the file when you are finished editing.

      Next, to ensure that our application’s sensitive data is not copied to version control, we can add .env to our project’s .gitignore file, which tells Git which files to ignore in our project. Open the file for editing:

      At the bottom of the file, add an entry for .env:

      ~/rails-docker/.gitignore

      yarn-debug.log*
      .yarn-integrity
      .env
      

      Save and close the file when you are finished editing.

      Next, we’ll create a .dockerignore file to set what should not be copied to our containers. Open the file for editing:

      Add the following code to the file, which tells Docker to ignore some of the things we don’t need copied to our containers:

      ~/rails-docker/.dockerignore

      .DS_Store
      .bin
      .git
      .gitignore
      .bundleignore
      .bundle
      .byebug_history
      .rspec
      tmp
      log
      test
      config/deploy
      public/packs
      public/packs-test
      node_modules
      yarn-error.log
      coverage/
      

      Add .env to the bottom of this file as well:

      ~/rails-docker/.dockerignore

      . . .
      yarn-error.log
      coverage/
      .env
      

      Save and close the file when you are finished editing.

      As a final step, we will create some seed data so that our application has a few records when we start it up.

      Open a file for the seed data in the db directory:

      Add the following code to the file to create four demo sharks and one sample post:

      ~/rails-docker/db/seeds.rb

      # Adding demo sharks
      sharks = Shark.create([{ name: 'Great White', facts: 'Scary' }, { name: 'Megalodon', facts: 'Ancient' }, { name: 'Hammerhead', facts: 'Hammer-like' }, { name: 'Speartooth', facts: 'Endangered' }])
      Post.create(body: 'These sharks are misunderstood', shark: sharks.first)
      

      This seed data will create four sharks and one post that is associated with the first shark.

      Save and close the file when you are finished editing.

      With your application configured to work with PostgreSQL and your environment variables created, you are ready to write your application Dockerfile.

      Step 3 — Writing the Dockerfile and Entrypoint Scripts

      Your Dockerfile specifies what will be included in your application container when it is created. Using a Dockerfile allows you to define your container environment and avoid discrepancies with dependencies or runtime versions.

      Following these guidelines on building optimized containers, we will make our image as efficient as possible by using an Alpine base and attempting to minimize our image layers generally.

      Open a Dockerfile in your current directory:

      Docker images are created using a succession of layered images that build on one another. Our first step will be to add the base image for our application, which will form the starting point of the application build.

      Add the following code to the file to add the Ruby alpine image as a base:

      ~/rails-docker/Dockerfile

      FROM ruby:2.5.1-alpine
      

      The alpine image is derived from the Alpine Linux project, and will help us keep our image size down. For more information about whether or not the alpine image is the right choice for your project, please see the full discussion under the Image Variants section of the Docker Hub Ruby image page.

      Some factors to consider when using alpine in development:

      • Keeping image size down will decrease page and resource load times, particularly if you also keep volumes to a minimum. This helps keep your user experience in development quick and closer to what it would be if you were working locally in a non-containerized environment.
      • Having parity between development and production images facilitates successful deployments. Since teams often opt to use Alpine images in production for speed benefits, developing with an Alpine base helps offset issues when moving to production.

      Next, set an environment variable to specify the Bundler version:

      ~/rails-docker/Dockerfile

      . . .
      ENV BUNDLER_VERSION=2.0.2
      

      This is one of the steps we will take to avoid version conflicts between the default bundler version available in our environment and our application code, which requires Bundler 2.0.2.

      Next, add the packages that you need to work with the application to the Dockerfile:

      ~/rails-docker/Dockerfile

      . . . 
      RUN apk add --update --no-cache \
            binutils-gold \
            build-base \
            curl \
            file \
            g++ \
            gcc \
            git \
            less \
            libstdc++ \
            libffi-dev \
            libc-dev \
            linux-headers \
            libxml2-dev \
            libxslt-dev \
            libgcrypt-dev \
            make \
            netcat-openbsd \
            nodejs \
            openssl \
            pkgconfig \
            postgresql-dev \
            python \
            tzdata \
            yarn
      

      These packages include nodejs and yarn, among others. Since our application serves assets with webpack, we need to include Node.js and Yarn for the application to work as expected.

      Keep in mind that the alpine image is extremely minimal: the packages listed here are not exhaustive of what you might want or need in development when you are containerizing your own application.

      Next, install the appropriate bundler version:

      ~/rails-docker/Dockerfile

      . . . 
      RUN gem install bundler -v 2.0.2
      

      This step will guarantee parity between our containerized environment and the specifications in this project’s Gemfile.lock file.

      Now set the working directory for the application on the container:

      ~/rails-docker/Dockerfile

      . . .
      WORKDIR /app
      

      Copy over your Gemfile and Gemfile.lock:

      ~/rails-docker/Dockerfile

      . . .
      COPY Gemfile Gemfile.lock ./
      

      Copying these files as an independent step, followed by bundle install, means that the project gems do not need to be rebuilt every time you make changes to your application code. This will work in conjunction with the gem volume that we will include in our Compose file, which will mount gems to your application container in cases where the service is recreated but project gems remain the same.

      Next, set the configuration options for the nokogiri gem build:

      ~/rails-docker/Dockerfile

      . . . 
      RUN bundle config build.nokogiri --use-system-libraries
      . . .
      

      This step builds nokogiri with the libxml2 and libxslt library versions that we added to the application container in the RUN apk add… step above.

      Next, install the project gems:

      ~/rails-docker/Dockerfile

      . . . 
      RUN bundle check || bundle install
      

      This instruction checks that the gems are not already installed before installing them.

      Next, we’ll repeat the same procedure that we used with gems with our JavaScript packages and dependencies. First we’ll copy package metadata, then we’ll install dependencies, and finally we’ll copy the application code into the container image.

      To get started with the JavaScript section of our Dockerfile, copy package.json and yarn.lock from your current project directory on the host to the container:

      ~/rails-docker/Dockerfile

      . . . 
      COPY package.json yarn.lock ./
      

      Then install the required packages with yarn install:

      ~/rails-docker/Dockerfile

      . . . 
      RUN yarn install --check-files
      

      This instruction includes a --check-files flag with the yarn command, a feature that makes sure any previously installed files have not been removed. As in the case of our gems, we will manage the persistence of the packages in the node_modules directory with a volume when we write our Compose file.

      Finally, copy over the rest of the application code and start the application with an entrypoint script:

      ~/rails-docker/Dockerfile

      . . . 
      COPY . ./ 
      
      ENTRYPOINT ["./entrypoints/docker-entrypoint.sh"]
      

      Using an entrypoint script allows us to run the container as an executable.

      The final Dockerfile will look like this:

      ~/rails-docker/Dockerfile

      FROM ruby:2.5.1-alpine
      
      ENV BUNDLER_VERSION=2.0.2
      
      RUN apk add --update --no-cache \
            binutils-gold \
            build-base \
            curl \
            file \
            g++ \
            gcc \
            git \
            less \
            libstdc++ \
            libffi-dev \
            libc-dev \
            linux-headers \
            libxml2-dev \
            libxslt-dev \
            libgcrypt-dev \
            make \
            netcat-openbsd \
            nodejs \
            openssl \
            pkgconfig \
            postgresql-dev \
            python \
            tzdata \
            yarn
      
      RUN gem install bundler -v 2.0.2
      
      WORKDIR /app
      
      COPY Gemfile Gemfile.lock ./
      
      RUN bundle config build.nokogiri --use-system-libraries
      
      RUN bundle check || bundle install 
      
      COPY package.json yarn.lock ./
      
      RUN yarn install --check-files
      
      COPY . ./ 
      
      ENTRYPOINT ["./entrypoints/docker-entrypoint.sh"]
      

      Save and close the file when you are finished editing.

      Next, create a directory called entrypoints for the entrypoint scripts:

      This directory will include our main entrypoint script and a script for our Sidekiq service.

      Open the file for the application entrypoint script:

      • nano entrypoints/docker-entrypoint.sh

      Add the following code to the file:

      ~/rails-docker/entrypoints/docker-entrypoint.sh

      #!/bin/sh
      
      set -e
      
      if [ -f tmp/pids/server.pid ]; then
        rm tmp/pids/server.pid
      fi
      
      bundle exec rails s -b 0.0.0.0
      

      The first important line is set -e, which tells the /bin/sh shell that runs the script to fail fast if there are any problems later in the script. Next, the script checks that tmp/pids/server.pid is not present to ensure that there won’t be server conflicts when we start the application. Finally, the script starts the Rails server with the bundle exec rails s command. We use the -b option with this command to bind the server to all IP addresses rather than to the default, localhost. This invocation makes the Rails server route incoming requests to the container IP rather than to the default localhost.

      Save and close the file when you are finished editing.

      Make the script executable:

      • chmod +x entrypoints/docker-entrypoint.sh

      Next, we will create a script to start our sidekiq service, which will process our Sidekiq jobs. For more information about how this application uses Sidekiq, please see How To Add Sidekiq and Redis to a Ruby on Rails Application.

      Open a file for the Sidekiq entrypoint script:

      • nano entrypoints/sidekiq-entrypoint.sh

      Add the following code to the file to start Sidekiq:

      ~/rails-docker/entrypoints/sidekiq-entrypoint.sh

      #!/bin/sh
      
      set -e
      
      if [ -f tmp/pids/server.pid ]; then
        rm tmp/pids/server.pid
      fi
      
      bundle exec sidekiq
      

      This script starts Sidekiq in the context of our application bundle.

      Save and close the file when you are finished editing. Make it executable:

      • chmod +x entrypoints/sidekiq-entrypoint.sh

      With your entrypoint scripts and Dockerfile in place, you are ready to define your services in your Compose file.

      Step 4 — Defining Services with Docker Compose

      Using Docker Compose, we will be able to run the multiple containers required for our setup. We will define our Compose services in our main docker-compose.yml file. A service in Compose is a running container, and service definitions — which you will include in your docker-compose.yml file — contain information about how each container image will run. The Compose tool allows you to define multiple services to build multi-container applications.

      Our application setup will include the following services:

      • The application itself
      • The PostgreSQL database
      • Redis
      • Sidekiq

      We will also include a bind mount as part of our setup, so that any code changes we make during development will be immediately synchronized with the containers that need access to this code.

      Note that we are not defining a test service, since testing is outside of the scope of this tutorial and series, but you could do so by following the precedent we are using here for the sidekiq service.

      Open the docker-compose.yml file:

      First, add the application service definition:

      ~/rails-docker/docker-compose.yml

      version: '3.4'
      
      services:
        app: 
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:
            - database
            - redis
          ports: 
            - "3000:3000"
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
      

      The app service definition includes the following options:

      • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
      • context: This defines the build context for the image build — in this case, the current project directory.
      • dockerfile: This specifies the Dockerfile in your current project directory as the file Compose will use to build the application image.
      • depends_on: This sets up the database and redis containers first so that they are up and running before app.
      • ports: This maps port 3000 on the host to port 3000 on the container.
      • volumes: We are including three mounts here:
        • The first is a bind mount that mounts our application code on the host to the /app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be populated immediately in the container.
        • The second is a named volume, gem_cache. When the bundle install instruction runs in the container, it will install the project gems. Adding this volume means that if you recreate the container, the gems will be mounted to the new container. This mount presumes that there haven't been any changes to the project, so if you do make changes to your project gems in development, you will need to remember to delete this volume before recreating your application service — see the sketch after this list.
  • The third is a named volume for the node_modules directory. Rather than having node_modules mounted to the host, which can lead to package discrepancies and permissions conflicts in development, this volume will ensure that the packages in this directory are persisted and reflect the current state of the project. Again, if you modify the project’s Node dependencies, you will need to remove and recreate this volume (a sketch of these commands follows this list).
      • env_file: This tells Compose that we would like to add environment variables from a file called .env located in the build context.
      • environment: Using this option allows us to set a non-sensitive environment variable, passing information about the Rails environment to the container.
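
As noted in the volumes discussion above, here is a sketch of how you might remove and recreate the named volumes after changing your gems or Node packages. It assumes your project directory, and therefore the Compose project name used to prefix volume names, is rails-docker:

• docker-compose down

• docker volume rm rails-docker_gem_cache rails-docker_node_modules

• docker-compose up -d --build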

      Next, below the app service definition, add the following code to define your database service:

      ~/rails-docker/docker-compose.yml

      . . .
        database:
          image: postgres:12.1
          volumes:
            - db_data:/var/lib/postgresql/data
            - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      

Unlike the app service, the database service pulls a postgres image directly from Docker Hub. Note that we’re also pinning the version here, rather than setting it to latest or omitting it (which defaults to latest). This way, we can ensure that this setup works with the versions specified here and avoid surprises from breaking changes to the image.

We are also including a db_data volume here, which will persist our application data in between container starts. Additionally, we’ve mounted our init.sql startup script to the appropriate directory on the container, docker-entrypoint-initdb.d/, in order to create our sammy database user. After the image entrypoint creates the default postgres user and database, it runs any scripts found in the docker-entrypoint-initdb.d/ directory, which you can use for necessary initialization tasks. For more details, see the Initialization scripts section of the PostgreSQL image documentation.
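
The init.sql script itself was created earlier in this setup. As a point of reference, an initialization script of this kind typically contains a statement along the following lines; this is only a sketch, with a placeholder password and privileges, and your project's actual script may differ:

CREATE USER sammy WITH CREATEDB PASSWORD 'shark';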

      Next, add the redis service definition:

      ~/rails-docker/docker-compose.yml

      . . .
        redis:
          image: redis:5.0.7
      

      Like the database service, the redis service uses an image from Docker Hub. In this case, we are not persisting the Sidekiq job cache.
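
If you did want that data to survive container removal, you could attach a named volume to Redis's default data directory, /data, with a sketch like this (you would also need to add redis_data to the top-level volumes key discussed below):

  redis:
    image: redis:5.0.7
    volumes:
      - redis_data:/data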

      Finally, add the sidekiq service definition:

      ~/rails-docker/docker-compose.yml

      . . .
        sidekiq:
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:
            - app      
            - database
            - redis
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      

      Our sidekiq service resembles our app service in a few respects: it uses the same build context and image, environment variables, and volumes. However, it is dependent on the app, redis, and database services, and so will be the last to start. Additionally, it uses an entrypoint that will override the entrypoint set in the Dockerfile. This entrypoint setting points to entrypoints/sidekiq-entrypoint.sh, which includes the appropriate command to start the sidekiq service.

      As a final step, add the volume definitions below the sidekiq service definition:

      ~/rails-docker/docker-compose.yml

      . . .
      volumes:
        gem_cache:
        db_data:
        node_modules:
      

      Our top-level volumes key defines the volumes gem_cache, db_data, and node_modules. When Docker creates volumes, the contents of the volume are stored in a part of the host filesystem, /var/lib/docker/volumes/, that’s managed by Docker. The contents of each volume are stored in a directory under /var/lib/docker/volumes/ and get mounted to any container that uses the volume. In this way, the shark information data that our users will create will persist in the db_data volume even if we remove and recreate the database service.
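
Once your services are up, you can confirm this for yourself. Assuming your project directory is named rails-docker, Compose will prefix the volume names with the project name, and docker volume inspect will show each volume's mountpoint under /var/lib/docker/volumes/:

• docker volume ls

• docker volume inspect rails-docker_db_data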

      The finished file will look like this:

      ~/rails-docker/docker-compose.yml

      version: '3.4'
      
      services:
        app: 
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:     
            - database
            - redis
          ports: 
            - "3000:3000"
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
      
        database:
          image: postgres:12.1
          volumes:
            - db_data:/var/lib/postgresql/data
            - ./init.sql:/docker-entrypoint-initdb.d/init.sql
      
        redis:
          image: redis:5.0.7
      
        sidekiq:
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:
            - app      
            - database
            - redis
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      
      volumes:
        gem_cache:
        db_data:
        node_modules:     
      

      Save and close the file when you are finished editing.

      With your service definitions written, you are ready to start the application.

      Step 5 — Testing the Application

      With your docker-compose.yml file in place, you can create your services with the docker-compose up command and seed your database. You can also test that your data will persist by stopping and removing your containers with docker-compose down and recreating them.

First, build the container images and create the services by running docker-compose up with the -d flag, which will run the containers in the background:

• docker-compose up -d

You will see output confirming that your services have been created:

      Output

Creating rails-docker_database_1 ... done
Creating rails-docker_redis_1    ... done
Creating rails-docker_app_1      ... done
Creating rails-docker_sidekiq_1  ... done

You can also get more detailed information about the startup processes by displaying the log output from the services:

• docker-compose logs

      You will see something like this if everything has started correctly:

      Output

sidekiq_1  | 2019-12-19T15:05:26.365Z pid=6 tid=grk7r6xly INFO: Booting Sidekiq 6.0.3 with redis options {:host=>"redis", :port=>"6379", :id=>"Sidekiq-server-PID-6", :url=>nil}
sidekiq_1  | 2019-12-19T15:05:31.097Z pid=6 tid=grk7r6xly INFO: Running in ruby 2.5.1p57 (2018-03-29 revision 63029) [x86_64-linux-musl]
sidekiq_1  | 2019-12-19T15:05:31.097Z pid=6 tid=grk7r6xly INFO: See LICENSE and the LGPL-3.0 for licensing details.
sidekiq_1  | 2019-12-19T15:05:31.097Z pid=6 tid=grk7r6xly INFO: Upgrade to Sidekiq Pro for more features and support: http://sidekiq.org
app_1      | => Booting Puma
app_1      | => Rails 5.2.3 application starting in development
app_1      | => Run `rails server -h` for more startup options
app_1      | Puma starting in single mode...
app_1      | * Version 3.12.1 (ruby 2.5.1-p57), codename: Llamas in Pajamas
app_1      | * Min threads: 5, max threads: 5
app_1      | * Environment: development
app_1      | * Listening on tcp://0.0.0.0:3000
app_1      | Use Ctrl-C to stop
. . .
database_1 | PostgreSQL init process complete; ready for start up.
database_1 |
database_1 | 2019-12-19 15:05:20.160 UTC [1] LOG:  starting PostgreSQL 12.1 (Debian 12.1-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
database_1 | 2019-12-19 15:05:20.160 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
database_1 | 2019-12-19 15:05:20.160 UTC [1] LOG:  listening on IPv6 address "::", port 5432
database_1 | 2019-12-19 15:05:20.163 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
database_1 | 2019-12-19 15:05:20.182 UTC [63] LOG:  database system was shut down at 2019-12-19 15:05:20 UTC
database_1 | 2019-12-19 15:05:20.187 UTC [1] LOG:  database system is ready to accept connections
. . .
redis_1    | 1:M 19 Dec 2019 15:05:18.822 * Ready to accept connections

You can also check the status of your containers with docker-compose ps:

• docker-compose ps

      You will see output indicating that your containers are running:

      Output

          Name                        Command               State           Ports
-----------------------------------------------------------------------------------------
rails-docker_app_1        ./entrypoints/docker-resta ...   Up      0.0.0.0:3000->3000/tcp
rails-docker_database_1   docker-entrypoint.sh postgres    Up      5432/tcp
rails-docker_redis_1      docker-entrypoint.sh redis ...   Up      6379/tcp
rails-docker_sidekiq_1    ./entrypoints/sidekiq-entr ...   Up

      Next, create and seed your database and run migrations on it with the following docker-compose exec command:

      • docker-compose exec app bundle exec rake db:setup db:migrate

      The docker-compose exec command allows you to run commands in your services; we are using it here to run rake db:setup and db:migrate in the context of our application bundle to create and seed the database and run migrations. As you work in development, docker-compose exec will prove useful to you when you want to run migrations against your development database.
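
For example, these are the kinds of commands you might find yourself running with docker-compose exec as you work; both execute inside the running app container:

• docker-compose exec app bundle exec rake db:migrate

• docker-compose exec app bundle exec rails console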

      You will see the following output after running this command:

      Output

Created database 'rails_development'
Database 'rails_development' already exists
-- enable_extension("plpgsql")
   -> 0.0140s
-- create_table("endangereds", {:force=>:cascade})
   -> 0.0097s
-- create_table("posts", {:force=>:cascade})
   -> 0.0108s
-- create_table("sharks", {:force=>:cascade})
   -> 0.0050s
-- enable_extension("plpgsql")
   -> 0.0173s
-- create_table("endangereds", {:force=>:cascade})
   -> 0.0088s
-- create_table("posts", {:force=>:cascade})
   -> 0.0128s
-- create_table("sharks", {:force=>:cascade})
   -> 0.0072s

      With your services running, you can visit localhost:3000 or http://your_server_ip:3000 in the browser. You will see a landing page that looks like this:

      Sidekiq App Home

We can now test data persistence. Click on the Get Shark Info button, which will take you to the sharks/index route:

      Sharks Index Page with Seeded Data

      To verify that the application is working, we can add some demo information to it. Click on New Shark. You will be prompted for a username (sammy) and password (shark), thanks to the project’s authentication settings.

      On the New Shark page, input “Mako” into the Name field and “Fast” into the Facts field.

      Click on the Create Shark button to create the shark. Once you have created the shark, click Home on the site’s navbar to get back to the main application landing page. We can now test that Sidekiq is working.

      Click on the Which Sharks Are in Danger? button. Since you have not uploaded any endangered sharks, this will take you to the endangered index view:

      Endangered Index View

      Click on Import Endangered Sharks to import the sharks. You will see a status message telling you that the sharks have been imported:

      Begin Import

      You will also see the beginning of the import. Refresh your page to see the entire table:

      Refresh Table

      Thanks to Sidekiq, our large batch upload of endangered sharks has succeeded without locking up the browser or interfering with other application functionality.

      Click on the Home button at the bottom of the page, which will bring you back to the application main page:

      Sidekiq App Home

      From here, click on Which Sharks Are in Danger? again. You will see the uploaded sharks once again.

      Now that we know our application is working properly, we can test our data persistence.

Back at your terminal, type the following command to stop and remove your containers:

• docker-compose down

      Note that we are not including the --volumes option; hence, our db_data volume is not removed.

      The following output confirms that your containers and network have been removed:

      Output

Stopping rails-docker_sidekiq_1  ... done
Stopping rails-docker_app_1      ... done
Stopping rails-docker_database_1 ... done
Stopping rails-docker_redis_1    ... done
Removing rails-docker_sidekiq_1  ... done
Removing rails-docker_app_1      ... done
Removing rails-docker_database_1 ... done
Removing rails-docker_redis_1    ... done
Removing network rails-docker_default

Recreate the containers:

• docker-compose up -d

      Open the Rails console on the app container with docker-compose exec and bundle exec rails console:

      • docker-compose exec app bundle exec rails console

At the prompt, inspect the last Shark record in the database:

• Shark.last

      You will see the record you just created:

      IRB session

Shark Load (1.0ms)  SELECT "sharks".* FROM "sharks" ORDER BY "sharks"."id" DESC LIMIT $1  [["LIMIT", 1]]
=> #<Shark id: 5, name: "Mako", facts: "Fast", created_at: "2019-12-20 14:03:28", updated_at: "2019-12-20 14:03:28">

You can then check to see that your Endangered sharks have been persisted with the following command:

• Endangered.all.count

      IRB session

   (0.8ms)  SELECT COUNT(*) FROM "endangereds"
=> 73

Your db_data volume was successfully mounted to the recreated database service, making it possible for your app service to access the saved data. If you navigate directly to the shark index page by visiting localhost:3000/sharks or http://your_server_ip:3000/sharks, you will also see that record displayed:

      Sharks Index Page with Mako

Your endangered sharks will also be visible at the localhost:3000/endangered/data or http://your_server_ip:3000/endangered/data view:

      Refresh Table

      Your application is now running on Docker containers with data persistence and code synchronization enabled. You can go ahead and test out local code changes on your host, which will be synchronized to your container thanks to the bind mount we defined as part of the app service.

      Conclusion

      By following this tutorial, you have created a development setup for your Rails application using Docker containers. You’ve made your project more modular and portable by extracting sensitive information and decoupling your application’s state from your code. You have also configured a boilerplate docker-compose.yml file that you can revise as your development needs and requirements change.

As you develop, you may be interested in learning more about designing applications for containerized and Cloud Native workflows. Please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes for more information on these topics. Or, if you would like to invest in a Kubernetes learning sequence, please have a look at our Kubernetes for Full-Stack Developers curriculum.

      To learn more about the application code itself, please see the other tutorials in this series:




      Containerizing a Node.js Application for Development With Docker Compose


      Introduction

      If you are actively developing an application, using Docker can simplify your workflow and the process of deploying your application to production. Working with containers in development offers the following benefits:

      • Environments are consistent, meaning that you can choose the languages and dependencies you want for your project without worrying about system conflicts.
      • Environments are isolated, making it easier to troubleshoot issues and onboard new team members.
      • Environments are portable, allowing you to package and share your code with others.

      This tutorial will show you how to set up a development environment for a Node.js application using Docker. You will create two containers — one for the Node application and another for the MongoDB database — with Docker Compose. Because this application works with Node and MongoDB, our setup will do the following:

      • Synchronize the application code on the host with the code in the container to facilitate changes during development.
      • Ensure that changes to the application code work without a restart.
      • Create a user and password-protected database for the application’s data.
      • Persist this data.

      At the end of this tutorial, you will have a working shark information application running on Docker containers:

      Complete Shark Collection

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Cloning the Project and Modifying Dependencies

      The first step in building this setup will be cloning the project code and modifying its package.json file, which includes the project’s dependencies. We will add nodemon to the project’s devDependencies, specifying that we will be using it during development. Running the application with nodemon ensures that it will be automatically restarted whenever you make changes to your code.

      First, clone the nodejs-mongo-mongoose repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Integrate MongoDB with Your Node Application, which explains how to integrate a MongoDB database with an existing Node application using Mongoose.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/nodejs-mongo-mongoose.git node_project

Navigate to the node_project directory:

• cd node_project

Open the project's package.json file using nano or your favorite editor:

• nano package.json

      Beneath the project dependencies and above the closing curly brace, create a new devDependencies object that includes nodemon:

      ~/node_project/package.json

      ...
      "dependencies": {
          "ejs": "^2.6.1",
          "express": "^4.16.4",
          "mongoose": "^5.4.10"
        },
        "devDependencies": {
          "nodemon": "^1.18.10"
        }    
      }
      

      Save and close the file when you are finished editing.

      With the project code in place and its dependencies modified, you can move on to refactoring the code for a containerized workflow.

      Step 2 — Configuring Your Application to Work with Containers

      Modifying our application for a containerized workflow means making our code more modular. Containers offer portability between environments, and our code should reflect that by remaining as decoupled from the underlying operating system as possible. To achieve this, we will refactor our code to make greater use of Node's process.env property, which returns an object with information about your user environment at runtime. We can use this object in our code to dynamically assign configuration information at runtime with environment variables.

Let's begin with app.js, our main application entrypoint. Open the file:

• nano app.js

Inside, you will see a definition for a port constant, as well as a listen function that uses this constant to specify the port the application will listen on:

~/node_project/app.js

      ...
      const port = 8080;
      ...
      app.listen(port, function () {
        console.log('Example app listening on port 8080!');
      });
      

      Let's redefine the port constant to allow for dynamic assignment at runtime using the process.env object. Make the following changes to the constant definition and listen function:

~/node_project/app.js

      ...
      const port = process.env.PORT || 8080;
      ...
      app.listen(port, function () {
        console.log(`Example app listening on ${port}!`);
      });
      

Our new constant definition assigns port dynamically using the value passed in at runtime, falling back to 8080. Similarly, we've rewritten the listen function to use a template literal, which will interpolate the port value when listening for connections. Because we will be mapping our ports elsewhere, these revisions save us from having to revise this file continually as our environment changes.

      When you are finished editing, save and close the file.
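
If you would like to sanity-check this change outside of Docker (the rest of this tutorial does not require it), and assuming Node and the project dependencies are installed on your host, you can pass a port at runtime:

• PORT=3000 node app.js

The application will then log Example app listening on 3000! instead of the default 8080.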

Next, we will modify our database connection information to remove any configuration credentials. Open the db.js file, which contains this information:

• nano db.js

      Currently, the file does the following things:

      • Imports Mongoose, the Object Document Mapper (ODM) that we're using to create schemas and models for our application data.
      • Sets the database credentials as constants, including the username and password.
      • Connects to the database using the mongoose.connect method.

      For more information about the file, please see Step 3 of How To Integrate MongoDB with Your Node Application.

      Our first step in modifying the file will be redefining the constants that include sensitive information. Currently, these constants look like this:

      ~/node_project/db.js

      ...
      const MONGO_USERNAME = 'sammy';
      const MONGO_PASSWORD = 'your_password';
      const MONGO_HOSTNAME = '127.0.0.1';
      const MONGO_PORT = '27017';
      const MONGO_DB = 'sharkinfo';
      ...
      

      Instead of hardcoding this information, you can use the process.env object to capture the runtime values for these constants. Modify the block to look like this:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      ...
      

      Save and close the file when you are finished editing.

      At this point, you have modified db.js to work with your application's environment variables, but you still need a way to pass these variables to your application. Let's create an .env file with values that you can pass to your application at runtime.

Open the file:

• nano .env

      This file will include the information that you removed from db.js: the username and password for your application's database, as well as the port setting and database name. Remember to update the username, password, and database name listed here with your own information:

      ~/node_project/.env

      MONGO_USERNAME=sammy
      MONGO_PASSWORD=your_password
      MONGO_PORT=27017
      MONGO_DB=sharkinfo
      

      Note that we have removed the host setting that originally appeared in db.js. We will now define our host at the level of the Docker Compose file, along with other information about our services and containers.

      Save and close this file when you are finished editing.

      Because your .env file contains sensitive information, you will want to ensure that it is included in your project's .dockerignore and .gitignore files so that it does not copy to your version control or containers.

Open your .dockerignore file:

• nano .dockerignore

      Add the following line to the bottom of the file:

      ~/node_project/.dockerignore

      ...
      .gitignore
      .env
      

      Save and close the file when you are finished editing.

      The .gitignore file in this repository already includes .env, but feel free to check that it is there:

~/node_project/.gitignore

      ...
      .env
      ...
      

      At this point, you have successfully extracted sensitive information from your project code and taken measures to control how and where this information gets copied. Now you can add more robustness to your database connection code to optimize it for a containerized workflow.

      Step 3 — Modifying Database Connection Settings

      Our next step will be to make our database connection method more robust by adding code that handles cases where our application fails to connect to our database. Introducing this level of resilience to your application code is a recommended practice when working with containers using Compose.

Open db.js for editing:

• nano db.js

      You will see the code that we added earlier, along with the url constant for Mongo's connection URI and the Mongoose connect method:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, {useNewUrlParser: true});
      

      Currently, our connect method accepts an option that tells Mongoose to use Mongo's new URL parser. Let's add a few more options to this method to define parameters for reconnection attempts. We can do this by creating an options constant that includes the relevant information, in addition to the new URL parser option. Below your Mongo constants, add the following definition for an options constant:

      ~/node_project/db.js

      ...
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500, 
        connectTimeoutMS: 10000,
      };
      ...
      

      The reconnectTries option tells Mongoose to continue trying to connect indefinitely, while reconnectInterval defines the period between connection attempts in milliseconds. connectTimeoutMS defines 10 seconds as the period that the Mongo driver will wait before failing the connection attempt.

      We can now use the new options constant in the Mongoose connect method to fine tune our Mongoose connection settings. We will also add a promise to handle potential connection errors.

      Currently, the Mongoose connect method looks like this:

      ~/node_project/db.js

      ...
      mongoose.connect(url, {useNewUrlParser: true});
      

      Delete the existing connect method and replace it with the following code, which includes the options constant and a promise:

      ~/node_project/db.js

      ...
mongoose.connect(url, options).then(function () {
  console.log('MongoDB is connected');
})
.catch(function (err) {
  console.log(err);
});
      

      In the case of a successful connection, our function logs an appropriate message; otherwise it will catch and log the error, allowing us to troubleshoot.
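
As an aside, if you prefer async/await syntax, the same logic can be written as in the following equivalent sketch; this is a stylistic alternative, not a change the tutorial requires:

// Equivalent to the promise chain above: connect, then log success or the error
(async function connectToMongo() {
  try {
    await mongoose.connect(url, options);
    console.log('MongoDB is connected');
  } catch (err) {
    console.log(err);
  }
})();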

      The finished file will look like this:

      ~/node_project/db.js

      const mongoose = require('mongoose');
      
      const {
        MONGO_USERNAME,
        MONGO_PASSWORD,
        MONGO_HOSTNAME,
        MONGO_PORT,
        MONGO_DB
      } = process.env;
      
      const options = {
        useNewUrlParser: true,
        reconnectTries: Number.MAX_VALUE,
        reconnectInterval: 500,
        connectTimeoutMS: 10000,
      };
      
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
mongoose.connect(url, options).then(function () {
  console.log('MongoDB is connected');
})
.catch(function (err) {
  console.log(err);
});
      

      Save and close the file when you have finished editing.

      You have now added resiliency to your application code to handle cases where your application might fail to connect to your database. With this code in place, you can move on to defining your services with Compose.

      Step 4 — Defining Services with Docker Compose

      With your code refactored, you are ready to write the docker-compose.yml file with your service definitions. A service in Compose is a running container, and service definitions — which you will include in your docker-compose.yml file — contain information about how each container image will run. The Compose tool allows you to define multiple services to build multi-container applications.

      Before defining our services, however, we will add a tool to our project called wait-for to ensure that our application only attempts to connect to our database once the database startup tasks are complete. This wrapper script uses netcat to poll whether or not a specific host and port are accepting TCP connections. Using it allows you to control your application's attempts to connect to your database by testing whether or not the database is ready to accept connections.

      Though Compose allows you to specify dependencies between services using the depends_on option, this order is based on whether or not the container is running rather than its readiness. Using depends_on won't be optimal for our setup, since we want our application to connect only when the database startup tasks, including adding a user and password to the admin authentication database, are complete. For more information on using wait-for and other tools to control startup order, please see the relevant recommendations in the Compose documentation.

Open a file called wait-for.sh:

• nano wait-for.sh

      Paste the following code into the file to create the polling function:

~/node_project/wait-for.sh

      #!/bin/sh
      
      # original script: https://github.com/eficode/wait-for/blob/master/wait-for
      
TIMEOUT=15
QUIET=0
# cmdname is referenced in the usage message below; define it from the script name
cmdname=$(basename "$0")

echoerr() {
  if [ "$QUIET" -ne 1 ]; then printf "%s\n" "$*" 1>&2; fi
}
      
      usage() {
        exitcode="$1"
        cat << USAGE >&2
      Usage:
        $cmdname host:port [-t timeout] [-- command args]
        -q | --quiet                        Do not output any status messages
        -t TIMEOUT | --timeout=timeout      Timeout in seconds, zero for no timeout
        -- COMMAND ARGS                     Execute command with args after the test finishes
      USAGE
        exit "$exitcode"
      }
      
      wait_for() {
        for i in `seq $TIMEOUT` ; do
          nc -z "$HOST" "$PORT" > /dev/null 2>&1
      
          result=$?
          if [ $result -eq 0 ] ; then
            if [ $# -gt 0 ] ; then
              exec "$@"
            fi
            exit 0
          fi
          sleep 1
        done
        echo "Operation timed out" >&2
        exit 1
      }
      
      while [ $# -gt 0 ]
      do
        case "$1" in
          *:* )
    HOST=$(printf "%s\n" "$1"| cut -d : -f 1)
    PORT=$(printf "%s\n" "$1"| cut -d : -f 2)
          shift 1
          ;;
          -q | --quiet)
          QUIET=1
          shift 1
          ;;
          -t)
          TIMEOUT="$2"
          if [ "$TIMEOUT" = "" ]; then break; fi
          shift 2
          ;;
          --timeout=*)
          TIMEOUT="${1#*=}"
          shift 1
          ;;
          --)
          shift
          break
          ;;
          --help)
          usage 0
          ;;
          *)
          echoerr "Unknown argument: $1"
          usage 1
          ;;
        esac
      done
      
      if [ "$HOST" = "" -o "$PORT" = "" ]; then
        echoerr "Error: you need to provide a host and port to test."
        usage 2
      fi
      
      wait_for "$@"
      

      Save and close the file when you are finished adding the code.

Make the script executable:

• chmod +x wait-for.sh
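
You can try the script on its own to get a feel for its behavior, assuming netcat is available on your host. For example, with nothing listening on the host and port you test (the values here are arbitrary examples), the script polls until it gives up; the echo command after -- only runs if the connection test succeeds:

• ./wait-for.sh localhost:8000 --timeout=5 -- echo "connection succeeded"

After five seconds, this will print Operation timed out, which is also the failure path you would see if your database never became reachable.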

Next, open the docker-compose.yml file:

• nano docker-compose.yml

      First, define the nodejs application service by adding the following code to the file:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB 
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js
      

      The nodejs service definition includes the following options:

      • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
      • context: This defines the build context for the image build — in this case, the current project directory.
      • dockerfile: This specifies the Dockerfile in your current project directory as the file Compose will use to build the application image. For more information about this file, please see How To Build a Node.js Application with Docker.
      • image, container_name: These apply names to the image and container.
      • restart: This defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.
      • env_file: This tells Compose that we would like to add environment variables from a file called .env, located in our build context.
      • environment: Using this option allows you to add the Mongo connection settings you defined in the .env file. Note that we are not setting NODE_ENV to development, since this is Express's default behavior if NODE_ENV is not set. When moving to production, you can set this to production to enable view caching and less verbose error messages.
        Also note that we have specified the db database container as the host, as discussed in Step 2.
      • ports: This maps port 80 on the host to port 8080 on the container.
      • volumes: We are including two types of mounts here:

        • The first is a bind mount that mounts our application code on the host to the /home/node/app directory on the container. This will facilitate rapid development, since any changes you make to your host code will be populated immediately in the container.
  • The second is a named volume, node_modules. When Docker runs the npm install instruction listed in the application Dockerfile, npm will create a new node_modules directory on the container that includes the packages required to run the application. The bind mount we just created will hide this newly created node_modules directory, however. Since node_modules on the host is empty, the bind will map an empty directory to the container, overriding the new node_modules directory and preventing our application from starting. The named node_modules volume solves this problem by persisting the contents of the /home/node/app/node_modules directory and mounting it to the container, hiding the bind.

        Keep the following points in mind when using this approach:

  • Your bind will mount the contents of the node_modules directory on the container to the host, and this directory will be owned by root, since the named volume was created by Docker.
        • If you have a pre-existing node_modules directory on the host, it will override the node_modules directory created on the container. The setup that we're building in this tutorial assumes that you do not have a pre-existing node_modules directory and that you won't be working with npm on your host. This is in keeping with a twelve-factor approach to application development, which minimizes dependencies between execution environments.
      • networks: This specifies that our application service will join the app-network network, which we will define at the bottom of the file.

      • command: This option lets you set the command that should be executed when Compose runs the image. Note that this will override the CMD instruction that we set in our application Dockerfile. Here, we are running the application using the wait-for script, which will poll the db service on port 27017 to test whether or not the database service is ready. Once the readiness test succeeds, the script will execute the command we have set, /home/node/app/node_modules/.bin/nodemon app.js, to start the application with nodemon. This will ensure that any future changes we make to our code are reloaded without our having to restart the application.

      Next, create the db service by adding the following code below the application service definition:

      ~/node_project/docker-compose.yml

      ...
        db:
          image: mongo:4.1.8-xenial
          container_name: db
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
            - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
          volumes:  
            - dbdata:/data/db   
          networks:
            - app-network  
      

      Some of the settings we defined for the nodejs service remain the same, but we've also made the following changes to the image, environment, and volumes definitions:

      • image: To create this service, Compose will pull the 4.1.8-xenial Mongo image from Docker Hub. We are pinning a particular version to avoid possible future conflicts as the Mongo image changes. For more information about version pinning, please see the Docker documentation on Dockerfile best practices.
      • MONGO_INITDB_ROOT_USERNAME, MONGO_INITDB_ROOT_PASSWORD: The mongo image makes these environment variables available so that you can modify the initialization of your database instance. MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD together create a root user in the admin authentication database and ensure that authentication is enabled when the container starts. We have set MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD using the values from our .env file, which we pass to the db service using the env_file option. Doing this means that our sammy application user will be a root user on the database instance, with access to all of the administrative and operational privileges of that role. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.

        Note: Keep in mind that these variables will not take effect if you start the container with an existing data directory in place.
      • dbdata:/data/db: The named volume dbdata will persist the data stored in Mongo's default data directory, /data/db. This will ensure that you don't lose data in cases where you stop or remove containers.

      We've also added the db service to the app-network network with the networks option.
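
When you do move toward production, creating the dedicated application user mentioned above might look something like the following mongo shell sketch. The shark_app user name is hypothetical, and the role shown limits access to the sharkinfo database; you would also update authSource in your connection URI accordingly:

use sharkinfo
db.createUser({
  user: "shark_app",
  pwd: "your_password",
  roles: [ { role: "readWrite", db: "sharkinfo" } ]
});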

      As a final step, add the volume and network definitions to the bottom of the file:

      ~/node_project/docker-compose.yml

      ...
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      The user-defined bridge network app-network enables communication between our containers since they are on the same Docker daemon host. This streamlines traffic and communication within the application, as it opens all ports between containers on the same bridge network, while exposing no ports to the outside world. Thus, our db and nodejs containers can communicate with each other, and we only need to expose port 80 for front-end access to the application.

      Our top-level volumes key defines the volumes dbdata and node_modules. When Docker creates volumes, the contents of the volume are stored in a part of the host filesystem, /var/lib/docker/volumes/, that's managed by Docker. The contents of each volume are stored in a directory under /var/lib/docker/volumes/ and get mounted to any container that uses the volume. In this way, the shark information data that our users will create will persist in the dbdata volume even if we remove and recreate the db container.

      The finished docker-compose.yml file will look like this:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          env_file: .env
          environment:
            - MONGO_USERNAME=$MONGO_USERNAME
            - MONGO_PASSWORD=$MONGO_PASSWORD
            - MONGO_HOSTNAME=db
            - MONGO_PORT=$MONGO_PORT
            - MONGO_DB=$MONGO_DB
          ports:
            - "80:8080"
          volumes:
            - .:/home/node/app
            - node_modules:/home/node/app/node_modules
          networks:
            - app-network
          command: ./wait-for.sh db:27017 -- /home/node/app/node_modules/.bin/nodemon app.js 
      
  db:
    image: mongo:4.1.8-xenial
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=$MONGO_USERNAME
      - MONGO_INITDB_ROOT_PASSWORD=$MONGO_PASSWORD
    volumes:
      - dbdata:/data/db
    networks:
      - app-network
      
      networks:
        app-network:
          driver: bridge
      
      volumes:
        dbdata:
        node_modules:  
      

      Save and close the file when you are finished editing.

      With your service definitions in place, you are ready to start the application.

      Step 5 — Testing the Application

      With your docker-compose.yml file in place, you can create your services with the docker-compose up command. You can also test that your data will persist by stopping and removing your containers with docker-compose down.

First, build the container images and create the services by running docker-compose up with the -d flag, which will then run the nodejs and db containers in the background:

• docker-compose up -d

      You will see output confirming that your services have been created:

      Output

...
Creating db ... done
Creating nodejs ... done

You can also get more detailed information about the startup processes by displaying the log output from the services:

• docker-compose logs

      You will see something like this if everything has started correctly:

      Output

...
nodejs | [nodemon] starting `node app.js`
nodejs | Example app listening on 8080!
nodejs | MongoDB is connected
...
db | 2019-02-22T17:26:27.329+0000 I ACCESS [conn2] Successfully authenticated as principal sammy on admin

You can also check the status of your containers with docker-compose ps:

• docker-compose ps

      You will see output indicating that your containers are running:

      Output

  Name                Command                  State          Ports
----------------------------------------------------------------------
db       docker-entrypoint.sh mongod     Up      27017/tcp
nodejs   ./wait-for.sh db:27017 -- ...   Up      0.0.0.0:80->8080/tcp

      With your services running, you can visit http://your_server_ip in the browser. You will see a landing page that looks like this:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with an entry form where you can enter a shark name and a description of that shark's general character:

      Shark Info Form

      In the form, add a shark of your choosing. For the purpose of this demonstration, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      As a final step, we can test that the data you've just entered will persist if you remove your database container.

Back at your terminal, type the following command to stop and remove your containers and network:

• docker-compose down

      Note that we are not including the --volumes option; hence, our dbdata volume is not removed.

      The following output confirms that your containers and network have been removed:

      Output

Stopping nodejs ... done
Stopping db     ... done
Removing nodejs ... done
Removing db     ... done
Removing network node_project_app-network

Recreate the containers:

• docker-compose up -d

      Now head back to the shark information form:

      Shark Info Form

      Enter a new shark of your choosing. We'll go with Whale Shark and Large:

      Enter New Shark

      Once you click Submit, you will see that the new shark has been added to the shark collection in your database without the loss of the data you've already entered:

      Complete Shark Collection

      Your application is now running on Docker containers with data persistence and code synchronization enabled.

      Conclusion

      By following this tutorial, you have created a development setup for your Node application using Docker containers. You've made your project more modular and portable by extracting sensitive information and decoupling your application's state from your application code. You have also configured a boilerplate docker-compose.yml file that you can revise as your development needs and requirements change.

      As you develop, you may be interested in learning more about designing applications for containerized and Cloud Native workflows. Please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes for more information on these topics.

      To learn more about the code used in this tutorial, please see How To Build a Node.js Application with Docker and How To Integrate MongoDB with Your Node Application. For information about deploying a Node application with an Nginx reverse proxy using containers, please see How To Secure a Containerized Node.js Application with Nginx, Let's Encrypt, and Docker Compose.


