

      How To Build a Node.js Application with Docker


      Introduction

      The Docker platform allows developers to package and run applications as containers. A container is an isolated process that runs on a shared operating system, offering a lighter weight alternative to virtual machines. Though containers are not new, they offer benefits — including process isolation and environment standardization — that are growing in importance as more developers use distributed application architectures.

      When building and scaling an application with Docker, the starting point is typically creating an image for your application, which you can then run in a container. The image includes your application code, libraries, configuration files, environment variables, and runtime. Using an image ensures that the environment in your container is standardized and contains only what is necessary to build and run your application.

      In this tutorial, you will create an application image for a static website that uses the Express framework and Bootstrap. You will then build a container using that image and push it to Docker Hub for future use. Finally, you will pull the stored image from your Docker Hub repository and build another container, demonstrating how you can recreate and scale your application.

      Prerequisites

      To follow this tutorial, you will need:

      Step 1 — Installing Your Application Dependencies

      To create your image, you will first need to make your application files, which you can then copy to your container. These files will include your application’s static content, code, and dependencies.

First, create a directory for your project in your non-root user's home directory. We will call ours node_project, but you should feel free to replace this with something else:

• mkdir node_project

Navigate to this directory:

• cd node_project

      This will be the root directory of the project.

Next, create a package.json file with your project's dependencies and other identifying information. Open the file with nano or your favorite editor:

• nano package.json

      Add the following information about the project, including its name, author, license, entrypoint, and dependencies. Be sure to replace the author information with your own name and contact details:

      ~/node_project/package.json

      {
        "name": "nodejs-image-demo",
        "version": "1.0.0",
        "description": "nodejs image demo",
        "author": "Sammy the Shark <sammy@example.com>",
        "license": "MIT",
        "main": "app.js",
        "scripts": {
          "start": "node app.js",
          "test": "echo "Error: no test specified" && exit 1"
        },
        "keywords": [
          "nodejs",
          "bootstrap",
          "express"
        ],
        "dependencies": {
          "express": "^4.16.4"
        }
      }
      

      This file includes the project name, author, and license under which it is being shared. Npm recommends making your project name short and descriptive, and avoiding duplicates in the npm registry. We've listed the MIT license in the license field, permitting the free use and distribution of the application code.

      Additionally, the file specifies:

      • "main": The entrypoint for the application, app.js. You will create this file next.
      • "scripts": The commands that will run when you use npm start to start your application.
      • "dependencies": The project dependencies — in this case, Express 4.16.4 or above.

      Though this file does not list a repository, you can add one by following these guidelines on adding a repository to your package.json file. This is a good addition if you are versioning your application.
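If you do add one, a repository entry takes the shape shown below; the URL here is only a placeholder that you would replace with your own repository:

"repository": {
  "type": "git",
  "url": "https://github.com/your_username/nodejs-image-demo.git"
}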

      Save and close the file when you've finished making changes.

To install your project's dependencies, run the following command:

• npm install

      This will install the packages you've listed in your package.json file in your project directory.

      We can now move on to building the application files.

      Step 2 — Creating the Application Files

      We will create a website that offers users information about sharks. Our application will have a main entrypoint, app.js, and a views directory that will include the project's static assets. The landing page, index.html, will offer users some preliminary information and a link to a page with more detailed shark information, sharks.html. In the views directory, we will create both the landing page and sharks.html.

First, open app.js in the main project directory to define the project's routes:

• nano app.js

      The first part of the file will create the Express application and Router objects, and define the base directory, port, and host as variables:

      ~/node_project/app.js

      var express = require("express");
      var app = express();
      var router = express.Router();
      
      var path = __dirname + '/views/';
      const PORT = 8080;
      const HOST = '0.0.0.0';
      

      The require function loads the express module, which we then use to create the app and router objects. The router object will perform the routing function of the application, and as we define HTTP method routes we will add them to this object to define how our application will handle requests.

      This section of the file also sets a few variables, path, PORT, and HOST:

      • path: Defines the base directory, which will be the views subdirectory within the current project directory.
• HOST: Defines the address that the application will bind to and listen on. Setting this to 0.0.0.0, which represents all IPv4 addresses on the host, matches Docker's default behavior of binding published container ports to 0.0.0.0 unless instructed otherwise.
      • PORT: Tells the app to listen on and bind to port 8080.
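The port and host are hardcoded here to keep the example focused. As an optional variation (not part of the tutorial's app.js), you could read them from environment variables and fall back to the same defaults:

const PORT = process.env.PORT || 8080;
const HOST = process.env.HOST || '0.0.0.0';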

      Next, set the routes for the application using the router object:

      ~/node_project/app.js

      ...
      
      router.use(function (req,res,next) {
        console.log("/" + req.method);
        next();
      });
      
      router.get("/",function(req,res){
        res.sendFile(path + "index.html");
      });
      
      router.get("/sharks",function(req,res){
        res.sendFile(path + "sharks.html");
      });
      

      The router.use function loads a middleware function that will log the router's requests and pass them on to the application's routes. These are defined in the subsequent functions, which specify that a GET request to the base project URL should return the index.html page, while a GET request to the /sharks route should return sharks.html.
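As written, the middleware logs only the request method (a GET request appears as /GET). If you want a little more context while developing, a variation like this hypothetical logger would also record the time and the requested path:

router.use(function (req, res, next) {
  // Log a timestamp, the HTTP method, and the requested URL path
  console.log(new Date().toISOString() + " " + req.method + " " + req.url);
  next();
});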

      Finally, mount the router middleware and the application's static assets and tell the app to listen on port 8080:

      ~/node_project/app.js

      ...
      
      app.use(express.static(path));
      app.use("/", router);
      
      app.listen(8080, function () {
        console.log('Example app listening on port 8080!')
      })
      

      The finished app.js file will look like this:

      ~/node_project/app.js

      var express = require("express");
      var app = express();
      var router = express.Router();
      
      var path = __dirname + '/views/';
      const PORT = 8080;
      const HOST = '0.0.0.0';
      
      router.use(function (req,res,next) {
        console.log("/" + req.method);
        next();
      });
      
      router.get("/",function(req,res){
        res.sendFile(path + "index.html");
      });
      
      router.get("/sharks",function(req,res){
        res.sendFile(path + "sharks.html");
      });
      
      app.use(express.static(path));
      app.use("/", router);
      
      app.listen(8080, function () {
        console.log('Example app listening on port 8080!')
      })
      

      Save and close the file when you are finished.

Next, let's add some static content to the application. Start by creating the views directory:

• mkdir views

Open the landing page file, index.html:

• nano views/index.html

Add the following code to the file, which will import Bootstrap and create a jumbotron component with a link to the more detailed sharks.html info page:

      ~/node_project/views/index.html

      <!DOCTYPE html>
      <html lang="en">
         <head>
            <title>About Sharks</title>
            <meta charset="utf-8">
            <meta name="viewport" content="width=device-width, initial-scale=1">
            <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
            <link href="css/styles.css" rel="stylesheet">
            <link href='https://fonts.googleapis.com/css?family=Merriweather:400,700' rel='stylesheet' type='text/css'>
            <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
            <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
         </head>
         <body>
            <nav class="navbar navbar-inverse navbar-static-top">
               <div class="container">
                  <div class="navbar-header">
                     <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false">
                     <span class="sr-only">Toggle navigation</span>
                     <span class="icon-bar"></span>
                     <span class="icon-bar"></span>
                     <span class="icon-bar"></span>
                     </button>
                     <a class="navbar-brand" href="#">Everything Sharks</a>
                  </div>
                  <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
                     <ul class="nav navbar-nav mr-auto">
                        <li class="active"><a href="/">Home</a></li>
                        <li><a href="/sharks">Sharks</a></li>
                     </ul>
                  </div>
               </div>
            </nav>
            <div class="jumbotron">
               <div class="container">
                  <h1>Want to Learn About Sharks?</h1>
                  <p>Are you ready to learn about sharks?</p>
                  <br>
                  <p><a class="btn btn-primary btn-lg" href="http://www.digitalocean.com/sharks" role="button">Get Shark Info</a></p>
               </div>
            </div>
            <div class="container">
               <div class="row">
                  <div class="col-md-6">
                     <h3>Not all sharks are alike</h3>
                     <p>Though some are dangerous, sharks generally do not attack humans. Out of the 500 species known to researchers, only 30 have been known to attack humans.</p>
                  </div>
                  <div class="col-md-6">
                     <h3>Sharks are ancient</h3>
                     <p>There is evidence to suggest that sharks lived up to 400 million years ago.</p>
                  </div>
               </div>
            </div>
         </body>
      </html>
      

      The top-level navbar here allows users to toggle between the Home and Sharks pages. In the navbar-nav subcomponent, we are using Bootstrap's active class to indicate the current page to the user. We've also specified the routes to our static pages, which match the routes we defined in app.js:

      ~/node_project/views/index.html

      ...
      <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
          <ul class="nav navbar-nav mr-auto">
              <li class="active"><a href="/">Home</a></li>
        <li><a href="/sharks">Sharks</a></li>
          </ul>
      </div>
      ...
      

      Additionally, we've created a link to our shark information page in our jumbotron's button:

      ~/node_project/views/index.html

      ...
      <div class="jumbotron">
          <div class="container">
            <h1>Want to Learn About Sharks?</h1>
            <p>Are you ready to learn about sharks?</p>
            <br>
            <p><a class="btn btn-primary btn-lg" href="http://www.digitalocean.com/sharks" role="button">Get Shark Info</a></p>
          </div>
      </div>
      ...
      

      There is also a link to a custom style sheet in the header:

      ~/node_project/views/index.html

      ...
      <link href="css/styles.css" rel="stylesheet">
      ...
      

      We will create this style sheet at the end of this step.

      Save and close the file when you are finished.

      With the application landing page in place, we can create our shark information page, sharks.html, which will offer interested users more information about sharks.

Open the file:

• nano views/sharks.html

      Add the following code, which imports Bootstrap and the custom style sheet and offers users detailed information about certain sharks:

      ~/node_project/views/sharks.html

      <!DOCTYPE html>
      <html lang="en">
         <head>
            <title>About Sharks</title>
            <meta charset="utf-8">
            <meta name="viewport" content="width=device-width, initial-scale=1">
            <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/css/bootstrap.min.css">
            <link href="css/styles.css" rel="stylesheet">
            <link href='https://fonts.googleapis.com/css?family=Merriweather:400,700' rel='stylesheet' type='text/css'>
            <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
            <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js"></script>
   </head>
   <body>
         <nav class="navbar navbar-inverse navbar-static-top">
            <div class="container">
               <div class="navbar-header">
                  <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1" aria-expanded="false">
                  <span class="sr-only">Toggle navigation</span>
                  <span class="icon-bar"></span>
                  <span class="icon-bar"></span>
                  <span class="icon-bar"></span>
                  </button>
                  <a class="navbar-brand" href="/">Everything Sharks</a>
               </div>
               <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
                  <ul class="nav navbar-nav mr-auto">
                     <li><a href="/">Home</a></li>
                     <li class="active"><a href="http://www.digitalocean.com/sharks">Sharks</a></li>
                  </ul>
               </div>
            </div>
         </nav>
         <div class="jumbotron text-center">
            <h1>Shark Info</h1>
         </div>
         <div class="container">
            <div class="row">
               <div class="col-md-6">
                  <p>
                  <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.</div>
                  <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
                  </p>
               </div>
               <div class="col-md-6">
                  <p>
                  <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                  <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
                  </p>
               </div>
            </div>
          </div>
         </body>
      </html>
      

      Note that in this file, we again use the active class to indicate the current page.

      Save and close the file when you are finished.

Finally, create the custom CSS style sheet that you've linked to in index.html and sharks.html by first creating a css folder in the views directory:

• mkdir views/css

      Open the style sheet:

      • nano views/css/styles.css

      Add the following code, which will set the desired color and font for our pages:

      ~/node_project/views/css/styles.css

      .navbar {
          margin-bottom: 0;
      }
      
      body {
          background: #020A1B;
          color: #ffffff;
          font-family: 'Merriweather', sans-serif;
      }
      
      h1,
      h2 {
          font-weight: bold;
      }
      
      p {
          font-size: 16px;
          color: #ffffff;
      }
      
      .jumbotron {
          background: #0048CD;
          color: white;
          text-align: center;
      }
      
      .jumbotron p {
          color: white;
          font-size: 26px;
      }
      
      .btn-primary {
          color: #fff;
          border-color: white;
          margin-bottom: 5px;
      }
      
      img,
      video,
      audio {
          margin-top: 20px;
          max-width: 80%;
      }
      
div.caption {
          float: left;
          clear: both;
      }
      

      In addition to setting font and color, this file also limits the size of the images by specifying a max-width of 80%. This will prevent them from taking up more room than we would like on the page.

      Save and close the file when you are finished.

      With the application files in place and the project dependencies installed, you are ready to start the application.

If you followed the initial server setup tutorial in the prerequisites, you will have an active firewall permitting only SSH traffic. To permit traffic to port 8080, run:

• sudo ufw allow 8080

To start the application, make sure that you are in your project's root directory:

• cd ~/node_project

Start the application with npm start:

• npm start

      Navigate your browser to http://your_server_ip:8080. You will see the following landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will see the following information page:

      Shark Info Page

      You now have an application up and running. When you are ready, quit the server by typing CTRL+C. We can now move on to creating the Dockerfile that will allow us to recreate and scale this application as desired.

      Step 3 — Writing the Dockerfile

      Your Dockerfile specifies what will be included in your application container when it is executed. Using a Dockerfile allows you to define your container environment and avoid discrepancies with dependencies or runtime versions.

      Following these guidelines on building optimized containers, we will make our image as efficient as possible by minimizing the number of image layers and restricting the image's function to a single purpose — recreating our application files and static content.

In your project's root directory, create the Dockerfile:

• nano Dockerfile

      Docker images are created using a succession of layered images that build on one another. Our first step will be to add the base image for our application that will form the starting point of the application build.

      Let's use the node:10 image, since, at the time of writing, this is the recommended LTS version of Node.js. Add the following FROM instruction to set the application's base image:

      ~/node_project/Dockerfile

      FROM node:10
      

      This image includes Node.js and npm. Each Dockerfile must begin with a FROM instruction.

      By default, the Docker Node image includes a non-root node user that you can use to avoid running your application container as root. It is a recommended security practice to avoid running containers as root and to restrict capabilities within the container to only those required to run its processes. We will therefore use the node user's home directory as the working directory for our application and set them as our user inside the container. For more information about best practices when working with the Docker Node image, see this best practices guide.

      To fine-tune the permissions on our application code in the container, let's create the node_modules subdirectory in /home/node along with the app directory. Creating these directories will ensure that they have the permissions we want, which will be important when we create local node modules in the container with npm install. In addition to creating these directories, we will set ownership on them to our node user:

      ~/node_project/Dockerfile

      ...
      RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
      

      For more information on the utility of consolidating RUN instructions, see this discussion of how to manage container layers.

      Next, set the working directory of the application to /home/node/app:

      ~/node_project/Dockerfile

      ...
      WORKDIR /home/node/app
      

      If a WORKDIR isn't set, Docker will create one by default, so it's a good idea to set it explicitly.

      Next, copy the package.json and package-lock.json (for npm 5+) files:

      ~/node_project/Dockerfile

      ...
      COPY package*.json ./
      

      Adding this COPY instruction before running npm install or copying the application code allows us to take advantage of Docker's caching mechanism. At each stage in the build, Docker will check to see if it has a layer cached for that particular instruction. If we change package.json, this layer will be rebuilt, but if we don't, this instruction will allow Docker to use the existing image layer and skip reinstalling our node modules.
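To illustrate why this ordering matters, consider the reversed order, shown here only as commented-out Dockerfile lines rather than instructions to add:

# If the application code were copied before installing dependencies,
# every code change would invalidate the cached npm install layer:
#
#   COPY . .
#   RUN npm install
#
# Copying package*.json first means the install layer is rebuilt only
# when the dependency list itself changes.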

      After copying the project dependencies, we can run npm install:

      ~/node_project/Dockerfile

      ...
      RUN npm install
      

Next, copy your application code to the working application directory on the container. To ensure that the application files are owned by the non-root node user, use the --chown flag with COPY:

~/node_project/Dockerfile

...
COPY --chown=node:node . .
      

      Set the user to node:

      ~/node_project/Dockerfile

      ...
      USER node
      

      Expose port 8080 on the container and start the application:

      ~/node_project/Dockerfile

      ...
      EXPOSE 8080
      
      CMD [ "npm", "start" ]
      

      EXPOSE does not publish the port, but instead functions as a way of documenting which ports on the container will be published at runtime. CMD runs the command to start the application — in this case, npm start. Note that there should only be one CMD instruction in each Dockerfile. If you include more than one, only the last will take effect.
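The bracketed syntax used above is Docker's exec form, which runs npm directly as the container's main process rather than wrapping it in a shell. For comparison, the equivalent shell form is shown commented out below:

# Exec form (used in this tutorial): npm runs as the container's main process
CMD [ "npm", "start" ]

# Shell form (equivalent command, but executed via /bin/sh -c)
# CMD npm start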

      There are many things you can do with the Dockerfile. For a complete list of instructions, please refer to Docker's Dockerfile reference documentation.

      The complete Dockerfile looks like this:

      ~/node_project/Dockerfile

      
      FROM node:10
      
      RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
      
      WORKDIR /home/node/app
      
      COPY package*.json ./
      
      RUN npm install
      
      COPY --chown=node:node . .
      
      USER node
      
      EXPOSE 8080
      
      CMD [ "npm", "start" ]
      

      Save and close the file when you are finished editing.

      Before building the application image, let's add a .dockerignore file. Working in a similar way to a .gitignore file, .dockerignore specifies which files and directories in your project directory should not be copied over to your container.

Open the .dockerignore file:

• nano .dockerignore

      Inside the file, add your local node modules, npm logs, Dockerfile, and .dockerignore file:

      ~/node_project/.dockerignore

      node_modules
      npm-debug.log
      Dockerfile
      .dockerignore
      

      If you are working with Git then you will also want to add your .git directory and .gitignore file.
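In that case, the additional entries would look like this:

.git
.gitignore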

      Save and close the file when you are finished.

      You are now ready to build the application image using the docker build command. Using the -t flag with docker build will allow you to tag the image with a memorable name. Because we are going to push the image to Docker Hub, let's include our Docker Hub username in the tag. We will tag the image as nodejs-image-demo, but feel free to replace this with a name of your own choosing. Remember to also replace your_dockerhub_username with your own Docker Hub username:

      • docker build -t your_dockerhub_username/nodejs-image-demo .

      The . specifies that the build context is the current directory.

It will take a minute or two to build the image. Once it is complete, check your images:

• docker images

      You will see the following output:

      Output

REPOSITORY                                  TAG       IMAGE ID       CREATED         SIZE
your_dockerhub_username/nodejs-image-demo   latest    1c723fb2ef12   8 seconds ago   895MB
node                                        10        f09e7c96b6de   17 hours ago    893MB

      It is now possible to create a container with this image using docker run. We will include three flags with this command:

      • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, see this discussion in the Docker docs on port binding.
      • -d: This runs the container in the background.
      • --name: This allows us to give the container a memorable name.

Run the following command to create and start the container:

      • docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo

Once your container is up and running, you can inspect a list of your running containers with docker ps:

• docker ps

      You will see the following output:

      Output

CONTAINER ID   IMAGE                                       COMMAND       CREATED         STATUS         PORTS                  NAMES
e50ad27074a7   your_dockerhub_username/nodejs-image-demo   "npm start"   8 seconds ago   Up 7 seconds   0.0.0.0:80->8080/tcp   nodejs-image-demo

      With your container running, you can now visit your application by navigating your browser to http://your_server_ip. You will see your application landing page once again:

      Application Landing Page

      Now that you have created an image for your application, you can push it to Docker Hub for future use.

      Step 4 — Using a Repository to Work with Images

      By pushing your application image to a registry like Docker Hub, you make it available for subsequent use as you build and scale your containers. We will demonstrate how this works by pushing the application image to a repository and then using the image to recreate our container.

      The first step to pushing the image is to log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_username -p your_dockerhub_password

      Logging in this way will create a ~/.docker/config.json file in your user's home directory with your Docker Hub credentials.
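Passing the password with -p does leave it in your shell history. As an alternative, docker login can read the password from standard input with the --password-stdin flag; the file name below is just an example:

• cat ~/my_password.txt | docker login -u your_dockerhub_username --password-stdin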

      You can now push the application image to Docker Hub using the tag you created earlier, your_dockerhub_username/nodejs-image-demo:

      • docker push your_dockerhub_username/nodejs-image-demo

      Let's test the utility of the image registry by destroying our current application container and image and rebuilding them with the image in our repository.

First, list your running containers:

• docker ps

      You will see the following output:

      Output

CONTAINER ID   IMAGE                                       COMMAND       CREATED         STATUS         PORTS                  NAMES
e50ad27074a7   your_dockerhub_username/nodejs-image-demo   "npm start"   3 minutes ago   Up 3 minutes   0.0.0.0:80->8080/tcp   nodejs-image-demo

Using the CONTAINER ID listed in your output, stop the running application container. Be sure to replace the ID below with your own CONTAINER ID:

• docker stop e50ad27074a7

List all of your images with the -a flag:

• docker images -a

      You will see the following output with the name of your image, your_dockerhub_username/nodejs-image-demo, along with the node image and the other images from your build:

      Output

REPOSITORY                                  TAG       IMAGE ID       CREATED         SIZE
your_dockerhub_username/nodejs-image-demo   latest    1c723fb2ef12   7 minutes ago   895MB
<none>                                      <none>    e039d1b9a6a0   7 minutes ago   895MB
<none>                                      <none>    dfa98908c5d1   7 minutes ago   895MB
<none>                                      <none>    b9a714435a86   7 minutes ago   895MB
<none>                                      <none>    51de3ed7e944   7 minutes ago   895MB
<none>                                      <none>    5228d6c3b480   7 minutes ago   895MB
<none>                                      <none>    833b622e5492   8 minutes ago   893MB
<none>                                      <none>    5c47cc4725f1   8 minutes ago   893MB
<none>                                      <none>    5386324d89fb   8 minutes ago   893MB
<none>                                      <none>    631661025e2d   8 minutes ago   893MB
node                                        10        f09e7c96b6de   17 hours ago    893MB

Remove the stopped container and all of the images, including unused or dangling images, with the following command:

• docker system prune -a

      Type y when prompted in the output to confirm that you would like to remove the stopped container and images. Be advised that this will also remove your build cache.

      You have now removed both the container running your application image and the image itself. For more information on removing Docker containers, images, and volumes, please see How To Remove Docker Images, Containers, and Volumes.

      With all of your images and containers deleted, you can now pull the application image from Docker Hub:

      • docker pull your_dockerhub_username/nodejs-image-demo

List your images once again:

• docker images

      You will see your application image:

      Output

REPOSITORY                                  TAG       IMAGE ID       CREATED          SIZE
your_dockerhub_username/nodejs-image-demo   latest    1c723fb2ef12   11 minutes ago   895MB

      You can now rebuild your container using the command from Step 3:

      • docker run --name nodejs-image-demo -p 80:8080 -d your_dockerhub_username/nodejs-image-demo

List your running containers:

• docker ps

      Output

CONTAINER ID   IMAGE                                       COMMAND       CREATED         STATUS         PORTS                  NAMES
f6bc2f50dff6   your_dockerhub_username/nodejs-image-demo   "npm start"   4 seconds ago   Up 3 seconds   0.0.0.0:80->8080/tcp   nodejs-image-demo

      Visit http://your_server_ip once again to view your running application.

      Conclusion

      In this tutorial you created a static web application with Express and Bootstrap, as well as a Docker image for this application. You used this image to create a container and pushed the image to Docker Hub. From there, you were able to destroy your image and container and recreate them using your Docker Hub repository.

      If you are interested in learning more about how to work with tools like Docker Compose and Docker Machine to create multi-container setups, you can look at the following guides:

      For general tips on working with container data, see:

      If you are interested in other Docker-related topics, please see our complete library of Docker tutorials.




How To Install and Use Docker on Ubuntu 18.04


A previous version of this tutorial was written by finid.

Introduction

Docker is an application that simplifies the process of managing application processes in containers. Containers let you run your applications in resource-isolated processes. They are similar to virtual machines, but containers are more portable, more resource-friendly, and more dependent on the host operating system.

For a detailed introduction to the different components of a Docker container, take a look at The Docker Ecosystem: An Introduction to Common Components.

In this tutorial, you will install and use Docker Community Edition (CE) on Ubuntu 18.04. You will install Docker itself, work with containers and images, and push an image to a Docker repository.

Prerequisites

To follow this tutorial, you will need the following:

Step 1 — Installing Docker

The Docker installation package available in the official Ubuntu repository may not be the latest version. To ensure we get the latest version, we will install Docker from the official Docker repository. To do that, we will add a new package source, add the GPG key from Docker to ensure the downloads are valid, and then install the packages.

First, update your existing list of packages:

• sudo apt update

Next, install a few prerequisite packages which let apt use packages over HTTPS:

      • sudo apt install apt-transport-https ca-certificates curl software-properties-common

Then add the GPG key for the official Docker repository to your system:

      • curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

Add the Docker repository to APT sources:

      • sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"

Next, update the package database with the Docker packages from the newly added repo:

• sudo apt update

Make sure you are about to install from the Docker repo instead of the default Ubuntu repo:

      • apt-cache policy docker-ce

You will see output like this, although the version number for Docker may be different:

      Output of apt-cache policy docker-ce

      
      docker-ce:
        Installed: (none)
        Candidate: 18.03.1~ce~3-0~ubuntu
        Version table:
           18.03.1~ce~3-0~ubuntu 500
              500 https://download.docker.com/linux/ubuntu bionic/stable amd64 Packages
      

Notice that docker-ce is not installed, but the candidate for installation is from the Docker repository for Ubuntu 18.04 (bionic).

Finally, install Docker:

      • sudo apt install docker-ce

Docker should now be installed, the daemon started, and the process enabled to start on boot. Check that it is running:

      • sudo systemctl status docker

The output should be similar to the following, showing that the service is active and running:

      Output

● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2018-07-05 15:08:39 UTC; 2min 55s ago
     Docs: https://docs.docker.com
 Main PID: 10096 (dockerd)
    Tasks: 16
   CGroup: /system.slice/docker.service
           ├─10096 /usr/bin/dockerd -H fd://
           └─10113 docker-containerd --config /var/run/docker/containerd/containerd.toml

Installing Docker now gives you not just the Docker service (daemon) but also the docker command line utility, or the Docker client. We will explore how to use the docker command later in this tutorial.

Step 2 — Executing the Docker Command Without Sudo (Optional)

By default, the docker command can only be run by the root user or by a user in the docker group, which is automatically created during Docker's installation process. If you attempt to run the docker command without prefixing it with sudo or without being in the docker group, you will get output like this:

      Output

      docker: Cannot connect to the Docker daemon. Is the docker daemon running on this host?. See 'docker run --help'.

If you want to avoid typing sudo whenever you run the docker command, add your username to the docker group:

      • sudo usermod -aG docker ${USER}

To apply the new group membership, log out of the server and back in, or type the following:

• su - ${USER}

You will be prompted to enter your user's password to continue.

Confirm that your user is now added to the docker group by typing:

• id -nG

      Output

      sammy sudo docker

If you need to add a user to the docker group that you are not logged in as, declare that username explicitly using:

• sudo usermod -aG docker username

The rest of this article assumes you are running the docker command as a user in the docker group. If you choose not to, please prepend the commands with sudo.

Let's explore the docker command next.

Step 3 — Using the Docker Command

Using docker consists of passing it a chain of options and commands followed by arguments. The syntax takes this form:

• docker [option] [command] [arguments]

To view all available subcommands, type:

• docker

As of Docker 18, the complete list of available subcommands includes:

      Output

attach      Attach local standard input, output, and error streams to a running container
build       Build an image from a Dockerfile
commit      Create a new image from a container's changes
cp          Copy files/folders between a container and the local filesystem
create      Create a new container
diff        Inspect changes to files or directories on a container's filesystem
events      Get real time events from the server
exec        Run a command in a running container
export      Export a container's filesystem as a tar archive
history     Show the history of an image
images      List images
import      Import the contents from a tarball to create a filesystem image
info        Display system-wide information
inspect     Return low-level information on Docker objects
kill        Kill one or more running containers
load        Load an image from a tar archive or STDIN
login       Log in to a Docker registry
logout      Log out from a Docker registry
logs        Fetch the logs of a container
pause       Pause all processes within one or more containers
port        List port mappings or a specific mapping for the container
ps          List containers
pull        Pull an image or a repository from a registry
push        Push an image or a repository to a registry
rename      Rename a container
restart     Restart one or more containers
rm          Remove one or more containers
rmi         Remove one or more images
run         Run a command in a new container
save        Save one or more images to a tar archive (streamed to STDOUT by default)
search      Search the Docker Hub for images
start       Start one or more stopped containers
stats       Display a live stream of container(s) resource usage statistics
stop        Stop one or more running containers
tag         Create a tag TARGET_IMAGE that refers to SOURCE_IMAGE
top         Display the running processes of a container
unpause     Unpause all processes within one or more containers
update      Update configuration of one or more containers
version     Show the Docker version information
wait        Block until one or more containers stop, then print their exit codes

To view the options available to a specific command, type:

• docker docker-subcommand --help

To view system-wide information about Docker, use:

• docker info

Let's explore some of these commands. We'll start by working with images.

Step 4 — Working with Docker Images

Docker containers are built from Docker images. By default, Docker pulls these images from Docker Hub, a Docker registry maintained by Docker, the company behind the Docker project. Anyone can host their Docker images on Docker Hub, so most applications and Linux distributions you need will have images hosted there.

To check whether you can access and download images from Docker Hub, type:

• docker run hello-world

The output will indicate that Docker is working correctly:

      Output

Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
9bb5a5d4561a: Pull complete
Digest: sha256:3e1764d0f546ceac4565547df2ac4907fe46f007ea229fd7ef2718514bcec35d
Status: Downloaded newer image for hello-world:latest

Hello from Docker!
This message shows that your installation appears to be working correctly.
...

Docker was initially unable to find the hello-world image locally, so it downloaded the image from Docker Hub, which is the default repository. Once the image downloaded, Docker created a container from the image, and the application within the container executed, displaying the message.

You can search for images available on Docker Hub by using the docker command with the search subcommand. For example, to search for the Ubuntu image, type:

• docker search ubuntu

The script will crawl Docker Hub and return a listing of all images whose name matches the search string. In this case, the output will be similar to this:

      Output

NAME                                                      DESCRIPTION                                     STARS   OFFICIAL   AUTOMATED
ubuntu                                                    Ubuntu is a Debian-based Linux operating sys…   7917    [OK]
dorowu/ubuntu-desktop-lxde-vnc                            Ubuntu with openssh-server and NoVNC            193                [OK]
rastasheep/ubuntu-sshd                                    Dockerized SSH service, built on top of offi…   156                [OK]
ansible/ubuntu14.04-ansible                               Ubuntu 14.04 LTS with ansible                   93                 [OK]
ubuntu-upstart                                            Upstart is an event-based replacement for th…   87      [OK]
neurodebian                                               NeuroDebian provides neuroscience research s…   50      [OK]
ubuntu-debootstrap                                        debootstrap --variant=minbase --components=m…   38      [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mysql-5      ubuntu-16-nginx-php-phpmyadmin-mysql-5          36                 [OK]
nuagebec/ubuntu                                           Simple always updated Ubuntu docker images w…   23                 [OK]
tutum/ubuntu                                              Simple Ubuntu docker images with SSH access     18
i386/ubuntu                                               Ubuntu is a Debian-based Linux operating sys…   13
ppc64le/ubuntu                                            Ubuntu is a Debian-based Linux operating sys…   12
1and1internet/ubuntu-16-apache-php-7.0                    ubuntu-16-apache-php-7.0                        10                 [OK]
1and1internet/ubuntu-16-nginx-php-phpmyadmin-mariadb-10   ubuntu-16-nginx-php-phpmyadmin-mariadb-10       6                  [OK]
eclipse/ubuntu_jdk8                                       Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, …   6                  [OK]
codenvy/ubuntu_jdk8                                       Ubuntu, JDK8, Maven 3, git, curl, nmap, mc, …   4                  [OK]
darksheer/ubuntu                                          Base Ubuntu Image -- Updated hourly             4                  [OK]
1and1internet/ubuntu-16-apache                            ubuntu-16-apache                                3                  [OK]
1and1internet/ubuntu-16-nginx-php-5.6-wordpress-4         ubuntu-16-nginx-php-5.6-wordpress-4             3                  [OK]
1and1internet/ubuntu-16-sshd                              ubuntu-16-sshd                                  1                  [OK]
pivotaldata/ubuntu                                        A quick freshening-up of the base Ubuntu doc…   1
1and1internet/ubuntu-16-healthcheck                       ubuntu-16-healthcheck                           0                  [OK]
pivotaldata/ubuntu-gpdb-dev                               Ubuntu images for GPDB development              0
smartentry/ubuntu                                         ubuntu with smartentry                          0                  [OK]
ossobv/ubuntu
...

In the OFFICIAL column, OK indicates an image built and supported by the company behind the project. Once you have identified the image that you would like to use, you can download it to your computer using the pull subcommand.

Execute the following command to download the official ubuntu image to your computer:

• docker pull ubuntu

You will see the following output:

      Output

Using default tag: latest
latest: Pulling from library/ubuntu
6b98dfc16071: Pull complete
4001a1209541: Pull complete
6319fc68c576: Pull complete
b24603670dc3: Pull complete
97f170c87c6f: Pull complete
Digest: sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d
Status: Downloaded newer image for ubuntu:latest

After an image has been downloaded, you can then run a container using the downloaded image with the run subcommand. As you saw with the hello-world example, if an image has not been downloaded when docker is executed with the run subcommand, the Docker client will first download the image, then run a container using it.

To see the images that have been downloaded to your computer, type:

• docker images

The output should look similar to the following:

      Output

REPOSITORY          TAG                 IMAGE ID            CREATED             SIZE
ubuntu              latest              113a43faa138        4 weeks ago         81.2MB
hello-world         latest              e38bc07ac18e        2 months ago        1.85kB

As you will see later in this tutorial, images that you use to run containers can be modified and used to generate new images, which may then be uploaded (pushed is the technical term) to Docker Hub or other Docker registries.

Let's look at how to run containers in more detail.

Step 5 — Running a Docker Container

The hello-world container you ran in the previous step is an example of a container that runs and exits after emitting a test message. Containers can be much more useful than that, and they can be interactive. After all, they are similar to virtual machines, only more resource-friendly.

As an example, let's run a container using the latest image of Ubuntu. The combination of the -i and -t switches gives you interactive shell access into the container:

• docker run -it ubuntu

Your command prompt should change to reflect the fact that you are now working inside the container and should take this form:

      Output

      root@d9b100f2f636:/#

Note the container id in the command prompt. In this example, it is d9b100f2f636. You will need that container ID later to identify the container when you want to remove it.

Now you can run any command inside the container. For example, let's update the package database inside the container. You don't need to prefix any command with sudo, because you are operating inside the container as the root user:

• apt update

Then install any application in it. Let's install Node.js:

• apt install nodejs

This installs Node.js in the container from the official Ubuntu repository. When the installation finishes, verify that Node.js is installed:

• node -v

You will see the version number displayed in your terminal:

      Output

      v8.10.0

Any changes you make inside the container only apply to that container.

To exit the container, type exit at the prompt.

Next, let's look at managing the containers on our system.

Step 6 — Managing Docker Containers

After using Docker for a while, you will have many active (running) and inactive containers on your computer. To view the active ones, use:

• docker ps

You will see output similar to the following:

      Output

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES

In this tutorial, you started two containers: one from the hello-world image and another from the ubuntu image. Both containers are no longer running, but they still exist on your system.

To view all containers, both active and inactive, run docker ps with the -a switch:

• docker ps -a

You will see output similar to this:

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                         PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 8 minutes ago                           sharp_volhard
01c950718166        hello-world         "/hello"            About an hour ago   Exited (0) About an hour ago                       festive_williams

To view the latest container you created, pass it the -l switch:

• docker ps -l

CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Exited (0) 10 minutes ago                       sharp_volhard

To start a stopped container, use docker start, followed by the container ID or the container's name. Let's start the Ubuntu-based container with the ID of d9b100f2f636:

      • docker start d9b100f2f636

The container will start, and you can use docker ps to see its status:

      CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS              PORTS               NAMES
      d9b100f2f636        ubuntu              "/bin/bash"         About an hour ago   Up 8 seconds                            sharp_volhard
      

To stop a running container, use docker stop, followed by the container ID or name. This time, we will use the name that Docker assigned the container, which is sharp_volhard:

      • docker stop sharp_volhard

Once you have decided you no longer need a container, remove it with the docker rm command, again using either the container ID or the name. Use the docker ps -a command to find the container ID or name for the container associated with the hello-world image and remove it.

      • docker rm festive_williams

You can start a new container and give it a name using the --name switch. You can also use the --rm switch to create a container that removes itself when it is stopped, as in the example below. See the docker run help command for more information on these options and others.
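For instance, the following command starts an interactive Ubuntu container named ubuntu-test that Docker removes automatically as soon as you exit its shell:

• docker run --rm -it --name ubuntu-test ubuntu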

Containers can be turned into images which you can use to build new containers. Let's look at how that works.

Step 7 — Committing Changes in a Container to a Docker Image

When you start up a Docker image, you can create, modify, and delete files just like you can with a virtual machine. The changes that you make will only apply to that container. You can start and stop it, but once you destroy it with the docker rm command, the changes will be lost for good.

This section shows you how to save the state of a container as a new Docker image.

After installing Node.js inside the Ubuntu container, you now have a container running off an image, but the container is different from the image you used to create it. But you might want to reuse this Node.js container as the basis for new images later.

Then commit the changes to a new Docker image instance using the following command.

• docker commit -m "What you did to the image" -a "Author Name" container_id repository/new_image_name

The -m switch is for the commit message that helps you and others know what changes you made, while -a is used to specify the author. The container_id is the one you noted earlier in the tutorial when you started the interactive Docker session. Unless you created additional repositories on Docker Hub, the repository is usually your Docker Hub username.

For example, for the user sammy, with the container ID of d9b100f2f636, the command would be:

      • docker commit -m "added Node.js" -a "sammy" d9b100f2f636 sammy/ubuntu-nodejs

When you commit an image, the new image is saved locally on your computer. Later in this tutorial, you will learn how to push an image to a Docker registry like Docker Hub so others can access it.

Listing the Docker images again will show the new image, as well as the old one that it was derived from:

      docker images
      

You will see output like this:

      Output
      REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
      sammy/ubuntu-nodejs   latest              7c1f35226ca6        7 seconds ago       179MB
      ubuntu                   latest              113a43faa138        4 weeks ago         81.2MB
      hello-world              latest              e38bc07ac18e        2 months ago        1.85kB
      

In this example, ubuntu-nodejs is the new image, which was derived from the existing ubuntu image from Docker Hub. The size difference reflects the changes that were made. And in this example, the change was that Node.js was installed. So next time you need to run a container using Ubuntu with Node.js pre-installed, you can just use the new image.

You can also build images from a Dockerfile, which lets you automate the installation of software in a new image. However, that's outside the scope of this tutorial; a rough sketch follows below for orientation.
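Purely as an illustration (not part of this tutorial), a minimal Dockerfile that produces an image comparable to the one committed above might look like this:

FROM ubuntu:latest

# Install Node.js from the default Ubuntu repositories, mirroring the
# manual steps performed inside the interactive container
RUN apt-get update && apt-get install -y nodejs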

Now let's share the new image with others so they can create containers from it.

Step 8 — Pushing Docker Images to a Docker Repository

The next logical step after creating a new image from an existing image is to share it with a select few of your friends, the whole world on Docker Hub, or another Docker registry that you have access to. To push an image to Docker Hub or any other Docker registry, you must have an account there.

This section shows you how to push a Docker image to Docker Hub. To learn how to create your own private Docker registry, check out How To Set Up a Private Docker Registry on Ubuntu 14.04.

To push your image, first log into Docker Hub.

• docker login -u docker-registry-username

You will be prompted to authenticate using your Docker Hub password. If you specified the correct password, authentication should succeed.

Note: If your Docker registry username is different from the local username you used to create the image, you will have to tag your image with your registry username. For the example given in the last step, you would type:

• docker tag sammy/ubuntu-nodejs docker-registry-username/ubuntu-nodejs

Then you may push your own image using:

• docker push docker-registry-username/docker-image-name

To push the ubuntu-nodejs image to the sammy repository, the command would be:

      • docker push sammy/ubuntu-nodejs

The process may take some time to complete as it uploads the images, but when completed, the output will look like this:

      Output

The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Pushed
5f70bf18a086: Pushed
a3b5c80a4eba: Pushed
7f18b442972b: Pushed
3ce512daaf78: Pushed
7aae4540b42d: Pushed
...

After pushing an image to a registry, it should be listed on your account's dashboard, like that shown in the image below.

If a push attempt results in an error of this sort, then you likely did not log in:

      Output

The push refers to a repository [docker.io/sammy/ubuntu-nodejs]
e3fbbfb44187: Preparing
5f70bf18a086: Preparing
a3b5c80a4eba: Preparing
7f18b442972b: Preparing
3ce512daaf78: Preparing
7aae4540b42d: Waiting
unauthorized: authentication required

Log in with docker login and repeat the push attempt. Then verify that it exists on your Docker Hub repository page.

You can now use docker pull sammy/ubuntu-nodejs to pull the image to a new machine and use it to run a new container.

Conclusion

In this tutorial you installed Docker, worked with images and containers, and pushed a modified image to Docker Hub. Now that you know the basics, explore the other Docker tutorials in the DigitalOcean Community.

By Brian Hogan




      How To Set Up Laravel, Nginx, and MySQL with Docker Compose


      The author selected The FreeBSD Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Over the past few years, Docker has become a frequently used solution for deploying applications thanks to how it simplifies running and deploying applications in ephemeral containers. When using a LEMP application stack, for example, with PHP, Nginx, MySQL and the Laravel framework, Docker can significantly streamline the setup process.

      Docker Compose has further simplified the development process by allowing developers to define their infrastructure, including application services, networks, and volumes, in a single file. Docker Compose offers an efficient alternative to running multiple docker container create and docker container run commands.
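For example, once the services are defined in a docker-compose.yml file, the entire stack can be created and started with a single command rather than a series of individual container commands:

• docker-compose up -d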

      In this tutorial, you will build a web application using the Laravel framework, with Nginx as the web server and MySQL as the database, all inside Docker containers. You will define the entire stack configuration in a docker-compose file, along with configuration files for PHP, MySQL, and Nginx.

      Prerequisites

      Before you start, you will need:

      Step 1 — Downloading Laravel and Installing Dependencies

      As a first step, we will get the latest version of Laravel and install the dependencies for the project, including Composer, the application-level package manager for PHP. We will install these dependencies with Docker to avoid having to install Composer globally.

      First, check that you are in your home directory and clone the latest Laravel release to a directory called laravel-app:

      • cd ~
      • git clone https://github.com/laravel/laravel.git laravel-app

      Move into the laravel-app directory:

      Next, use Docker's composer image to mount the directories that you will need for your Laravel project and avoid the overhead of installing Composer globally:

      • docker run --rm -v $(pwd):/app composer install

      Using the -v and --rm flags with docker run creates an ephemeral container that will be bind-mounted to your current directory before being removed. This will copy the contents of your ~/laravel-app directory to the container and also ensure that the vendor folder Composer creates inside the container is copied to your current directory.
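You can reuse this pattern for other Composer commands as you work on the project. For instance, the following command (shown purely as an illustration) would regenerate the autoloader without a global Composer installation:

• docker run --rm -v $(pwd):/app composer dump-autoload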

      As a final step, set permissions on the project directory so that it is owned by your non-root user:

      • sudo chown -R $USER:$USER ~/laravel-app

      This will be important when you write the Dockerfile for your application image in Step 4, as it will allow you to work with your application code and run processes in your container as a non-root user.

      With your application code in place, you can move on to defining your services with Docker Compose.

      Step 2 — Creating the Docker Compose File

      Building your applications with Docker Compose simplifies the process of setting up and versioning your infrastructure. To set up our Laravel application, we will write a docker-compose file that defines our web server, database, and application services.

      Open the file:

      • nano ~/laravel-app/docker-compose.yml

      In the docker-compose file, you will define three services: app, webserver, and db. Add the following code to the file, being sure to replace the root password for MYSQL_ROOT_PASSWORD, defined as an environment variable under the db service, with a strong password of your choice:

      ~/laravel-app/docker-compose.yml

      version: '3'
      services:
      
        #PHP Service
        app:
          build:
            context: .
            dockerfile: Dockerfile
          image: digitalocean.com/php
          container_name: app
          restart: unless-stopped
          tty: true
          environment:
            SERVICE_NAME: app
            SERVICE_TAGS: dev
          working_dir: /var/www
          networks:
            - app-network
      
        #Nginx Service
        webserver:
          image: nginx:alpine
          container_name: webserver
          restart: unless-stopped
          tty: true
          ports:
            - "80:80"
            - "443:443"
          networks:
            - app-network
      
        #MySQL Service
        db:
          image: mysql:5.7.22
          container_name: db
          restart: unless-stopped
          tty: true
          ports:
            - "3306:3306"
          environment:
            MYSQL_DATABASE: laravel
            MYSQL_ROOT_PASSWORD: your_mysql_root_password
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          networks:
            - app-network
      
      #Docker Networks
      networks:
        app-network:
          driver: bridge
      

      The services defined here include:

      • app: This service definition contains the Laravel application and runs a custom Docker image, digitalocean.com/php, that you will define in Step 4. It also sets the working_dir in the container to /var/www.
      • webserver: This service definition pulls the nginx:alpine image from Docker and exposes ports 80 and 443.
      • db: This service definition pulls the mysql:5.7.22 image from Docker and defines a few environmental variables, including a database called laravel for your application and the root password for the database. You are free to name the database whatever you would like, and you should replace your_mysql_root_password with your own strong password. This service definition also maps port 3306 on the host to port 3306 on the container.

      Each container_name property defines a name for the container, which corresponds to the name of the service. If you don't define this property, Docker will assign each container a name by combining a random adjective with the surname of a notable scientist or hacker, separated by an underscore.

      To facilitate communication between containers, the services are connected to a bridge network called app-network. A bridge network uses a software bridge that allows containers connected to the same bridge network to communicate with each other. The bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other. This creates a greater level of security for applications, ensuring that only related services can communicate with one another. It also means that you can define multiple networks and services connecting to related functions: front-end application services can use a frontend network, for example, and back-end services can use a backend network.
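      As an illustration only, a split-network layout might look like the following sketch. The frontend and backend network names here are hypothetical and are not used elsewhere in this tutorial; in our setup, all three services share the single app-network, and environment variables and volumes are omitted here for brevity:

      version: '3'
      services:

        #Hypothetical front-end facing service
        webserver:
          image: nginx:alpine
          networks:
            - frontend

        #Hypothetical service on both networks
        app:
          image: php:7.2-fpm
          networks:
            - frontend
            - backend

        #Hypothetical back-end only service
        db:
          image: mysql:5.7.22
          networks:
            - backend

      networks:
        frontend:
          driver: bridge
        backend:
          driver: bridge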

      Let's look at how to add volumes and bind mounts to your service definitions to persist your application data.

      Step 3 — Persisting Data

      Docker has powerful and convenient features for persisting data. In our application, we will make use of volumes and bind mounts to persist the database, application code, and configuration files. Volumes offer flexibility for backups and persistence beyond a container's lifecycle, while bind mounts facilitate code changes during development, making changes to your host files or directories immediately available in your containers. Our setup will make use of both.

      Warning: By using bind mounts, you make it possible to change the host filesystem through processes running in a container, including creating, modifying, or deleting important system files or directories. This is a powerful ability with security implications, and could impact non-Docker processes on the host system. Use bind mounts with care.

      In the docker-compose file, define a volume called dbdata under the db service definition to persist the MySQL database:

      ~/laravel-app/docker-compose.yml

      ...
      #MySQL Service
      db:
        ...
          volumes:
            - dbdata:/var/lib/mysql
          networks:
            - app-network
        ...
      

      The named volume dbdata persists the contents of the /var/lib/mysql folder present inside the container. This allows you to stop and restart the db service without losing data.

      At the bottom of the file, add the definition for the dbdata volume:

      ~/laravel-app/docker-compose.yml

      ...
      #Volumes
      volumes:
        dbdata:
          driver: local
      

      With this definition in place, you will be able to use this volume across services.
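      Once the containers are running (Step 8), you can inspect the named volume with Docker's volume commands. Note that Compose prefixes volume names with the project directory name by default, so the volume will most likely appear as laravel-app_dbdata; the exact name is an assumption based on that default naming:

      • docker volume ls
      • docker volume inspect laravel-app_dbdata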

      Next, add a bind mount to the db service for the MySQL configuration files you will create in Step 7:

      ~/laravel-app/docker-compose.yml

      ...
      #MySQL Service
      db:
        ...
          volumes:
            - dbdata:/var/lib/mysql
            - ./mysql/my.cnf:/etc/mysql/my.cnf
        ...
      

      This bind mount binds ~/laravel-app/mysql/my.cnf to /etc/mysql/my.cnf in the container.

      Next, add bind mounts to the webserver service. There will be two: one for your application code and another for the Nginx configuration definition that you will create in Step 6:

      ~/laravel-app/docker-compose.yml

      #Nginx Service
      webserver:
        ...
        volumes:
            - ./:/var/www
            - ./nginx/conf.d/:/etc/nginx/conf.d/
        networks:
            - app-network
      

      The first bind mount binds the application code in the ~/laravel-app directory to the /var/www directory inside the container. The configuration file that you will add to ~/laravel-app/nginx/conf.d/ will also be mounted to /etc/nginx/conf.d/ in the container, allowing you to add or modify the configuration directory's contents as needed.

      Finally, add the following bind mounts to the app service for the application code and configuration files:

      ~/laravel-app/docker-compose.yml

      #PHP Service
      app:
        ...
        volumes:
             - ./:/var/www
             - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
        networks:
            - app-network
      

      The app service is bind-mounting the ~/laravel-app folder, which contains the application code, to the /var/www folder in the container. This will speed up the development process, since any changes made to your local application directory will be instantly reflected inside the container. You are also binding your PHP configuration file, ~/laravel-app/php/local.ini, to /usr/local/etc/php/conf.d/local.ini inside the container. You will create the local PHP configuration file in Step 5.
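      Once the containers are up (Step 8), you can see the bind mount in action: any file you edit in ~/laravel-app on the host appears immediately at /var/www inside the container. For example, this optional check lists the mounted application code from inside the app container:

      • docker-compose exec app ls -l /var/www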

      Your docker-compose file will now look like this:

      ~/laravel-app/docker-compose.yml

      version: '3'
      services:
      
        #PHP Service
        app:
          build:
            context: .
            dockerfile: Dockerfile
          image: digitalocean.com/php
          container_name: app
          restart: unless-stopped
          tty: true
          environment:
            SERVICE_NAME: app
            SERVICE_TAGS: dev
          working_dir: /var/www
          volumes:
            - ./:/var/www
            - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
          networks:
            - app-network
      
        #Nginx Service
        webserver:
          image: nginx:alpine
          container_name: webserver
          restart: unless-stopped
          tty: true
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - ./:/var/www
            - ./nginx/conf.d/:/etc/nginx/conf.d/
          networks:
            - app-network
      
        #MySQL Service
        db:
          image: mysql:5.7.22
          container_name: db
          restart: unless-stopped
          tty: true
          ports:
            - "3306:3306"
          environment:
            MYSQL_DATABASE: laravel
            MYSQL_ROOT_PASSWORD: your_mysql_root_password
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          volumes:
            - dbdata:/var/lib/mysql/
            - ./mysql/my.cnf:/etc/mysql/my.cnf
          networks:
            - app-network
      
      #Docker Networks
      networks:
        app-network:
          driver: bridge
      #Volumes
      volumes:
        dbdata:
          driver: local
      

      Save the file and exit your editor when you are finished making changes.

      With your docker-compose file written, you can now build the custom image for your application.

      Step 4 — Creating the Dockerfile

      Docker allows you to specify the environment inside of individual containers with a Dockerfile. A Dockerfile enables you to create custom images that you can use to install the software required by your application and configure settings based on your requirements. You can push the custom images you create to Docker Hub or any private registry.

      Our Dockerfile will be located in our ~/laravel-app directory. Create the file:

      • nano ~/laravel-app/Dockerfile

      This Dockerfile will set the base image and specify the necessary commands and instructions to build the Laravel application image. Add the following code to the file:

      ~/laravel-app/Dockerfile

      FROM php:7.2-fpm
      
      # Copy composer.lock and composer.json
      COPY composer.lock composer.json /var/www/
      
      # Set working directory
      WORKDIR /var/www
      
      # Install dependencies
      RUN apt-get update && apt-get install -y \
          build-essential \
          mysql-client \
          libpng-dev \
          libjpeg62-turbo-dev \
          libfreetype6-dev \
          locales \
          zip \
          jpegoptim optipng pngquant gifsicle \
          vim \
          unzip \
          git \
          curl
      
      # Clear cache
      RUN apt-get clean && rm -rf /var/lib/apt/lists/*
      
      # Install extensions
      RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
      RUN docker-php-ext-configure gd --with-gd --with-freetype-dir=/usr/include/ --with-jpeg-dir=/usr/include/ --with-png-dir=/usr/include/
      RUN docker-php-ext-install gd
      
      # Install composer
      RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
      
      # Add user for laravel application
      RUN groupadd -g 1000 www
      RUN useradd -u 1000 -ms /bin/bash -g www www
      
      # Copy existing application directory contents
      COPY . /var/www
      
      # Copy existing application directory permissions
      COPY --chown=www:www . /var/www
      
      # Change current user to www
      USER www
      
      # Expose port 9000 and start php-fpm server
      EXPOSE 9000
      CMD ["php-fpm"]
      

      First, the Dockerfile creates an image on top of the php:7.2-fpm Docker image. This is a Debian-based image that has the PHP FastCGI implementation PHP-FPM installed. The file also installs the PHP extensions Laravel needs (pdo_mysql, mbstring, zip, exif, pcntl, and gd) along with Composer.

      The RUN directive specifies the commands to update, install, and configure settings inside the container, including creating a dedicated user and group called www. The WORKDIR instruction specifies the /var/www directory as the working directory for the application.

      Creating a dedicated user and group with restricted permissions mitigates the inherent vulnerability when running Docker containers, which run by default as root. Instead of running this container as root, we've created the www user, who has read/write access to the /var/www folder thanks to the COPY instruction that we are using with the --chown flag to copy the application folder's permissions.

      Finally, the EXPOSE command exposes a port in the container, 9000, for the php-fpm server. CMD specifies the command that should run once the container is created. Here, CMD specifies "php-fpm", which will start the server.

      Save the file and exit your editor when you are finished making changes.
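      Once the containers are running (Step 8), you can confirm that processes in the app container run as the www user rather than as root; this check is optional:

      • docker-compose exec app whoami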

      You can now move on to defining your PHP configuration.

      Step 5 — Configuring PHP

      Now that you have defined your infrastructure in the docker-compose file, you can configure the PHP service to act as a PHP processor for incoming requests from Nginx.

      To configure PHP, you will create the local.ini file inside the php folder. This is the file that you bind-mounted to /usr/local/etc/php/conf.d/local.ini inside the container in Step 2. Creating this file will allow you to override the default php.ini file that PHP reads when it starts.

      Create the php directory:

      • mkdir ~/laravel-app/php

      Next, open the local.ini file:

      • nano ~/laravel-app/php/local.ini

      To demonstrate how to configure PHP, we'll add the following code to set size limitations for uploaded files:

      ~/laravel-app/php/local.ini

      upload_max_filesize=40M
      post_max_size=40M
      

      The upload_max_filesize and post_max_size directives set the maximum allowed size for uploaded files, and demonstrate how you can set php.ini configurations from your local.ini file. You can put any PHP-specific configuration that you want to override in the local.ini file.

      Save the file and exit your editor.
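      Once the app container is running (Step 8), you can confirm that the override took effect; this optional check uses php -r to read the setting from inside the container:

      • docker-compose exec app php -r "echo ini_get('upload_max_filesize');"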

      With your PHP local.ini file in place, you can move on to configuring Nginx.

      Step 6 — Configuring Nginx

      With the PHP service configured, you can modify the Nginx service to use PHP-FPM as the FastCGI server to serve dynamic content. The FastCGI server is based on a binary protocol for interfacing interactive programs with a web server. For more information, please refer to this article on Understanding and Implementing FastCGI Proxying in Nginx.

      To configure Nginx, you will create an app.conf file with the service configuration in the ~/laravel-app/nginx/conf.d/ folder.

      First, create the nginx/conf.d/ directory:

      • mkdir -p ~/laravel-app/nginx/conf.d

      Next, create the app.conf configuration file:

      • nano ~/laravel-app/nginx/conf.d/app.conf

      Add the following code to the file to specify your Nginx configuration:

      ~/laravel-app/nginx/conf.d/app.conf

      server {
          listen 80;
          index index.php index.html;
          error_log  /var/log/nginx/error.log;
          access_log /var/log/nginx/access.log;
          root /var/www/public;
          location ~ \.php$ {
              try_files $uri =404;
              fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_pass app:9000;
              fastcgi_index index.php;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_path_info;
          }
          location / {
              try_files $uri $uri/ /index.php?$query_string;
              gzip_static on;
          }
      }
      

      The server block defines the configuration for the Nginx web server with the following directives:

      • listen: This directive defines the port on which the server will listen to incoming requests.
      • error_log and access_log: These directives define the files for writing logs.
      • root: This directive sets the root folder path, forming the complete path to any requested file on the local file system.

      In the php location block, the fastcgi_pass directive specifies that the app service is listening on a TCP socket on port 9000, so the PHP-FPM server listens over the network rather than on a Unix socket. A Unix socket has a slight speed advantage over a TCP socket because it does not use a network protocol and thus skips the network stack, but it only works when the communicating processes share a filesystem. When services run in separate containers or on different hosts, a TCP socket lets you connect to them over the network. Because our app and webserver services run in separate containers and communicate over the Docker bridge network, a TCP socket makes the most sense for our configuration.

      Save the file and exit your editor when you are finished making changes.

      Thanks to the bind mount you created in Step 2, any changes you make inside the nginx/conf.d/ folder will be directly reflected inside the webserver container.

      Next, let's look at our MySQL settings.

      Step 7 — Configuring MySQL

      With PHP and Nginx configured, you can enable MySQL to act as the database for your application.

      To configure MySQL, you will create the my.cnf file in the mysql folder. This is the file that you bind-mounted to /etc/mysql/my.cnf inside the container in Step 2. This bind mount allows you to override the my.cnf settings as and when required.

      To demonstrate how this works, we'll add settings to the my.cnf file that enable the general query log and specify the log file.

      First, create the mysql directory:

      • mkdir ~/laravel-app/mysql

      Next, make the my.cnf file:

      • nano ~/laravel-app/mysql/my.cnf

      In the file, add the following code to enable the query log and set the log file location:

      ~/laravel-app/mysql/my.cnf

      [mysqld]
      general_log = 1
      general_log_file = /var/lib/mysql/general.log
      

      This my.cnf file enables the general query log by setting general_log to 1, and the general_log_file setting specifies where the log will be stored.

      Save the file and exit your editor.
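      Because the log file lives under /var/lib/mysql, it is stored on the dbdata volume and persists across container restarts. Once the containers are running (Step 8), you can follow the general query log from inside the db container; this check is optional:

      • docker-compose exec db tail -f /var/lib/mysql/general.log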

      Our next step will be to start the containers.

      Step 8 — Running the Containers and Modifying Environment Settings

      Now that you have defined all of your services in your docker-compose file and created the configuration files for these services, you can start the containers. As a final step, though, make a copy of the .env.example file that Laravel includes by default and name the copy .env, which is the file Laravel expects to define its environment:

      • cp .env.example .env

      We will configure the specific details of our setup in this file once we have started the containers.

      With all of your services defined in your docker-compose file, you just need to issue a single command to start all of the containers, create the volumes, and set up and connect the networks:

      • docker-compose up -d

      When you run docker-compose up for the first time, it will download all of the necessary Docker images, which might take a while. Once the images are downloaded and stored on your local machine, Compose will create your containers. The -d flag daemonizes the process, running your containers in the background.

      Once the process is complete, use the following command to list all of the running containers:

      • docker ps

      You will see the following output with details about your app, webserver, and db containers:

      Output

      CONTAINER ID        NAMES               IMAGE                  STATUS              PORTS
      c31b7b3251e0        db                  mysql:5.7.22           Up 2 seconds        0.0.0.0:3306->3306/tcp
      ed5a69704580        app                 digitalocean.com/php   Up 2 seconds        9000/tcp
      5ce4ee31d7c0        webserver           nginx:alpine           Up 2 seconds        0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp

      The CONTAINER ID in this output is a unique identifier for each container, while NAMES lists the service name associated with each. You can use both of these identifiers to access the containers. IMAGE defines the image name for each container, while STATUS provides information about the container's state: whether it's running, restarting, or stopped.

      You can now modify the .env file on the app container to include specific details about your setup.

      Open the file using docker-compose exec, which allows you to run specific commands in containers. In this case, you are opening the file for editing:

      • docker-compose exec app nano .env

      Find the block that specifies DB_CONNECTION and update it to reflect the specifics of your setup. You will modify the following fields:

      • DB_HOST will be your db database container.
      • DB_DATABASE will be the laravel database.
      • DB_USERNAME will be the username you will use for your database. In this case, we will use laraveluser.
      • DB_PASSWORD will be the secure password you would like to use for this user account.

      /var/www/.env

      DB_CONNECTION=mysql
      DB_HOST=db
      DB_PORT=3306
      DB_DATABASE=laravel
      DB_USERNAME=laraveluser
      DB_PASSWORD=your_laravel_db_password
      

      Save your changes and exit your editor.

      Next, set the application key for the Laravel application with the php artisan key:generate command. This command will generate a key and copy it to your .env file, ensuring that your user sessions and encrypted data remain secure:

      • docker-compose exec app php artisan key:generate

      You now have the environment settings required to run your application. To cache these settings into a file, which will boost your application's load speed, run:

      • docker-compose exec app php artisan config:cache

      Your configuration settings will be loaded into /var/www/bootstrap/cache/config.php on the container.
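      Keep in mind that the cached configuration will not pick up later changes to your .env file. If you modify your environment settings after this point, clear and rebuild the cache:

      • docker-compose exec app php artisan config:clear
      • docker-compose exec app php artisan config:cache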

      As a final step, visit http://your_server_ip in the browser. You will see the following home page for your Laravel application:

      Laravel Home Page

      With your containers running and your configuration information in place, you can move on to configuring your user information for the laravel database on the db container.

      Step 9 — Creating a User for MySQL

      The default MySQL installation only creates the root administrative account, which has unlimited privileges on the database server. In general, it's better to avoid using the root administrative account when interacting with the database. Instead, let's create a dedicated database user for our application's Laravel database.

      To create a new user, execute an interactive bash shell on the db container with docker-compose exec:

      • docker-compose exec db bash

      Inside the container, log into the MySQL root administrative account:

      • mysql -u root -p

      You will be prompted for the password that you set for the MySQL root account in your docker-compose file.

      Start by checking for the database called laravel, which you defined in your docker-compose file. Run the show databases command to check for existing databases:

      • show databases;

      You will see the laravel database listed in the output:

      Output

      +--------------------+
      | Database           |
      +--------------------+
      | information_schema |
      | laravel            |
      | mysql              |
      | performance_schema |
      | sys                |
      +--------------------+
      5 rows in set (0.00 sec)

      Next, create the user account that will be allowed to access this database. Our username will be laraveluser, though you can replace this with another name if you'd prefer. Just be sure that your username and password here match the details you set in your .env file in the previous step:

      • GRANT ALL ON laravel.* TO 'laraveluser'@'%' IDENTIFIED BY 'your_laravel_db_password';

      Flush the privileges to notify the MySQL server of the changes:

      • FLUSH PRIVILEGES;

      Exit MySQL:

      • EXIT;

      Finally, exit the container:

      • exit

      You have configured the user account for your Laravel application database and are ready to migrate your data and work with the Tinker console.

      Step 10 — Migrating Data and Working with the Tinker Console

      With your application running, you can migrate your data and experiment with the tinker command, which will initiate a PsySH console with Laravel preloaded. PsySH is a runtime developer console and interactive debugger for PHP, and Tinker is a REPL specifically for Laravel. Using the tinker command will allow you to interact with your Laravel application from the command line in an interactive shell.

      First, test the connection to MySQL by running the Laravel artisan migrate command, which creates a migrations table in the database from inside the container:

      • docker-compose exec app php artisan migrate

      This command will migrate the default Laravel tables. The output confirming the migration will look like this:

      Output

      Migration table created successfully.
      Migrating: 2014_10_12_000000_create_users_table
      Migrated:  2014_10_12_000000_create_users_table
      Migrating: 2014_10_12_100000_create_password_resets_table
      Migrated:  2014_10_12_100000_create_password_resets_table

      Once the migration is complete, you can run a query to check if you are properly connected to the database using the tinker command:

      • docker-compose exec app php artisan tinker

      Test the MySQL connection by getting the data you just migrated:

      • DB::table('migrations')->get();

      You will see output that looks like this:

      Output

      => Illuminate\Support\Collection {#2856
           all: [
             {#2862
               +"id": 1,
               +"migration": "2014_10_12_000000_create_users_table",
               +"batch": 1,
             },
             {#2865
               +"id": 2,
               +"migration": "2014_10_12_100000_create_password_resets_table",
               +"batch": 1,
             },
           ],
         }

      You can use tinker to interact with your databases and to experiment with services and models.
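      For example, since the migration created a users table, you can run another quick query from the same tinker session; the table is empty at this point, so the count should be 0:

      • DB::table('users')->count();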

      With your Laravel application in place, you are ready for further development and experimentation.

      Conclusion

      You now have a LEMP stack application running on your server, which you've tested by accessing the Laravel welcome page and creating MySQL database migrations.

      Key to the simplicity of this installation is Docker Compose, which allows you to create a group of Docker containers, defined in a single file, with a single command. If you would like to learn more about how to do CI with Docker Compose, take a look at How To Configure a Continuous Integration Testing Environment with Docker and Docker Compose on Ubuntu 16.04. If you want to streamline your Laravel application deployment process then How to Automatically Deploy Laravel Applications with Deployer on Ubuntu 16.04 will be a relevant resource.


