
      Create a MEAN app with Angular and Docker Compose

      Introduction

Note: Updated 30/03/2019

This article has been updated to reflect changes to both Docker and Angular since it was first written. The current version of Angular is 7, and the update also attaches a Docker volume to the Angular client so that you no longer need to run docker-compose build after every change.

      Docker allows us to run applications inside containers. These containers in most cases communicate with each other.

      Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server. This guarantees that the software will always run the same, regardless of its environment.

      We’ll build an angular app in one container, point it to an Express API in another container, which connects to MongoDB in another container.

      If you haven’t worked with Docker before, this would be a good starting point as we will explain every step covered, in some detail.

      Why Use Docker

      1. Docker images usually include only what your application needs to run. As a result, you don’t have to worry about having a whole operating system with things you will never use. This results in smaller images of your application.
2. Platform Independent – I bet you’ve heard the phrase ‘It worked on my machine but doesn’t work on the server’. With Docker, all either environment needs is the Docker Engine (the Docker daemon), and once we have a successful build of our image, it should run anywhere.
      3. Once you have an image of your application built, you can easily share the image with anyone who wants to run your application. They need not worry about dependencies, or setting up their individual environments. All they need to have is Docker Engine installed.
4. Isolation – You’ll see in this article that I try to keep the individual apps independent, pointing to each other only where needed. The reason is that each part of the entire application should be somewhat independent and scalable on its own. Docker makes scaling these individual parts as easy as spinning up another instance of their images. This concept of building isolated, independently scalable parts of a system is what is called the Microservices Approach. You can read more about it in Introduction to Microservices.
      5. Docker images usually have tags, referring to their versions. This means you can have versioned builds of your image, enabling you to roll back to a previous version should something unexpected break.

You need to have docker and docker-compose installed in your setup. Instructions for installing docker on your platform can be found here.

      Instructions for installing docker-compose can be found here.

      Verify your installation by running:

      1. docker -v

      Output

      Docker version 18.09.2, build 6247962
      1. docker-compose -v

      Output

      docker-compose version 1.23.2, build 1110ad01
      1. node -v

      Output

      v11.12.0

      Next, you need to know how to build a simple Angular app and an Express App. We’ll be using the Angular CLI to build a simple app.

      We’ll now separately build out these three parts of our app. The approach we are going to take is building the app in our local environment, then dockerizing the app.

      Once these are running, we’ll connect the three docker containers. Note that we are only building two containers, Angular and the Express/Node API. The third container will be from a MongoDB image that we’ll just pull from the Docker Hub.

      Docker Hub is a repository for docker images. It’s where we pull down official docker images such as MongoDB, NodeJs, Ubuntu, and we can also create custom images and push them to Docker Hub for other people to pull and use.
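
For example, pulling an official image and publishing a custom one look like this. This is a quick sketch, and your-username is a placeholder for a real Docker Hub account you are logged into with docker login:

# Download the official MongoDB image from Docker Hub
docker pull mongo

# Re-tag a locally built image under your account, then upload it
docker tag angular-client:dev your-username/angular-client:dev
docker push your-username/angular-client:dev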

      Let’s create a directory for our whole setup, we’ll call it mean-docker.

      1. mkdir mean-docker

      Next, we’ll create an Angular app and make sure it runs in a docker container.

      Create a directory called angular-client inside the mean-docker directory we created above, and initialize an Angular App with the Angular CLI.

We’ll use npx, a tool that allows us to run CLI apps without installing them on our system. It comes bundled with npm since version 5.2.0.

      1. npx @angular/cli new angular-client
      ? Would you like to add Angular routing? No
      ? Which stylesheet format would you like to use? CSS
      

This scaffolds an Angular app, and npm installs the app’s dependencies. Our directory structure should look like this:

      └── mean-docker
          └── angular-client
              ├── README.md
              ├── angular.json
              ├── e2e
              ├── node_modules
              ├── package.json
              ├── package-lock.json
              ├── src
              ├── tsconfig.json
              └── tslint.json
      

      Running npm start, inside the angular-client directory should start the angular app at http://localhost:4200.

To dockerize any app, we usually need to write a Dockerfile.

      A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image.

To quickly brainstorm what our angular app needs in order to run:

      1. We need an image with Node.js installed on it
      2. We could have the Angular CLI installed on the image, but the package.json file has it as a dependency, so it’s not a requirement.
      3. We can add our Angular app to the image and install its dependencies.
      4. It needs to expose port 4200 so that we can access it from our host machine through localhost:4200.
5. If all these requirements are met, we can run npm start in the container (which in turn runs ng serve, since it’s a script in the package.json file), and our app should run.

      Those are the exact instructions we are going to write in our Dockerfile.

      mean-docker/angular-client/Dockerfile

      # Create image based on the official Node 10 image from dockerhub
      FROM node:10
      
      # Create a directory where our app will be placed
      RUN mkdir -p /app
      
      # Change directory so that our commands run inside this new directory
      WORKDIR /app
      
      # Copy dependency definitions
      COPY package*.json /app/
      
      # Install dependencies
      RUN npm install
      
      # Get all the code needed to run the app
      COPY . /app/
      
      # Expose the port the app runs in
      EXPOSE 4200
      
      # Serve the app
      CMD ["npm", "start"]
      

I’ve commented the file to show clearly what each instruction does.

Note: Before we build the image, if you are keen, you may have noticed that the line COPY . /app/ copies our whole directory into the container, including node_modules. To exclude it, and other files that are irrelevant to our container, we can add a .dockerignore file and list what is to be ignored. This file is often similar to the .gitignore file.

      Create a .dockerignore file.

      mean-docker/angular-client/.dockerignore

      node_modules/
      

One last thing we have to do before building the image is to ensure that the app is reachable from outside the container, not just on the container’s own localhost. To ensure this, go into your package.json and change the start script to:

      mean-docker/angular-client/package.json

      {
       ...
        "scripts": {
          "start": "ng serve --host 0.0.0.0",
          ...
        },
        ...
      }
      

To build the image we will use the docker build command. The syntax is:

1. docker build -t <image_name>:<tag> <directory_with_Dockerfile>

Make sure you are in the mean-docker/angular-client directory, then build your image.

      1. cd angular-client
      1. docker build -t angular-client:dev .

-t is short for --tag, and refers to the name or tag given to the image being built. In this case the tag will be angular-client:dev.

      The . (dot) at the end refers to the current directory. Docker will look for the Dockerfile in our current directory and use it to build an image.

      This could take a while depending on your internet connection.

Now that the image is built, we can run a container based on it, using this syntax:

      1. docker run -d --name <container_name> -p <host-port:exposed-port> <image-name>

      The -d flag tells docker to run the container in detached mode. Meaning, it will run and get you back to your host, without going into the container.

1. docker run -d --name angular-client -p 4200:4200 angular-client:dev

Output

8310253fe80373627b2c274c5a9de930dc7559b3dc8eef4abe4cb09aa1828a22

      --name refers to the name that will be assigned to the container.

-p or --publish refers to the host port that maps to a port inside the docker container. In this case, localhost:4200 should point to the container’s port 4200, hence the syntax 4200:4200.

Visiting localhost:4200 in your host browser should now serve the angular app from the container.

      You can stop the container running with:

      1. docker stop angular-client

We’ve containerized the angular app, and we are now two steps away from our complete setup.

      Containerizing an express app should now be straightforward. Create a directory in the mean-docker directory called express-server.

      1. mkdir express-server

Add the following package.json file inside the new directory.

      mean-docker/express-server/package.json

      {
        "name": "express-server",
        "version": "0.0.0",
        "private": true,
        "scripts": {
          "start": "node server.js"
        },
        "dependencies": {
          "body-parser": "~1.15.2",
          "express": "~4.14.0"
        }
      }
      

Then, we’ll create a simple express app inside it. Create a server.js file and a routes/api.js file:

1. cd express-server
      1. touch server.js
      1. mkdir routes && cd routes
      1. touch api.js

      mean-docker/express-server/server.js

      
const express = require('express');
const path = require('path');
const http = require('http');
const bodyParser = require('body-parser');

// Get our API routes
const api = require('./routes/api');

const app = express();

// Parsers for POST data
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));

// Set our api routes
app.use('/', api);

// Get port from environment and store in Express
const port = process.env.PORT || '3000';
app.set('port', port);

// Create HTTP server
const server = http.createServer(app);

// Listen on provided port, on all network interfaces
server.listen(port, () => console.log(`API running on localhost:${port}`));
      
mean-docker/express-server/routes/api.js

      const express = require('express');
      const router = express.Router();
      
      
      router.get('/', (req, res) => {
          res.send('api works');
      });
      
      module.exports = router;
      

This is a simple express app. Install the dependencies and start the app:

      1. npm install
      1. npm start

      Going to localhost:3000 in your browser should serve the app.

      To run this app inside a Docker container, we’ll also create a Dockerfile for it. It should be pretty similar to what we already have for the angular-client.

      mean-docker/express-server/Dockerfile

      # Create image based on the official Node 6 image from the dockerhub
      FROM node:6
      
      # Create a directory where our app will be placed
      RUN mkdir -p /usr/src/app
      
      # Change directory so that our commands run inside this new directory
      WORKDIR /usr/src/app
      
      # Copy dependency definitions
      COPY package.json /usr/src/app
      
      # Install dependencies
      RUN npm install
      
      # Get all the code needed to run the app
      COPY . /usr/src/app
      
      # Expose the port the app runs in
      EXPOSE 3000
      
      # Serve the app
      CMD ["npm", "start"]
      

      You can see the file is pretty much the same as the angular-client Dockerfile, except for the exposed port.

      You could also add a .dockerignore file to ignore files we do not need.

      mean-docker/express-server/.dockerignore

      node_modules/
      

      We can then build the image and run a container based on the image with:

      1. docker build -t express-server:dev .
      1. docker run -d --name express-server -p 3000:3000 express-server:dev

      Going to localhost:3000 in your browser should serve the API.

Once you are done, you can stop the container with:

      1. docker stop express-server

The last part of our MEAN setup, before we connect them all together, is MongoDB. We don’t need to write a Dockerfile to build a MongoDB image, because an official one already exists on Docker Hub. We only need to know how to run it.

Assuming we had the MongoDB image already, we’d run a container based on it with:

      1. docker run -d --name mongodb -p 27017:27017 mongo

      The image name in this instance is mongo, the last parameter, and the container name will be mongodb.

Docker will check whether you already have a mongo image downloaded or built. If not, it will look for the image on Docker Hub. If you run the above command, you should have a mongodb instance running inside a container.

To check if MongoDB is running, simply go to http://localhost:27017 in your browser, and you should see the message: “It looks like you are trying to access MongoDB over HTTP on the native driver port.”

Alternatively, if you have mongo installed on your host machine, simply run mongo in the terminal. It should open the mongo shell without any errors.
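
If you don’t have the mongo shell on your host, you can run the one bundled inside the container instead. A quick check, assuming the container is named mongodb as above:

# Ping the server using the mongo shell inside the running container
docker exec -it mongodb mongo --eval "db.runCommand({ ping: 1 })"

The command should print { "ok" : 1 } among its output.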

      To connect and run multiple containers with docker, we use Docker Compose.

      Compose is a tool for defining and running multi-container Docker applications. With Compose, you use a Compose file to configure your application’s services. Then, using a single command, you create and start all the services from your configuration.

      docker-compose is usually installed when you install docker. So to simply check if you have it installed, run:

      1. docker-compose

You should see a list of commands from docker-compose. If not, you can go through the installation instructions here.

Note: Ensure that you have docker-compose version 1.6 or above by running docker-compose -v.

      Create a docker-compose.yml file at the root of our setup.

      1. touch docker-compose.yml

      Our directory tree should now look like this.

      .
      ├── angular-client
      ├── docker-compose.yml
      └── express-server
      

      Then edit the docker-compose.yml file

      mean-docker/docker-compose.yml

version: '2' # specify docker-compose version

# Define the services/containers to be run
services:
  angular: # name of the first service
    build: angular-client # specify the directory of the Dockerfile
    ports:
      - "4200:4200" # specify port mapping

  express: # name of the second service
    build: express-server # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify port mapping

  database: # name of the third service
    image: mongo # specify image to build container from
    ports:
      - "27017:27017" # specify port mapping
      

      The docker-compose.yml file is a simple configuration file telling docker-compose which containers to build. That’s pretty much it.

      Now, to run containers based on the three images, simply run

      1. docker-compose up

This will build the images if they are not already built, and then run them. Once it’s up, your terminal will show the logs from all three containers.

You can then visit all three apps: http://localhost:4200, http://localhost:3000, or mongodb://localhost:27017, and you’ll see that all three containers are running.

      Finally, the fun part.

      Express and MongoDB

      We now finally need to connect the three containers. We’ll first create a simple CRUD feature in our API using mongoose. You can go through Easily Develop Node.js and MongoDB Apps with Mongoose to get a more detailed explanation of mongoose.

      First of all, add mongoose to your express server package.json

      mean-docker/express-server/package.json

      {
        "name": "express-server",
        "version": "0.0.0",
        "private": true,
        "scripts": {
          "start": "node server.js"
        },
        "dependencies": {
          "body-parser": "~1.15.2",
          "express": "~4.14.0",
          "mongoose": "^4.7.0"
        }
      }
      

      We need to update our API to use MongoDB:

      mean-docker/express-server/routes/api.js

      
const mongoose = require('mongoose');
const express = require('express');
const router = express.Router();

// 'database' is the name of the mongo service in docker-compose.yml
const dbHost = 'mongodb://database/mean-docker';

// Connect to MongoDB
mongoose.connect(dbHost);

// Create a mongoose schema
const userSchema = new mongoose.Schema({
  name: String,
  age: Number
});

// Create a mongoose model based on the schema
const User = mongoose.model('User', userSchema);

/* GET api listing. */
router.get('/', (req, res) => {
        res.send('api works');
});

/* GET all users. */
router.get('/users', (req, res) => {
    User.find({}, (err, users) => {
        if (err) return res.status(500).send(err);

        res.status(200).json(users);
    });
});

/* GET one user by id. */
router.get('/users/:id', (req, res) => {
    User.findById(req.params.id, (err, user) => {
        if (err) return res.status(500).send(err);

        res.status(200).json(user);
    });
});

/* POST a new user. */
router.post('/users', (req, res) => {
    let user = new User({
        name: req.body.name,
        age: req.body.age
    });

    user.save(err => {
        if (err) return res.status(500).send(err);

        res.status(201).json({
            message: 'User created successfully'
        });
    });
});

module.exports = router;
      

Two main differences. First, our connection to MongoDB is the line const dbHost = 'mongodb://database/mean-docker';. The hostname database is the name of the database service we created in the docker-compose file.

We’ve also added REST routes: GET /users, GET /users/:id and POST /users.

      Update the docker-compose file, telling the express service to link to the database service.

      mean-docker/docker-compose.yml

version: '2' # specify docker-compose version

# Define the services/containers to be run
services:
  angular: # name of the first service
    build: angular-client # specify the directory of the Dockerfile
    ports:
      - "4200:4200" # specify port mapping
    volumes:
      - ./angular-client:/app # attach the angular app directory to /app in the container

  express: # name of the second service
    build: express-server # specify the directory of the Dockerfile
    ports:
      - "3000:3000" # specify port mapping
    links:
      - database # link this service to the database service

  database: # name of the third service
    image: mongo # specify image to build container from
    ports:
      - "27017:27017" # specify port mapping
      

      The links property of the docker-compose file creates a connection to the other service with the name of the service as the hostname. In this case database will be the hostname. Meaning, to connect to it from the express service, we should use database:27017. That’s why we made the dbHost equal to mongodb://database/mean-docker.

Also, I’ve added a volume to the angular service. This enables changes we make to the Angular app to automatically trigger recompilation in the container.
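
Once you’ve rebuilt and restarted the stack with docker-compose up --build (we’ll do that shortly), you can exercise the new routes from your host with curl. A quick sketch, assuming the API is still published on localhost:3000:

# Create a user
curl -X POST -H "Content-Type: application/json" \
    -d '{"name": "Jane", "age": 30}' \
    http://localhost:3000/users

# List all users
curl http://localhost:3000/users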

      Angular and Express

      The last part is to connect the Angular app to the express server. To do this, we’ll need to make some modifications to our angular app to consume the express API.

      Add the Angular HTTP Client.

mean-docker/angular-client/src/app/app.module.ts

      import { BrowserModule } from '@angular/platform-browser';
      import { NgModule } from '@angular/core';
      import { HttpClientModule } from '@angular/common/http'; 
      
      import { AppComponent } from './app.component';
      
      @NgModule({
        declarations: [
          AppComponent
        ],
        imports: [
          BrowserModule,
          HttpClientModule 
        ],
        providers: [],
        bootstrap: [AppComponent]
      })
      export class AppModule { }
      

      mean-docker/angular-client/src/app/app.component.ts

import { Component, OnInit } from '@angular/core';
import { HttpClient } from '@angular/common/http';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements OnInit {
  title = 'app works!';

  // Link to our API, pointing to localhost
  API = 'http://localhost:3000';

  // Declare an empty array for the people we'll fetch
  people: any[] = [];

  constructor(private http: HttpClient) {}

  // Fetch all people when the component loads
  ngOnInit() {
    this.getAllPeople();
  }

  // Add a person through the API, then refresh the list
  addPerson(name, age) {
    this.http.post(`${this.API}/users`, {name, age})
      .subscribe(() => {
        this.getAllPeople();
      });
  }

  // Fetch all people from the API
  getAllPeople() {
    this.http.get(`${this.API}/users`)
      .subscribe((people: any) => {
        console.log(people);
        this.people = people;
      });
  }
}
      

      Angular best practices guides usually recommend separating most logic into a service/provider. We’ve placed all the code in the component here for brevity.

We’ve imported the OnInit interface, to run code when the component is initialized, then added two methods, addPerson and getAllPeople, that call the API.

Notice that this time around, our API is pointing to localhost. This is because while the Angular app runs inside the container, it’s served to the browser, and it’s the browser that makes the requests. It will thus make requests to the exposed Express API. As a result, we don’t need to link Angular and Express in the docker-compose.yml file.

Next, we need to make some changes to the template. First, add bootstrap via CDN to the index.html.

mean-docker/angular-client/src/index.html

      <!doctype html>
      <html>
      <head>
        <meta charset="utf-8">
        <title>Angular Client</title>
        <base href="/">
      
        <meta name="viewport" content="width=device-width, initial-scale=1">
      
        
        <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/4.0.0-alpha.2/css/bootstrap.min.css">
      
        <link rel="icon" type="image/x-icon" href="favicon.ico">
      </head>
      <body>
        <app-root>Loading...</app-root>
      </body>
      </html>
      

      Then update the app.component.html template

      mean-docker/angular-client/src/app/app.component.html

      
      <nav class="navbar navbar-light bg-faded">
        <div class="container">
          <a class="navbar-brand" href="#">Mean Docker</a>
        </div>
      </nav>
      
      <div class="container">
          <h3>Add new person</h3>
          <form>
            <div class="form-group">
              <label for="name">Name</label>
              <input type="text" class="form-control" id="name" #name>
            </div>
            <div class="form-group">
              <label for="age">Age</label>
              <input type="number" class="form-control" id="age" #age>
            </div>
            <button type="button" (click)="addPerson(name.value, age.value)" class="btn btn-primary">Add person</button>
          </form>
      
          <h3>People</h3>
          
          <div class="card card-block col-md-3" *ngFor="let person of people">
            <h4 class="card-title">{{person.name}}  {{person.age}}</h4>
          </div>
      </div>
      

The above template shows the component’s properties and bindings. We are almost done.

Since we’ve made changes to our code, we need to rebuild our images with Docker Compose:

      1. docker-compose up --build

The --build flag tells docker-compose that we’ve made changes and it needs to do a clean build of our images.

Once this is done, go to localhost:4200 in your browser and open the developer console.

We are getting a No 'Access-Control-Allow-Origin' error. To quickly fix this, we need to enable Cross-Origin requests in our express app. We’ll do this with a simple middleware.

      mean-docker/express-server/server.js

      
      
      
app.use(bodyParser.json());
app.use(bodyParser.urlencoded({ extended: false }));

// Enable CORS so the Angular client, served from a different origin, can call the API
app.use(function(req, res, next) {
  res.header("Access-Control-Allow-Origin", "*");
  res.header("Access-Control-Allow-Headers", "Origin, X-Requested-With, Content-Type, Accept");
  next();
});

app.use('/', api);


      We can now run docker-compose again with the build flag. You should be in the mean-docker directory.

      1. docker-compose up --build

Go to localhost:4200 in your browser again, and the app should now load, letting you add people and see the list populate from the API.
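
If you’d like to confirm the CORS fix from the command line as well, you can inspect the response headers directly. A quick check, assuming the express container is rebuilt and running:

# -i prints the response headers; look for Access-Control-Allow-Origin: *
curl -i http://localhost:3000/users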

Note: I added an attached volume to the docker-compose file, and we no longer need to rebuild the angular service every time we make a change there.

      I bet you’ve learned a thing or two about MEAN or docker and docker-compose.

The problem with our setup, however, is that any time we make changes to the express API, we still need to run docker-compose up --build.

This can get tedious over time. We’ll look at improving this in another article.

      How To Set Up Laravel, Nginx, and MySQL With Docker Compose on Ubuntu 20.04


      The author selected The FreeBSD Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Over the past few years, Docker has become a frequently used solution for deploying applications thanks to how it simplifies running and deploying applications in ephemeral containers. When you are using a LEMP application stack, for example, with PHP, Nginx, MySQL and the Laravel framework, Docker can significantly streamline the setup process.

      Docker Compose has further simplified the development process by allowing developers to define their infrastructure, including application services, networks, and volumes, in a single file. Docker Compose offers an efficient alternative to running multiple docker container create and docker container run commands.

      In this tutorial, you will build a web application using the Laravel framework, with Nginx as the web server and MySQL as the database, all inside Docker containers. You will define the entire stack configuration in a docker-compose file, along with configuration files for PHP, MySQL, and Nginx.

      Prerequisites

      Before you start, you will need:
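
• One Ubuntu 20.04 server with a non-root user with sudo privileges.
• Docker installed on your server.
• Docker Compose installed on your server.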

      Step 1 — Downloading Laravel and Installing Dependencies

      As a first step, you will get the latest version of Laravel and install the dependencies for the project, including Composer, the application-level package manager for PHP. We will install these dependencies with Docker to avoid having to install Composer globally.

      First, check that you are in your home directory and clone the latest Laravel release to a directory called laravel-app:

      • cd ~
      • git clone https://github.com/laravel/laravel.git laravel-app

      Move into the laravel-app directory:
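
• cd ~/laravel-app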

      Next, use Docker’s composer image to mount the directories that you will need for your Laravel project and avoid the overhead of installing Composer globally:

      • docker run --rm -v $(pwd):/app composer install

      Using the -v and --rm flags with docker run creates an ephemeral container that will be bind-mounted to your current directory before being removed. This will copy the contents of your ~/laravel-app directory to the container and also ensure that the vendor folder Composer creates inside the container is copied to your current directory.

      As a final step, set permissions on the project directory so that it is owned by your non-root user:

      • sudo chown -R sammy:sammy ~/laravel-app

      This will be important when you write the Dockerfile for your application image in Step 4, as it will allow you to work with your application code and run processes in your container as a non-root user.

      With your application code in place, you can move on to defining your services with Docker Compose.

      Step 2 — Creating the Docker Compose File

      Building your applications with Docker Compose simplifies the process of setting up and versioning your infrastructure. To set up your Laravel application, you will write a docker-compose file that defines your web server, database, and application services.

      Open the file:

      • nano ~/laravel-app/docker-compose.yml

In the docker-compose file, you will define three services: app, webserver, and db. Add the following code to the file, being sure to replace the MYSQL_ROOT_PASSWORD value, defined as an environment variable under the db service, with a strong password of your choice:

      ~/laravel-app/docker-compose.yml

      version: '3'
      services:
      
        #PHP Service
        app:
          build:
            context: .
            dockerfile: Dockerfile
          image: digitalocean.com/php
          container_name: app
          restart: unless-stopped
          tty: true
          environment:
            SERVICE_NAME: app
            SERVICE_TAGS: dev
          working_dir: /var/www
          networks:
            - app-network
      
        #Nginx Service
        webserver:
          image: nginx:alpine
          container_name: webserver
          restart: unless-stopped
          tty: true
          ports:
            - "80:80"
            - "443:443"
          networks:
            - app-network
      
        #MySQL Service
        db:
          image: mysql:5.7.22
          container_name: db
          restart: unless-stopped
          tty: true
          ports:
            - "3306:3306"
          environment:
            MYSQL_DATABASE: laravel
      MYSQL_ROOT_PASSWORD: your_mysql_root_password
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          networks:
            - app-network
      
      #Docker Networks
      networks:
        app-network:
          driver: bridge
      

      The services defined here include:

      • app: This service definition contains the Laravel application and runs a custom Docker image, digitalocean.com/php, that you will define in Step 4. It also sets the working_dir in the container to /var/www.
      • webserver: This service definition pulls the nginx:alpine image from Docker and exposes ports 80 and 443.
      • db: This service definition pulls the mysql:5.7.22 image from Docker and defines a few environmental variables, including a database called laravel for your application and the root password for the database. You are free to name the database whatever you would like, and you should replace your_mysql_root_password with your own strong password. This service definition also maps port 3306 on the host to port 3306 on the container.

      Each container_name property defines a name for the container, which corresponds to the name of the service. If you don’t define this property, Docker will assign a name to each container by combining a historically famous person’s name and a random word separated by an underscore.

      To facilitate communication between containers, the services are connected to a bridge network called app-network. A bridge network uses a software bridge that allows containers connected to the same bridge network to communicate with each other. The bridge driver automatically installs rules in the host machine so that containers on different bridge networks cannot communicate directly with each other. This creates a greater level of security for applications, ensuring that only related services can communicate with one another. It also means that you can define multiple networks and services connecting to related functions: front-end application services can use a frontend network, for example, and back-end services can use a backend network.
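
Once the stack is running (Step 8), you can inspect this network with the Docker CLI. Note that Compose typically prefixes the network name with the project directory name, so the exact name below, laravel-app_app-network, is an assumption you can confirm with the first command:

• docker network ls
• docker network inspect laravel-app_app-network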

      Next, you’ll look at how to add volumes and bind mounts to your service definitions to persist your application data.

      Step 3 — Persisting Data

      Docker has powerful and convenient features for persisting data. In your application, you will make use of volumes and bind mounts for persisting the database, and application and configuration files. Volumes offer flexibility for backups and persistence beyond a container’s lifecycle, while bind mounts facilitate code changes during development, making changes to your host files or directories immediately available in your containers. Your setup will make use of both.

      Warning: By using bind mounts, you make it possible to change the host filesystem through processes running in a container, including creating, modifying, or deleting important system files or directories. This is a powerful ability with security implications, and could impact non-Docker processes on the host system. Use bind mounts with care.

      In the docker-compose file, define a volume called dbdata under the db service definition to persist the MySQL database:

      ~/laravel-app/docker-compose.yml

      ...
      #MySQL Service
      db:
        ...
          volumes:
      - dbdata:/var/lib/mysql
          networks:
            - app-network
        ...
      

      The named volume dbdata persists the contents of the /var/lib/mysql folder present inside the container. This allows you to stop and restart the db service without losing data.
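
Once the containers are running, you can list and inspect named volumes with the Docker CLI. As with networks, Compose typically prefixes the volume name with the project directory name, so laravel-app_dbdata below is an assumption you can confirm with the first command:

• docker volume ls
• docker volume inspect laravel-app_dbdata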

      At the bottom of the file, add the definition for the dbdata volume:

      ~/laravel-app/docker-compose.yml

      ...
      #Volumes
      volumes:
        dbdata:
          driver: local
      

      With this definition in place, you will be able to use this volume across services.

      Next, add a bind mount to the db service for the MySQL configuration files you will create in Step 7:

      ~/laravel-app/docker-compose.yml

      ...
      #MySQL Service
      db:
        ...
          volumes:
            - dbdata:/var/lib/mysql
            - ./mysql/my.cnf:/etc/mysql/my.cnf
        ...
      

      This bind mount binds ~/laravel-app/mysql/my.cnf to /etc/mysql/my.cnf in the container.

      Next, add bind mounts to the webserver service. There will be two: one for your application code and another for the Nginx configuration definition that you will create in Step 6:

      ~/laravel-app/docker-compose.yml

      #Nginx Service
      webserver:
        ...
        volumes:
            - ./:/var/www
            - ./nginx/conf.d/:/etc/nginx/conf.d/
        networks:
            - app-network
      

      The first bind mount binds the application code in the ~/laravel-app directory to the /var/www directory inside the container. The configuration file that you will add to ~/laravel-app/nginx/conf.d/ will also be mounted to /etc/nginx/conf.d/ in the container, allowing you to add or modify the configuration directory’s contents as needed.

      Finally, add the following bind mounts to the app service for the application code and configuration files:

      ~/laravel-app/docker-compose.yml

      #PHP Service
      app:
        ...
        volumes:
             - ./:/var/www
             - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
        networks:
            - app-network
      

      The app service is bind-mounting the ~/laravel-app folder, which contains the application code, to the /var/www folder in the container. This will speed up the development process, since any changes made to your local application directory will be instantly reflected inside the container. You are also binding your PHP configuration file, ~/laravel-app/php/local.ini, to /usr/local/etc/php/conf.d/local.ini inside the container. You will create the local PHP configuration file in Step 5.

      Your docker-compose file will now look like this:

      ~/laravel-app/docker-compose.yml

      version: '3'
      services:
      
        #PHP Service
        app:
          build:
            context: .
            dockerfile: Dockerfile
          image: digitalocean.com/php
          container_name: app
          restart: unless-stopped
          tty: true
          environment:
            SERVICE_NAME: app
            SERVICE_TAGS: dev
          working_dir: /var/www
          volumes:
            - ./:/var/www
            - ./php/local.ini:/usr/local/etc/php/conf.d/local.ini
          networks:
            - app-network
      
        #Nginx Service
        webserver:
          image: nginx:alpine
          container_name: webserver
          restart: unless-stopped
          tty: true
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - ./:/var/www
            - ./nginx/conf.d/:/etc/nginx/conf.d/
          networks:
            - app-network
      
        #MySQL Service
        db:
          image: mysql:5.7.22
          container_name: db
          restart: unless-stopped
          tty: true
          ports:
            - "3306:3306"
          environment:
            MYSQL_DATABASE: laravel
            MYSQL_ROOT_PASSWORD: your_mysql_root_password
            SERVICE_TAGS: dev
            SERVICE_NAME: mysql
          volumes:
            - dbdata:/var/lib/mysql/
            - ./mysql/my.cnf:/etc/mysql/my.cnf
          networks:
            - app-network
      
      #Docker Networks
      networks:
        app-network:
          driver: bridge
      #Volumes
      volumes:
        dbdata:
          driver: local
      

      Save the file and exit your editor when you are finished making changes.

      With your docker-compose file written, you can now build the custom image for your application.

      Step 4 — Creating the Dockerfile

      Docker allows you to specify the environment inside of individual containers with a Dockerfile. A Dockerfile enables you to create custom images that you can use to install the software required by your application and configure settings based on your requirements. You can push the custom images you create to Docker Hub or any private registry.

      Your Dockerfile will be located in your ~/laravel-app directory. Create the file:

      • nano ~/laravel-app/Dockerfile

      This Dockerfile will set the base image and specify the necessary commands and instructions to build the Laravel application image. Add the following code to the file:

~/laravel-app/Dockerfile

      FROM php:7.4-fpm
      
      # Copy composer.lock and composer.json
      COPY composer.lock composer.json /var/www/
      
      # Set working directory
      WORKDIR /var/www
      
      # Install dependencies
RUN apt-get update && apt-get install -y \
    build-essential \
    libpng-dev \
    libjpeg62-turbo-dev \
    libfreetype6-dev \
    locales \
    zip \
    jpegoptim optipng pngquant gifsicle \
    vim \
    unzip \
    git \
    curl \
    libzip-dev
      
      # Clear cache
      RUN apt-get clean && rm -rf /var/lib/apt/lists/*
      
      # Install extensions
      RUN docker-php-ext-install pdo_mysql mbstring zip exif pcntl
RUN docker-php-ext-configure gd --with-freetype --with-jpeg
      RUN docker-php-ext-install gd
      
      # Install composer
      RUN curl -sS https://getcomposer.org/installer | php -- --install-dir=/usr/local/bin --filename=composer
      
      # Add user for laravel application
      RUN groupadd -g 1000 www
      RUN useradd -u 1000 -ms /bin/bash -g www www
      
      # Copy existing application directory contents
      COPY . /var/www
      
      # Copy existing application directory permissions
      COPY --chown=www:www . /var/www
      
      # Change current user to www
      USER www
      
      # Expose port 9000 and start php-fpm server
      EXPOSE 9000
      CMD ["php-fpm"]
      

First, the Dockerfile creates an image on top of the php:7.4-fpm Docker image. This is a Debian-based image that has the PHP FastCGI implementation, PHP-FPM, installed. The file also installs the prerequisite PHP extensions for Laravel (pdo_mysql, mbstring, zip, exif, pcntl, and gd) along with Composer.

      The RUN directive specifies the commands to update, install, and configure settings inside the container, including creating a dedicated user and group called www. The WORKDIR instruction specifies the /var/www directory as the working directory for the application.

      Creating a dedicated user and group with restricted permissions mitigates the inherent vulnerability when running Docker containers, which run by default as root. Instead of running this container as root, you’ve created the www user, who has read/write access to the /var/www folder thanks to the COPY instruction that you are using with the --chown flag to copy the application folder’s permissions.

      Finally, the EXPOSE command exposes a port in the container, 9000, for the php-fpm server. CMD specifies the command that should run once the container is created. Here, CMD specifies "php-fpm", which will start the server.

      Save the file and exit your editor when you are finished making changes.

      You can now move on to defining your PHP configuration.

      Step 5 — Configuring PHP

      Now that you have defined your infrastructure in the docker-compose file, you can configure the PHP service to act as a PHP processor for incoming requests from Nginx.

To configure PHP, you will create the local.ini file inside the php folder. This is the file that you bind-mounted to /usr/local/etc/php/conf.d/local.ini inside the container in Step 3. Creating this file will allow you to override the default php.ini file that PHP reads when it starts.

      Create the php directory:
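
• mkdir ~/laravel-app/php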

      Next, open the local.ini file:

      • nano ~/laravel-app/php/local.ini

      To demonstrate how to configure PHP, you’ll add the following code to set size limitations for uploaded files:

      ~/laravel-app/php/local.ini

      upload_max_filesize=40M
      post_max_size=40M
      

      The upload_max_filesize and post_max_size directives set the maximum allowed size for uploaded files, and demonstrate how you can set php.ini configurations from your local.ini file. You can put any PHP-specific configuration that you want to override in the local.ini file.
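
After you start the containers in Step 8, you can verify that PHP picked up these overrides by querying the running app service, for example:

• docker-compose exec app php -i | grep -E 'upload_max_filesize|post_max_size'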

      Save the file and exit your editor.

      With your PHP local.ini file in place, you can move on to configuring Nginx.

      Step 6 — Configuring Nginx

      With the PHP service configured, you can modify the Nginx service to use PHP-FPM as the FastCGI server to serve dynamic content. The FastCGI server is based on a binary protocol for interfacing interactive programs with a web server. For more information, please refer to this article on Understanding and Implementing FastCGI Proxying in Nginx.

      To configure Nginx, you will create an app.conf file with the service configuration in the ~/laravel-app/nginx/conf.d/ folder.

      First, create the nginx/conf.d/ directory:

      • mkdir -p ~/laravel-app/nginx/conf.d

      Next, create the app.conf configuration file:

      • nano ~/laravel-app/nginx/conf.d/app.conf

      Add the following code to the file to specify your Nginx configuration:

      ~/laravel-app/nginx/conf.d/app.conf

      server {
          listen 80;
          index index.php index.html;
          error_log  /var/log/nginx/error.log;
          access_log /var/log/nginx/access.log;
          root /var/www/public;
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
              fastcgi_pass app:9000;
              fastcgi_index index.php;
              include fastcgi_params;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
              fastcgi_param PATH_INFO $fastcgi_path_info;
          }
          location / {
              try_files $uri $uri/ /index.php?$query_string;
              gzip_static on;
          }
      }
      

      The server block defines the configuration for the Nginx web server with the following directives:

      • listen: This directive defines the port on which the server will listen to incoming requests.
      • error_log and access_log: These directives define the files for writing logs.
      • root: This directive sets the root folder path, forming the complete path to any requested file on the local file system.

In the php location block, the fastcgi_pass directive specifies that the app service is listening on a TCP socket on port 9000. This makes the PHP-FPM server listen over the network rather than on a Unix socket. Though a Unix socket has a slight speed advantage over a TCP socket, since it has no network protocol to speak and skips the network stack, a TCP socket has the advantage of letting you connect to distributed services that may run on other hosts. Because your app container runs separately from your webserver container, a TCP socket makes the most sense for your configuration.

      Save the file and exit your editor when you are finished making changes.

Thanks to the bind mount you created in Step 3, any changes you make inside the nginx/conf.d/ folder will be directly reflected inside the webserver container.

      Next, you’ll look at and configure your MySQL settings.

      Step 7 — Configuring MySQL

      With PHP and Nginx configured, you can enable MySQL to act as the database for your application.

To configure MySQL, you will create the my.cnf file in the mysql folder. This is the file that you bind-mounted to /etc/mysql/my.cnf inside the container in Step 3. This bind mount allows you to override the my.cnf settings as and when required.

      To demonstrate how this works, you’ll add settings to the my.cnf file that enable the general query log and specify the log file.

      First, create the mysql directory:

      • mkdir ~/laravel-app/mysql

      Next, make the my.cnf file:

      • nano ~/laravel-app/mysql/my.cnf

      In the file, add the following code to enable the query log and set the log file location:

      ~/laravel-app/mysql/my.cnf

      [mysqld]
      general_log = 1
      general_log_file = /var/lib/mysql/general.log
      

      This my.cnf file enables logs, defining the general_log setting as 1 to allow general logs. The general_log_file setting specifies where the logs will be stored.
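
Once the db container is running (Step 8), you can watch queries arrive by following this log from inside the container, for example:

• docker-compose exec db tail -f /var/lib/mysql/general.log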

      Save the file and exit your editor.

      Your next step will be to start the containers.

      Step 8 — Modifying Environment Settings and Running the Containers

      Now that you have defined all of your services in your docker-compose file and created the configuration files for these services, you can start the containers. As a final step, though, you will make a copy of the .env.example file that Laravel includes by default and name the copy .env, which is the file Laravel expects to define its environment:
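
• cp .env.example .env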

You can now modify the .env file to include the specific details of your setup; because the project directory is bind-mounted into the app container, your changes will be available inside it.

      Open the file using nano or your text editor of choice:
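
• nano .env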

      Find the block that specifies DB_CONNECTION and update it to reflect the specifics of your setup. You will modify the following fields:

      • DB_HOST will be your db database container.
      • DB_DATABASE will be the laravel database.
      • DB_USERNAME will be the username you will use for your database. In this case, you will use laraveluser.
      • DB_PASSWORD will be the secure password you would like to use for this user account.

      /var/www/.env

      DB_CONNECTION=mysql
      DB_HOST=db
      DB_PORT=3306
      DB_DATABASE=laravel
      DB_USERNAME=laraveluser
      DB_PASSWORD=your_laravel_db_password
      

      Save your changes and exit your editor.

      With all of your services defined in your docker-compose file, use the following single command to start all of the containers, create the volumes, and set up and connect the networks:
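
• docker-compose up -d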

      When you run docker-compose up for the first time, it will download all of the necessary Docker images, which might take a while. Once the images are downloaded and stored in your local machine, Compose will create your containers. The -d flag daemonizes the process, running your containers in the background.

      Once the process is complete, use the following command to list all of the running containers:
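
• docker ps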

      You will see the following output with details about your app, webserver, and db containers:

      Output

CONTAINER ID   NAMES       IMAGE                  STATUS         PORTS
c31b7b3251e0   db          mysql:5.7.22           Up 2 seconds   0.0.0.0:3306->3306/tcp
ed5a69704580   app         digitalocean.com/php   Up 2 seconds   9000/tcp
5ce4ee31d7c0   webserver   nginx:alpine           Up 2 seconds   0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp

      The CONTAINER ID in this output is a unique identifier for each container, while NAMES lists the service name associated with each. You can use both of these identifiers to access the containers. IMAGE defines the image name for each container, while STATUS provides information about the container’s state: whether it’s running, restarting, or stopped.

      You’ll now use docker-compose exec to set the application key for the Laravel application. The docker-compose exec command allows you to run specific commands in containers.

      The following command will generate a key and copy it to your .env file, ensuring that your user sessions and encrypted data remain secure:

      • docker-compose exec app php artisan key:generate

      You now have the environment settings required to run your application. To cache these settings into a file, which will boost your application’s load speed, run:

      • docker-compose exec app php artisan config:cache

      Your configuration settings will be loaded into /var/www/bootstrap/cache/config.php on the container.

      As a final step, visit http://your_server_ip in the browser. You will see the following home page for your Laravel application:

      Laravel Home Page

      With your containers running and your configuration information in place, you can move on to configuring your user information for the laravel database on the db container.

      Step 9 — Creating a User for MySQL

      The default MySQL installation only creates the root administrative account, which has unlimited privileges on the database server. In general, it’s better to avoid using the root administrative account when interacting with the database. Instead, let’s create a dedicated database user for your application’s Laravel database.

      To create a new user, execute an interactive bash shell on the db container with docker-compose exec:

      • docker-compose exec db bash

      Inside the container, log into the MySQL root administrative account:
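
• mysql -u root -p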

      You will be prompted for the password you set for the MySQL root account during installation in your docker-compose file.

      Start by checking for the database called laravel, which you defined in your docker-compose file. Run the show databases command to check for existing databases:
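
• show databases;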

      You will see the laravel database listed in the output:

      Output

+--------------------+
| Database           |
+--------------------+
| information_schema |
| laravel            |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)

      Next, create the user account that will be allowed to access this database. Your username will be laraveluser, though you can replace this with another name if you’d prefer. Just be sure that your username and password here match the details you set in your .env file in the previous step:

      • GRANT ALL ON laravel.* TO 'laraveluser'@'%' IDENTIFIED BY 'your_laravel_db_password';

      Flush the privileges to notify the MySQL server of the changes:
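
• FLUSH PRIVILEGES;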

      Exit MySQL:
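
• EXIT;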

      Finally, exit the container:
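
• exit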

      You have configured the user account for your Laravel application database and are ready to migrate your data and work with the Tinker console.

      Step 10 — Migrating Data and Working with the Tinker Console

      With your application running, you can migrate your data and experiment with the tinker command, which will initiate a PsySH console with Laravel preloaded. PsySH is a runtime developer console and interactive debugger for PHP, and Tinker is a REPL specifically for Laravel. Using the tinker command will allow you to interact with your Laravel application from the command line in an interactive shell.

      First, test the connection to MySQL by running the Laravel artisan migrate command, which creates a migrations table in the database from inside the container:

      • docker-compose exec app php artisan migrate

      This command will migrate the default Laravel tables. The output confirming the migration will look like this:

      Output

Migration table created successfully.
Migrating: 2014_10_12_000000_create_users_table
Migrated:  2014_10_12_000000_create_users_table
Migrating: 2014_10_12_100000_create_password_resets_table
Migrated:  2014_10_12_100000_create_password_resets_table

      Once the migration is complete, you can run a query to check if you are properly connected to the database using the tinker command:

      • docker-compose exec app php artisan tinker

      Test the MySQL connection by getting the data you just migrated:

      • DB::table('migrations')->get();

      You will see output that looks like this:

      Output

=> Illuminate\Support\Collection {#2856
     all: [
       {#2862
         +"id": 1,
         +"migration": "2014_10_12_000000_create_users_table",
         +"batch": 1,
       },
       {#2865
         +"id": 2,
         +"migration": "2014_10_12_100000_create_password_resets_table",
         +"batch": 1,
       },
     ],
   }

      You can use tinker to interact with your databases and to experiment with services and models.

      With your Laravel application in place, you are ready for further development and experimentation.

      Conclusion

      You now have a LEMP stack application running on your server, which you’ve tested by accessing the Laravel welcome page and creating MySQL database migrations. Key to the simplicity of this installation is Docker Compose, which allows you to create a group of Docker containers, defined in a single file, with a single command.




      How To Migrate a Docker Compose Workflow for Rails Development to Kubernetes


      Introduction

      When building modern, stateless applications, containerizing your application’s components is the first step in deploying and scaling on distributed platforms. If you have used Docker Compose in development, you will have modernized and containerized your application by:

      • Extracting necessary configuration information from your code.
      • Offloading your application’s state.
      • Packaging your application for repeated use.

      You will also have written service definitions that specify how your container images should run.

      To run your services on a distributed platform like Kubernetes, you will need to translate your Compose service definitions to Kubernetes objects. This will allow you to scale your application with resiliency. One tool that can speed up the translation process to Kubernetes is kompose, a conversion tool that helps developers move Compose workflows to container orchestrators like Kubernetes or OpenShift.
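
To preview what this translation looks like, kompose can read an existing Compose file and emit Kubernetes manifests. A minimal sketch, run from a directory containing a docker-compose.yml:

• kompose convert -f docker-compose.yml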

      In this tutorial, you will translate Compose services to Kubernetes objects using kompose. You will use the object definitions that kompose provides as a starting point and make adjustments to ensure that your setup will use Secrets, Services, and PersistentVolumeClaims in the way that Kubernetes expects. By the end of the tutorial, you will have a single-instance Rails application with a PostgreSQL database running on a Kubernetes cluster. This setup will mirror the functionality of the code described in Containerizing a Ruby on Rails Application for Development with Docker Compose and will be a good starting point to build out a production-ready solution that will scale with your needs.

      Prerequisites
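
To follow this tutorial, you will need:

• A Kubernetes cluster, along with the kubectl command-line tool installed and configured to communicate with it.
• Docker installed on the machine you are working from, and a Docker Hub account for publishing your application image.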

      Step 1 — Installing kompose

      To begin using kompose, navigate to the project’s GitHub Releases page, and copy the link to the current release (version 1.22.0 as of this writing). Paste this link into the following curl command to download the latest version of kompose:

      • curl -L https://github.com/kubernetes/kompose/releases/download/v1.22.0/kompose-linux-amd64 -o kompose

      For details about installing on non-Linux systems, please refer to the installation instructions.
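      On macOS, for example, you may be able to install kompose with Homebrew instead (this assumes the kompose formula is available in your Homebrew installation):

      • brew install kompose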

      Make the binary executable:

      • chmod +x kompose

      Move it to your PATH:

      • sudo mv ./kompose /usr/local/bin/kompose

      To verify that it has been installed properly, you can do a version check:

      • kompose version

      If the installation was successful, you will see output like the following:

      Output

      1.22.0 (955b78124)

      With kompose installed and ready to use, you can now clone the Rails project code that you will be translating to Kubernetes.

      Step 2 — Cloning and Packaging the Application

      To use our application with Kubernetes, we will need to clone the project code and package the application so that the kubelet service can pull the image.

      Our first step will be to clone the rails-sidekiq repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in Containerizing a Ruby on Rails Application for Development with Docker Compose, which uses a demo Rails application to demonstrate how to set up a development environment using Docker Compose. You can find more information about the application itself in the series Rails on Containers.

      Clone the repository into a directory called rails_project:

      • git clone https://github.com/do-community/rails-sidekiq.git rails_project

      Navigate to the rails_project directory:

      • cd rails_project

      Now check out the code for this tutorial from the compose-workflow branch:

      • git checkout compose-workflow

      Output

      Branch 'compose-workflow' set up to track remote branch 'compose-workflow' from 'origin'.
      Switched to a new branch 'compose-workflow'

      The rails_project directory contains files and directories for a shark information application that works with user input. It has been modernized to work with containers: sensitive and specific configuration information has been removed from the application code and refactored to be injected at runtime, and the application’s state has been offloaded to a PostgreSQL database.

      For more information about designing modern, stateless applications, please see Architecting Applications for Kubernetes and Modernizing Applications for Kubernetes.

      The project directory includes a Dockerfile with instructions for building the application image. Let’s build the image now so that you can push it to your Docker Hub account and use it in your Kubernetes setup.

      Using the docker build command, build the image with the -t flag, which allows you to tag it with a memorable name. In this case, tag the image with your Docker Hub username and name it rails-kubernetes or a name of your own choosing:

      • docker build -t your_dockerhub_user/rails-kubernetes .

      The . in the command specifies that the build context is the current directory.

      It will take a minute or two to build the image. Once it is complete, check your images:

      • docker images

      You will see the following output:

      Output

      REPOSITORY                             TAG      IMAGE ID       CREATED       SIZE
      your_dockerhub_user/rails-kubernetes   latest   24f7e88b6ef2   2 days ago    606MB
      alpine                                 latest   d6e46aa2470d   6 weeks ago   5.57MB

      Next, log in to the Docker Hub account you created in the prerequisites:

      • docker login -u your_dockerhub_user

      When prompted, enter your Docker Hub account password. Logging in this way will create a ~/.docker/config.json file in your user’s home directory with your Docker Hub credentials.

      Push the application image to Docker Hub with the docker push command. Remember to replace your_dockerhub_user with your own Docker Hub username:

      • docker push your_dockerhub_user/rails-kubernetes

      You now have an application image that you can pull to run your application with Kubernetes. The next step will be to translate your application service definitions to Kubernetes objects.

      Step 3 — Translating Compose Services to Kubernetes Objects with kompose

      Our Docker Compose file, here called docker-compose.yml, lays out the definitions that will run our services with Compose. A service in Compose is a running container, and service definitions contain information about how each container image will run. In this step, we will translate these definitions to Kubernetes objects by using kompose to create yaml files. These files will contain specs for the Kubernetes objects that describe their desired state.

      We will use these files to create different types of objects: Services, which will ensure that the Pods running our containers remain accessible; Deployments, which will contain information about the desired state of our Pods; a PersistentVolumeClaim to provision storage for our database data; a ConfigMap for environment variables injected at runtime; and a Secret for our application’s database user and password. Some of these definitions will be in the files kompose will create for us, and others we will need to create ourselves.

      First, we will need to modify some of the definitions in our docker-compose.yml file to work with Kubernetes. We will include a reference to our newly-built application image in our app service definition and remove the bind mounts, volumes, and additional commands that we used to run the application container in development with Compose. Additionally, we’ll redefine both containers’ restart policies to be in line with the behavior Kubernetes expects.

      If you have followed the steps in this tutorial and checked out the compose-workflow branch with git, then you should have a docker-compose.yml file in your working directory.

      If you don’t have a docker-compose.yml then be sure to visit the previous tutorial in this series, Containerizing a Ruby on Rails Application for Development with Docker Compose, and paste the contents from the linked section into a new docker-compose.yml file.

      Open the file with nano or your favorite editor:

      • nano docker-compose.yml

      The current definition for the app application service looks like this:

      ~/rails_project/docker-compose.yml

      . . .
      services:
        app:
          build:
            context: .
            dockerfile: Dockerfile
          depends_on:
            - database
            - redis
          ports:
            - "3000:3000"
          volumes:
            - .:/app
            - gem_cache:/usr/local/bundle/gems
            - node_modules:/app/node_modules
          env_file: .env
          environment:
            RAILS_ENV: development
      . . .
      

      Make the following edits to your service definition:

      • Replace the build: line with image: your_dockerhub_user/rails-kubernetes
      • Remove the context: . and dockerfile: Dockerfile lines.
      • Remove the volumes list.

      The finished service definition will now look like this:

      ~/rails_project/docker-compose.yml

      . . .
      services:
        app:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
            - database
            - redis
          ports:
            - "3000:3000"
          env_file: .env
          environment:
            RAILS_ENV: development
      . . .
      

      Next, scroll down to the database service definition and make the following edits:

      • Remove the - ./init.sql:/docker-entrypoint-initdb.d/init.sql volume line. Instead of using values from the local SQL file, we will pass the values for our POSTGRES_USER and POSTGRES_PASSWORD to the database container using the Secret we will create in Step 4.
      • Add a ports: section that will make PostgreSQL available inside your Kubernetes cluster on port 5432.
      • Add an environment: section with a PGDATA variable that points to a directory inside /var/lib/postgresql/data. This setting is required when PostgreSQL is configured to use block storage, since the database engine expects to find its data files in a sub-directory.

      The database service definition should look like this when you are finished editing it:

      ~/rails_project/docker-compose.yml

      . . .
        database:
          image: postgres:12.1
          volumes:
            - db_data:/var/lib/postgresql/data
          ports:
            - "5432:5432"
          environment:
            PGDATA: /var/lib/postgresql/data/pgdata
      . . .
      

      Next, edit the redis service definition to expose its default TCP port by adding a ports: section with the default 6379 port. Adding the ports: section will make Redis available inside your Kubernetes cluster. Your edited redis service should resemble the following:

      ~/rails_project/docker-compose.yml

      . . .
        redis:
          image: redis:5.0.7
          ports:
            - "6379:6379"
      

      After editing the redis section of the file, continue to the sidekiq service definition. Just as with the app service, you’ll need to switch from building a local docker image to pulling from Docker Hub. Make the following edits to your sidekiq service definition:

      • Replace the build: line with image: your_dockerhub_user/rails-kubernetes
      • Remove the context: . and dockerfile: Dockerfile lines.
      • Remove the volumes list.

      The finished sidekiq service definition will look like this:

      ~/rails_project/docker-compose.yml

      . . .
        sidekiq:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
            - app
            - database
            - redis
          env_file: .env
          environment:
              RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      

      Finally, at the bottom of the file, remove the gem_cache and node_modules volumes from the top-level volumes key. The key will now look like this:

      ~/rails_project/docker-compose.yml

      . . .
      volumes:
        db_data:
      

      Save and close the file when you are finished editing.

      For reference, your completed docker-compose.yml file should contain the following:

      ~/rails_project/docker-compose.yml

      version: '3'
      
      services:
        app:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
              - database
              - redis
          ports:
              - "3000:3000"
          env_file: .env
          environment:
              RAILS_ENV: development
      
        database:
          image: postgres:12.1
          volumes:
              - db_data:/var/lib/postgresql/data
          ports:
              - "5432:5432"
          environment:
              PGDATA: /var/lib/postgresql/data/pgdata
      
        redis:
          image: redis:5.0.7
          ports:
              - "6379:6379"
      
        sidekiq:
          image: your_dockerhub_user/rails-kubernetes
          depends_on:
              - app
              - database
              - redis
          env_file: .env
          environment:
              RAILS_ENV: development
          entrypoint: ./entrypoints/sidekiq-entrypoint.sh
      
      volumes:
        db_data:
      

      Before translating our service definitions, we will need to write the .env file that kompose will use to create the ConfigMap with our non-sensitive information. Please see Step 2 of Containerizing a Ruby on Rails Application for Development with Docker Compose for a longer explanation of this file.

      In that tutorial, we added .env to our .gitignore file to ensure that it would not be copied to version control. This means that it was not copied over when we cloned the rails-sidekiq repository in Step 2 of this tutorial. We will therefore need to recreate it now.

      Create the file:

      • nano .env

      kompose will use this file to create a ConfigMap for our application. However, instead of assigning all of the variables from the app service definition in our Compose file, we will only add connection settings for PostgreSQL and Redis. We will assign the database name, username, and password separately when we manually create a Secret object in Step 4.

      Add the following host and port information to the .env file:

      ~/rails_project/.env

      DATABASE_HOST=database
      DATABASE_PORT=5432
      REDIS_HOST=redis
      REDIS_PORT=6379
      

      Save and close the file when you are finished editing.

      You are now ready to create the files with your object specs. kompose offers multiple options for translating your resources. You can:

      • Create yaml files based on the service definitions in your docker-compose.yml file with kompose convert.
      • Create Kubernetes objects directly with kompose up.
      • Create a Helm chart with kompose convert -c.

      For now, we will convert our service definitions to yaml files and then add to and revise the files that kompose creates.

      Convert your service definitions to yaml files with the following command:

      • kompose convert

      After you run this command, kompose will output information about the files it has created:

      Output

      INFO Kubernetes file "app-service.yaml" created
      INFO Kubernetes file "database-service.yaml" created
      INFO Kubernetes file "redis-service.yaml" created
      INFO Kubernetes file "app-deployment.yaml" created
      INFO Kubernetes file "env-configmap.yaml" created
      INFO Kubernetes file "database-deployment.yaml" created
      INFO Kubernetes file "db-data-persistentvolumeclaim.yaml" created
      INFO Kubernetes file "redis-deployment.yaml" created
      INFO Kubernetes file "sidekiq-deployment.yaml" created

      These include yaml files with specs for the Rails application Service, Deployment, and ConfigMap, as well as for the db-data PersistentVolumeClaim and PostgreSQL database Deployment. Also included are a Service and Deployment for Redis, along with a Deployment for Sidekiq.

      To keep these manifests out of the main directory for your Rails project, create a new directory called k8s-manifests and then use the mv command to move the generated files into it:

      • mkdir k8s-manifests
      • mv *.yaml k8s-manifests

      Finally, cd into the k8s-manifests directory. We’ll work from inside this directory from now on to keep things tidy:

      • cd k8s-manifests

      These files are a good starting point, but in order for our application’s functionality to match the setup described in Containerizing a Ruby on Rails Application for Development with Docker Compose we will need to make a few additions and changes to the files that kompose has generated.

      Step 4 — Creating Kubernetes Secrets

      In order for our application to function in the way we expect, we will need to make a few modifications to the files that kompose has created. The first of these changes will be generating a Secret for our database user and password and adding it to our application and database Deployments. Kubernetes offers two ways of working with environment variables: ConfigMaps and Secrets. kompose has already created a ConfigMap with the non-confidential information we included in our .env file, so we will now create a Secret with our confidential information: our database name, username and password.

      The first step in manually creating a Secret will be to convert the data to base64, an encoding scheme that allows you to uniformly transmit data, including binary data.

      First convert the database name to base64 encoded data:

      • echo -n 'your_database_name' | base64

      Note down the encoded value.

      Next convert your database username:

      • echo -n 'your_database_username' | base64

      Again record the value you see in the output.

      Finally, convert your password:

      • echo -n 'your_database_password' | base64

      Take note of the value in the output here as well.
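      If you want to double-check an encoded value before using it, you can decode it again. Here is a quick round trip using the hypothetical username sammy:

      • echo -n 'sammy' | base64

      Output

      c2FtbXk=

      • echo -n 'c2FtbXk=' | base64 --decode

      Output

      sammy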

      Open a file for the Secret:

      • nano secret.yaml

      Note: Kubernetes objects are typically defined using YAML, which strictly forbids tabs and requires two spaces for indentation. If you would like to check the formatting of any of your yaml files, you can use a linter or test the validity of your syntax using kubectl create with the --dry-run and --validate flags:

      • kubectl create -f your_yaml_file.yaml --dry-run --validate=true

      In general, it is a good idea to validate your syntax before creating resources with kubectl.
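      Note that on newer releases of kubectl the bare --dry-run flag is deprecated in favor of an explicit value, so the equivalent invocation there is:

      • kubectl create -f your_yaml_file.yaml --dry-run=client --validate=true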

      Add the following code to the file to create a Secret that will define your DATABASE_NAME, DATABASE_USER and DATABASE_PASSWORD using the encoded values you just created. Be sure to replace the placeholder values here with your encoded database name, username and password:

      ~/rails_project/k8s-manifests/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: database-secret
      data:
        DATABASE_NAME: your_encoded_database_name
        DATABASE_PASSWORD: your_encoded_password
        DATABASE_USER: your_encoded_username
      

      We have named the Secret object database-secret, but you are free to name it anything you would like.

      These secret values are used by the Rails application so that it can connect to PostgreSQL. However, the database itself needs to be initialized with these same values. So next, copy the three data lines and paste them at the end of the file. Then edit the three new lines, changing the DATABASE prefix of each variable to POSTGRES. Finally, change the POSTGRES_NAME variable to read POSTGRES_DB.

      Your final secret.yaml file should contain the following:

      ~/rails_project/k8s-manifests/secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: database-secret
      data:
        DATABASE_NAME: your_encoded_database_name
        DATABASE_PASSWORD: your_encoded_password
        DATABASE_USER: your_encoded_username
        POSTGRES_DB: your_encoded_database_name
        POSTGRES_PASSWORD: your_encoded_password
        POSTGRES_USER: your_encoded_username
      

      Save and close this file when you are finished editing. As you did with your .env file, be sure to add secret.yaml to your .gitignore file to keep it out of version control.
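      For example, assuming your project is checked out at ~/rails_project, as in the file headers used throughout this tutorial, you can append the ignore rule with:

      • echo 'k8s-manifests/secret.yaml' >> ~/rails_project/.gitignore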

      With secret.yaml written, our next step will be to ensure that our application and database Deployments both use the values that we added to the file. Let’s start by adding references to the Secret to our application Deployment.

      Open the file called app-deployment.yaml:

      • nano app-deployment.yaml

      The file’s container specifications include the following environment variables defined under the env key:

      ~/rails_project/k8s-manifests/app-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      . . .
          spec:
            containers:
              - env:
                  - name: DATABASE_HOST
                    valueFrom:
                      configMapKeyRef:
                        key: DATABASE_HOST
                        name: env
                  - name: DATABASE_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: DATABASE_PORT
                        name: env
                  - name: RAILS_ENV
                    value: development
                  - name: REDIS_HOST
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_HOST
                        name: env
                  - name: REDIS_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_PORT
                        name: env
      . . .
      

      We will need to add references to our Secret so that our application will have access to those values. Instead of including a configMapKeyRef key to point to our env ConfigMap, as is the case with the existing values, we’ll include a secretKeyRef key to point to the values in our database-secret secret.

      Add the following Secret references after the - name: REDIS_PORT variable section:

      ~/rails_project/k8s-manifests/app-deployment.yaml

      . . .
          spec:
            containers:
              - env:
              . . .  
                  - name: REDIS_PORT
                    valueFrom:
                      configMapKeyRef:
                        key: REDIS_PORT
                        name: env
                  - name: DATABASE_NAME
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_NAME
                  - name: DATABASE_PASSWORD
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_PASSWORD
                  - name: DATABASE_USER
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: DATABASE_USER
      . . .
      
      

      Save and close the file when you are finished editing. As with your secret.yaml file, be sure to validate your edits using kubectl to ensure there are no issues with spaces, tabs, and indentation:

      • kubectl create -f app-deployment.yaml --dry-run --validate=true

      Output

      deployment.apps/app created (dry run)

      Next, we’ll add the same values to the database-deployment.yaml file.

      Open the file for editing:

      • nano database-deployment.yaml

      In this file, we will add references to our Secret for the following variable keys: POSTGRES_DB, POSTGRES_USER and POSTGRES_PASSWORD. The postgres image makes these variables available so that you can modify the initialization of your database instance. POSTGRES_DB creates a default database that is available when the container starts, while POSTGRES_USER and POSTGRES_PASSWORD together create a privileged user that can access that database.

      Using these values means that the user we create has access to all of the administrative and operational privileges of that role in PostgreSQL. When working in production, you will want to create a dedicated application user with appropriately scoped privileges.

      For the POSTGRES_DB, POSTGRES_USER and POSTGRES_PASSWORD variables, add references to the Secret values:

      ~/rails_project/k8s-manifests/database-deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      . . .
          spec:
            containers:
              - env:
                  - name: PGDATA
                    value: /var/lib/postgresql/data/pgdata
                  - name: POSTGRES_DB
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_DB
                  - name: POSTGRES_PASSWORD
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_PASSWORD        
                  - name: POSTGRES_USER
                    valueFrom:
                      secretKeyRef:
                        name: database-secret
                        key: POSTGRES_USER
      . . .
      

      Save and close the file when you are finished editing. Again be sure to lint your edited file using kubectl with the --dry-run --validate=true arguments.
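      For this file, that check looks like this:

      • kubectl create -f database-deployment.yaml --dry-run --validate=true

      If the syntax is valid, you will see output like the following:

      Output

      deployment.apps/database created (dry run)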

      With your Secret in place, you can move on to provisioning storage for your database and exposing your application frontend.

      Step 5 — Modifying the PersistentVolumeClaim and Exposing the Application Frontend

      Before running our application, we will make two final changes to ensure that our database storage will be provisioned properly and that we can expose our application frontend using a LoadBalancer.

      First, let’s modify the storage resource defined in the PersistentVolumeClaim that kompose created for us. This Claim allows us to dynamically provision storage to manage our application’s state.

      To work with PersistentVolumeClaims, you must have a StorageClass created and configured to provision storage resources. In our case, because we are working with DigitalOcean Kubernetes, our default StorageClass provisioner is set to dobs.csi.digitalocean.com — DigitalOcean Block Storage.

      We can check this by typing:

      • kubectl get storageclass

      If you are working with a DigitalOcean cluster, you will see the following output:

      Output

      NAME                         PROVISIONER                 RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
      do-block-storage (default)   dobs.csi.digitalocean.com   Delete          Immediate           true                   76m

      If you are not working with a DigitalOcean cluster, you will need to create a StorageClass and configure a provisioner of your choice. For details about how to do this, please see the official documentation.
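      For reference, a minimal StorageClass manifest has the following shape. The name and provisioner values here are placeholders; substitute the provisioner appropriate to your platform:

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: your-storage-class
      provisioner: your-provisioner # for example, the CSI driver supplied by your cloud provider
      reclaimPolicy: Delete
      volumeBindingMode: Immediate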

      When kompose created db-data-persistentvolumeclaim.yaml, it set the storage resource to a size that does not meet the minimum size requirements of our provisioner. We will therefore need to modify our PersistentVolumeClaim to use the minimum viable DigitalOcean Block Storage unit: 1GB. Please feel free to modify this to meet your storage requirements.

      Open db-data-persistentvolumeclaim.yaml:

      • nano db-data-persistentvolumeclaim.yaml

      Replace the storage value with 1Gi:

      ~/rails_project/k8s-manifests/db-data-persistentvolumeclaim.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        creationTimestamp: null
        labels:
          io.kompose.service: db-data
        name: db-data
      spec:
        accessModes:
          - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
      status: {}
      

      Also note that the ReadWriteOnce access mode means that the volume provisioned as a result of this Claim can be mounted as read-write by a single node only. Please see the documentation for more information about different access modes.

      Save and close the file when you are finished.

      Next, open app-service.yaml:

      • nano app-service.yaml

      We are going to expose this Service externally using a DigitalOcean Load Balancer. If you are not using a DigitalOcean cluster, please consult the relevant documentation from your cloud provider for information about their load balancers. Alternatively, you can follow the official Kubernetes documentation on setting up a highly available cluster with kubeadm, but in this case you will not be able to use PersistentVolumeClaims to provision storage.

      Within the Service spec, specify LoadBalancer as the Service type:

      ~/rails_project/k8s-manifests/app-service.yaml

      apiVersion: v1
      kind: Service
      . . .
      spec:
        type: LoadBalancer
        ports:
      . . .
      

      When we create the app Service, a load balancer will be automatically created, providing us with an external IP where we can access our application.

      Save and close the file when you are finished editing.

      With all of our files in place, we are ready to start and test the application.

      Note:
      If you would like to compare your edited Kubernetes manifests to a set of reference files to be certain that your changes match this tutorial, the companion GitHub repository contains a set of tested manifests. You can compare each file individually, or you can switch your local git branch to the kubernetes-workflow branch.

      If you opt to switch branches, be sure to copy your secret.yaml file into the newly checked-out version, since we added it to .gitignore earlier in the tutorial.
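      Switching branches is a single command. If git refuses to switch because of your modified docker-compose.yml, you can stash those edits first with git stash:

      • git checkout kubernetes-workflow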

      Step 6 — Starting and Accessing the Application

      It’s time to create our Kubernetes objects and test that our application is working as expected.

      To create the objects we’ve defined, we’ll use kubectl create with the -f flag, which will allow us to specify the files that kompose created for us, along with the files we wrote. Run the following command to create the Rails application and PostgreSQL database, Redis cache, and Sidekiq Services and Deployments, along with your Secret, ConfigMap, and PersistentVolumeClaim:

      • kubectl create -f app-deployment.yaml,app-service.yaml,database-deployment.yaml,database-service.yaml,db-data-persistentvolumeclaim.yaml,env-configmap.yaml,redis-deployment.yaml,redis-service.yaml,secret.yaml,sidekiq-deployment.yaml

      You will receive the following output, indicating that the objects have been created:

      Output

      deployment.apps/app created
      service/app created
      deployment.apps/database created
      service/database created
      persistentvolumeclaim/db-data created
      configmap/env created
      deployment.apps/redis created
      service/redis created
      secret/database-secret created
      deployment.apps/sidekiq created

      To check that your Pods are running, type:

      • kubectl get pods

      You don’t need to specify a Namespace here, since we have created our objects in the default Namespace. If you are working with multiple Namespaces, be sure to include the -n flag, along with the name of your Namespace, when running this command.
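      For example, if you had created your objects in a hypothetical Namespace called rails-demo, the check would be:

      • kubectl get pods -n rails-demo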

      You will see output similar to the following while your database container is starting (the status will be either Pending or ContainerCreating):

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          23s
      database-c77d55fbb-bmfm8   0/1     Pending   0          23s
      redis-7d65467b4d-9hcxk     1/1     Running   0          23s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          23s

      Once the database container is started, you will have output like this:

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          30s
      database-c77d55fbb-bmfm8   1/1     Running   0          30s
      redis-7d65467b4d-9hcxk     1/1     Running   0          30s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          30s

      The Running STATUS indicates that your Pods are bound to nodes and that the containers associated with those Pods are running. READY indicates how many containers in a Pod are running. For more information, please consult the documentation on Pod lifecycles.

      Note:
      If you see unexpected phases in the STATUS column, remember that you can troubleshoot your Pods with the following commands:

      • kubectl describe pods your_pod
      • kubectl logs your_pod

      Now that your application is up and running, the last step that is required is to run Rails’ database migrations. This step will load a schema into the PostgreSQL database for the demo application.

      To run pending migrations you’ll exec into the running application pod and then call the rake db:migrate command.

      First, find the name of the application pod with the following command:

      • kubectl get pods

      Find the pod that corresponds to your application; its name begins with app- in the following output:

      Output

      NAME                       READY   STATUS    RESTARTS   AGE
      app-854d645fb9-9hv7w       1/1     Running   0          30s
      database-c77d55fbb-bmfm8   1/1     Running   0          30s
      redis-7d65467b4d-9hcxk     1/1     Running   0          30s
      sidekiq-867f6c9c57-mcwks   1/1     Running   0          30s

      With that pod name noted down, you can now run the kubectl exec command to complete the database migration step.

      Run the migrations with this command:

      • kubectl exec your_app_pod_name -- rake db:migrate

      You should receive output similar to the following, which indicates that the database schema has been loaded:

      Output

      == 20190927142853 CreateSharks: migrating =====================================
      -- create_table(:sharks)
         -> 0.0190s
      == 20190927142853 CreateSharks: migrated (0.0208s) ============================

      == 20190927143639 CreatePosts: migrating ======================================
      -- create_table(:posts)
         -> 0.0398s
      == 20190927143639 CreatePosts: migrated (0.0421s) =============================

      == 20191120132043 CreateEndangereds: migrating ================================
      -- create_table(:endangereds)
         -> 0.8359s
      == 20191120132043 CreateEndangereds: migrated (0.8367s) =======================
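      Beyond one-off commands like this migration, you can also open an interactive shell inside the application pod to inspect the environment. This is a quick sketch, assuming the application image includes a POSIX shell:

      • kubectl exec -it your_app_pod_name -- sh

      Type exit to leave the shell when you are done.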

      With your containers running and data loaded, you can now access the application. To get the IP for the app LoadBalancer, type:

      • kubectl get svc

      You will receive output like the following:

      Output

      NAME         TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
      app          LoadBalancer   10.245.73.142   your_lb_ip    3000:31186/TCP   21m
      database     ClusterIP      10.245.155.87   <none>        5432/TCP         21m
      kubernetes   ClusterIP      10.245.0.1      <none>        443/TCP          21m
      redis        ClusterIP      10.245.119.67   <none>        6379/TCP         21m

      The EXTERNAL-IP associated with the app service is the IP address where you can access the application. If you see a <pending> status in the EXTERNAL-IP column, this means that your load balancer is still being created.
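      Rather than rerunning the command by hand, you can also watch the Service until the IP is assigned, then press CTRL+C to stop watching:

      • kubectl get svc app -w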

      Once you see an IP in that column, navigate to it in your browser: http://your_lb_ip:3000.

      You should see the following landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will see a page with a button to create a new shark:

      Shark Info Form

      Click it and when prompted, enter the username and password from earlier in the tutorial series. If you did not change these values then the defaults are sammy and shark respectively.

      In the form, add a shark of your choosing. To demonstrate, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      You now have a single instance setup of a Rails application with a PostgreSQL database running on a Kubernetes cluster. You also have a Redis cache and a Sidekiq worker to process data that users submit.

      Conclusion

      The files you have created in this tutorial are a good starting point to build from as you move toward production. As you develop your application, you can continue building on this foundation and hardening the setup for a production deployment.


