      How To Integrate MongoDB with Your Node Application


      Introduction

      As you work with Node.js, you may find yourself developing a project that stores and queries data. In this case, you will need to choose a database solution that makes sense for your application’s data and query types.

      In this tutorial, you will integrate a MongoDB database with an existing Node application. NoSQL databases like MongoDB can be useful if your data requirements include scalability and flexibility. MongoDB also integrates well with Node since it is designed to work asynchronously with JSON objects.

      To integrate MongoDB into your project, you will use the Object Document Mapper (ODM) Mongoose to create schemas and models for your application data. This will allow you to organize your application code following the model-view-controller (MVC) architectural pattern, which lets you separate the logic of how your application handles user input from how your data is structured and rendered to the user. Using this pattern can facilitate future testing and development by introducing a separation of concerns into your codebase.

      At the end of the tutorial, you will have a working shark information application that will take a user’s input about their favorite sharks and display the results in the browser:

      Shark Output

      Prerequisites

      Step 1 — Creating a Mongo User

      Before we begin working with the application code, we will create an administrative user that will have access to our application’s database. This user will have administrative privileges on any database, which will give you the flexibility to switch and create new databases as needed.

      First, check that MongoDB is running on your server:

      • sudo systemctl status mongodb

      The following output indicates that MongoDB is running:

      Output

● mongodb.service - An object/document-oriented database
   Loaded: loaded (/lib/systemd/system/mongodb.service; enabled; vendor preset: enabled)
   Active: active (running) since Thu 2019-01-31 21:07:25 UTC; 21min ago
...

Next, open the Mongo shell to create your user:

      • mongo

      This will drop you into an administrative shell:

      Output

MongoDB shell version v3.6.3
      connecting to: mongodb://127.0.0.1:27017
      MongoDB server version: 3.6.3
      ...
      >

You will see some administrative warnings when you open the shell due to your unrestricted access to the admin database. You can learn more about restricting this access by reading How To Install and Secure MongoDB on Ubuntu 16.04, which will be relevant when you move into a production setup.

      For now, you can use your access to the admin database to create a user with userAdminAnyDatabase privileges, which will allow password-protected access to your application's databases.

In the shell, specify that you want to use the admin database to create your user:

      • use admin

Next, create a user and password with the db.createUser command. After you type this command, the shell will prepend three dots before each line until the command is complete. Be sure to replace the user and password provided here with your own username and password:

      • db.createUser(
      • {
      • user: "sammy",
      • pwd: "your_password",
      • roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
      • }
      • )

      This creates an entry for the user sammy in the admin database. The username you select and the admin database will serve as identifiers for your user.

      The output for the entire process will look like this, including the message indicating that the entry was successful:

      Output

> db.createUser(
      ... {
      ... user: "sammy",
      ... pwd: "your_password",
      ... roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
      ... }
      ... )
      Successfully added user: {
              "user" : "sammy",
              "roles" : [
                      {
                              "role" : "userAdminAnyDatabase",
                              "db" : "admin"
                      }
              ]
      }

With your user and password created, you can now exit the Mongo shell:

      • exit

      Now that you have created your database user, you can move on to cloning the starter project code and adding the Mongoose library, which will allow you to implement schemas and models for the collections in your databases.

      Step 2 — Adding Mongoose and Database Information to the Project

      Our next steps will be to clone the application starter code and add Mongoose and our MongoDB database information to the project.

      In your non-root user's home directory, clone the nodejs-image-demo repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Build a Node.js Application with Docker.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/nodejs-image-demo.git node_project

Change to the node_project directory:

      • cd node_project

      Before modifying the project code, let's take a look at the project's structure using the tree command.

Tip: tree is a useful command for viewing file and directory structures from the command line. You can install it with the following command:

      • sudo apt install tree

      To use it, cd into a given directory and type tree. You can also provide the path to the starting point with a command like:

      • tree /home/sammy/sammys-project

Type the following to look at the node_project directory:

      • tree

      The structure of the current project looks like this:

      Output

├── Dockerfile
      ├── README.md
      ├── app.js
      ├── package-lock.json
      ├── package.json
      └── views
          ├── css
          │   └── styles.css
          ├── index.html
          └── sharks.html

      We will be adding directories to this project as we move through the tutorial, and tree will be a useful command to help us track our progress.

Next, add the mongoose npm package to the project with the npm install command:

      • npm install mongoose

      This command will create a node_modules directory in your project directory, using the dependencies listed in the project's package.json file, and will add mongoose to that directory. It will also add mongoose to the dependencies listed in your package.json file. For a more detailed discussion of package.json, please see Step 1 in How To Build a Node.js Application with Docker.

      Before creating any Mongoose schemas or models, we will add our database connection information so that our application will be able to connect to our database.

In order to separate your application's concerns as much as possible, create a separate file for your database connection information called db.js. You can open this file with nano or your favorite editor:

      • nano db.js

      First, import the mongoose module using the require function:

      ~/node_project/db.js

      const mongoose = require('mongoose');
      

      This will give you access to Mongoose's built-in methods, which you will use to create the connection to your database.

      Next, add the following constants to define information for Mongo's connection URI. Though the username and password are optional, we will include them so that we can require authentication for our database. Be sure to replace the username and password listed below with your own information, and feel free to call the database something other than 'sharkinfo' if you would prefer:

      ~/node_project/db.js

      const mongoose = require('mongoose');
      
      const MONGO_USERNAME = 'sammy';
      const MONGO_PASSWORD = 'your_password';
      const MONGO_HOSTNAME = '127.0.0.1';
      const MONGO_PORT = '27017';
      const MONGO_DB = 'sharkinfo';
      

      Because we are running our database locally, we have used 127.0.0.1 as the hostname. This would change in other development contexts: for example, if you are using a separate database server or working with multiple nodes in a containerized workflow.

      Finally, define a constant for the URI and create the connection using the mongoose.connect() method:

      ~/node_project/db.js

      ...
      const url = `mongodb://${MONGO_USERNAME}:${MONGO_PASSWORD}@${MONGO_HOSTNAME}:${MONGO_PORT}/${MONGO_DB}?authSource=admin`;
      
      mongoose.connect(url, {useNewUrlParser: true});
      

      Note that in the URI we've specified the authSource for our user as the admin database. This is necessary since we have specified a username in our connection string. Using the useNewUrlParser flag with mongoose.connect() specifies that we want to use Mongo's new URL parser.
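
      Optionally, while developing, you may want confirmation that the connection succeeded. The following is a minimal, optional sketch that uses Mongoose's built-in connection events; the application works without it:

      ~/node_project/db.js

      ...
      mongoose.connect(url, {useNewUrlParser: true});
      
      // Optional: log connection status while developing
      mongoose.connection.once('open', function () {
          console.log('Connected to MongoDB');
      });
      mongoose.connection.on('error', function (err) {
          console.error('Database connection error:', err);
      });
      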

      Save and close the file when you are finished editing.

As a final step, add the database connection information to the app.js file so that the application can use it. Open app.js:

      • nano app.js

      The first lines of the file will look like this:

      ~/node_project/app.js

      const express = require('express');
      const app = express();
      const router = express.Router();
      
      const path = __dirname + '/views/';
      ...
      

      Below the router constant definition, located near the top of the file, add the following line:

      ~/node_project/app.js

      ...
      const router = express.Router();
      const db = require('./db');
      
      const path = __dirname + '/views/';
      ...
      

      This tells the application to use the database connection information specified in db.js.

      Save and close the file when you are finished editing.

      With your database information in place and Mongoose added to your project, you are ready to create the schemas and models that will shape the data in your sharks collection.

      Step 3 — Creating Mongoose Schemas and Models

      Our next step will be to think about the structure of the sharks collection that users will be creating in the sharkinfo database with their input. What structure do we want these created documents to have? The shark information page of our current application includes some details about different sharks and their behaviors:

      Shark Info Page

      In keeping with this theme, we can have users add new sharks with details about their overall character. This goal will shape how we create our schema.

To keep your schemas and models distinct from the other parts of your application, create a models directory in the current project directory:

      • mkdir models

Next, open a file called sharks.js to create your schema and model:

      • nano models/sharks.js

      Import the mongoose module at the top of the file:

      ~/node_project/models/sharks.js

      const mongoose = require('mongoose');
      

      Below this, define a Schema object to use as the basis for your shark schema:

      ~/node_project/models/sharks.js

      const mongoose = require('mongoose');
      const Schema = mongoose.Schema;
      

      You can now define the fields you would like to include in your schema. Because we want to create a collection with individual sharks and information about their behaviors, let's include a name key and a character key. Add the following Shark schema below your constant definitions:

      ~/node_project/models/sharks.js

      ...
const Shark = new Schema({
              name: { type: String, required: true },
              character: { type: String, required: true },
      });
      

      This definition includes information about the type of input we expect from users — in this case, a string — and whether or not that input is required.

      Finally, create the Shark model using Mongoose's model() function. This model will allow you to query documents from your collection and validate new documents. Add the following line at the bottom of the file:

      ~/node_project/models/sharks.js

      ...
      module.exports = mongoose.model('Shark', Shark)
      

      This last line makes our Shark model available as a module using the module.exports property. This property defines the values that the module will export, making them available for use elsewhere in the application.

      The finished models/sharks.js file looks like this:

      ~/node_project/models/sharks.js

      const mongoose = require('mongoose');
      const Schema = mongoose.Schema;
      
const Shark = new Schema({
              name: { type: String, required: true },
              character: { type: String, required: true },
      });
      
      module.exports = mongoose.model('Shark', Shark)
      

      Save and close the file when you are finished editing.
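
      If you would like to see the required validators in action before wiring up the rest of the application, you could exercise the model from a throwaway script. This is an optional sketch; the file check.js is hypothetical and not part of the project:

      ~/node_project/check.js

      const Shark = require('./models/sharks');
      
      // A document missing the required 'character' field fails validation
      const incomplete = new Shark({ name: 'Hammerhead' });
      const err = incomplete.validateSync();
      console.log(err.errors['character'].message);
      // Prints: Path `character` is required.
      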

      With the Shark schema and model in place, you can start working on the logic that will determine how your application will handle user input.

      Step 4 — Creating Controllers

      Our next step will be to create the controller component that will determine how user input gets saved to our database and returned to the user.

First, create a directory for the controller:

      • mkdir controllers

      Next, open a file in that folder called sharks.js:

      • nano controllers/sharks.js

      At the top of the file, we'll import the module with our Shark model so that we can use it in our controller's logic. We'll also import the path module to access utilities that will allow us to set the path to the form where users will input their sharks.

      Add the following require functions to the beginning of the file:

      ~/node_project/controllers/sharks.js

      const path = require('path');
      const Shark = require('../models/sharks');
      

      Next, we'll write a sequence of functions that we will export with the controller module using Node's exports shortcut. These functions will include the three tasks related to our user's shark data:

      • Sending users the shark input form.
      • Creating a new shark entry.
      • Displaying the sharks back to users.

      To begin, create an index function to display the sharks page with the input form. Add this function below your imports:

      ~/node_project/controllers/sharks.js

      ...
      exports.index = function (req, res) {
          res.sendFile(path.resolve('views/sharks.html'));
      };
      

      Next, below the index function, add a function called create to make a new shark entry in your sharks collection:

      ~/node_project/controllers/sharks.js

      ...
exports.create = function (req, res) {
          var newShark = new Shark(req.body);
          console.log(req.body);
          newShark.save(function (err) {
              if (err) {
                  res.status(400).send('Unable to save shark to database');
              } else {
                  res.redirect('/sharks/getshark');
              }
          });
      };
      

      This function will be called when a user posts shark data to the form on the sharks.html page. We will create the route with this POST endpoint later in the tutorial when we create our application's routes. With the body of the POST request, our create function will make a new shark document object, here called newShark, using the Shark model that we've imported. We've added a console.log method to output the shark entry to the console in order to check that our POST method is working as intended, but you should feel free to omit this if you would prefer.

      Using the newShark object, the create function will then call Mongoose's model.save() method to make a new shark document using the keys you defined in the Shark model. This callback function follows the standard Node callback pattern: callback(error, results). In the case of an error, we will send a message reporting the error to our users, and in the case of success, we will use the res.redirect() method to send users to the endpoint that will render their shark information back to them in the browser.

      Finally, the list function will display the collection's contents back to the user. Add the following code below the create function:

      ~/node_project/controllers/sharks.js

      ...
exports.list = function (req, res) {
          Shark.find({}).exec(function (err, sharks) {
              if (err) {
                  return res.send(500, err);
              }
              res.render('getshark', {
                  sharks: sharks
              });
          });
      };
      

This function uses the Shark model with Mongoose's model.find() method to return the sharks that have been entered into the sharks collection. It does this by executing the query object — in this case, one matching everything in the sharks collection — with Mongoose's exec() function, which passes the results to the callback. In the case of an error, the callback function will send a 500 error.
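
      Because Mongoose queries are chainable, the same pattern extends to narrower queries. The filter, sort, and limit below are hypothetical and not used by this application; they only illustrate the query builder:

      // Return up to 10 sharks whose character is 'Ancient', ordered by name
      Shark.find({ character: 'Ancient' })
          .sort({ name: 1 })
          .limit(10)
          .exec(function (err, sharks) {
              // handle err and render the results, as above
          });
      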

      The returned query object with the sharks collection will be rendered in a getshark page that we will create in the next step using the EJS templating language.

      The finished file will look like this:

      ~/node_project/controllers/sharks.js

      const path = require('path');
      const Shark = require('../models/sharks');
      
      exports.index = function (req, res) {
          res.sendFile(path.resolve('views/sharks.html'));
      };
      
exports.create = function (req, res) {
          var newShark = new Shark(req.body);
          console.log(req.body);
          newShark.save(function (err) {
              if (err) {
                  res.status(400).send('Unable to save shark to database');
              } else {
                  res.redirect('/sharks/getshark');
              }
          });
      };
      
      exports.list = function (req, res) {
          Shark.find({}).exec(function (err, sharks) {
              if (err) {
                  return res.send(500, err);
              }
              res.render('getshark', {
                  sharks: sharks
              });
          });
      };
      

      Keep in mind that though we are not using arrow functions here, you may wish to include them as you iterate on this code in your own development process.
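
      For reference, the list function rewritten with arrow functions would look like this; the sketch below is functionally equivalent to the version above:

      exports.list = (req, res) => {
          Shark.find({}).exec((err, sharks) => {
              if (err) {
                  return res.send(500, err);
              }
              res.render('getshark', { sharks: sharks });
          });
      };
      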

      Save and close the file when you are finished editing.

Before moving on to the next step, you can run tree again from your node_project directory to view the project's structure at this point. This time, for the sake of brevity, we'll tell tree to omit the node_modules directory using the -I option:

      • tree -I node_modules

      With the additions you've made, your project's structure will look like this:

      Output

├── Dockerfile
      ├── README.md
      ├── app.js
      ├── controllers
      │   └── sharks.js
      ├── db.js
      ├── models
      │   └── sharks.js
      ├── package-lock.json
      ├── package.json
      └── views
          ├── css
          │   └── styles.css
          ├── index.html
          └── sharks.html

      Now that you have a controller component to direct how user input gets saved and returned to the user, you can move on to creating the views that will implement your controller's logic.

      Step 5 — Using EJS and Express Middleware to Collect and Render Data

      To enable our application to work with user data, we will do two things: first, we will include a built-in Express middleware function, urlencoded(), that will enable our application to parse our user's entered data. Second, we will add template tags to our views to enable dynamic interaction with user data in our code.

To work with Express's urlencoded() function, first open your app.js file:

      • nano app.js

      Above your express.static() function, add the following line:

      ~/node_project/app.js

      ...
      app.use(express.urlencoded({ extended: true }));
      app.use(express.static(path));
      ...
      

      Adding this function will enable access to the parsed POST data from our shark information form. We are specifying true with the extended option to enable greater flexibility in the type of data our application will parse (including things like nested objects). Please see the function documentation for more information about options.
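
      To illustrate what the extended option changes, consider a hypothetical form field named shark[name], which is not one this application uses. The two parsers produce differently shaped req.body objects:

      // extended: true  — parsed with the qs library:
      //   shark[name]=sammy  =>  req.body = { shark: { name: 'sammy' } }
      
      // extended: false — parsed with the querystring library:
      //   shark[name]=sammy  =>  req.body = { 'shark[name]': 'sammy' }
      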

      Save and close the file when you are finished editing.

Next, we will add template functionality to our views. First, install the ejs package with npm install:

      • npm install ejs

Next, open the sharks.html file in the views folder:

      • nano views/sharks.html

      In Step 3, we looked at this page to determine how we should write our Mongoose schema and model:

      Shark Info Page

Now, rather than having a two-column layout, we will introduce a third column with a form where users can input information about sharks.

      As a first step, change the dimensions of the existing columns to 4 to create three equal-sized columns. Note that you will need to make this change on the two lines that currently read <div class="col-lg-6">. These will both become <div class="col-lg-4">:

      ~/node_project/views/sharks.html

      ...
      <div class="container">
          <div class="row">
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
                      </div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
                  </p>
              </div>
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
                  </p>
              </div>
          </div>
        </div>
      
       </html> 
      

      For an introduction to Bootstrap's grid system, including its row and column layouts, please see this introduction to Bootstrap.

      Next, add another column that includes the named endpoint for the POST request with the user's shark data and the EJS template tags that will capture that data. This column will go below the closing </p> and </div> tags from the preceding column and above the closing tags for the row, container, and HTML document. These closing tags are already in place in your code; they are also marked below with comments. Leave them in place as you add the following code to create the new column:

      ~/node_project/views/sharks.html

      ...
             </p> <!-- closing p from previous column -->
         </div> <!-- closing div from previous column -->
      <div class="col-lg-4">
                  <p>
                      <form action="/sharks/addshark" method="post">
                          <div class="caption">Enter Your Shark</div>
                          <input type="text" placeholder="Shark Name" name="name" <%=sharks[i].name; %>
                          <input type="text" placeholder="Shark Character" name="character" <%=sharks[i].character; %>
                          <button type="submit">Submit</button>
                      </form>
                  </p>
              </div> 
          </div> <!-- closing div for row -->
      </div> <!-- closing div for container -->
      
      </html> <!-- closing html tag -->
      

      In the form tag, you are adding a "/sharks/addshark" endpoint for the user's shark data and specifying the POST method to submit it. In the input fields, you are specifying fields for "Shark Name" and "Shark Character", aligning with the Shark model you defined earlier.

      To add the user input to your sharks collection, you are using EJS template tags (<%=, %>) along with JavaScript syntax to map the user's entries to the appropriate fields in the newly created document. For more about JavaScript objects, please see our article on Understanding JavaScript Objects. For more on EJS template tags, please see the EJS documentation.
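
      As a quick reference, EJS distinguishes between tags that run JavaScript and tags that output values. A minimal sketch, using a sharks variable like the one your getshark view will receive later:

      <% if (sharks.length > 0) { %>      <!-- runs JavaScript; outputs nothing -->
          <p><%= sharks[0].name %></p>    <!-- outputs the escaped value -->
      <% } %>
      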

      The entire container with all three columns, including the column with your shark input form, will look like this when finished:

      ~/node_project/views/sharks.html

      ...
      <div class="container">
          <div class="row">
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
                      </div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
                  </p>
              </div>
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
                  </p>
              </div>
          <div class="col-lg-4">
                  <p>
                      <form action="/sharks/addshark" method="post">
                          <div class="caption">Enter Your Shark</div>
                          <input type="text" placeholder="Shark Name" name="name" <%=sharks[i].name; %>
                          <input type="text" placeholder="Shark Character" name="character" <%=sharks[i].character; %>
                          <button type="submit">Submit</button>
                      </form>
                  </p>
              </div>
          </div>
        </div>
      
      </html>
      

      Save and close the file when you are finished editing.

      Now that you have a way to collect your user's input, you can create an endpoint to display the returned sharks and their associated character information.

      Copy the newly modified sharks.html file to a file called getshark.html:

      • cp views/sharks.html views/getshark.html

Open getshark.html:

      • nano views/getshark.html

      Inside the file, we will modify the column that we used to create our sharks input form by replacing it with a column that will display the sharks in our sharks collection. Again, your code will go between the existing </p> and </div> tags from the preceding column and the closing tags for the row, container, and HTML document. Remember to leave these tags in place as you add the following code to create the column:

      ~/node_project/views/getshark.html

      ...
             </p> <!-- closing p from previous column -->
         </div> <!-- closing div from previous column -->
      <div class="col-lg-4">
                 <p>
                    <div class="caption">Your Sharks</div>
                        <ul>
                           <% sharks.forEach(function(shark) { %>
                              <p>Name: <%= shark.name %></p>
                              <p>Character: <%= shark.character %></p>
                           <% }); %>
                        </ul>
                  </p>
              </div>
          </div> <!-- closing div for row -->
      </div> <!-- closing div for container -->
      
      </html> <!-- closing html tag -->
      

      Here you are using EJS template tags and the forEach() method to output each value in your sharks collection, including information about the most recently added shark.

      The entire container with all three columns, including the column with your sharks collection, will look like this when finished:

      ~/node_project/views/getshark.html

      ...
      <div class="container">
          <div class="row">
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Some sharks are known to be dangerous to humans, though many more are not. The sawshark, for example, is not considered a threat to humans.
                      </div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sawshark.jpg" alt="Sawshark">
                  </p>
              </div>
              <div class="col-lg-4">
                  <p>
                      <div class="caption">Other sharks are known to be friendly and welcoming!</div>
                      <img src="https://assets.digitalocean.com/articles/docker_node_image/sammy.png" alt="Sammy the Shark">
                  </p>
              </div>
          <div class="col-lg-4">
                  <p>
                    <div class="caption">Your Sharks</div>
                        <ul>
                           <% sharks.forEach(function(shark) { %>
                              <p>Name: <%= shark.name %></p>
                              <p>Character: <%= shark.character %></p>
                           <% }); %>
                        </ul>
                  </p>
              </div>
          </div>
        </div>
      
      </html>
      

      Save and close the file when you are finished editing.

In order for the application to use the templates you've created, you will need to add a few lines to your app.js file. Open it again:

      • nano app.js

      Above where you added the express.urlencoded() function, add the following lines:

      ~/node_project/app.js

      ...
      app.engine('html', require('ejs').renderFile);
      app.set('view engine', 'html');
      app.use(express.urlencoded({ extended: true }));
      app.use(express.static(path));
      
      ...
      

      The app.engine method tells the application to map the EJS template engine to HTML files, while app.set defines the default view engine.

      Your app.js file should now look like this:

      ~/node_project/app.js

      const express = require('express');
      const app = express();
      const router = express.Router();
      const db = require('./db');
      
      const path = __dirname + '/views/';
      const port = 8080;
      
      router.use(function (req,res,next) {
        console.log('/' + req.method);
        next();
      });
      
      router.get('/',function(req,res){
        res.sendFile(path + 'index.html');
      });
      
      router.get('/sharks',function(req,res){
        res.sendFile(path + 'sharks.html');
      });
      
      app.engine('html', require('ejs').renderFile);
      app.set('view engine', 'html');
      app.use(express.urlencoded({ extended: true }));
      app.use(express.static(path));
      app.use('/', router);
      
      app.listen(port, function () {
        console.log('Example app listening on port 8080!')
      })
      

      Now that you have created views that can work dynamically with user data, it's time to create your project's routes to bring together your views and controller logic.

      Step 6 — Creating Routes

      The final step in bringing the application's components together will be creating routes. We will separate our routes by function, including a route to our application's landing page and another route to our sharks page. Our sharks route will be where we integrate our controller's logic with the views we created in the previous step.

First, create a routes directory:

      • mkdir routes

Next, open a file called index.js in this directory:

      • nano routes/index.js

      This file will first import the express, router, and path objects, allowing us to define the routes we want to export with the router object, and making it possible to work dynamically with file paths. Add the following code at the top of the file:

      ~/node_project/routes/index.js

      const express = require('express');
      const router = express.Router();
      const path = require('path');
      

      Next, add the following router.use function, which loads a middleware function that will log the router's requests and pass them on to the application's route:

      ~/node_project/routes/index.js

      ...
      
router.use(function (req,res,next) {
        console.log('/' + req.method);
        next();
      });
      

      Requests to our application's root will be directed here first, and from here users will be directed to our application's landing page, the route we will define next. Add the following code below the router.use function to define the route to the landing page:

      ~/node_project/routes/index.js

      ...
      
      router.get('/',function(req,res){
        res.sendFile(path.resolve('views/index.html'));
      });
      

      When users visit our application, the first place we want to send them is to the index.html landing page that we have in our views directory.

      Finally, to make these routes accessible as importable modules elsewhere in the application, add a closing expression to the end of the file to export the router object:

      ~/node_project/routes/index.js

      ...
      
      module.exports = router;
      

      The finished file will look like this:

      ~/node_project/routes/index.js

      const express = require('express');
      const router = express.Router();
      const path = require('path');
      
router.use(function (req,res,next) {
        console.log('/' + req.method);
        next();
      });
      
      router.get('/',function(req,res){
        res.sendFile(path.resolve('views/index.html'));
      });
      
      module.exports = router;
      

      Save and close this file when you are finished editing.

Next, open a file called sharks.js to define how the application should use the different endpoints and views we've created to work with our user's shark input:

      • nano routes/sharks.js

      At the top of the file, import the express and router objects:

      ~/node_project/routes/sharks.js

      const express = require('express');
      const router = express.Router();
      

      Next, import a module called shark that will allow you to work with the exported functions you defined with your controller:

      ~/node_project/routes/sharks.js

      const express = require('express');
      const router = express.Router();
      const shark = require('../controllers/sharks');
      

      Now you can create routes using the index, create, and list functions you defined in your sharks controller file. Each route will be associated with the appropriate HTTP method: GET in the case of rendering the main sharks information landing page and returning the list of sharks to the user, and POST in the case of creating a new shark entry:

      ~/node_project/routes/sharks.js

      ...
      
      router.get('/', function(req, res){
          shark.index(req,res);
      });
      
      router.post('/addshark', function(req, res) {
          shark.create(req,res);
      });
      
      router.get('/getshark', function(req, res) {
          shark.list(req,res);
      });
      

      Each route makes use of the related function in controllers/sharks.js, since we have made that module accessible by importing it at the top of this file.
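
      Since each controller function already matches Express's (req, res) handler signature, you could pass the functions to the router directly instead of wrapping them in anonymous functions. This sketch is functionally equivalent to the routes above:

      router.get('/', shark.index);
      router.post('/addshark', shark.create);
      router.get('/getshark', shark.list);
      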

Finally, close the file by exporting these routes through the router object:

      ~/node_project/routes/sharks.js

      ...
      
      module.exports = router;
      

      The finished file will look like this:

      ~/node_project/routes/sharks.js

      const express = require('express');
      const router = express.Router();
      const shark = require('../controllers/sharks');
      
      router.get('/', function(req, res){
          shark.index(req,res);
      });
      
      router.post('/addshark', function(req, res) {
          shark.create(req,res);
      });
      
      router.get('/getshark', function(req, res) {
          shark.list(req,res);
      });
      
      module.exports = router;
      

      Save and close the file when you are finished editing.

The last step in making these routes accessible to your application will be to add them to app.js. Open that file again:

      • nano app.js

      Below your db constant, add the following import for your routes:

      ~/node_project/app.js

      ...
      const db = require('./db');
      const sharks = require('./routes/sharks');
      

      Next, replace the app.use function that currently mounts your router object with the following line, which will mount the sharks router module:

      ~/node_project/app.js

      ...
      app.use(express.static(path));
      app.use('/sharks', sharks);
      
      app.listen(port, function () {
              console.log("Example app listening on port 8080!")
      })
      

      You can now delete the routes that were previously defined in this file, since you are importing your application's routes using the sharks router module.

      The final version of your app.js file will look like this:

      ~/node_project/app.js

      const express = require('express');
      const app = express();
      const router = express.Router();
      const db = require('./db');
      const sharks = require('./routes/sharks');
      
      const path = __dirname + '/views/';
      const port = 8080;
      
      app.engine('html', require('ejs').renderFile);
      app.set('view engine', 'html');
      app.use(express.urlencoded({ extended: true }));
      app.use(express.static(path));
      app.use('/sharks', sharks);
      
      app.listen(port, function () {
        console.log('Example app listening on port 8080!')
      })
      

      Save and close the file when you are finished editing.

You can now run tree again to see the final structure of your project:

      • tree -I node_modules

      Your project structure will now look like this:

      Output

├── Dockerfile
      ├── README.md
      ├── app.js
      ├── controllers
      │   └── sharks.js
      ├── db.js
      ├── models
      │   └── sharks.js
      ├── package-lock.json
      ├── package.json
      ├── routes
      │   ├── index.js
      │   └── sharks.js
      └── views
          ├── css
          │   └── styles.css
          ├── getshark.html
          ├── index.html
          └── sharks.html

      With all of your application components created and in place, you are now ready to add a test shark to your database!

If you followed the initial server setup tutorial in the prerequisites, you will need to modify your firewall, since it currently only allows SSH traffic. To permit traffic to port 8080, run:

      • sudo ufw allow 8080

Start the application:

      • node app.js

      Next, navigate your browser to http://your_server_ip:8080. You will see the following landing page:

      Application Landing Page

      Click on the Get Shark Info button. You will see the following information page, with the shark input form added:

      Shark Info Form

      In the form, add a shark of your choosing. For the purpose of this demonstration, we will add Megalodon Shark to the Shark Name field, and Ancient to the Shark Character field:

      Filled Shark Form

      Click on the Submit button. You will see a page with this shark information displayed back to you:

      Shark Output

      You will also see output in your console indicating that the shark has been added to your collection:

      Output

Example app listening on port 8080!
      { name: 'Megalodon Shark', character: 'Ancient' }

      If you would like to create a new shark entry, head back to the Sharks page and repeat the process of adding a shark.

      You now have a working shark information application that allows users to add information about their favorite sharks.

      Conclusion

      In this tutorial, you built out a Node application by integrating a MongoDB database and rewriting the application's logic using the MVC architectural pattern. This application can act as a good starting point for a fully-fledged CRUD application.

      For more resources on the MVC pattern in other contexts, please see our Django Development series or How To Build a Modern Web Application to Manage Customer Information with Django and React on Ubuntu 18.04.

      For more information on working with MongoDB, please see our library of tutorials on MongoDB.




      How To Deploy a PHP Application with Kubernetes on Ubuntu 16.04


The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes is an open source container orchestration system. It allows you to create, update, and scale containers without worrying about downtime.

      To run a PHP application, Nginx acts as a proxy to PHP-FPM. Containerizing this setup in a single container can be a cumbersome process, but Kubernetes will help manage both services in separate containers. Using Kubernetes will allow you to keep your containers reusable and swappable, and you will not have to rebuild your container image every time there’s a new version of Nginx or PHP.
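
      At the Nginx level, this proxying usually comes down to a fastcgi_pass directive pointing at the PHP-FPM service. The server block below is only an illustrative sketch, assuming PHP-FPM is reachable at php:9000 and application code lives in /code; it is not the tutorial's final configuration:

      server {
          listen 80;
          root /code;
          index index.php;
      
          location ~ \.php$ {
              include fastcgi_params;
              fastcgi_pass php:9000;
              fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
          }
      }
      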

      In this tutorial, you will deploy a PHP 7 application on a Kubernetes cluster with Nginx and PHP-FPM running in separate containers. You will also learn how to keep your configuration files and application code outside the container image using DigitalOcean’s Block Storage system. This approach will allow you to reuse the Nginx image for any application that needs a web/proxy server by passing a configuration volume, rather than rebuilding the image.

      Prerequisites

      Step 1 — Creating the PHP-FPM and Nginx Services

      In this step, you will create the PHP-FPM and Nginx services. A service allows access to a set of pods from within the cluster. Services within a cluster can communicate directly through their names, without the need for IP addresses. The PHP-FPM service will allow access to the PHP-FPM pods, while the Nginx service will allow access to the Nginx pods.

      Since Nginx pods will proxy the PHP-FPM pods, you will need to tell the service how to find them. Instead of using IP addresses, you will take advantage of Kubernetes’ automatic service discovery to use human-readable names to route requests to the appropriate service.

To create the service, you will create an object definition file. Every Kubernetes object definition is a YAML file that contains at least the following items (a minimal skeleton follows this list):

      • apiVersion: The version of the Kubernetes API that the definition belongs to.
      • kind: The Kubernetes object this file represents. For example, a pod or service.
      • metadata: This contains the name of the object along with any labels that you may wish to apply to it.
• spec: This contains a specific configuration depending on the kind of object you are creating, such as the container image or the ports on which the container will be accessible.
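
      Put together, a minimal skeleton with placeholder values has this shape:

      apiVersion: v1
      kind: Service
      metadata:
        name: example-name
        labels:
          tier: backend
      spec:
        # kind-specific configuration, such as selectors and ports for a Service
      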

First, SSH to your master node and create a definitions directory that will hold your Kubernetes object definitions:

      • mkdir definitions

      Navigate to the newly created definitions directory:

      • cd definitions

Make your PHP-FPM service by creating a php_service.yaml file:

      • nano php_service.yaml

      Set kind as Service to specify that this object is a service:

      php_service.yaml

      apiVersion: v1
      kind: Service
      

      Name the service php since it will provide access to PHP-FPM:

      php_service.yaml

      ...
      metadata:
        name: php
      

      You will logically group different objects with labels. In this tutorial, you will use labels to group the objects into "tiers", such as frontend or backend. The PHP pods will run behind this service, so you will label it as tier: backend.

      php_service.yaml

      ...
        labels:
          tier: backend
      

      A service determines which pods to access by using selector labels. A pod that matches these labels will be serviced, independent of whether the pod was created before or after the service. You will add labels for your pods later in the tutorial.

      Use the tier: backend label to assign the pod into the backend tier. You will also add the app: php label to specify that this pod runs PHP. Add these two labels after the metadata section.

      php_service.yaml

      ...
      spec:
        selector:
          app: php
          tier: backend
      

      Next, specify the port used to access this service. You will use port 9000 in this tutorial. Add it to the php_service.yaml file under spec:

      php_service.yaml

      ...
        ports:
          - protocol: TCP
            port: 9000
      

      Your completed php_service.yaml file will look like this:

      php_service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: php
        labels:
          tier: backend
      spec:
        selector:
          app: php
          tier: backend
        ports:
        - protocol: TCP
          port: 9000
      

      Hit CTRL + o to save the file, and then CTRL + x to exit nano.

Now that you've created the object definition for your service, you can run it with the kubectl apply command, using the -f argument to specify your php_service.yaml file.

      Create your service:

      • kubectl apply -f php_service.yaml

      This output confirms the service creation:

      Output

      service/php created

Verify that your service is running:

      • kubectl get svc

      You will see your PHP-FPM service running:

      Output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
      kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP    10m
      php          ClusterIP   10.100.59.238   <none>        9000/TCP   5m

      There are various service types that Kubernetes supports. Your php service uses the default service type, ClusterIP. This service type assigns an internal IP and makes the service reachable only from within the cluster.

Now that the PHP-FPM service is ready, you will create the Nginx service. Create and open a new file called nginx_service.yaml with the editor:

      • nano nginx_service.yaml

      This service will target Nginx pods, so you will name it nginx. You will also add a tier: backend label as it belongs in the backend tier:

      nginx_service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
        labels:
          tier: backend
      

      Similar to the php service, target the pods with the selector labels app: nginx and tier: backend. Make this service accessible on port 80, the default HTTP port.

      nginx_service.yaml

      ...
      spec:
        selector:
          app: nginx
          tier: backend
        ports:
        - protocol: TCP
          port: 80
      

The Nginx service will be publicly accessible from your Droplet's public IP address. You can find your_public_ip in your DigitalOcean Cloud Panel. Under spec.externalIPs, add:

      nginx_service.yaml

      ...
      spec:
        externalIPs:
        - your_public_ip
      

      Your nginx_service.yaml file will look like this:

      nginx_service.yaml

      apiVersion: v1
      kind: Service
      metadata:
        name: nginx
        labels:
          tier: backend
      spec:
        selector:
          app: nginx
          tier: backend
        ports:
        - protocol: TCP
          port: 80
        externalIPs:
        - your_public_ip    
      

      Save and close the file. Create the Nginx service:

      • kubectl apply -f nginx_service.yaml

      You will see the following output when the service is running:

      Output

      service/nginx created

You can view all running services by executing:

      • kubectl get svc

      You will see both the PHP-FPM and Nginx services listed in the output:

      Output

NAME         TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE
      kubernetes   ClusterIP   10.96.0.1       <none>           443/TCP    13m
      nginx        ClusterIP   10.102.160.47   your_public_ip   80/TCP     50s
      php          ClusterIP   10.100.59.238   <none>           9000/TCP   8m

      Please note, if you want to delete a service you can run:

      • kubectl delete svc/service_name

      Now that you've created your PHP-FPM and Nginx services, you will need to specify where to store your application code and configuration files.

      Step 2 — Installing the DigitalOcean Storage Plug-In

      Kubernetes provides different storage plug-ins that can create the storage space for your environment. In this step, you will install the DigitalOcean storage plug-in to create block storage on DigitalOcean. Once the installation is complete, it will add a storage class named do-block-storage that you will use to create your block storage.

      You will first configure a Kubernetes Secret object to store your DigitalOcean API token. Secret objects are used to share sensitive information, like SSH keys and passwords, with other Kubernetes objects within the same namespace. Namespaces provide a way to logically separate your Kubernetes objects.

Open a file named secret.yaml with the editor:

      • nano secret.yaml

      You will name your Secret object digitalocean and add it to the kube-system namespace. The kube-system namespace is the default namespace for Kubernetes’ internal services and is also used by the DigitalOcean storage plug-in to launch various components.

      secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: digitalocean
        namespace: kube-system
      

      Instead of a spec key, a Secret uses a data or stringData key to hold the required information. The data parameter holds base64 encoded data that is automatically decoded when retrieved. The stringData parameter holds non-encoded data that is automatically encoded during creation or updates, and does not output the data when retrieving Secrets. You will use stringData in this tutorial for convenience.
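
      For comparison, storing the same token under the data key would require base64 encoding it yourself first, for example with the base64 utility. This alternative is equivalent but not used in this tutorial; the encoded string below is simply the base64 form of the placeholder your-api-token:

      • echo -n 'your-api-token' | base64

      secret.yaml

      ...
      data:
        access-token: eW91ci1hcGktdG9rZW4=
      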

      Add the access-token as stringData:

      secret.yaml

      ...
      stringData:
        access-token: your-api-token
      

      Save and exit the file.

      Your secret.yaml file will look like this:

      secret.yaml

      apiVersion: v1
      kind: Secret
      metadata:
        name: digitalocean
        namespace: kube-system
      stringData:
        access-token: your-api-token
      

      Create the secret:

      • kubectl apply -f secret.yaml

      You will see this output upon Secret creation:

      Output

      secret/digitalocean created

      You can view the secret with the following command:

      • kubectl -n kube-system get secret digitalocean

      The output will look similar to this:

      Output

NAME           TYPE     DATA   AGE
      digitalocean   Opaque   1      41s

The Opaque type means that this Secret holds unstructured key-value data; it is the type Kubernetes assigns to Secrets created with stringData. You can read more about it on the Secret design spec. The DATA field shows the number of items stored in this Secret. In this case, it shows 1 because you have a single key stored.

      Now that your Secret is in place, install the DigitalOcean block storage plug-in:

      • kubectl apply -f https://raw.githubusercontent.com/digitalocean/csi-digitalocean/master/deploy/kubernetes/releases/csi-digitalocean-v0.3.0.yaml

      You will see output similar to the following:

      Output

storageclass.storage.k8s.io/do-block-storage created
      serviceaccount/csi-attacher created
      clusterrole.rbac.authorization.k8s.io/external-attacher-runner created
      clusterrolebinding.rbac.authorization.k8s.io/csi-attacher-role created
      service/csi-attacher-doplugin created
      statefulset.apps/csi-attacher-doplugin created
      serviceaccount/csi-provisioner created
      clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
      clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
      service/csi-provisioner-doplugin created
      statefulset.apps/csi-provisioner-doplugin created
      serviceaccount/csi-doplugin created
      clusterrole.rbac.authorization.k8s.io/csi-doplugin created
      clusterrolebinding.rbac.authorization.k8s.io/csi-doplugin created
      daemonset.apps/csi-doplugin created

      Now that you have installed the DigitalOcean storage plug-in, you can create block storage to hold your application code and configuration files.

      Step 3 — Creating the Persistent Volume

      With your Secret in place and the block storage plug-in installed, you are now ready to create your Persistent Volume. A Persistent Volume, or PV, is block storage of a specified size that lives independently of a pod’s life cycle. Using a Persistent Volume will allow you to manage or update your pods without worrying about losing your application code. A Persistent Volume is accessed by using a PersistentVolumeClaim, or PVC, which mounts the PV at the required path.

Open a file named code_volume.yaml with your editor:

      • nano code_volume.yaml

      Name the PVC code by adding the following parameters and values to your file:

      code_volume.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: code
      

      The spec for a PVC contains the following items:

      • accessModes which vary by the use case. These are:
        • ReadWriteOnce – mounts the volume as read-write by a single node
        • ReadOnlyMany – mounts the volume as read-only by many nodes
        • ReadWriteMany – mounts the volume as read-write by many nodes
      • resources – the storage space that you require

      DigitalOcean block storage is only mounted to a single node, so you will set the accessModes to ReadWriteOnce. This tutorial will guide you through adding a small amount of application code, so 1GB will be plenty in this use case. If you plan on storing a larger amount of code or data on the volume, you can modify the storage parameter to fit your requirements. You can increase the amount of storage after volume creation, but shrinking the disk is not supported.

      code_volume.yaml

      ...
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
      

      Next, specify the storage class that Kubernetes will use to provision the volumes. You will use the do-block-storage class created by the DigitalOcean block storage plug-in.

      code_volume.yaml

      ...
        storageClassName: do-block-storage
      

      Your code_volume.yaml file will look like this:

      code_volume.yaml

      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: code
      spec:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 1Gi
        storageClassName: do-block-storage
      

      Save and exit the file.

      Create the code PersistentVolumeClaim using kubectl:

      • kubectl apply -f code_volume.yaml

      The following output tells you that the object was successfully created, and you are ready to mount your 1GB PVC as a volume.

      Output

      persistentvolumeclaim/code created

To view available Persistent Volumes (PV), run:

      • kubectl get pv

      You will see your PV listed:

      Output

NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM          STORAGECLASS       REASON   AGE
      pvc-ca4df10f-ab8c-11e8-b89d-12331aa95b13   1Gi        RWO            Delete           Bound    default/code   do-block-storage            2m

      The fields above are an overview of your configuration file, except for Reclaim Policy and Status. The Reclaim Policy defines what is done with the PV after the PVC accessing it is deleted. Delete removes the PV from Kubernetes as well as the DigitalOcean infrastructure. You can learn more about the Reclaim Policy and Status from the Kubernetes PV documentation.
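
      If you would rather keep the underlying DigitalOcean volume when its claim is deleted, you can switch an existing PV to the Retain policy with kubectl patch. This step is optional; replace your_pv_name with the name shown in the kubectl get pv output:

      • kubectl patch pv your_pv_name -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'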

      You've successfully created a Persistent Volume using the DigitalOcean block storage plug-in. Now that your Persistent Volume is ready, you will create your pods using a Deployment.

      Step 4 — Creating a PHP-FPM Deployment

In this step, you will learn how to use a Deployment to create your PHP-FPM pod. Deployments provide a uniform way to create, update, and manage pods by using ReplicaSets. If an update does not work as expected, a Deployment will automatically roll back its pods to a previous image.
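
      You can also trigger a rollback manually with kubectl. For example, once the php Deployment from this step exists, the following command would revert it to its previous revision:

      • kubectl rollout undo deployment/php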

      The Deployment spec.selector key will list the labels of the pods it will manage. It will also use the template key to create the required pods.

      This step will also introduce the use of Init Containers. Init Containers run one or more commands before the regular containers specified under the pod's template key. In this tutorial, your Init Container will fetch a sample index.php file from GitHub Gist using wget. These are the contents of the sample file:

      index.php

      <?php
      echo phpinfo(); 
      

To create your Deployment, open a new file called php_deployment.yaml with your editor:

      • nano php_deployment.yaml

      This Deployment will manage your PHP-FPM pods, so you will name the Deployment object php. The pods belong to the backend tier, so you will group the Deployment into this group by using the tier: backend label:

      php_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: php
        labels:
          tier: backend
      

      For the Deployment spec, you will specify how many copies of this pod to create by using the replicas parameter. The number of replicas will vary depending on your needs and available resources. You will create one replica in this tutorial:

      php_deployment.yaml

      ...
      spec:
        replicas: 1
      

This Deployment will manage pods that match the app: php and tier: backend labels. Under the selector key, add:

      php_deployment.yaml

      ...
        selector:
          matchLabels:
            app: php
            tier: backend
      

      Next, the Deployment spec requires the template for your pod's object definition. This template will define specifications to create the pod from. First, you will add the labels that were specified for the php service selectors and the Deployment’s matchLabels. Add app: php and tier: backend under template.metadata.labels:

      php_deployment.yaml

      ...
        template:
          metadata:
            labels:
              app: php
              tier: backend
      

      A pod can have multiple containers and volumes, but each will need a name. You can selectively mount volumes to a container by specifying a mount path for each volume.

      First, specify the volumes that your containers will access. You created a PVC named code to hold your application code, so name this volume code as well. Under spec.template.spec.volumes, add the following:

      php_deployment.yaml

      ...
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
      

      Next, specify the container you want to run in this pod. You can find various images on the Docker store, but in this tutorial you will use the php:7-fpm image.

      Under spec.template.spec.containers, add the following:

      php_deployment.yaml

      ...
            containers:
            - name: php
              image: php:7-fpm
      

      Next, you will mount the volumes that the container requires access to. This container will run your PHP code, so it will need access to the code volume. You will also use mountPath to specify /code as the mount point.

      Under spec.template.spec.containers.volumeMounts, add:

      php_deployment.yaml

      ...
              volumeMounts:
              - name: code
                mountPath: /code
      

      Now that you have mounted your volume, you need to get your application code on the volume. You may have previously used FTP/SFTP or cloned the code over an SSH connection to accomplish this, but this step will show you how to copy the code using an Init Container.

      Depending on the complexity of your setup process, you can either use a single initContainer to run a script that builds your application, or you can use one initContainer per command. Make sure that the volumes are mounted to the initContainer.
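
      For illustration only, the one-initContainer-per-command approach might look like the following sketch, where the container names and download URL are hypothetical and each step mounts the same code volume:

      ...
            initContainers:
            - name: download
              image: busybox
              volumeMounts:
              - name: code
                mountPath: /code
              command: ["wget", "-O", "/code/index.php", "https://example.com/sample/index.php"]
            - name: set-permissions
              image: busybox
              volumeMounts:
              - name: code
                mountPath: /code
              command: ["chmod", "644", "/code/index.php"]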

      In this tutorial, you will use a single Init Container with busybox to download the code. busybox is a small image that contains the wget utility that you will use to accomplish this.

      Under spec.template.spec, add your initContainer and specify the busybox image:

      php_deployment.yaml

      ...
            initContainers:
            - name: install
              image: busybox
      

      Your Init Container will need access to the code volume so that it can download the code to that location. Under spec.template.spec.initContainers, mount the volume code at the /code path:

      php_deployment.yaml

      ...
              volumeMounts:
              - name: code
                mountPath: /code
      

      Each Init Container needs to run a command. Your Init Container will use wget to download the code from GitHub into the /code working directory. The -O option gives the downloaded file a name, and you will name this file index.php.

      Note: Be sure to trust the code you’re pulling. Before pulling it to your server, inspect the source code to ensure you are comfortable with what the code does.

      Under the install container in spec.template.spec.initContainers, add these lines:

      php_deployment.yaml

      ...
              command:
              - wget
              - "-O"
              - "/code/index.php"
              - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php
      

      Your completed php_deployment.yaml file will look like this:

      php_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: php
        labels:
          tier: backend
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: php
            tier: backend
        template:
          metadata:
            labels:
              app: php
              tier: backend
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
            containers:
            - name: php
              image: php:7-fpm
              volumeMounts:
              - name: code
                mountPath: /code
            initContainers:
            - name: install
              image: busybox
              volumeMounts:
              - name: code
                mountPath: /code
              command:
              - wget
              - "-O"
              - "/code/index.php"
              - https://raw.githubusercontent.com/do-community/php-kubernetes/master/index.php
      

      Save the file and exit the editor.

      Create the PHP-FPM Deployment with kubectl:

      • kubectl apply -f php_deployment.yaml

      You will see the following output upon Deployment creation:

      Output

      deployment.apps/php created

      To summarize, this Deployment will start by downloading the specified images. It will then request the PersistentVolume from your PersistentVolumeClaim and serially run your initContainers. Once complete, the containers will run and mount the volumes to the specified mount point. Once all of these steps are complete, your pod will be up and running.
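
      If you would like to follow this startup sequence as it happens, you can watch the rollout:

      • kubectl rollout status deployment/php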

      You can view your Deployment by running:
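
      • kubectl get deployments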

      You will see the output:

      Output

      NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      php       1         1         1            0           19s

      This output can help you understand the current state of the Deployment. A Deployment is one of the controllers that maintains a desired state. The template you created specifies that the DESIRED state will have 1 replica of the pod named php. The CURRENT field indicates how many replicas are running, so this should match the DESIRED state. You can read about the remaining fields in the Kubernetes Deployments documentation.
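
      As an illustration of how the controller maintains the desired state, you could later scale this Deployment, and Kubernetes would create or remove pods to match the new replica count:

      • kubectl scale deployment/php --replicas=3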

      You can view the pods that this Deployment started with the following command:
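
      • kubectl get pods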

      The output of this command varies depending on how much time has passed since creating the Deployment. If you run it shortly after creation, the output will likely look like this:

      Output

      NAME                   READY     STATUS     RESTARTS   AGE
      php-86d59fd666-bf8zd   0/1       Init:0/1   0          9s

      The columns represent the following information:

      • Ready: The number of containers in the pod that are ready, out of the total number of containers.
      • Status: The status of the pod. Init indicates that the Init Containers are running. In this output, 0 out of 1 Init Containers have finished running.
      • Restarts: How many times the pod has restarted. This number will increase if any of your Init Containers fail; the Deployment will restart the pod until it reaches the desired state.

      Depending on the complexity of your startup scripts, it can take a couple of minutes for the status to change to PodInitializing:

      Output

      NAME                   READY     STATUS            RESTARTS   AGE
      php-86d59fd666-lkwgn   0/1       PodInitializing   0          39s

      This means the Init Containers have finished and the containers are initializing. If you run the command when all of the containers are running, you will see the pod status change to Running.

      Output

      NAME                   READY     STATUS    RESTARTS   AGE
      php-86d59fd666-lkwgn   1/1       Running   0          1m

      You now see that your pod is running successfully. If your pod doesn't start, you can debug with the following commands:

      • View detailed information of a pod:
      • kubectl describe pods pod-name
      • View logs generated by a pod:
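      • kubectl logs pod-name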
      • View logs for a specific container in a pod:
      • kubectl logs pod-name container-name

      Your application code is mounted and the PHP-FPM service is now ready to handle connections. You can now create your Nginx Deployment.

      Step 5 — Creating the Nginx Deployment

      In this step, you will use a ConfigMap to configure Nginx. A ConfigMap holds your configuration in a key-value format that you can reference in other Kubernetes object definitions. This approach will grant you the flexibility to reuse or swap the image with a different Nginx version if needed. Updating the ConfigMap will automatically replicate the changes to any pod mounting it.
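
      For example, once the ConfigMap exists, you will be able to review or edit its contents in place, and pods that mount it will pick up the new data after a short delay:

      • kubectl describe configmap nginx-config
      • kubectl edit configmap nginx-config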

      Create a nginx_configMap.yaml file for your ConfigMap with your editor:

      • nano nginx_configMap.yaml

      Name the ConfigMap nginx-config and group it into the backend tier using the tier: backend label:

      nginx_configMap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: nginx-config
        labels:
          tier: backend
      

      Next, you will add the data for the ConfigMap. Name the key config and add the contents of your Nginx configuration file as the value. You can use the example Nginx configuration from this tutorial.

      Because Kubernetes can route requests to the appropriate host for a service, you can enter the name of your PHP-FPM service in the fastcgi_pass parameter instead of its IP address. Add the following to your nginx_configMap.yaml file:

      nginx_configMap.yaml

      ...
      data:
        config : |
          server {
            index index.php index.html;
            error_log  /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log;
            root /code;
      
            location / {
                try_files $uri $uri/ /index.php?$query_string;
            }
      
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass php:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
              }
          }
      

      Your nginx_configMap.yaml file will look like this:

      nginx_configMap.yaml

      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: nginx-config
        labels:
          tier: backend
      data:
        config : |
          server {
            index index.php index.html;
            error_log  /var/log/nginx/error.log;
            access_log /var/log/nginx/access.log;
            root /code;
      
            location / {
                try_files $uri $uri/ /index.php?$query_string;
            }
      
            location ~ \.php$ {
                try_files $uri =404;
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass php:9000;
                fastcgi_index index.php;
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_param PATH_INFO $fastcgi_path_info;
              }
          }
      

      Save the file and exit the editor.

      Create the ConfigMap:

      • kubectl apply -f nginx_configMap.yaml

      You will see the following output:

      Output

      configmap/nginx-config created

      You've finished creating your ConfigMap and can now build your Nginx Deployment.

      Start by opening a new nginx_deployment.yaml file in the editor:

      • nano nginx_deployment.yaml

      Name the Deployment nginx and add the label tier: backend:

      nginx_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
        labels:
          tier: backend
      

      Specify that you want one replica in the Deployment spec. This Deployment will manage pods with the labels app: nginx and tier: backend. Add the following parameters and values:

      nginx_deployment.yaml

      ...
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
            tier: backend
      

      Next, add the pod template. You need to use the same labels that you added for the Deployment selector.matchLabels. Add the following:

      nginx_deployment.yaml

      ...
        template:
          metadata:
            labels:
              app: nginx
              tier: backend
      

      Give Nginx access to the code PVC that you created earlier. Under spec.template.spec.volumes, add:

      nginx_deployment.yaml

      ...
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
      

      Pods can mount a ConfigMap as a volume, and specifying a file name and key will create a file with the key's value as its content. To use the ConfigMap, set path to the name of the file that will hold the contents of the key. Here, you want to create a file named site.conf from the key config. Under spec.template.spec.volumes, add the following:

      nginx_deployment.yaml

      ...
            - name: config
              configMap:
                name: nginx-config
                items:
                - key: config
                  path: site.conf
      

      Warning: If a file is not specified, the contents of the key will replace the mountPath of the volume. This means that if a path is not explicitly specified, you will lose all content in the destination folder.

      Next, you will specify the image to create your pod from. This tutorial will use the nginx:1.7.9 image for stability, but you can find other Nginx images on the Docker store. Also, make Nginx available on port 80. Under spec.template.spec, add:

      nginx_deployment.yaml

      ...
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
      

      Nginx and PHP-FPM need to access the file at the same path, so mount the code volume at /code:

      nginx_deployment.yaml

      ...
              volumeMounts:
              - name: code
                mountPath: /code
      

      The nginx:1.7.9 image will automatically load any configuration files under the /etc/nginx/conf.d directory. Mounting the config volume in this directory will create the file /etc/nginx/conf.d/site.conf. Under volumeMounts add the following:

      nginx_deployment.yaml

      ...
              - name: config
                mountPath: /etc/nginx/conf.d
      

      Your nginx_deployment.yaml file will look like this:

      nginx_deployment.yaml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
        labels:
          tier: backend
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: nginx
            tier: backend
        template:
          metadata:
            labels:
              app: nginx
              tier: backend
          spec:
            volumes:
            - name: code
              persistentVolumeClaim:
                claimName: code
            - name: config
              configMap:
                name: nginx-config
                items:
                - key: config
                  path: site.conf
            containers:
            - name: nginx
              image: nginx:1.7.9
              ports:
              - containerPort: 80
              volumeMounts:
              - name: code
                mountPath: /code
              - name: config
                mountPath: /etc/nginx/conf.d
      

      Save the file and exit the editor.

      Create the Nginx Deployment:

      • kubectl apply -f nginx_deployment.yaml

      The following output indicates that your Deployment is now created:

      Output

      deployment.apps/nginx created

      List your Deployments with this command:
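
      • kubectl get deployments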

      You will see the Nginx and PHP-FPM Deployments:

      Output

      NAME      DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
      nginx     1         1         1            0           16s
      php       1         1         1            1           7m

      List the pods managed by both of the Deployments:
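
      • kubectl get pods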

      You will see the pods that are running:

      Output

      NAME                     READY     STATUS    RESTARTS   AGE
      nginx-7bf5476b6f-zppml   1/1       Running   0          32s
      php-86d59fd666-lkwgn     1/1       Running   0          7m

      Now that all of the Kubernetes objects are active, you can visit the Nginx service on your browser.

      List the running services:

      • kubectl get services -o wide

      Get the External IP for your Nginx service:

      Output

      NAME         TYPE        CLUSTER-IP      EXTERNAL-IP      PORT(S)    AGE   SELECTOR
      kubernetes   ClusterIP   10.96.0.1       <none>           443/TCP    39m   <none>
      nginx        ClusterIP   10.102.160.47   your_public_ip   80/TCP     27m   app=nginx,tier=backend
      php          ClusterIP   10.100.59.238   <none>           9000/TCP   34m   app=php,tier=backend

      In your browser, visit your server by navigating to http://your_public_ip. You will see the output of phpinfo(), confirming that your Kubernetes services are up and running.

      Conclusion

      In this guide, you containerized the PHP-FPM and Nginx services so that you can manage them independently. This approach will not only improve the scalability of your project as you grow, but will also allow you to use resources efficiently. You also stored your application code on a volume so that you can easily update your services in the future.




      How To Secure a Containerized Node.js Application with Nginx, Let’s Encrypt, and Docker Compose


      Introduction

      There are multiple ways to enhance the flexibility and security of your Node.js application. Using a reverse proxy like Nginx offers you the ability to load balance requests, cache static content, and implement Transport Layer Security (TLS). Enabling encrypted HTTPS on your server ensures that communication to and from your application remains secure.

      Implementing a reverse proxy with TLS/SSL on containers involves a different set of procedures from working directly on a host operating system. For example, if you were obtaining certificates from Let’s Encrypt for an application running on a server, you would install the required software directly on your host. Containers allow you to take a different approach. Using Docker Compose, you can create containers for your application, your web server, and the Certbot client that will enable you to obtain your certificates. By following these steps, you can take advantage of the modularity and portability of a containerized workflow.

      In this tutorial, you will deploy a Node.js application with an Nginx reverse proxy using Docker Compose. You will obtain TLS/SSL certificates for the domain associated with your application and ensure that it receives a high security rating from SSL Labs. Finally, you will set up a cron job to renew your certificates so that your domain remains secure.

      Prerequisites

      To follow this tutorial, you will need:

      • An Ubuntu 18.04 server, a non-root user with sudo privileges, and an active firewall. For guidance on how to set these up, please see this Initial Server Setup guide.
      • Docker and Docker Compose installed on your server. For guidance on installing Docker, follow Steps 1 and 2 of How To Install and Use Docker on Ubuntu 18.04. For guidance on installing Compose, follow Step 1 of How To Install Docker Compose on Ubuntu 18.04.
      • A registered domain name. This tutorial will use example.com throughout. You can get one for free at Freenom, or use the domain registrar of your choice.
      • Both of the following DNS records set up for your server. You can follow this introduction to DigitalOcean DNS for details on how to add them to a DigitalOcean account, if that’s what you’re using:

        • An A record with example.com pointing to your server’s public IP address.
        • An A record with www.example.com pointing to your server’s public IP address.

      Step 1 — Cloning and Testing the Node Application

      As a first step, we will clone the repository with the Node application code, which includes the Dockerfile that we will use to build our application image with Compose. We can first test the application by building and running it with the docker run command, without a reverse proxy or SSL.

      In your non-root user’s home directory, clone the nodejs-image-demo repository from the DigitalOcean Community GitHub account. This repository includes the code from the setup described in How To Build a Node.js Application with Docker.

      Clone the repository into a directory called node_project:

      • git clone https://github.com/do-community/nodejs-image-demo.git node_project

      Change to the node_project directory:
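
      • cd node_project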

      In this directory, there is a Dockerfile that contains instructions for building a Node application using the Docker node:10 image and the contents of your current project directory. You can look at the contents of the Dockerfile by typing:
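
      • cat Dockerfile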

      Output

      FROM node:10
      RUN mkdir -p /home/node/app/node_modules && chown -R node:node /home/node/app
      WORKDIR /home/node/app
      COPY package*.json ./
      RUN npm install
      COPY . .
      COPY --chown=node:node . .
      USER node
      EXPOSE 8080
      CMD [ "node", "app.js" ]

      These instructions build a Node image by copying the project code from the current directory to the container and installing dependencies with npm install. They also take advantage of Docker's caching and image layering by separating the copy of package.json and package-lock.json, containing the project's listed dependencies, from the copy of the rest of the application code. Finally, the instructions specify that the container will be run as the non-root node user with the appropriate permissions set on the application code and node_modules directories.

      For more information about this Dockerfile and Node image best practices, please see the complete discussion in Step 3 of How To Build a Node.js Application with Docker.

      To test the application without SSL, you can build and tag the image using docker build and the -t flag. We will call the image node-demo, but you are free to name it something else:

      • docker build -t node-demo .

      Once the build process is complete, you can list your images with docker images:
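
      • docker images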

      You will see the following output, confirming the application image build:

      Output

      REPOSITORY   TAG       IMAGE ID       CREATED         SIZE
      node-demo    latest    23961524051d   7 seconds ago   896MB
      node         10        8a752d5af4ce   10 days ago     894MB

      Next, create the container with docker run. We will include three flags with this command:

      • -p: This publishes the port on the container and maps it to a port on our host. We will use port 80 on the host, but you should feel free to modify this as necessary if you have another process running on that port. For more information about how this works, see this discussion in the Docker docs on port binding.
      • -d: This runs the container in the background.
      • --name: This allows us to give the container a memorable name.

      Run the following command to build the container:

      • docker run --name node-demo -p 80:8080 -d node-demo

      Inspect your running containers with docker ps:
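
      • docker ps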

      You will see output confirming that your application container is running:

      Output

      CONTAINER ID   IMAGE       COMMAND         CREATED          STATUS          PORTS                  NAMES
      4133b72391da   node-demo   "node app.js"   17 seconds ago   Up 16 seconds   0.0.0.0:80->8080/tcp   node-demo

      You can now visit your domain to test your setup: http://example.com. Remember to replace example.com with your own domain name. Your application will display the following landing page:

      Application Landing Page

      Now that you have tested the application, you can stop the container and remove the images. Use docker ps again to get your CONTAINER ID:
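
      • docker ps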

      Output

      CONTAINER ID   IMAGE       COMMAND         CREATED          STATUS          PORTS                  NAMES
      4133b72391da   node-demo   "node app.js"   17 seconds ago   Up 16 seconds   0.0.0.0:80->8080/tcp   node-demo

      Stop the container with docker stop. Be sure to replace the CONTAINER ID listed here with your own application CONTAINER ID:
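
      • docker stop 4133b72391da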

      You can now remove the stopped container and all of the images, including unused and dangling images, with docker system prune and the -a flag:
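
      • docker system prune -a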

      Type y when prompted in the output to confirm that you would like to remove the stopped container and images. Be advised that this will also remove your build cache.

      With your application image tested, you can move on to building the rest of your setup with Docker Compose.

      Step 2 — Defining the Web Server Configuration

      With our application Dockerfile in place, we can create a configuration file to run our Nginx container. We will start with a minimal configuration that will include our domain name, document root, proxy information, and a location block to direct Certbot's requests to the .well-known directory, where it will place a temporary file to validate that the DNS for our domain resolves to our server.

      First, create a directory in the current project directory for the configuration file:
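
      • mkdir nginx-conf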

      Open the file with nano or your favorite editor:

      • nano nginx-conf/nginx.conf

      Add the following server block to proxy user requests to your Node application container and to direct Certbot's requests to the .well-known directory. Be sure to replace example.com with your own domain name:

      ~/node_project/nginx-conf/nginx.conf

      server {
              listen 80;
              listen [::]:80;
      
              root /var/www/html;
              index index.html index.htm index.nginx-debian.html;
      
              server_name example.com www.example.com;
      
              location / {
                      proxy_pass http://nodejs:8080;
              }
      
              location ~ /.well-known/acme-challenge {
                      allow all;
                      root /var/www/html;
              }
      }
      

      This server block will allow us to start the Nginx container as a reverse proxy, which will pass requests to our Node application container. It will also allow us to use Certbot's webroot plugin to obtain certificates for our domain. This plugin depends on the HTTP-01 validation method, which uses an HTTP request to prove that Certbot can access resources from a server that responds to a given domain name.

      Once you have finished editing, save and close the file. To learn more about Nginx server and location block algorithms, please refer to this article on Understanding Nginx Server and Location Block Selection Algorithms.
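
      As a sketch of how this validation works, once the containers from the next steps are running you could place a test file in the webroot and request it over HTTP (the file name here is hypothetical):

      • docker-compose exec webserver sh -c 'mkdir -p /var/www/html/.well-known/acme-challenge && echo ok > /var/www/html/.well-known/acme-challenge/test'
      • curl http://example.com/.well-known/acme-challenge/test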

      With the web server configuration details in place, we can move on to creating our docker-compose.yml file, which will allow us to create our application services and the Certbot container we will use to obtain our certificates.

      Step 3 — Creating the Docker Compose File

      The docker-compose.yml file will define our services, including the Node application and web server. It will specify details like named volumes, which will be critical to sharing SSL credentials between containers, as well as network and port information. It will also allow us to specify specific commands to run when our containers are created. This file is the central resource that will define how our services will work together.

      Open the file in your current directory:
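
      • nano docker-compose.yml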

      First, define the application service:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
      

      The nodejs service definition includes the following:

      • build: This defines the configuration options, including the context and dockerfile, that will be applied when Compose builds the application image. If you wanted to use an existing image from a registry like Docker Hub, you could use the image instruction instead, with information about your username, repository, and image tag.
      • context: This defines the build context for the application image build. In this case, it's the current project directory.
      • dockerfile: This specifies the Dockerfile that Compose will use for the build — the Dockerfile you looked at in Step 1.
      • image, container_name: These apply names to the image and container.
      • restart: This defines the restart policy. The default is no, but we have set the container to restart unless it is stopped.

      Note that we are not including bind mounts with this service, since our setup is focused on deployment rather than development. For more information, please see the Docker documentation on bind mounts and volumes.

      To enable communication between the application and web server containers, we will also add a bridge network called app-network below the restart definition:

      ~/node_project/docker-compose.yml

      services:
        nodejs:
      ...
          networks:
            - app-network
      

      A user-defined bridge network like this enables communication between containers on the same Docker daemon host. This streamlines traffic and communication within your application, since it opens all ports between containers on the same bridge network, while exposing no ports to the outside world. Thus, you can be selective about opening only the ports you need to expose your frontend services.
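
      Once the stack is up, you can verify this behavior by inspecting the network that Compose creates. Note that Compose typically prefixes the network name with the project directory name, so the exact name may differ on your system:

      • docker network inspect node_project_app-network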

      Next, define the webserver service:

      ~/node_project/docker-compose.yml

      ...
       webserver:
          image: nginx:latest
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
          volumes:
            - web-root:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
          depends_on:
            - nodejs
          networks:
            - app-network
      

      Some of the settings we defined for the nodejs service remain the same, but we've also made the following changes:

      • image: This tells Compose to pull the latest Nginx image from Docker Hub.
      • ports: This exposes port 80 to enable the configuration options we've defined in our Nginx configuration.

      We have also specified the following named volumes and bind mounts:

      • web-root:/var/www/html: This will add our site's static assets, copied to a volume called web-root, to the /var/www/html directory on the container.
      • ./nginx-conf:/etc/nginx/conf.d: This will bind mount the Nginx configuration directory on the host to the relevant directory on the container, ensuring that any changes we make to files on the host will be reflected in the container.
      • certbot-etc:/etc/letsencrypt: This will mount the relevant Let's Encrypt certificates and keys for our domain to the appropriate directory on the container.
      • certbot-var:/var/lib/letsencrypt: This mounts Let's Encrypt's default working directory to the appropriate directory on the container.

      Next, add the configuration options for the certbot container. Be sure to replace the domain and email information with your own domain name and contact email:

      ~/node_project/docker-compose.yml

      ...
        certbot:
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - web-root:/var/www/html
          depends_on:
            - webserver
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com  -d www.example.com 
      

      This definition tells Compose to pull the certbot/certbot image from Docker Hub. It also uses named volumes to share resources with the Nginx container, including the domain certificates and key in certbot-etc, the Let's Encrypt working directory in certbot-var, and the application code in web-root.

      Again, we've used depends_on to specify that the certbot container should be started once the webserver service is running.

      We've also included a command option that specifies the command to run when the container is started. It includes the certonly subcommand with the following options:

      • --webroot: This tells Certbot to use the webroot plugin to place files in the webroot folder for authentication.
      • --webroot-path: This specifies the path of the webroot directory.
      • --email: Your preferred email for registration and recovery.
      • --agree-tos: This specifies that you agree to ACME's Subscriber Agreement.
      • --no-eff-email: This tells Certbot that you do not wish to share your email with the Electronic Frontier Foundation (EFF). Feel free to omit this if you would prefer.
      • --staging: This tells Certbot that you would like to use Let's Encrypt's staging environment to obtain test certificates. Using this option allows you to test your configuration options and avoid possible domain request limits. For more information about these limits, please see Let's Encrypt's rate limits documentation.
      • -d: This allows you to specify domain names you would like to apply to your request. In this case, we've included example.com and www.example.com. Be sure to replace these with your own domain preferences.

      As a final step, add the volume and network definitions. Be sure to replace the username here with your own non-root user:

      ~/node_project/docker-compose.yml

      ...
      volumes:
        certbot-etc:
        certbot-var:
        web-root:
          driver: local
          driver_opts:
            type: none
            device: /home/sammy/node_project/views/
            o: bind
      
      networks:
        app-network:
          driver: bridge
      

      Our named volumes include our Certbot certificate and working directory volumes, and the volume for our site's static assets, web-root. In most cases, the default driver for Docker volumes is the local driver, which on Linux accepts options similar to the mount command. Thanks to this, we are able to specify a list of driver options with driver_opts that mount the views directory on the host, which contains our application's static assets, to the volume at runtime. The directory contents can then be shared between containers. For more information about the contents of the views directory, please see Step 2 of How To Build a Node.js Application with Docker.
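
      To illustrate what these driver options do, here is a sketch of the equivalent standalone command, using a hypothetical volume name:

      • docker volume create --driver local --opt type=none --opt device=/home/sammy/node_project/views --opt o=bind web-root-demo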

      The docker-compose.yml file will look like this when finished:

      ~/node_project/docker-compose.yml

      version: '3'
      
      services:
        nodejs:
          build:
            context: .
            dockerfile: Dockerfile
          image: nodejs
          container_name: nodejs
          restart: unless-stopped
          networks:
            - app-network
      
        webserver:
          image: nginx:latest
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
          volumes:
            - web-root:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
          depends_on:
            - nodejs
          networks:
            - app-network
      
        certbot:
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - web-root:/var/www/html
          depends_on:
            - webserver
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --staging -d example.com  -d www.example.com 
      
      volumes:
        certbot-etc:
        certbot-var:
        web-root:
          driver: local
          driver_opts:
            type: none
            device: /home/sammy/node_project/views/
            o: bind
      
      networks:
        app-network:
          driver: bridge  
      

      With the service definitions in place, you are ready to start the containers and test your certificate requests.

      Step 4 — Obtaining SSL Certificates and Credentials

      We can start our containers with docker-compose up, which will create and run our containers and services in the order we have specified. If our domain requests are successful, we will see the correct exit status in our output and the right certificates mounted in the /etc/letsencrypt/live folder on the webserver container.

      Create the services with docker-compose up and the -d flag, which will run the nodejs and webserver containers in the background:
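
      • docker-compose up -d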

      You will see output confirming that your services have been created:

      Output

      Creating nodejs ... done
      Creating webserver ... done
      Creating certbot ... done

      Using docker-compose ps, check the status of your services:
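
      • docker-compose ps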

      If everything was successful, your nodejs and webserver services should be Up and the certbot container will have exited with a 0 status message:

      Output

      Name        Command                          State    Ports
      ------------------------------------------------------------------------
      certbot     certbot certonly --webroot ...   Exit 0
      nodejs      node app.js                      Up       8080/tcp
      webserver   nginx -g daemon off;             Up       0.0.0.0:80->80/tcp

      If you see anything other than Up in the State column for the nodejs and webserver services, or an exit status other than 0 for the certbot container, be sure to check the service logs with the docker-compose logs command:

      • docker-compose logs service_name

      You can now check that your credentials have been mounted to the webserver container with docker-compose exec:

      • docker-compose exec webserver ls -la /etc/letsencrypt/live

      If your request was successful, you will see output like this:

      Output

      total 16
      drwx------ 3 root root 4096 Dec 23 16:48 .
      drwxr-xr-x 9 root root 4096 Dec 23 16:48 ..
      -rw-r--r-- 1 root root  740 Dec 23 16:48 README
      drwxr-xr-x 2 root root 4096 Dec 23 16:48 example.com

      Now that you know your request will be successful, you can edit the certbot service definition to remove the --staging flag.

      Open docker-compose.yml:
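
      • nano docker-compose.yml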

      Find the section of the file with the certbot service definition, and replace the --staging flag in the command option with the --force-renewal flag, which will tell Certbot that you want to request a new certificate with the same domains as an existing certificate. The certbot service definition should now look like this:

      ~/node_project/docker-compose.yml

      ...
        certbot:
          image: certbot/certbot
          container_name: certbot
          volumes:
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - web-root:/var/www/html
          depends_on:
            - webserver
          command: certonly --webroot --webroot-path=/var/www/html --email sammy@example.com --agree-tos --no-eff-email --force-renewal -d example.com -d www.example.com
      ...
      

      You can now run docker-compose up to recreate the certbot container and its relevant volumes. We will also include the --no-deps option to tell Compose that it can skip starting the webserver service, since it is already running:

      • docker-compose up --force-recreate --no-deps certbot

      You will see output indicating that your certificate request was successful:

      Output

      certbot      | IMPORTANT NOTES:
      certbot      |  - Congratulations! Your certificate and chain have been saved at:
      certbot      |    /etc/letsencrypt/live/example.com/fullchain.pem
      certbot      |    Your key file has been saved at:
      certbot      |    /etc/letsencrypt/live/example.com/privkey.pem
      certbot      |    Your cert will expire on 2019-03-26. To obtain a new or tweaked
      certbot      |    version of this certificate in the future, simply run certbot
      certbot      |    again. To non-interactively renew *all* of your certificates, run
      certbot      |    "certbot renew"
      certbot      |  - Your account credentials have been saved in your Certbot
      certbot      |    configuration directory at /etc/letsencrypt. You should make a
      certbot      |    secure backup of this folder now. This configuration directory will
      certbot      |    also contain certificates and private keys obtained by Certbot so
      certbot      |    making regular backups of this folder is ideal.
      certbot      |  - If you like Certbot, please consider supporting our work by:
      certbot      |
      certbot      |    Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
      certbot      |    Donating to EFF:                    https://eff.org/donate-le
      certbot      |
      certbot exited with code 0

      With your certificates in place, you can move on to modifying your Nginx configuration to include SSL.

      Step 5 — Modifying the Web Server Configuration and Service Definition

      Enabling SSL in our Nginx configuration will involve adding an HTTP redirect to HTTPS and specifying our SSL certificate and key locations. It will also involve specifying our Diffie-Hellman group, which we will use for Perfect Forward Secrecy.

      Since you are going to recreate the webserver service to include these additions, you can stop it now:

      • docker-compose stop webserver

      Next, create a directory in your current project directory for your Diffie-Hellman key:
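
      • mkdir dhparam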

      Generate your key with the openssl command:

      • sudo openssl dhparam -out /home/sammy/node_project/dhparam/dhparam-2048.pem 2048

      It will take a few moments to generate the key.

      To add the relevant Diffie-Hellman and SSL information to your Nginx configuration, first remove the Nginx configuration file you created earlier:
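
      • rm nginx-conf/nginx.conf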

      Open another version of the file:

      • nano nginx-conf/nginx.conf

      Add the following code to the file to redirect HTTP to HTTPS and to add SSL credentials, protocols, and security headers. Remember to replace example.com with your own domain:

      ~/node_project/nginx-conf/nginx.conf

      
      server {
              listen 80;
              listen [::]:80;
              server_name example.com www.example.com;
      
              location ~ /.well-known/acme-challenge {
                allow all;
                root /var/www/html;
              }
      
              location / {
                      rewrite ^ https://$host$request_uri? permanent;
              }
      }
      
      server {
              listen 443 ssl http2;
              listen [::]:443 ssl http2;
              server_name example.com www.example.com;
      
              server_tokens off;
      
              ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
              ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
      
              ssl_buffer_size 8k;
      
              ssl_dhparam /etc/ssl/certs/dhparam-2048.pem;
      
              ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
              ssl_prefer_server_ciphers on;
      
              ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
      
              ssl_ecdh_curve secp384r1;
              ssl_session_tickets off;
      
              ssl_stapling on;
              ssl_stapling_verify on;
              resolver 8.8.8.8;
      
              location / {
                      try_files $uri @nodejs;
              }
      
              location @nodejs {
                      proxy_pass http://nodejs:8080;
                      add_header X-Frame-Options "SAMEORIGIN" always;
                      add_header X-XSS-Protection "1; mode=block" always;
                      add_header X-Content-Type-Options "nosniff" always;
                      add_header Referrer-Policy "no-referrer-when-downgrade" always;
                      add_header Content-Security-Policy "default-src * data: 'unsafe-eval' 'unsafe-inline'" always;
                      # add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;
                      # enable strict transport security only if you understand the implications
              }
      
              root /var/www/html;
              index index.html index.htm index.nginx-debian.html;
      }
      

      The HTTP server block specifies the webroot for Certbot renewal requests to the .well-known/acme-challenge directory. It also includes a rewrite directive that directs HTTP requests to the root directory to HTTPS.

      The HTTPS server block enables ssl and http2. To read more about how HTTP/2 iterates on HTTP protocols and the benefits it can have for website performance, please see the introduction to How To Set Up Nginx with HTTP/2 Support on Ubuntu 18.04. This block also includes a series of options to ensure that you are using the most up-to-date SSL protocols and ciphers and that OCSP stapling is turned on. OCSP stapling allows you to offer a time-stamped response from your certificate authority during the initial TLS handshake, which can speed up the authentication process.

      The block also specifies your SSL and Diffie-Hellman credentials and key locations.

      Finally, we've moved the proxy pass information to this block, including a location block with a try_files directive, pointing requests to our aliased Node.js application container, and a location block for that alias, which includes security headers that will enable us to get A ratings on things like the SSL Labs and Security Headers server test sites. These headers include X-Frame-Options, X-Content-Type-Options, Referrer Policy, Content-Security-Policy, and X-XSS-Protection. The HTTP Strict Transport Security (HSTS) header is commented out — enable this only if you understand the implications and have assessed its "preload" functionality.

      Once you have finished editing, save and close the file.

      Before recreating the webserver service, you will need to add a few things to the service definition in your docker-compose.yml file, including relevant port information for HTTPS and a Diffie-Hellman volume definition.

      Open the file:
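
      • nano docker-compose.yml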

      In the webserver service definition, add the following port mapping and the dhparam named volume:

      ~/node_project/docker-compose.yml

      ...
       webserver:
          image: nginx:latest
          container_name: webserver
          restart: unless-stopped
          ports:
            - "80:80"
            - "443:443"
          volumes:
            - web-root:/var/www/html
            - ./nginx-conf:/etc/nginx/conf.d
            - certbot-etc:/etc/letsencrypt
            - certbot-var:/var/lib/letsencrypt
            - dhparam:/etc/ssl/certs
          depends_on:
            - nodejs
          networks:
            - app-network
      

      Next, add the dhparam volume to your volumes definitions:

      ~/node_project/docker-compose.yml

      ...
      volumes:
        ...
        dhparam:
          driver: local
          driver_opts:
            type: none
            device: /home/sammy/node_project/dhparam/
            o: bind
      

      Similarly to the web-root volume, the dhparam volume will mount the Diffie-Hellman key stored on the host to the webserver container.

      Save and close the file when you are finished editing.

      Recreate the webserver service:

      • docker-compose up -d --force-recreate --no-deps webserver

      Check your services with docker-compose ps:
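
      • docker-compose ps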

      You should see output indicating that your nodejs and webserver services are running:

      Output

      Name        Command                          State    Ports
      ----------------------------------------------------------------------------------------------
      certbot     certbot certonly --webroot ...   Exit 0
      nodejs      node app.js                      Up       8080/tcp
      webserver   nginx -g daemon off;             Up       0.0.0.0:443->443/tcp, 0.0.0.0:80->80/tcp

      Finally, you can visit your domain to ensure that everything is working as expected. Navigate your browser to https://example.com, making sure to substitute example.com with your own domain name. You will see the following landing page:

      Application Landing Page

      You should also see the lock icon in your browser's security indicator. If you would like, you can navigate to the SSL Labs Server Test landing page or the Security Headers server test landing page. The configuration options we've included should earn your site an A rating on both.
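
      You can also inspect the response headers from the command line to confirm that the security headers are being set:

      • curl -I https://example.com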

      Step 6 — Renewing Certificates

      Let's Encrypt certificates are valid for 90 days, so you will want to set up an automated renewal process to ensure that they do not lapse. One way to do this is to create a job with the cron scheduling utility. In this case, we will schedule a cron job using a script that will renew our certificates and reload our Nginx configuration.

      Open a script called ssl_renew.sh in your project directory:
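
      • nano ssl_renew.sh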

      Add the following code to the script to renew your certificates and reload your web server configuration:

      ~/node_project/ssl_renew.sh

      #!/bin/bash
      
      /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml run certbot renew --dry-run \
      && /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml kill -s SIGHUP webserver
      

      In addition to specifying the location of our docker-compose binary, we also specify the location of our docker-compose.yml file in order to run docker-compose commands. In this case, we are using docker-compose run to start a certbot container and to override the command provided in our service definition with another: the renew subcommand, which will renew certificates that are close to expiring. We've included the --dry-run option here to test our script.

      The script then uses docker-compose kill to send a SIGHUP signal to the webserver container to reload the Nginx configuration. For more information on using this process to reload your Nginx configuration, please see this Docker blog post on deploying the official Nginx image with Docker.
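
      You can also test the reload step on its own. Once the webserver container is running, sending the signal manually will reload Nginx without recreating the container:

      • docker-compose -f /home/sammy/node_project/docker-compose.yml kill -s SIGHUP webserver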

      Close the file when you are finished editing. Make it executable:
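
      • chmod +x ssl_renew.sh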

      Next, open your root crontab file to run the renewal script at a specified interval:
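
      • sudo crontab -e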

      If this is your first time editing this file, you will be asked to choose an editor:

      crontab

      no crontab for root - using an empty one
      Select an editor.  To change later, run 'select-editor'.
        1. /bin/ed
        2. /bin/nano        <---- easiest
        3. /usr/bin/vim.basic
        4. /usr/bin/vim.tiny
      Choose 1-4 [2]: 
      ...
      

      At the bottom of the file, add the following line:

      crontab

      ...
      */5 * * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1
      

      This will set the job interval to every five minutes, so you can test whether or not your renewal request has worked as intended. We have also created a log file, cron.log, to record relevant output from the job.

      After five minutes, check cron.log to see whether or not the renewal request has succeeded:

      • tail -f /var/log/cron.log

      You should see output confirming a successful renewal:

      Output

      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      **          (The test certificates below have not been saved.)

      Congratulations, all renewals succeeded. The following certs have been renewed:
        /etc/letsencrypt/live/example.com/fullchain.pem (success)
      ** DRY RUN: simulating 'certbot renew' close to cert expiry
      **          (The test certificates above have not been saved.)
      - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
      Killing webserver ... done

      You can now modify the crontab file to set a daily interval. To run the script every day at noon, for example, you would modify the last line of the file to look like this:

      crontab

      ...
      0 12 * * * /home/sammy/node_project/ssl_renew.sh >> /var/log/cron.log 2>&1
      

      You will also want to remove the --dry-run option from your ssl_renew.sh script:

      ~/node_project/ssl_renew.sh

      #!/bin/bash
      
      /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml run certbot renew \
      && /usr/local/bin/docker-compose -f /home/sammy/node_project/docker-compose.yml kill -s SIGHUP webserver
      

      Your cron job will ensure that your Let's Encrypt certificates don't lapse by renewing them when they are eligible.

      Conclusion

      You have used containers to set up and run a Node application with an Nginx reverse proxy. You have also secured SSL certificates for your application's domain and set up a cron job to renew these certificates when necessary.

      If you are interested in learning more about Let's Encrypt plugins, please see our articles on using the Nginx plugin or the standalone plugin.

      You can also learn more about Docker Compose; the Compose documentation is a great resource for exploring multi-container applications further.


