
      How To Build and Deploy a Node.js Application To DigitalOcean Kubernetes Using Semaphore Continuous Integration and Delivery


      The author selected the Open Internet / Free Speech fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes allows users to create resilient and scalable services with a single command. Like anything that sounds too good to be true, it has a catch: you must first prepare a suitable Docker image and thoroughly test it.

      Continuous Integration (CI) is the practice of testing the application on each update. Doing this manually is tedious and error-prone, but a CI platform runs the tests for you, catches errors early, and locates the point at which the errors were introduced. Release and deployment procedures are often complicated, time-consuming, and require a reliable build environment. With Continuous Delivery (CD) you can build and deploy your application on each update without human intervention.

      To automate the whole process, you’ll use Semaphore, a Continuous Integration and Delivery (CI/CD) platform.

      In this tutorial, you’ll build an address book API service with Node.js. The API exposes a simple RESTful API interface to create, delete, and find people in the database. You’ll use Git to push the code to GitHub. Then you’ll use Semaphore to test the application, build a Docker image, and deploy it to a DigitalOcean Kubernetes cluster. For the database, you’ll create a PostgreSQL cluster using DigitalOcean Managed Databases.

      Prerequisites

      Before reading on, ensure you have the following:

      • A DigitalOcean account and a Personal Access Token. Follow Create a Personal Access Token to set one up for your account.
      • A Docker Hub account.
      • A GitHub account.
      • A Semaphore account; you can sign up with your GitHub account.
      • A new GitHub repository called addressbook for the project. When creating the repository, select the Initialize this repository with a README checkbox and select Node in the Add .gitignore menu. Follow GitHub’s Create a Repo help page for more details.
      • Git installed on your local machine and set up to work with your GitHub account. If you are unfamiliar or need a refresher, consider reading the How to use Git reference guide.
      • curl installed on your local machine.
      • Node.js installed on your local machine. In this tutorial, you’ll use Node.js version 10.16.0.

      Step 1 — Creating the Database and the Kubernetes Cluster

      Start by provisioning the services that will power the application: the DigitalOcean Database Cluster and the DigitalOcean Kubernetes Cluster.

      Log in to your DigitalOcean account and create a project. A project lets you organize all the resources that make up the application. Call the project addressbook.

      Next, create a PostgreSQL cluster. The PostgreSQL database service will hold the application’s data. You can pick the latest version available. It should take a few minutes before the service is ready.

      Once the PostgreSQL service is ready, create a database and a user. Set the database name to addressbook_db and set the username to addressbook_user. Take note of the password that’s generated for your new user. Databases are PostgreSQL’s way of organizing data. Usually, each application has its own database, although there are no hard rules about this. The application will use the username and password to get access to the database so it can save and retrieve its data.

      Finally, create a Kubernetes Cluster. Choose the same region in which the database is running. Name the cluster addressbook-server and set the number of nodes to 3.

      While the nodes are provisioning, you can start building your application.

      Step 2 — Writing the Application

      Let’s build the address book application you’re going to deploy. To start, clone the GitHub repository you created in the prerequisites. This gives you a local copy of the .gitignore file GitHub created for you and lets you commit your application code without setting up a new repository manually. Open your browser and go to your new GitHub repository. Click on the Clone or download button and copy the provided URL. Use Git to clone the empty repository to your machine:

      • git clone https://github.com/your_github_username/addressbook

      Enter the project directory:

      • cd addressbook

      With the repository cloned, you can start writing the app. You’ll build two components: a module that interacts with the database, and a module that provides the HTTP service. The database module will know how to save and retrieve persons from the address book database, and the HTTP module will receive requests and respond accordingly.

      While not strictly mandatory, it’s good practice to test your code while you write it, so you’ll also create a testing module. This is the planned layout for the application:

      • database.js: database module. It handles database operations.
      • app.js: the end user module and the main application. It provides an HTTP service for the users to connect to.
      • database.test.js: tests for the database module.

      In addition, you’ll want a package.json file for your project, which describes the project and its required dependencies. You can either create it manually with your editor, or interactively using npm. Run the npm init command to create the file interactively:

      • npm init

      The command will ask for some information to get started. Fill in the values as shown in the example; where no value is listed, leave the answer blank to accept the default shown in parentheses:

      npm output

      package name: (addressbook) addressbook
      version: (1.0.0) 1.0.0
      description: Addressbook API and database
      entry point: (index.js) app.js
      test command:
      git repository: URL for your GitHub repository
      keywords:
      author: Sammy the Shark <sammy@example.com>
      license: (ISC)
      About to write to package.json:

      {
        "name": "addressbook",
        "version": "1.0.0",
        "description": "Addressbook API and database",
        "main": "app.js",
        "scripts": {
          "test": "echo \"Error: no test specified\" && exit 1"
        },
        "author": "",
        "license": "ISC"
      }

      Is this OK? (yes) yes

      Now you can start writing the code. The database is at the core of the service you’re developing. It’s essential to have a well-designed database model before writing any other components. Consequently, it makes sense to start with the database code.

      You don’t have to code all the bits of the application; Node.js has a large library of reusable modules. For instance, you don’t have to write any SQL queries if you have the Sequelize ORM module in the project. This module provides an interface that handles databases as JavaScript objects and methods. It can also create tables in your database. Sequelize needs the pg module to work with PostgreSQL.
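
      For a sense of how this looks, here is a minimal sketch (for illustration only, not a file you need to create) of how a Sequelize call replaces a handwritten SQL query; it assumes the Person model you'll define later in this step:

      // query-sketch.js — illustration only, not part of the tutorial's code
      const { Person } = require('./database');

      // Instead of writing: SELECT * FROM "People" WHERE "lastName" = 'Shark';
      // you call a model method and get plain JavaScript objects back:
      Person.findAll({ where: { lastName: 'Shark' } })
          .then(people => console.log(people.map(p => p.firstName)));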

      Install modules using the npm install command with the --save option, which tells npm to save the module in package.json. Execute this command to install both sequelize and pg:

      • npm install --save sequelize pg

      Create a new JavaScript file to hold the database code:

      • nano database.js

      Import the sequelize module by adding this line to the file:

      database.js

      const Sequelize = require('sequelize');
      
      . . .
      

      Then, below that line, initialize a sequelize object with the database connection parameters, which you’ll retrieve from the system environment. This keeps the credentials out of your code so you don’t accidentally share them when you push your code to GitHub. You can use process.env to access environment variables, and JavaScript’s || operator to set defaults for undefined variables:

      database.js

      . . .
      
      const sequelize = new Sequelize(process.env.DB_SCHEMA || 'postgres',
                                      process.env.DB_USER || 'postgres',
                                      process.env.DB_PASSWORD || '',
                                      {
                                          host: process.env.DB_HOST || 'localhost',
                                          port: process.env.DB_PORT || 5432,
                                          dialect: 'postgres',
                                          dialectOptions: {
                                              ssl: process.env.DB_SSL == "true"
                                          }
                                      });
      
      . . .
      

      Now define the Person model. To keep the example from getting too complex, you’ll only create two fields: firstName and lastName, both storing string values. Add the following code to define the model:

      database.js

      . . .
      
      const Person = sequelize.define('Person', {
          firstName: {
              type: Sequelize.STRING,
              allowNull: false
          },
          lastName: {
              type: Sequelize.STRING,
              allowNull: true
          },
      });
      
      . . .
      

      This defines the two fields, making firstName mandatory with allowNull: false. Sequelize’s model definition documentation shows the available data types and options.
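
      Sequelize supports more per-field options than this tutorial needs. As an illustration only (a hypothetical model, not part of the address book code), a definition with a uniqueness constraint and a default value might look like this:

      // Illustration only — a hypothetical model, not used in this tutorial
      const Pet = sequelize.define('Pet', {
          name: {
              type: Sequelize.STRING,
              allowNull: false,
              unique: true            // adds a UNIQUE constraint to the column
          },
          legs: {
              type: Sequelize.INTEGER,
              defaultValue: 4         // used when no value is supplied
          }
      });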

      Finally, export the sequelize object and the Person model so other modules can use them:

      database.js

      . . .
      
      module.exports = {
          sequelize: sequelize,
          Person: Person
      };
      

      It’s handy to have a table-creation script in a separate file that you can call at any time during development. These types of files are called migrations. Create a new file to hold this code:

      • nano migrate.js

      Add these lines to the file to import the database model you defined, and call the sync() function to initialize the database, which creates the table for your model:

      migrate.js

      var db = require('./database.js');
      db.sequelize.sync();
      

      The application is looking for database connection information in system environment variables. Create a file called .env to hold those values, which you will load into the environment during development:

      • nano .env

      Add the following variable declarations to the file. Ensure that you set DB_HOST, DB_PORT, and DB_PASSWORD to those associated with your DigitalOcean PostgreSQL cluster:

      .env

      export DB_SCHEMA=addressbook_db
      export DB_USER=addressbook_user
      export DB_PASSWORD=your_db_user_password
      export DB_HOST=your_db_cluster_host
      export DB_PORT=your_db_cluster_port
      export DB_SSL=true
      export PORT=3000
      

      Save the file.

      Warning: never check environment files into source control. They usually have sensitive information.

      Since you defined a default .gitignore file when you created the repository, this file is already ignored.

      You are ready to initialize the database. Import the environment file and run migrate.js:

      • source ./.env
      • node migrate.js

      This creates the database table:

      Output

      Executing (default): CREATE TABLE IF NOT EXISTS "People" ("id" SERIAL , "firstName" VARCHAR(255) NOT NULL, "lastName" VARCHAR(255), "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, PRIMARY KEY ("id"));
      Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'People' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;

      The output shows two commands. The first one creates the People table as per your definition. The second command checks that the table was indeed created by looking it up in the PostgreSQL catalog.

      It’s good practice to create tests for your code. With tests, you can validate the code’s behavior. You can write a check for each function, method, or any other part of your system and verify that it works the way you’d expect, without having to test things manually.

      The jest testing framework is a great fit for writing tests against Node.js applications. Jest scans the files in the project for test files and executes them one at a time. Install Jest with the --save-dev option, which tells npm that the module is not required to run the program, but it is a dependency for developing the application:

      • npm install --save-dev jest

      You’ll write tests to verify that you can insert, read, and delete records from your database. These tests will verify that your database connection and permissions are configured properly, and will also provide some tests you can use in your CI/CD pipeline later.

      Create the database.test.js file:

      • nano database.test.js

      Add the following content. Start by importing the database code:

      database.test.js

      const db = require('./database');
      
      . . .
      

      To ensure the database is ready to use, call sync() inside the beforeAll function:

      database.test.js

      . . .
      
      beforeAll(async () => {
          await db.sequelize.sync();
      });
      
      . . .
      

      The first test creates a person record in the database. The sequelize library executes all queries asynchronously, which means it doesn’t wait for the results of the query. To make the test wait for results so you can verify them, you must use the async and await keywords. This test calls the create() method to insert a new row in the database. Use expect to compare the person.id column with 1. The test will fail if you get a different value:

      database.test.js

      . . .
      
      test('create person', async () => {
          expect.assertions(1);
          const person = await db.Person.create({
              id: 1,
              firstName: 'Sammy',
              lastName: 'Davis Jr.'
          });
          expect(person.id).toEqual(1);
      });
      
      . . .
      

      In the next test, use the findByPk() method to retrieve the row with id=1. Then, validate the firstName and lastName values. Once again, use async and await:

      database.test.js

      . . .
      
      test('get person', async () => {
          expect.assertions(2);
          const person = await db.Person.findByPk(1);
          expect(person.firstName).toEqual('Sammy');
          expect(person.lastName).toEqual('Davis Jr.');
      });
      
      . . .
      

      Finally, test removing a person from the database. The destroy() method deletes the person with id=1. To ensure that it worked, try retrieving the person a second time and checking that the returned value is null:

      database.test.js

      . . .
      
      test('delete person', async () => {
          expect.assertions(1);
          await db.Person.destroy({
              where: {
                  id: 1
              }
          });
          const person = await db.Person.findByPk(1);
          expect(person).toBeNull();
      });
      
      . . .
      

      To finish the test file, add this code to close the connection to the database with close() once all tests have finished:

      database.test.js

      . . .
      
      afterAll(async () => {
          await db.sequelize.close();
      });
      

      Save the file.

      The jest command runs the test suite for your program, but you can also store commands in package.json. Open this file in your editor:

      • nano package.json

      Locate the scripts keyword and replace the existing test line, which was just a placeholder, so that the test command runs jest:

      package.json

      . . .
      
        "scripts": {
          "test": "jest"
        },
      
      . . .
      

      Now you can call npm run test to invoke the test suite. This may be a longer command, but if you need to modify the jest command later, external services won’t have to change; they can continue calling npm run test.
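
      For example, if you later decided to collect coverage data, you could change just the script (a hypothetical variation, not something you need to do now) and anything that calls npm run test would pick it up automatically:

        "scripts": {
          "test": "jest --coverage"
        },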

      Run the tests:

      • npm run test

      Then, check the results:

      Output

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): CREATE TABLE IF NOT EXISTS "People" ("id" SERIAL , "firstName" VARCHAR(255) NOT NULL, "lastName" VARCHAR(255), "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, PRIMARY KEY ("id"));

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'People' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): INSERT INTO "People" ("id","firstName","lastName","createdAt","updatedAt") VALUES ($1,$2,$3,$4,$5) RETURNING *;

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT "id", "firstName", "lastName", "createdAt", "updatedAt" FROM "People" AS "Person" WHERE "Person"."id" = 1;

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): DELETE FROM "People" WHERE "id" = 1

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT "id", "firstName", "lastName", "createdAt", "updatedAt" FROM "People" AS "Person" WHERE "Person"."id" = 1;

      PASS ./database.test.js
        ✓ create person (344ms)
        ✓ get person (173ms)
        ✓ delete person (323ms)

      Test Suites: 1 passed, 1 total
      Tests:       3 passed, 3 total
      Snapshots:   0 total
      Time:        5.315s
      Ran all test suites.

      With the database code tested, you can build the API service to manage the people in the address book.

      To serve HTTP requests, you’ll use the Express web framework. Install Express and save it as a dependency using npm install:

      • npm install --save express

      You’ll also need the body-parser module, which you’ll use to access the HTTP request body. Install this as a dependency as well:

      • npm install --save body-parser

      Create the main application file app.js:

      • nano app.js

      Import the express, body-parser, and database modules. Then create an instance of the express module called app to control and configure the service. You use app.use() to add features such as middleware. Use this to add the body-parser module so the application can read URL-encoded strings:

      app.js

      var express = require('express');
      var bodyParser = require('body-parser');
      var db = require('./database');
      var app = express();
      app.use(bodyParser.urlencoded({ extended: true }));
      
      . . .
      

      Next, add routes to the application. Routes link unique URLs to actions in the application, much like buttons on a website trigger actions when clicked. Each route will serve a specific path and support a different operation.

      The first route you’ll define handles GET requests for the /person/$ID path, which will display the database record for the person with the specified ID. Express automatically sets the value of the requested $ID in the req.params.id variable.

      The application must reply with the person data encoded as a JSON string. As you did in the database tests, use the findByPk() method to retrieve the person by id and reply to the request with HTTP status 200 (OK) and send the person record as JSON. Add the following code:

      app.js

      . . .
      
      app.get("/person/:id", function(req, res) {
          db.Person.findByPk(req.params.id)
              .then( person => {
                  res.status(200).send(JSON.stringify(person));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      Errors cause the code in catch() to execute. For instance, if the database is down, the connection will fail and this block runs instead, setting the HTTP status to 500 (Internal Server Error) and sending the error message back to the user.

      Add another route to create a person in the database. This route will handle PUT requests and access the person’s data from the req.body. Use the create() method to insert a row in the database:

      app.js

      . . .
      
      app.put("/person", function(req, res) {
          db.Person.create({
              firstName: req.body.firstName,
              lastName: req.body.lastName,
              id: req.body.id
          })
              .then( person => {
                  res.status(200).send(JSON.stringify(person));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      Add another route to handle DELETE requests, which will remove records from the address book. First, use the ID to locate the record and then use the destroy method to remove it:

      app.js

      . . .
      
      app.delete("/person/:id", function(req, res) {
          db.Person.destroy({
              where: {
                  id: req.params.id
              }
          })
              .then( () => {
                  res.status(200).send();
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      And for convenience, add a route that retrieves all people in the database using the /all path:

      app.js

      . . .
      
      app.get("/all", function(req, res) {
          db.Person.findAll()
              .then( persons => {
                  res.status(200).send(JSON.stringify(persons));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      One last route remains. If the request did not match any of the previous routes, send status code 404 (Not Found):

      app.js

      . . .
      
      app.use(function(req, res) {
          res.status(404).send("404 - Not Found");
      });
      
      . . .
      

      Finally, add the listen() method, which starts up the service. If the environment variable PORT is defined, then the service listens on that port; otherwise, it defaults to port 3000:

      app.js

      . . .
      
      var server = app.listen(process.env.PORT || 3000, function() {
          console.log("app is running on port", server.address().port);
      });
      

      As you’ve learned, the package.json file lets you define various commands to run tests, start your apps, and perform other tasks, which often lets you run common commands with much less typing. Add a new command in package.json to start the application. Edit the file:

      • nano package.json

      Add the start command, so it looks like this:

      package.json

      . . .
      
        "scripts": {
          "test": "jest",
          "start": "node app.js"
        },
      
      . . .
      

      Don’t forget to add a comma to the previous line, as the scripts section needs its entries separated by commas.

      Save the file and start the application for the first time. First, load the environment file with source; this imports the variables into the session and makes them available to the application. Then, start the application with npm run start:

      • source ./.env
      • npm run start

      The app starts on port 3000:

      Output

      app is running on port 3000

      Open a browser and navigate to http://localhost:3000/all. You’ll see a page showing [].

      Switch back to your terminal and press CTRL-C to stop the application.

      Now is an excellent time to add code quality tests. Code quality tools, also known as linters, scan the project for issues in the code. Bad coding practices like leaving unused variables, not ending statements with a semicolon, or missing curly braces can cause bugs that are difficult to find.
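
      As a quick illustration (not code to add to the project), here is the kind of pattern a linter can flag, depending on its configuration:

      // Illustration only: patterns a linter can warn about
      function add(a, b) {
          var unused = 42;       // declared but never used
          if (a > 0)
              return a + b       // missing semicolon and missing curly braces
          return b;
      }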

      Install the jshint tool, a JavaScript linter, as a development dependency:

      • npm install --save-dev jshint

      Over the years, JavaScript has received a steady stream of updates, features, and syntax changes. The language has been standardized by ECMA International under the name “ECMAScript”. About once a year, ECMA releases a new version of ECMAScript with new features.

      By default, jshint assumes that your code is compatible with ES6 (ECMAScript Version 6), and will throw an error if it finds any keywords not supported in that version. You’ll want to find the version that is compatible with your code. If you look at the feature table for all the recent versions, you’ll find that the async/await keywords were not introduced until ES8. You used both keywords in the database test code, so that sets the minimum compatible version to ES8.

      To tell jshint the version you’re using, create a file called .jshintrc:

      • nano .jshintrc

      In the file, specify esversion. The .jshintrc file uses JSON, so create a new JSON object in the file:

      .jshintrc

      { "esversion": 8 }
      

      Save the file and exit the editor.

      Add a command to run jshint. Edit package.json:

      • nano package.json

      Add a lint command to your project in the scripts section of package.json. The command calls the lint tool against all the JavaScript files you created so far:

      package.json

      . . .
      
        "scripts": {
          "test": "jest",
          "start": "node app.js",
          "lint": "jshint app.js database*.js migrate.js"
        },
      
      . . .
      

      Now you can run the linter to find any issues:

      • npm run lint

      There should not be any error messages:

      Output

      > jshint app.js database*.js migrate.js

      If there are any errors, jshint will show the line that has the problem.

      You’ve completed the project and ensured it works. Add the files to the repository, commit, and push the changes:

      • git add *.js
      • git add package*.json
      • git add .jshintrc
      • git commit -m 'initial commit'
      • git push origin master

      Now you can configure Semaphore to test, build, and deploy the application, starting by configuring Semaphore with your DigitalOcean Personal Access Token and database credentials.

      Step 3 — Creating Secrets in Semaphore

      There is some information that doesn’t belong in a GitHub repository; passwords and API tokens are good examples. So far, you’ve stored this sensitive data in a separate file and loaded it into your environment. When using Semaphore, you can use Secrets to store sensitive data.

      There are three kinds of secrets in the project:

      • Docker Hub: the username and password of your Docker Hub account.
      • DigitalOcean Personal Access Token: to deploy the application to your Kubernetes cluster.
      • Environment Variables: for database username and password connection parameters.

      To create the first secret, open your browser and log in to the Semaphore website. On the left navigation menu, click Secrets under the CONFIGURATION heading. Click the Create New Secret button.

      For Name of the Secret, enter dockerhub. Then under Environment Variables, create two environment variables:

      • DOCKER_USERNAME: your DockerHub username.
      • DOCKER_PASSWORD: your DockerHub password.

      Docker Hub Secret

      Click Save Changes.

      Create a second secret for your DigitalOcean Personal Access Token. Once again, click on Secrets on the left navigation menu, then on Create New Secret. Call this secret do-access-token and create an environment variable called DO_ACCESS_TOKEN with the value set to your Personal Access Token:

      DigitalOcean Token Secret

      Save the secret.

      For the next secret, instead of setting environment variables directly, you’ll upload the .env file from the project’s root.

      Create a new secret called env-production. Under the Files section, press the Upload file link to locate and upload your .env file, and tell Semaphore to place it at /home/semaphore/env-production.

      Environment Secret

      Note: Because the file is hidden, you may have trouble finding it on your computer. There is usually a menu item or a key combination to view hidden files, such as CTRL+H. If all else fails, you can try copying the file with a non-hidden name:

      Then upload the file and rename it back afterward.

      The environment variables are all configured. Now you can begin the Continuous Integration setup.

      Step 4 — Adding your Project to Semaphore

      In this step you will add your project to Semaphore and start the Continuous Integration (CI) pipeline.

      First, link your GitHub repository with Semaphore:

      1. Log in to your Semaphore account.
      2. Click the + icon next to PROJECTS.
      3. Click the Add Repository button next to your repository.

      Add Repository to Semaphore

      Now that Semaphore is connected, it will pick up any changes in the repository automatically.

      You are now ready to create the Continuous Integration pipeline for the application. A pipeline defines the path your code must travel to get built, tested, and deployed. The pipeline is automatically run each time there is a change in the GitHub repository.

      First, you should ensure that Semaphore uses the same version of Node.js you’ve been using during development. You can check which version is running on your machine:

      • node -v

      Output

      v10.16.0

      You can tell Semaphore which version of Node.js to use by creating a file called .nvmrc in your repository. Internally, Semaphore uses node version manager to switch between Node.js versions. Create the .nvmrc file and set the version to 10.16.0:

      • echo '10.16.0' > .nvmrc

      Semaphore pipelines go in the .semaphore directory. Create the directory:

      • mkdir .semaphore

      Create a new pipeline file. The initial pipeline is always called semaphore.yml. In this file, you’ll define all the steps required to build and test the application.

      • nano .semaphore/semaphore.yml

      Note: You are creating a file in the YAML format. You must preserve the leading spaces as shown in the tutorial.

      The first line must set the Semaphore file version; the current stable is v1.0. Also, the pipeline needs a name. Add these lines to your file:

      .semaphore/semaphore.yml

      version: v1.0
      name: Addressbook
      
      . . .
      

      Semaphore automatically provisions virtual machines to run the tasks. There are various machine types to choose from. For the integration jobs, use the e1-standard-2 type (2 CPUs, 4 GB RAM) with an Ubuntu 18.04 OS image. Add these lines to the file:

      .semaphore/semaphore.yml

      . . .
      
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      Semaphore uses blocks to organize the tasks. Each block can have one or more jobs. All jobs in a block run in parallel, each one in an isolated machine. Semaphore waits for all jobs in a block to pass before starting the next one.
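
      Schematically, a block with two parallel jobs looks like the following generic sketch (this is not part of the pipeline you're building; the real blocks come next):

      blocks:
        - name: Example block
          task:
            jobs:
              - name: Job A            # runs on its own machine
                commands:
                  - echo "running job A"
              - name: Job B            # runs in parallel with Job A
                commands:
                  - echo "running job B"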

      Start by defining the first block, which installs all the JavaScript dependencies to test and run the application:

      .semaphore/semaphore.yml

      . . .
      
      blocks:
        - name: Install dependencies
          task:
      
      . . .
      

      You can define environment variables that are common for all jobs, like setting NODE_ENV to test, so Node.js knows this is a test environment. Add this code after task:

      .semaphore/semaphore.yml

      . . .
          task:
            env_vars:
              - name: NODE_ENV
                value: test
      
      . . .
      

      Commands in the prologue section are executed before each job in the block. It’s a convenient place to define setup tasks. You can use checkout to clone the GitHub repository. Then, nvm use activates the appropriate Node.js version you specified in .nvmrc. Add the prologue section:

      .semaphore/semaphore.yml

          task:
      . . .
      
            prologue:
              commands:
                - checkout
                - nvm use
      
      . . .
      

      Next, add this code to install the project’s dependencies. To speed up jobs, Semaphore provides the cache tool. You can run cache store to save the node_modules directory in Semaphore’s cache; cache automatically figures out which files and directories should be stored. The second time the job is executed, cache restore restores the directory.

      .semaphore/semaphore.yml

      . . .
      
            jobs:
              - name: npm install and cache
                commands:
                  - cache restore
                  - npm install
                  - cache store 
      
      . . .
      

      Add another block that runs two jobs: one to run the lint check, and another to run the application’s test suite.

      .semaphore/semaphore.yml

      . . .
      
        - name: Tests
          task:
            env_vars:
              - name: NODE_ENV
                value: test
            prologue:
              commands:
                - checkout
                - nvm use
                - cache restore 
      
      . . .
      

      The prologue repeats the same commands as in the previous block and restores node_modules from the cache. Since this block runs tests, you set the NODE_ENV environment variable to test.

      Now add the jobs. The first job performs the code quality check with jshint:

      .semaphore/semaphore.yml

      . . .
      
            jobs:
              - name: Static test
                commands:
                  - npm run lint
      
      . . .
      

      The next job executes the unit tests. You’ll need a database to run them, as you don’t want to use your production database. Semaphore’s sem-service can start a local PostgreSQL database in the test environment that is completely isolated. The database is destroyed when the job ends. Start this service and run the tests:

      .semaphore/semaphore.yml

      . . .
      
              - name: Unit test
                commands:
                  - sem-service start postgres
                  - npm run test
      

      Save the .semaphore/semaphore.yml file.

      Now add and commit the changes to the GitHub repository:

      • git add .nvmrc
      • git add .semaphore/semaphore.yml
      • git commit -m "continuous integration pipeline"
      • git push origin master

      As soon as the code is pushed to GitHub, Semaphore starts the CI pipeline:

      Running Workflow

      You can click on the pipeline to show the blocks and jobs, and their output.

      Integration Pipeline

      Next you will create a new pipeline that builds a Docker image for the application.

      Step 5 — Building Docker Images for the Application

      A Docker image is the basic unit of a Kubernetes deployment. The image should have all the binaries, libraries, and code required to run the application. A Docker container is not a lightweight virtual machine, but it behaves like one. The Docker Hub registry contains hundreds of ready-to-use images, but we’re going to build our own.

      In this step, you’ll add a new pipeline to build a custom Docker image for your app and push it to Docker Hub.

      To build a custom image, create a Dockerfile:

      • nano Dockerfile

      The Dockerfile is a recipe to create the image. You can use the official Node.js distribution as a starting point instead of starting from scratch. Add this to your Dockerfile:

      Dockerfile

      FROM node:10.16.0-alpine
      
      . . .
      

      Then add commands that copy package.json and package-lock.json and install the node modules inside the image:

      Dockerfile

      . . .
      
      COPY package*.json ./
      RUN npm install
      
      . . .
      

      Installing the dependencies first will speed up subsequent builds, as Docker will cache this step.

      Now add this command which copies all the application files in the project root into the image:

      Dockerfile

      . . .
      
      COPY *.js ./
      
      . . .
      

      Finally, EXPOSE specifies that the container listens for connections on port 3000, where the application is listening, and CMD sets the command that should run when the container starts. Add these lines to your file:

      Dockerfile

      . . .
      
      EXPOSE 3000
      CMD [ "npm", "run", "start" ]
      

      Save the file.

      With the Dockerfile complete, you can create a new pipeline so Semaphore can build the image for you when you push your code to GitHub. Create a new file called docker-build.yml:

      • nano .semaphore/docker-build.yml

      Start the pipeline with the same boilerplate as the CI pipeline, but with the name Docker build:

      .semaphore/docker-build.yml

      version: v1.0
      name: Docker build
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      This pipeline will have only one block and one job. In Step 3, you created a secret named dockerhub with your Docker Hub username and password. Here, you’ll import these values using the secrets keyword. Add this code:

      .semaphore/docker-build.yml

      . . .
      
      blocks:
        - name: Build
          task:
            secrets:
              - name: dockerhub
      
      . . .
      

      Docker images are stored in repositories. We’ll use the official Docker Hub, which allows an unlimited number of public images. Add these lines to check out the code from GitHub and use the docker login command to authenticate with Docker Hub:

      .semaphore/docker-build.yml

          task:
      . . .
      
            prologue:
              commands:
                - checkout
                - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
      
      . . .
      

      Each Docker image is fully identified by the combination of a name and a tag. The name usually corresponds to the product or software, and the tag corresponds to a particular version, for example node:10.16.0. When no tag is supplied, Docker defaults to the special latest tag. Hence, it’s considered good practice to keep the latest tag pointing at the most current image.
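
      For example (illustrative commands, not a step in this tutorial), the following pulls refer to the same repository but to different versions; the second is equivalent to requesting node:latest:

      • docker pull node:10.16.0-alpine
      • docker pull node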

      Add the following code to build the image and push it to Docker Hub:

      .semaphore/docker-build.yml

      . . .
      
            jobs:
            - name: Docker build
              commands:
                - docker pull "${DOCKER_USERNAME}/addressbook:latest" || true
                - docker build --cache-from "${DOCKER_USERNAME}/addressbook:latest" -t "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" .
                - docker push "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID"
      

      When Docker builds the image, it reuses parts of existing images to speed up the process. The first command tries to pull the latest image from Docker Hub so its layers can be reused. Semaphore stops the pipeline if any command returns a nonzero exit status. For example, if the repository doesn’t have a latest image yet, as it won’t on the first try, the pipeline would stop. You can force Semaphore to ignore a failed command by appending || true to it.

      The second command builds the image. To reference this particular image later, you can tag it with a unique string. Semaphore provides several environment variables for jobs. One of them, $SEMAPHORE_WORKFLOW_ID, is unique and shared among all the pipelines in the workflow. It’s handy for referencing this image later in the deployment.

      The third command pushes the image to Docker Hub.

      The build pipeline is ready, but Semaphore will not start it unless you connect it to the main CI pipeline. You can chain multiple pipelines to create complex, multi-branch workflows using promotions.

      Edit the main pipeline file .semaphore/semaphore.yml:

      • nano .semaphore/semaphore.yml

      Add the following lines at the end of the file:

      .semaphore/semaphore.yml

      . . .
      
      promotions:
        - name: Dockerize
          pipeline_file: docker-build.yml
          auto_promote_on:
            - result: passed
      

      auto_promote_on defines the condition to start the docker build pipeline. In this case, it runs when all jobs defined in the semaphore.yml file have passed.

      To test the new pipeline, you need to add, commit, and push all the modified files to GitHub:

      • git add Dockerfile
      • git add .semaphore/docker-build.yml
      • git add .semaphore/semaphore.yml
      • git commit -m "docker build pipeline"
      • git push origin master

      After the CI pipeline is complete, the Docker build pipeline starts.

      Build Pipeline

      When it finishes, you’ll see your new image in your Docker Hub repository.

      You’ve got your build process testing and creating the image. Now you’ll create the final pipeline to deploy the application to your Kubernetes cluster.

      Step 6 — Setting up Continuous Deployment to Kubernetes

      The building block of a Kubernetes deployment is the pod. A pod is a group of containers that are managed as a single unit. The containers inside a pod start and stop in unison and always run on the same machine, sharing its resources. Each pod has an IP address. In this case, the pods will only have one container.

      Pods are ephemeral; they are created and destroyed frequently. You can’t tell which IP address is going to be assigned to each pod until it’s started. To solve this, you’ll use services, which have fixed public IP addresses so incoming connections can be load-balanced and forwarded to the pods.

      You could manage pods directly, but it’s better to let Kubernetes handle that by using a deployment. In this section, you will create a declarative manifest that describes the final desired state for your cluster. The manifest has two resources:

      • Deployment: starts the pods in the cluster nodes as required and keeps track of their status. Since in this tutorial we’re using a 3-node cluster, we’ll deploy 3 pods.
      • Service: acts as an entry point for our users. It listens for traffic on port 80 (HTTP) and forwards connections to the pods.

      Create a manifest file called deployment.yml:

      • nano deployment.yml

      Start the manifest with the Deployment resource. Add the following contents to the new file to define the deployment:

      deployment.yml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: addressbook
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: addressbook
        template:
          metadata:
            labels:
              app: addressbook
          spec:
            containers:
              - name: addressbook
                image: ${DOCKER_USERNAME}/addressbook:${SEMAPHORE_WORKFLOW_ID}
                env:
                  - name: NODE_ENV
                    value: "production"
                  - name: PORT
                    value: "$PORT"
                  - name: DB_SCHEMA
                    value: "$DB_SCHEMA"
                  - name: DB_USER
                    value: "$DB_USER"
                  - name: DB_PASSWORD
                    value: "$DB_PASSWORD"
                  - name: DB_HOST
                    value: "$DB_HOST"
                  - name: DB_PORT
                    value: "$DB_PORT"
                  - name: DB_SSL
                    value: "$DB_SSL"
      
      
      . . .
      

      For each resource in the manifest, you need to set an apiVersion. For deployments, use apiVersion: apps/v1, a stable version. Then, tell Kubernetes that this resource is a Deployment with kind: Deployment. Each definition should have a name defined in metadata.name.

      In the spec section you tell Kubernetes what the desired final state is. This definition requests that Kubernetes should create 3 pods with replicas: 3.

      Labels are key-value pairs used to organize and cross-reference Kubernetes resources. You define labels with metadata.labels, and you can look for matching labels with selector.matchLabels. This is how you connect elements together.

      The key spec.template defines a model that Kubernetes will use to create each pod. Inside spec.template.metadata.labels you set one label for the pods: app: addressbook.

      With spec.selector.matchLabels you make the deployment manage any pods with the label app: addressbook. In this case you are making this deployment responsible for all the pods.
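
      Once the deployment is running, you can see this relationship for yourself: assuming kubectl is configured for your cluster (the deployment pipeline sets this up with doctl), listing pods by label shows the replicas the deployment manages:

      • kubectl get pods -l app=addressbook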

      Finally, you define the image that runs in the pods. In spec.template.spec.containers you set the image name. Kubernetes will pull the image from the registry as needed; in this case, it will pull from Docker Hub. You can also set environment variables for the containers, which you’ll use to supply the values for the database connection.

      To keep the deployment manifest flexible, you’ll be relying on variables. The YAML format, however, doesn’t allow variables, so the file isn’t valid yet. You’ll solve that problem when you define the deployment pipeline for Semaphore.

      That’s it for the deployment. But this only defines the pods. You still need a service that will allow traffic to flow to your pods. You can add another Kubernetes resource in the same file as long as you use three hyphens (---) as a separator.

      Add the following code to define a load balancer service that connects to pods with the addressbook label:

      deployment.yml

      . . .
      
      ---
      
      apiVersion: v1
      kind: Service
      metadata:
        name: addressbook-lb
      spec:
        selector:
          app: addressbook
        type: LoadBalancer
        ports:
          - port: 80
            targetPort: 3000
      

      The load balancer will receive connections on port 80 and forward them to the pods’ port 3000 where the application is listening.
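
      Once the service is deployed, you can also look up the external IP address assigned to the load balancer from the command line, assuming kubectl is configured for your cluster; the address appears in the EXTERNAL-IP column:

      • kubectl get service addressbook-lb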

      Save the file.

      Now, create a deployment pipeline for Semaphore that will deploy the app using the manifest. Create a new file in the .semaphore directory:

      • nano .semaphore/deploy-k8s.yml

      Begin the pipeline as usual, specifying the version, name, and image:

      .semaphore/deploy-k8s.yml

      version: v1.0
      name: Deploy to Kubernetes
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      This pipeline will have two blocks. The first block deploys the application to the Kubernetes cluster.

      Define the block and import all the secrets:

      .semaphore/deploy-k8s.yml

      . . .
      
      blocks:
        - name: Deploy to Kubernetes
          task:
            secrets:
              - name: dockerhub
              - name: do-access-token
              - name: env-production
      
      . . .
      

      Store your DigitalOcean Kubernetes cluster name in an environment variable so you can reference it later:

      .semaphore/deploy-k8s.yml

      . . .
      
            env_vars:
              - name: CLUSTER_NAME
                value: addressbook-server
      
      . . .
      

      DigitalOcean Kubernetes clusters are managed with a combination of two programs: kubectl and doctl. The former is already included in Semaphore’s image, but the latter isn’t, so you need to install it. You can use the prologue section to do it.

      Add this prologue section:

      .semaphore/deploy-k8s.yml

      . . .
      
            prologue:
              commands:
                - wget https://github.com/digitalocean/doctl/releases/download/v1.20.0/doctl-1.20.0-linux-amd64.tar.gz
                - tar xf doctl-1.20.0-linux-amd64.tar.gz 
                - sudo cp doctl /usr/local/bin
                - doctl auth init --access-token $DO_ACCESS_TOKEN
                - doctl kubernetes cluster kubeconfig save "${CLUSTER_NAME}"
                - checkout
      
      . . .
      

      The first command downloads the official doctl release with wget, and the next commands decompress it with tar and copy it into the local path. Once doctl is installed, it authenticates with the DigitalOcean API and requests the Kubernetes config file for our cluster. After checking out our code, we are done with the prologue.

      Next comes the final piece of our pipeline: deploying to the cluster.

      Remember that deployment.yml contains environment variable references, which YAML does not expand on its own; as a result, the file won’t work in its current state. To get around that, source the environment file to load the variables, then use the envsubst command to expand the variables in place with the actual values. The result, a file called deploy.yml, is entirely valid YAML with the values inserted. With the file in place, you can start the deployment with kubectl apply:

      .semaphore/deploy-k8s.yml

      . . .
      
            jobs:
            - name: Deploy
              commands:
                - source $HOME/env-production
                - envsubst < deployment.yml | tee deploy.yml
                - kubectl apply -f deploy.yml
      
      . . .
      

      The second block adds the latest tag to the image on Docker Hub to denote that this is the most current version deployed. Repeat the Docker login steps, then pull, retag, and push to Docker Hub:

      .semaphore/deploy-k8s.yml

      . . .
      
        - name: Tag latest release
          task:
            secrets:
              - name: dockerhub
            prologue:
              commands:
                - checkout
                - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
            jobs:
            - name: docker tag latest
              commands:
                - docker pull "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" 
                - docker tag "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" "${DOCKER_USERNAME}/addressbook:latest"
                - docker push "${DOCKER_USERNAME}/addressbook:latest"
      

      Save the file.

      This pipeline performs the deployment, but it can only start if the Docker image was successfully generated and pushed to Docker Hub. As a result, you must connect the build and deployment pipelines with a promotion. Edit the Docker build pipeline to add it:

      • nano .semaphore/docker-build.yml

      Add the promotion to the end of the file:

      .semaphore/docker-build.yml

      . . .
      
      promotions:
        - name: Deploy to Kubernetes
          pipeline_file: deploy-k8s.yml
          auto_promote_on:
            - result: passed
      

      You are done setting up the CI/CD workflow.

      All that remains is pushing the modified files and letting Semaphore do the work. Add, commit, and push your repository’s changes:

      • git add .semaphore/deploy-k8s.yml
      • git add .semaphore/docker-build.yml
      • git add deployment.yml
      • git commit -m "kubernetes deploy pipeline"
      • git push origin master

      It’ll take a few minutes for the deployment to complete.

      Deploy Pipeline

      Let’s test the application next.

      Step 7 — Testing the Application

      At this point, the application is up and running. In this step, you’ll use curl to test the API endpoint.

      You’ll need to know the public IP that DigitalOcean has given to your cluster. Follow these steps to find it:

      1. Log in to your DigitalOcean account.
      2. Select the addressbook project.
      3. Go to Networking.
      4. Click on Load Balancers.
      5. The IP Address is shown. Copy the IP address.

      Load Balancer IP

      Let’s check the /all route using curl:

      • curl -w "\n" YOUR_CLUSTER_IP/all

      The -w "\n" option appends a newline to the response, which keeps the output readable.

      Since there are no records in the database yet, you get an empty JSON array as the result:

      Output

      []

      Create a new person record by making a PUT request to the /person endpoint:

      • curl -w "\n" -X PUT \
      • -d "firstName=Sammy&lastName=the Shark" YOUR_CLUSTER_IP/person

      The API returns the JSON object for the person:

      Output

      { "id": 1, "firstName": "Sammy", "lastName": "the Shark", "updatedAt": "2019-07-04T23:51:00.548Z", "createdAt": "2019-07-04T23:51:00.548Z" }

      Create a second person:

      • curl -w "\n" -X PUT \
      • -d "firstName=Tommy&lastName=the Octopus" YOUR_CLUSTER_IP/person

      The output indicates that a second person was created:

      Output

      { "id": 2, "firstName": "Tommy", "lastName": "the Octopus", "updatedAt": "2019-07-04T23:52:08.724Z", "createdAt": "2019-07-04T23:52:08.724Z" }

      Now make a GET request to get the person with the id of 2:

      • curl -w "\n" YOUR_CLUSTER_IP/person/2

      The server replies with the data you requested:

      Output

      { "id": 2, "firstName": "Tommy", "lastName": "the Octopus", "createdAt": "2019-07-04T23:52:08.724Z", "updatedAt": "2019-07-04T23:52:08.724Z" }

      To delete the person, send a DELETE request:

      • curl -w "\n" -X DELETE YOUR_CLUSTER_IP/person/2

      No output is returned by this command.
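
      If you want confirmation that the deletion succeeded, you can add the -i flag so curl prints the response headers, and check for a 200 status (an optional extra, not required for this tutorial):

      • curl -i -X DELETE YOUR_CLUSTER_IP/person/2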

      You should only have one person in your database, the one with the id of 1. Try getting /all again:

      • curl -w "\n" YOUR_CLUSTER_IP/all

      The server replies with an array of persons containing only one record:

      Output

      [ { "id": 1, "firstName": "Sammy", "lastName": "the Shark", "createdAt": "2019-07-04T23:51:00.548Z", "updatedAt": "2019-07-04T23:51:00.548Z" } ]

      At this point, there’s only one person left in the database.

      This completes the tests for all the endpoints in our application and marks the end of the tutorial.

      Conclusion

      In this tutorial, you wrote a complete Node.js application from scratch which used DigitalOcean’s managed PostgreSQL database service. You then used Semaphore’s CI/CD pipelines to fully automate a workflow that tested and built a container image, uploaded it to Docker Hub, and deployed it to DigitalOcean Kubernetes.

      To learn more about Kubernetes, you can read An Introduction to Kubernetes and the rest of DigitalOcean’s Kubernetes tutorials.

      Now that your application is deployed, you may consider adding a domain name, securing your database cluster, or setting up alerts for your database.




      How to Speed Up WordPress Asset Delivery Using DigitalOcean Spaces CDN


      Introduction

      Implementing a CDN, or Content Delivery Network, to deliver your WordPress site’s static assets can greatly decrease your servers’ bandwidth usage as well as speed up page load times for geographically dispersed users. WordPress static assets include images, CSS stylesheets, and JavaScript files. Leveraging a system of edge servers distributed worldwide, a CDN caches copies of your site’s static assets across its network to reduce the distance between end users and this bandwidth-intensive content.

      In a previous Solutions guide, How to Store WordPress Assets on DigitalOcean Spaces, we covered offloading a WordPress site’s Media Library (where images and other site content gets stored) to DigitalOcean Spaces, a highly redundant object storage service. We did this using the DigitalOcean Spaces Sync plugin, which automatically syncs WordPress uploads to your Space, allowing you to delete these files from your server and free up disk space.

      In this Solutions guide, we’ll extend this procedure by rewriting Media Library asset URLs. This forces users’ browsers to download static assets directly from the DigitalOcean Spaces CDN, a geographically distributed set of cache servers optimized for delivering static content. We’ll go over how to enable the CDN for Spaces, how to rewrite links to serve your WordPress assets from the CDN, and finally how to test that your website’s assets are being correctly delivered by the CDN.

      Additionally, we’ll demonstrate how to implement Media Library offload and link rewriting using two popular paid WordPress plugins: WP Offload Media and Media Library Folders Pro. You should choose the plugin that suits your production needs best.

      Prerequisites

      Before you begin this tutorial, you should have a running WordPress installation on top of a LAMP or LEMP stack. You should also have WP-CLI installed on your WordPress server, which you can learn to set up by following these instructions.

      To offload your Media Library, you’ll need a DigitalOcean Space and an access key pair:

      • To learn how to create a Space, consult the Spaces product documentation.
      • To learn how to create an access key pair and upload files to your Space using the open source s3cmd tool, consult s3cmd 2.x Setup, also on the DigitalOcean product documentation site.

      There are a few WordPress plugins that you can use to offload your WordPress assets:

      • DigitalOcean Spaces Sync is a free and open-source WordPress plugin for offloading your Media Library to a DigitalOcean Space. You can learn how to do this in How To Store WordPress Assets on DigitalOcean Spaces.
      • WP Offload Media is a paid plugin that copies files from your WordPress Media Library to DigitalOcean Spaces and rewrites URLs to serve the files from the CDN. With the Assets Pull addon, it can identify assets (CSS, JS, images, etc) used by your site (for example by WordPress themes) and also serve these from CDN.
      • Media Library Folders Pro is another paid plugin that helps you organize your Media Library assets, as well as offload them to DigitalOcean Spaces.

      For testing purposes, be sure to have a modern web browser such as Google Chrome or Firefox installed on your client (e.g. laptop) computer.

      Once you have a running WordPress installation and have created a DigitalOcean Space, you’re ready to enable the CDN for your Space and begin with this guide.

      Enabling Spaces CDN

      We’ll begin this guide by enabling the CDN for your DigitalOcean Space. This will not affect the availability of existing objects. With the CDN enabled, objects in your Space will be “pushed out” to edge caches across the content delivery network, and a new CDN endpoint URL will be made available to you. To learn more about how CDNs work, consult Using a CDN to Speed Up Static Content Delivery.

      First, enable the CDN for your Space by following How to Enable the Spaces CDN.

      Navigate back to your Space and reload the page. You should see a new Endpoints link under your Space name:

      Endpoints Link

      These endpoints should contain your Space name. We’re using wordpress-offload in this tutorial.

      Notice the addition of the new Edge endpoint. This endpoint routes requests for Spaces objects through the CDN, serving them from the edge cache as much as possible. Note down this Edge endpoint, which you’ll use to configure your WordPress plugin in future steps.

      Now that you have enabled the CDN for your Space, you’re ready to begin configuring your asset offload and link rewriting plugin.

      If you’re using DigitalOcean Spaces Sync and continuing from How to Store WordPress Assets on DigitalOcean Spaces, begin reading from the following section. If you’re not using Spaces Sync, skip to either the WP Offload Media section or the Media Library Folders Pro section, depending on the plugin you choose to use.

      Spaces Sync Plugin

      If you’d like to use the free and open-source DigitalOcean Spaces Sync and CDN Enabler plugins to serve your files from the CDN’s edge caches, follow the steps outlined in this section.

      We’ll begin by ensuring that our WordPress installation and Spaces Sync plugin are configured correctly and are serving assets from DigitalOcean Spaces.

      Modifying Spaces Sync Plugin Configuration

      Continuing from How To Store WordPress Assets on DigitalOcean Spaces, your Media Library should be offloaded to your DigitalOcean Space and your Spaces Sync plugin settings should look as follows:

      Sync Cloud Only

      We are going to make some minor changes to ensure that our configuration allows us to offload WordPress themes and other directories, beyond the wp-content/uploads Media Library folder.

      First, we’re going to modify the Full URL-path to files field so that the Media Library files are served from our Space’s CDN and not locally from the server. This setting essentially rewrites links to Media Library assets, changing them from file links hosted locally on your WordPress server, to file links hosted on the DigitalOcean Spaces CDN.

      Recall the Edge endpoint you noted down in the Enabling Spaces CDN step.

      In this tutorial, the Space’s name is wordpress-offload and the Space’s CDN endpoint is:

      https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com
      

      Now, in the Spaces Sync plugin settings page, replace the URL in the Full URL-path to files field with your Spaces CDN endpoint, followed by /wp-content/uploads.

      In this tutorial, using the above Spaces CDN endpoint, the full URL would be:

      https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com/wp-content/uploads
      

      Next, for the Local path field, enter the full path to the wp-content/uploads directory on your WordPress server. In this tutorial, the path to the WordPress installation on the server is /var/www/html/, so the full path to uploads would be /var/www/html/wp-content/uploads.

      Note: If you’re continuing from How To Store WordPress Assets on DigitalOcean Spaces, this guide will slightly modify the path to files in your Space to enable you to optionally offload themes and other wp-content assets. You should clear out your Space before doing this, or alternatively you can transfer existing files into the correct wp-content/uploads Space directory using s3cmd.

      In the Storage prefix field, we’re going to enter /wp-content/uploads, which will ensure that we build the correct wp-content directory hierarchy so that we can offload other WordPress directories to this Space.

      Filemask can remain wildcarded with *, unless you’d like to exclude certain files.

      It’s not necessary to check the Store files only in the cloud and delete… option; only check this box if you’d like to delete the Media Library assets from your server after they’ve been successfully uploaded to your DigitalOcean Space.

      Your final settings should look something like this:

      Final Spaces Sync Settings

      Be sure to replace the above values with the values corresponding to your WordPress installation and Spaces configuration.

      Finally, hit Save Changes.

      You should see a Settings saved box appear at the top of your screen, confirming that the Spaces Sync plugin settings have successfully been updated.

      Future WordPress Media Library uploads should now be synced to your DigitalOcean Space, and served using the Spaces Content Delivery Network.

      In this step, we did not offload the WordPress theme or other wp-content assets. To learn how to transfer these assets to Spaces and serve them using the Spaces CDN, skip to Offload Additional Assets.

      To verify and test that your Media Library uploads are being delivered from the Spaces CDN, skip to Test CDN Caching.

      WP Offload Media Plugin

      The DeliciousBrains WP Offload Media plugin allows you to quickly and automatically upload your Media Library assets to DigitalOcean Spaces and rewrite links to these assets so that you can deliver them directly from Spaces or via the Spaces CDN. In addition, the Assets Pull addon allows you to quickly offload additional WordPress assets like JS, CSS, and font files in combination with a pull CDN. Setting up this addon is beyond the scope of this guide, but to learn more you can consult the DeliciousBrains documentation.

      We’ll begin by installing and configuring the WP Offload Media plugin for a sample WordPress site.

      Installing WP Offload Media Plugin

      To begin, you must purchase a copy of the plugin on the DeliciousBrains plugin site. Choose the appropriate version depending on the number of assets in your Media Library, and support and feature requirements for your site.

      After going through checkout, you’ll be brought to a post-purchase site with a download link for the plugin and a license key. The download link and license key will also be sent to you at the email address you provided when purchasing the plugin.

      Download the plugin and navigate to your WordPress site’s admin interface (https://your_site_url/wp-admin). Log in if necessary. From here, hover over Plugins and click on Add New.

      Click Upload Plugin at the top of the page, then Choose File, and select the zip archive you just downloaded.

      Click Install Now, and then Activate Plugin. You’ll be brought to WordPress’s plugin admin interface.

      From here, navigate to the WP Offload Media plugin’s settings page by clicking Settings under the plugin name.

      You’ll be brought to the following screen:

      WP Offload Media Configuration

      Click the radio button next to DigitalOcean Spaces. You’ll now be prompted to either configure your Spaces Access Key in the wp-config.php file (recommended), or directly in the web interface (the latter will store your Spaces credentials in the WordPress database).

      We’ll configure our Spaces Access Key in wp-config.php.

      Log in to your WordPress server via the command line, and navigate to your WordPress root directory (in this tutorial, this is /var/www/html). From here, open up wp-config.php in your favorite editor:
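
      Here we'll use nano, though any text editor works:

      • nano wp-config.php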

      Scroll down to the line that says /* That's all, stop editing! Happy blogging. */, and before it insert the following lines containing your Spaces Access Key pair (to learn how to generate an access key pair, consult the Spaces product docs):

      wp-config.php

      . . . 
      define( 'AS3CF_SETTINGS', serialize( array(
          'provider' => 'do',
          'access-key-id' => 'your_access_key_here',
          'secret-access-key' => 'your_secret_key_here',
      ) ) );
      
      /* That's all, stop editing! Happy blogging. */
      . . .
      

      Once you're done editing, save and close the file. The changes will take effect immediately.

      Back in the WP Offload Media plugin admin interface, select the radio button next to Define access keys in wp-config.php and hit Save Changes.

      You should be brought to the following interface:

      WP Offload Bucket Selection

      On this configuration page, select the appropriate region for your Space using the Region dropdown and enter your Space name next to Bucket (in this tutorial, our Space is called wordpress-offload).

      Then, hit Save Bucket.

      You'll be brought to the main WP Offload Media configuration page. At the top you should see the following warning box:

      WP Offload License

      Click on enter your license key, and on the subsequent page enter the license key found in your email receipt or on the checkout page and hit Activate License.

      If you entered your license key correctly, you should see License activated successfully.

      Now, navigate back to the main WP Offload Media configuration page by clicking on Media Library at the top of the window.

      At this point, WP Offload Media has successfully been configured for use with your DigitalOcean Space. You can now begin offloading assets and delivering them using the Spaces CDN.

      Configuring WP Offload Media

      Now that you've linked WP Offload Media with your DigitalOcean Space, you can begin offloading assets and configuring URL rewriting to deliver media from the Spaces CDN.

      You should see the following configuration options on the main WP Offload Media configuration page:

      WP Offload Main Nav

      These defaults should work fine for most use cases. If your Media Library exists at a nonstandard path within your WordPress directory, enter the path in the text box under the Path option.

      If you'd like to change asset URLs so that they are served directly from Spaces and not your WordPress server, ensure the toggle is set to On next to Rewrite Media URLs.

      To deliver Media Library assets using the Spaces CDN, ensure you've enabled the CDN for your Space (see the Enabling Spaces CDN section to learn how) and have noted down the URL for the Edge endpoint. Hit the toggle next to Custom Domain (CNAME), and in the text box that appears, enter the CDN Edge endpoint URL, without the https:// prefix.

      In this guide the Spaces CDN endpoint is:

      https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com
      

      So here we enter:

       wordpress-offload.nyc3.cdn.digitaloceanspaces.com
      

      To improve security, we'll force HTTPS for requests to Media Library assets (now served using the CDN) by setting the toggle to On.

      You can optionally clear out files that have been offloaded to Spaces from your WordPress server to free up disk space. To do this, hit On next to Remove Files From Server.

      Once you've finished configuring WP Offload Media, hit Save Changes at the bottom of the page to save your settings.

      The URL Preview box should display a URL containing your Spaces CDN endpoint. It should look something like the following:

      https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com/wp-content/uploads/2018/09/21211354/photo.jpg

      This URL indicates that WP Offload Media has been successfully configured to deliver Media Library assets using the Spaces CDN. If the path doesn't contain cdn, ensure that you correctly entered the Edge endpoint URL and not the Origin URL.

      At this point, WP Offload Media has been set up to deliver your Media Library using Spaces CDN. Any future uploads to your Media Library will be automatically copied over to your DigitalOcean Space and served using the CDN.

      You can now bulk offload existing assets in your Media Library using the built-in upload tool.

      Offloading Media Library

      We'll use the plugin's built-in "Upload Tool" to offload existing files in our WordPress Media Library.

      On the right-hand side of the main WP Offload Media configuration page, you should see the following box:

      WP Offload Upload Tool

      Click Offload Now to upload your Media Library files to your DigitalOcean Space.

      If the upload procedure gets interrupted, the box will change to display the following:

      WP Offload Upload Tool 2

      Hit Offload Remaining Now to transfer the remaining files to your DigitalOcean Space.

      Once you've offloaded the remaining items from your Media Library, you should see the following new boxes:

      WP Offload Success

      At this point you've offloaded your Media Library to your Space and are delivering the files to users using the Spaces CDN.

      At any point in time, you can download the files back to your WordPress server from your Space by hitting Download Files.

      You can also clear out your DigitalOcean Space by hitting Remove Files. Before doing this, ensure that you’ve first downloaded the files back to your WordPress server from Spaces.

      In this step, we learned how to offload our WordPress Media Library to DigitalOcean Spaces and rewrite links to these Library assets using the WP Offload Media plugin.

      To offload additional WordPress assets like themes and JavaScript files, you can use the Asset Pull addon or consult the Offload Additional Assets section of this guide.

      To verify and test that your Media Library uploads are being delivered from the Spaces CDN, skip to Test CDN Caching.

      Media Library Folders Pro Plugin

      The MaxGalleria Media Library Folders Pro plugin is a convenient WordPress plugin that allows you to better organize your WordPress Media Library assets. In addition, the free Spaces addon allows you to bulk offload your Media Library assets to DigitalOcean Spaces, and rewrite URLs to those assets to serve them directly from object storage. You can then enable the Spaces CDN and use the Spaces CDN endpoint to serve your library assets from the distributed delivery network. To accomplish this last step, you can use the CDN Enabler plugin to rewrite CDN endpoint URLs for your Media Library assets.

      We'll begin by installing and configuring the Media Library Folders Pro (MLFP) plugin, as well as the MLFP Spaces addon. We’ll then install and configure the CDN Enabler plugin to deliver Media Library assets using the Spaces CDN.

      Installing MLFP Plugin

      After purchasing the MLFP plugin, you should have received an email containing your MaxGalleria account credentials as well as a plugin download link. Click on the plugin download link to download the MLFP plugin zip archive to your local computer.

      Once you've downloaded the archive, log in to your WordPress site's administration interface (https://your_site_url/wp-admin), and navigate to Plugins and then Add New in the left-hand sidebar.

      From the Add Plugins page, click Upload Plugin and then select the zip archive you just downloaded.

      Click Install Now to complete the plugin installation, and from the Installing Plugin screen, click Activate Plugin to activate MLFP.

      You should then see a Media Library Folders Pro menu item appear in the left-hand sidebar. Click it to go to the Media Library Folders Pro interface. Covering the plugin's various features is beyond the scope of this guide, but to learn more, you can consult the MaxGalleria site and forums.

      We'll now activate the plugin. Click into Settings under the MLFP menu item, and enter your license key next to the License Key text box. You can find your MLFP license key in the email sent to you when you purchased the plugin. Hit Save Changes and then Activate License. Next, hit Update Settings.

      Your MLFP plugin is now active, and you can use it to organize existing or new Media Library assets for your WordPress site.

      We'll now install and configure the Spaces addon plugin so that you can offload and serve these assets from DigitalOcean Spaces.

      Installing the MLFP Spaces Addon Plugin and Offloading the Media Library

      To install the Spaces Addon, log in to your MaxGalleria account. You can find your account credentials in an email sent to you when you purchased the MLFP plugin.

      Navigate to the Addons page in the top menu bar and scroll down to Media Sources. From here, click into the Media Library Folders Pro S3 and Spaces option.

      From this page, scroll down to the Pricing section and select the option that suits the size of your WordPress Media Library (for Media Libraries with 3000 images or less, the addon is free).

      After completing the addon "purchase," you can navigate back to your account page (by clicking the Account link in the top menu bar), from which the addon plugin will now be available.

      Click on the Media Library Folders Pro S3 image and the plugin download should begin.

      Once the download completes, navigate back to your WordPress administration interface, and install the downloaded plugin using the same method as above, by clicking Upload Plugin. Once again, hit Activate Plugin to activate the plugin.

      You will likely receive a warning about configuring access keys in your wp-config.php file. We'll configure these now.

      Log in to your WordPress server using the console or SSH, and navigate to your WordPress root directory (in this tutorial, this is /var/www/html). From here, open up wp-config.php in your favorite editor:
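
      As before, open the file with nano or your preferred editor:

      • nano wp-config.php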

      Scroll down to the line that says /* That's all, stop editing! Happy blogging. */, and before it insert the following lines containing your Spaces Access Key pair and a plugin configuration option (to learn how to generate an access key pair, consult the Spaces product docs):

      wp-config.php

      . . . 
      define( 'MF_AWS_ACCESS_KEY_ID', 'your_access_key_here' );
      define( 'MF_AWS_SECRET_ACCESS_KEY', 'your_secret_key_here' );
      define( 'MF_CLOUD_TYPE', 'do' );
      
      /* That's all, stop editing! Happy blogging. */
      . . .
      

      Once you're done editing, save and close the file.

      Now, navigate to your DigitalOcean Space from the Cloud Control Panel, and create a folder called wp-content by clicking on New Folder.

      From here, navigate back to the WordPress administration interface, and click into Media Library Folders Pro and then S3 & Spaces Settings in the sidebar.

      The warning banner about configuring access keys should now have disappeared. If it's still present, you should double check your wp-config.php file for any typos or syntax errors.

      In the License Key text box, enter the license key that was emailed to you after purchasing the Spaces addon. Note that this license key is different from the MLFP license key. Hit Save Changes and then Activate License.

      Once activated, you should see the following configuration pane:

      MLFP Spaces Addon Configuration

      From here, click Select Image Bucket & Region to select your DigitalOcean Space. Then select the correct region for your Space and hit Save Bucket Selection.

      You've now successfully connected the Spaces offload plugin to your DigitalOcean Space. You can begin offloading your WordPress Media Library assets.

      The Use files on the cloud server checkbox allows you to specify where Media Library assets will be served from. If you check the box, assets will be served from DigitalOcean Spaces, and URLs to images and other Media Library objects will be correspondingly rewritten. If you plan on using the Spaces CDN to serve your Media Library assets, do not check this box, as the plugin will use the Spaces Origin endpoint and not the CDN Edge endpoint. We will configure CDN link rewriting in a future step.

      Click the Remove files from local server box to delete local Media Library assets once they've been successfully uploaded to DigitalOcean Spaces.

      The Remove individual downloaded files from the cloud server checkbox should be used when bulk downloading files from Spaces to your WordPress server. If checked, these files will be deleted from Spaces after successfully downloading to your WordPress server. We can ignore this option for now.

      Since we're configuring the plugin for use with the Spaces CDN, leave the Use files on the cloud server box unchecked, and hit Copy Media Library to the cloud server to sync your site's WordPress Media Library to your DigitalOcean Space.

      You should see a progress box appear, followed by an Upload complete. message, indicating that the Media Library sync has finished successfully.

      Navigate to your DigitalOcean Space to confirm that your Media Library files have been copied to your Space. They should be available in the uploads subdirectory of the wp-content directory you created earlier in this step.
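
      If you have s3cmd configured, you can also list the uploaded files from the command line; replace wordpress-offload with the name of your Space:

      • s3cmd ls s3://wordpress-offload/wp-content/uploads/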

      Once your files are available in your Space, you're ready to move on to configuring the Spaces CDN.

      Installing CDN Enabler Plugin to Deliver Assets from Spaces CDN

      To use the Spaces CDN to serve your now offloaded files, first ensure that you've enabled the CDN for your Space.

      Once the CDN has been enabled for your Space, you can now install and configure the CDN Enabler WordPress plugin to rewrite links to your Media Library assets. The plugin will rewrite links to these assets so that they are served from the Spaces CDN endpoint.

      To install CDN Enabler, you can either use the Plugins menu from the WordPress administration interface, or install the plugin directly from the command line. We'll demonstrate the latter procedure here.

      First, log in to your WordPress server. Then, navigate to your plugins directory:

      • cd /var/www/html/wp-content/plugins

      Be sure to replace the above path with the path to your WordPress installation.

      From the command line, use the wp-cli interface to install the plugin:

      • wp plugin install cdn-enabler

      Now, activate the plugin:

      • wp plugin activate cdn-enabler
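
      Optionally, you can confirm that the plugin is active by listing active plugins and looking for cdn-enabler in the output:

      • wp plugin list --status=active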

      Back in the WordPress Admin Area, under Settings, you should see a new link to CDN Enabler settings. Click into CDN Enabler.

      You should see the following settings screen:

      CDN Enabler Settings

      Modify the displayed fields as follows:

      • CDN URL: Enter the Spaces Edge endpoint, which you can find from the Spaces Dashboard. In this tutorial, this is https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com
      • Included Directories: Enter wp-content/uploads. We'll learn how to serve other wp-content directories in the Offload Additional Assets section.
      • Exclusions: Leave the default .php
      • Relative Path: Leave the box checked
      • CDN HTTPS: Enable it by checking the box
      • Leave the remaining two fields blank

      Then, hit Save Changes to save these settings and enable them for your WordPress site.

      At this point you've successfully offloaded your WordPress site's Media Library to DigitalOcean Spaces and are serving them to end users using the CDN.

      In this step, we did not offload the WordPress theme or other wp-content assets. To learn how to transfer these assets to Spaces and serve them using the Spaces CDN, skip to Offload Additional Assets.

      To verify and test that your Media Library uploads are being delivered from the Spaces CDN, skip to Test CDN Caching.

      Offloading Additional Assets (Optional)

      In previous sections of this guide, we’ve learned how to offload our site’s WordPress Media Library to Spaces and serve these files using the Spaces CDN. In this section, we’ll cover offloading and serving additional WordPress assets like themes, JavaScript files, and fonts.

      Most of these static assets live inside of the wp-content directory (which contains the themes subdirectory). To offload and rewrite URLs for this directory, we’ll use CDN Enabler, an open-source plugin developed by KeyCDN.

      If you’re using the WP Offload Media plugin, you can use the Asset Pull addon to serve these files using a pull CDN. Installing and configuring this addon is beyond the scope of this guide. To learn more, consult the DeliciousBrains product page.

      First, we’ll install CDN Enabler. We’ll then copy our WordPress themes over to Spaces, and finally configure CDN Enabler to deliver these using the Spaces CDN.

      If you’ve already installed CDN Enabler in a previous step, skip to Step 2.

      Step 1 — Installing CDN Enabler

      To install CDN Enabler, log in to your WordPress server. Then, navigate to your plugins directory:

      • cd /var/www/html/wp-content/plugins

      Be sure to replace the above path with the path to your WordPress installation.

      From the command line, use the wp-cli interface to install the plugin:

      • wp plugin install cdn-enabler

      Now, activate the plugin:

      • wp plugin activate cdn-enabler

      Back in the WordPress Admin Area, under Settings, you should see a new link to CDN Enabler settings. Click into CDN Enabler.

      You should see the following settings screen:

      CDN Enabler Settings

      At this point you’ve successfully installed CDN Enabler. We’ll now upload our WordPress themes to Spaces.

      Step 2 — Uploading Static WordPress Assets to Spaces

      In this tutorial, to demonstrate a basic plugin configuration, we're only going to serve wp-content/themes, the WordPress directory containing WordPress themes' PHP, JavaScript, HTML, and image files. You can optionally extend this process to other WordPress directories, like wp-includes, and even the entire wp-content directory.

      The theme used by the WordPress installation in this tutorial is twentyseventeen, the default theme for a fresh WordPress installation at the time of writing. You can repeat these steps for any other theme or WordPress content.

      First, we'll upload our theme to our DigitalOcean Space using s3cmd. If you haven't yet configured s3cmd, consult the DigitalOcean Spaces Product Documentation.

      Navigate to your WordPress installation's wp-content directory:

      • cd /var/www/html/wp-content

      From here, upload the themes directory to your DigitalOcean Space using s3cmd. Note that at this point you can choose to upload only a single theme, but for simplicity and to offload as much content as possible from our server, we will upload all the themes in the themes directory to our Space.

      We'll use find to build a list of non-PHP (therefore cacheable) files, which we'll then pipe to s3cmd to upload to Spaces. We’ll exclude CSS stylesheets as well in this first command as we need to set the text/css MIME type when uploading them.

      • find themes/ -type f -not \( -name '*.php' -or -name '*.css' \) | xargs -I{} s3cmd put --acl-public {} s3://wordpress-offload/wp-content/{}

      Here, we instruct find to search for files within the themes/ directory, and ignore .php and .css files. We then use xargs -I{} to iterate over this list, executing s3cmd put for each file, and set the file's permissions in Spaces to public using --acl-public.
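
      If you'd like to preview which files will be uploaded before running the command above, you can pipe the same find output to head; this prints a sample of the matching files and uploads nothing:

      • find themes/ -type f -not \( -name '*.php' -or -name '*.css' \) | head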

      Next, we’ll do the same for CSS stylesheets, adding the --mime-type="text/css" flag to set the text/css MIME type for the stylesheets on Spaces. This will ensure that Spaces serves your theme's CSS files using the correct Content-Type: text/css HTTP header:

      • find themes/ -type f -name '*.css' | xargs -I{} s3cmd put --acl-public --mime-type="text/css" {} s3://wordpress-offload/wp-content/{}

      Again, be sure to replace wordpress-offload in the above command with your Space name.

      Now that we’ve uploaded our theme, let’s verify that it can be found at the correct path in our Space. Navigate to your Space using the DigitalOcean Cloud Control Panel.

      Enter the wp-content directory, followed by the themes directory. You should see your theme's directory here. If you don't, verify your s3cmd configuration and re-upload your theme to your Space.
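
      You can also spot-check a stylesheet's MIME type from the command line with curl. The path below assumes the default twentyseventeen theme and the CDN endpoint used in this tutorial; substitute your own endpoint and theme path:

      • curl -sI https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com/wp-content/themes/twentyseventeen/style.css | grep -i content-type

      If the upload worked and the MIME type was set correctly, the output should include Content-Type: text/css.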

      Now that our theme lives in our Space, and we've set the correct metadata, we can begin serving its files using CDN Enabler and the DigitalOcean Spaces CDN.

      Navigate back to the WordPress Admin Area and click into Settings and then CDN Enabler.

      Here, modify the displayed fields as follows:

      • CDN URL: Enter the Spaces Edge endpoint, as done in Step 1. In this tutorial, this is https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com
      • Included Directories: If you’re not using the MLFP plugin, this should be wp-content/themes. If you are, this should be wp-content/uploads,wp-content/themes
      • Exclusions: Leave the default .php
      • Relative Path: Leave the box checked
      • CDN HTTPS: Enable it by checking the box
      • Leave the remaining two fields blank

      Your final settings should look something like this:

      CDN Enabler Final Settings

      Hit Save Changes to save these settings and enable them for your WordPress site.

      At this point you've successfully offloaded your WordPress site's theme assets to DigitalOcean Spaces and are serving them to end users using the CDN. We can confirm this using Chrome's DevTools, following the procedure described below.

      Using the CDN Enabler plugin, you can repeat this process for other WordPress directories, like wp-includes, and even the entire wp-content directory.

      Testing CDN Caching

      In this section, we’ll demonstrate how to determine where your WordPress assets are being served from (e.g. your host server or the CDN) using Google Chrome’s DevTools.

      Step 1 — Adding Sample Image to Media Library to Test Syncing

      To begin, we’ll first upload a sample image to our Media Library, and verify that it's being served from the DigitalOcean Spaces CDN servers. You can upload an image using the WordPress Admin web interface, or using the wp-cli command-line tool. In this guide, we’ll use wp-cli to upload the sample image.

      Log in to your WordPress server using the command line, and navigate to the home directory for the non-root user you've configured. In this tutorial, we’ll use the user sammy.

      From here, use curl to download the DigitalOcean logo to your Droplet (if you already have an image you'd like to test with, skip this step):

      • curl https://assets.digitalocean.com/logos/DO_Logo_horizontal_blue.png > do_logo.png

      Now, use wp-cli to import the image to your Media Library:

      • wp media import --path=/var/www/html/ /home/sammy/do_logo.png

      Be sure to replace /var/www/html with the correct path to the directory containing your WordPress files.

      You may see some warnings, but the output should end in the following:

      Output

      Imported file '/home/sammy/do_logo.png' as attachment ID 10. Success: Imported 1 of 1 items.

      This output indicates that your test image has successfully been copied to the WordPress Media Library, and also uploaded to your DigitalOcean Space, using your preferred offload plugin.

      Navigate to your DigitalOcean Space to confirm:

      Spaces Upload Success

      This indicates that your offload plugin is functioning as expected and automatically syncing WordPress uploads to your DigitalOcean Space. Note that the exact path to your Media Library uploads in the Space will depend on the plugin you’re using to offload your WordPress files.

      Next, we will verify that this file is being served using the Spaces CDN, and not from the server running WordPress.

      Step 2 — Inspecting Asset URL

      From the WordPress admin area (https://your_domain/wp-admin), navigate to Pages in the left-hand side navigation menu.

      We will create a sample page containing our uploaded image to determine where it's being served from. You can also run this test by adding the image to an existing page on your WordPress site.

      From the Pages screen, click into Sample Page, or any existing page. You can alternatively create a new page.

      In the page editor, click on Add Media, and select the DigitalOcean logo (or other image you used to test this procedure).

      An Attachment Details pane should appear on the right-hand side of your screen. From this pane, add the image to the page by clicking on Insert into page.

      Now, back in the page editor, click on either Publish (if you created a new sample page) or Update (if you added the image to an existing page) in the Publish box on the right-hand side of your screen.

      Now that the page has successfully been updated to contain the image, navigate to it by clicking on the Permalink under the page title. You'll be brought to this page in your web browser.

      For the purposes of this tutorial, the following steps will assume that you're using Google Chrome, but you can use most modern web browsers to run a similar test.

      From the rendered page preview in your browser, right click on the image and click on Inspect:

      Inspect Menu

      A DevTools window should pop up, highlighting the img asset in the page's HTML:

      DevTools Output

      You should see the CDN endpoint for your DigitalOcean Space in this URL (in this tutorial, our Spaces CDN endpoint is https://wordpress-offload.nyc3.cdn.digitaloceanspaces.com), indicating that the image asset is being served from the DigitalOcean Spaces CDN edge cache.

      This confirms that your Media Library uploads are being synced to your DigitalOcean Space and served using the Spaces CDN.

      From the DevTools window, we'll run one final test. Click on Network in the toolbar at the top of the window.

      Once in the blank Network window, follow the displayed instructions to reload the page.

      The page assets should populate in the window. Locate your test image in the list of page assets:

      Chrome DevTools Asset List

      Once you've located your test image, click into it to open an additional information pane. Within this pane, click on Headers to show the response headers for this asset:

      Response Headers

      You should see the Cache-Control HTTP header, which is a CDN response header. This confirms that this image was served from the Spaces CDN.

      Step 3 — Inspecting URLs for Theme Assets (Optional)

      If you offloaded your themes directory (or other wp-content directories) as described in Offload Additional Assets, you should perform the following brief check to verify that your theme’s assets are being served from the Spaces CDN.

      Navigate to your WordPress site in Google Chrome, and right-click anywhere in the page. In the menu that appears, click on Inspect.

      You'll once again be brought to the Chrome DevTools interface.

      Chrome DevTools Interface

      From here, click into Sources.

      In the left-hand pane, you should see a list of your WordPress site's assets. Scroll down to your CDN endpoint, and expand the list by clicking the small arrow next to the endpoint name:

      DevTools Site Asset List

      Observe that your WordPress theme's header image, JavaScript, and CSS stylesheet are now being served from the Spaces CDN.

      Conclusion

      In this tutorial, we've shown how to offload static content from your WordPress server to DigitalOcean Spaces, and serve this content using the Spaces CDN. In most cases, this should reduce bandwidth on your host infrastructure and speed up page loads for end users, especially those located further away geographically from your WordPress server.

      We demonstrated how to offload and serve both Media Library and themes assets using the Spaces CDN, but these steps can be extended to further unload the entire wp-content directory, as well as wp-includes.

      Implementing a CDN to deliver static assets is just one way to optimize your WordPress installation. Other plugins like W3 Total Cache can further speed up page loads and improve the SEO of your site. A helpful tool to measure your page load speed and improve it is Google's PageSpeed Insights. Another helpful tool that provides a waterfall breakdown of request and response times as well as suggested optimizations is Pingdom.

      To learn more about Content Delivery Networks and how they work, consult Using a CDN to Speed Up Static Content Delivery.




      Using a CDN to Speed Up Static Content Delivery


      Introduction

      Modern websites and applications must often deliver a significant amount of static content to end users. This content includes images, stylesheets, JavaScript, and video. As these static assets grow in number and size, bandwidth usage swells and page load times increase, deteriorating the browsing experience for your users and reducing your servers’ available capacity.

      To dramatically reduce page load times, improve performance, and reduce your bandwidth and infrastructure costs, you can implement a CDN, or content delivery network, to cache these assets across a set of geographically distributed servers.

      In this tutorial, we’ll provide a high-level overview of CDNs and how they work, as well as the benefits they can provide for your web applications.

      What is a CDN?

      A content delivery network is a geographically distributed group of servers optimized to deliver static content to end users. This static content can be almost any sort of data, but CDNs are most commonly used to deliver web pages and their related files, streaming video and audio, and large software packages.

      Diagram of content delivery without a CDN

      A CDN consists of multiple points of presence (PoPs) in various locations, each consisting of several edge servers that cache assets from your origin, or host server. When a user visits your website and requests static assets like images or JavaScript files, their requests are routed by the CDN to the nearest edge server, from which the content is served. If the edge server does not have the assets cached or the cached assets have expired, the CDN will fetch and cache the latest version from either another nearby CDN edge server or your origin servers. If the CDN edge does have a cache entry for your assets (which occurs the majority of the time if your website receives a moderate amount of traffic), it will return the cached copy to the end user.

      Content Delivery Network (CDN) diagram

      This allows geographically dispersed users to minimize the number of hops needed to receive static content, fetching the content directly from a nearby edge’s cache. The result is significantly decreased latencies and packet loss, faster page load times, and drastically reduced load on your origin infrastructure.

      CDN providers often offer additional features such as DDoS mitigation and rate-limiting, user analytics, and optimizations for streaming or mobile use cases at additional cost.

      How Does a CDN Work?

      When a user visits your website, they first receive a response from a DNS server containing the IP address of your host web server. Their browser then requests the web page content, which often consists of a variety of static files, such as HTML pages, CSS stylesheets, JavaScript code, and images.

      Once you roll out a CDN and offload these static assets onto CDN servers, either by “pushing” them out manually or having the CDN “pull” the assets automatically (both mechanisms are covered in the next section), you then instruct your web server to rewrite links to static content such that these links now point to files hosted by the CDN. If you’re using a CMS such as WordPress, this link rewriting can be implemented using a third-party plugin like CDN Enabler.

      Many CDNs provide support for custom domains, allowing you to create a CNAME record under your domain pointing to a CDN endpoint. Once the CDN receives a user request at this endpoint (located at the edge, much closer to the user than your backend servers), it then routes the request to the Point of Presence (PoP) located closest to the user. This PoP often consists of one or more CDN edge servers collocated at an Internet Exchange Point (IxP), essentially a data center that Internet Service Providers (ISPs) use to interconnect their networks. The CDN’s internal load balancer then routes the request to an edge server located at this PoP, which then serves the content to the user.
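
      As a quick illustration, if you created a CNAME record for a hypothetical subdomain such as cdn.example.com pointing at your CDN endpoint, you could verify the record from the command line with dig; both names here are placeholders for your own domain and endpoint:

      • dig +short cdn.example.com CNAME

      The command should print the CDN endpoint that the record points to.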

      Caching mechanisms vary across CDN providers, but generally they work as follows:

      1. When the CDN receives a first request for a static asset, such as a PNG image, it does not have the asset cached and must fetch a copy of the asset from either a nearby CDN edge server, or the origin server itself. This is known as a cache “miss,” and can usually be detected by inspecting the HTTP response header, containing X-Cache: MISS. This initial request will be slower than future requests because after completing this request the asset will have been cached at the edge.
      2. Future requests for this asset (cache “hits”), routed to this edge location, will now be served from cache, until expiry (usually set through HTTP headers). These responses will be significantly faster than the initial request, dramatically reducing latencies for users and offloading web traffic onto the CDN network. You can verify that the response was served from a CDN cache by inspecting the HTTP response header, which should now contain X-Cache: HIT (see the example after this list).
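
      You can check which case you hit by requesting any asset served through the CDN and inspecting its headers with curl. The URL below is a placeholder; substitute an asset URL from your own CDN configuration:

      • curl -sI https://cdn.example.com/assets/image.png | grep -i x-cache

      If your CDN sets the X-Cache header, the first request should report MISS and repeated requests should report HIT.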

      To learn more about how a specific CDN works and has been implemented, consult your CDN provider’s documentation.

      In the next section, we’ll introduce the two popular types of CDNs: push and pull CDNs.

      Push vs. Pull Zones

      Most CDN providers offer two ways of caching your data: pull zones and push zones.

      Pull Zones involve entering your origin server’s address, and letting the CDN automatically fetch and cache all the static resources available on your site. Pull zones are commonly used to deliver frequently updated, small to medium sized web assets like HTML, CSS, and JavaScript files. After providing the CDN with your origin server’s address, the next step is usually rewriting links to static assets such that they now point to the URL provided by the CDN. From that point onwards, the CDN will handle your users’ incoming asset requests and serve content from its geographically distributed caches and your origin as appropriate.

      To use a Push Zone, you upload your data to a designated bucket or storage location, which the CDN then pushes out to caches on its distributed fleet of edge servers. Push zones are typically used for larger, infrequently changing files, like archives, software packages, PDFs, video, and audio files.

      Benefits of Using a CDN

      Almost any site can reap the benefits provided by rolling out a CDN, but generally the core reasons for implementing one are to offload bandwidth from your origin servers onto the CDN servers, and to reduce latency for geographically distributed users.

      We’ll go through these and several of the other major advantages afforded by using a CDN below.

      Origin Offload

      If you’re nearing bandwidth capacity on your servers, offloading static assets like images, videos, CSS and JavaScript files will drastically reduce your servers’ bandwidth usage. Content delivery networks are designed and optimized for serving static content, and client requests for this content will be routed to and served by edge CDN servers. This has the added benefit of reducing load on your origin servers, as they then serve this data at a much lower frequency.

      Lower Latency for Improved User Experience

      If your user base is geographically dispersed, and a non-trivial portion of your traffic comes from a distant geographical area, a CDN can decrease latency by caching static assets on edge servers closer to your users. By reducing the distance between your users and static content, you can more quickly deliver content to your users and improve their experience by boosting page load speeds.

      These benefits are compounded for websites serving primarily bandwidth-intensive video content, where high latencies and slow loading times more directly impact user experience and content engagement.

      Manage Traffic Spikes and Avoid Downtime

      CDNs allow you to handle large traffic spikes and bursts by load balancing requests across a large, distributed network of edge servers. By offloading and caching static content on a delivery network, you can accommodate a larger number of simultaneous users with your existing infrastructure.

      For websites using a single origin server, these large traffic spikes can often overwhelm the system, causing unplanned outages and downtime. Shifting traffic onto highly available and redundant CDN infrastructure, designed to handle variable levels of web traffic, can increase the availability of your assets and content.

      Reduce Costs

      As serving static content usually makes up the majority of your bandwidth usage, offloading these assets onto a content delivery network can drastically reduce your monthly infrastructure spend. In addition to reducing bandwidth costs, a CDN can decrease server costs by reducing load on the origin servers, enabling your existing infrastructure to scale. Finally, some CDN providers offer fixed-price monthly billing, allowing you to transform your variable monthly bandwidth usage into a stable, predictable recurring spend.

      Increase Security

      Another common use case for CDNs is DDoS attack mitigation. Many CDN providers include features to monitor and filter requests to edge servers. These services analyze web traffic for suspicious patterns, blocking malicious attack traffic while continuing to allow reputable user traffic through. CDN providers usually offer a variety of DDoS mitigation services, from common attack protection at the infrastructure level (OSI layers 3 and 4), to more advanced mitigation services and rate limiting.

      In addition, most CDNs let you configure full SSL, so that you can encrypt traffic between the CDN and the end user, as well as traffic between the CDN and your origin servers, using either CDN-provided or custom SSL certificates.

      Choosing the Best Solution

      If your bottleneck is CPU load on the origin server, and not bandwidth, a CDN may not be the most appropriate solution. In this case, local caching using popular caches such as NGINX or Varnish may significantly reduce load by serving assets from system memory.

      Before rolling out a CDN, additional optimization steps — like minifying and compressing JavaScript and CSS files, and enabling web server HTTP request compression — can also have a significant impact on page load times and bandwidth usage.
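
      To check whether your web server is already compressing responses, you can request one of your own CSS or JavaScript files with an Accept-Encoding header and look at the Content-Encoding response header; the URL below is a placeholder:

      • curl -s -D - -o /dev/null -H "Accept-Encoding: gzip" https://example.com/css/style.css | grep -i content-encoding

      If compression is enabled, the output should include Content-Encoding: gzip (or br if Brotli is in use).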

      A helpful tool to measure your page load speed and improve it is Google’s PageSpeed Insights. Another helpful tool that provides a waterfall breakdown of request and response times as well as suggested optimizations is Pingdom.

      Conclusion

      A content delivery network can be a quick and effective solution for improving the scalability and availability of your web sites. By caching static assets on a geographically distributed network of optimized servers, you can greatly reduce page load times and latencies for end users. In addition, CDNs allow you to significantly reduce your bandwidth usage by absorbing user requests and responding from cache at the edge, thus lowering your bandwidth and infrastructure costs.

      With plugins and third-party support for major frameworks like WordPress, Drupal, Django, and Ruby on Rails, as well as additional features like DDoS mitigation, full SSL, user monitoring, and asset compression, CDNs can be an impactful tool for securing and optimizing high-traffic web sites.


