
      How To Build and Deploy a Node.js Application To DigitalOcean Kubernetes Using Semaphore Continuous Integration and Delivery


      The author selected the Open Internet / Free Speech fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes allows users to create resilient and scalable services with a single command. Like anything that sounds too good to be true, it has a catch: you must first prepare a suitable Docker image and thoroughly test it.

      Continuous Integration (CI) is the practice of testing the application on each update. Doing this manually is tedious and error-prone, but a CI platform runs the tests for you, catches errors early, and locates the point at which the errors were introduced. Release and deployment procedures are often complicated, time-consuming, and require a reliable build environment. With Continuous Delivery (CD) you can build and deploy your application on each update without human intervention.

      To automate the whole process, you’ll use Semaphore, a Continuous Integration and Delivery (CI/CD) platform.

      In this tutorial, you’ll build an address book API service with Node.js. The API exposes a simple RESTful API interface to create, delete, and find people in the database. You’ll use Git to push the code to GitHub. Then you’ll use Semaphore to test the application, build a Docker image, and deploy it to a DigitalOcean Kubernetes cluster. For the database, you’ll create a PostgreSQL cluster using DigitalOcean Managed Databases.

      Prerequisites

      Before reading on, ensure you have the following:

      • A DigitalOcean account and a Personal Access Token. Follow Create a Personal Access Token to set one up for your account.
      • A Docker Hub account.
      • A GitHub account.
      • A Semaphore account; you can sign up with your GitHub account.
      • A new GitHub repository called addressbook for the project. When creating the repository, select the Initialize this repository with a README checkbox and select Node in the Add .gitignore menu. Follow GitHub’s Create a Repo help page for more details.
      • Git installed on your local machine and set up to work with your GitHub account. If you are unfamiliar or need a refresher, consider reading the How to use Git reference guide.
      • curl installed on your local machine.
      • Node.js installed on your local machine. In this tutorial, you’ll use Node.js version 10.16.0.

      Step 1 — Creating the Database and the Kubernetes Cluster

      Start by provisioning the services that will power the application: the DigitalOcean Database Cluster and the DigitalOcean Kubernetes Cluster.

      Log in to your DigitalOcean account and create a project. A project lets you organize all the resources that make up the application. Call the project addressbook.

      Next, create a PostgreSQL cluster. The PostgreSQL database service will hold the application’s data. You can pick the latest version available. It should take a few minutes before the service is ready.

      Once the PostgreSQL service is ready, create a database and a user. Set the database name to addressbook_db and set the username to addressbook_user. Take note of the password that’s generated for your new user. Databases are PostgreSQL’s way of organizing data. Usually, each application has its own database, although there are no hard rules about this. The application will use the username and password to get access to the database so it can save and retrieve its data.
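
      Optionally, if you have the psql client installed on your local machine, you can verify the new credentials by connecting to the database directly. This is a hypothetical check, not required for the tutorial; substitute the host, port, and password shown in your cluster’s Connection Details panel:

      • psql "postgresql://addressbook_user:your_db_user_password@your_db_cluster_host:your_db_cluster_port/addressbook_db?sslmode=require"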

      Finally, create a Kubernetes Cluster. Choose the same region in which the database is running. Name the cluster addressbook-server and set the number of nodes to 3.

      While the nodes are provisioning, you can start building your application.

      Step 2 — Writing the Application

      Let’s build the address book application you’re going to deploy. To start, clone the GitHub repository you created in the prerequisites. This gives you a local copy of the .gitignore file GitHub generated for you, and lets you commit your application code quickly without having to create a repository manually. Open your browser and go to your new GitHub repository. Click on the Clone or download button and copy the provided URL. Use Git to clone the empty repository to your machine:

      • git clone https://github.com/your_github_username/addressbook

      Enter the project directory:

      • cd addressbook

      With the repository cloned, you can start writing the app. You’ll build two components: a module that interacts with the database, and a module that provides the HTTP service. The database module will know how to save and retrieve persons from the address book database, and the HTTP module will receive requests and respond accordingly.

      While not strictly mandatory, it’s good practice to test your code while you write it, so you’ll also create a testing module. This is the planned layout for the application:

      • database.js: database module. It handles database operations.
      • app.js: the end user module and the main application. It provides an HTTP service for the users to connect to.
      • database.test.js: tests for the database module.

      In addition, you’ll want a package.json file for your project, which describes the project and its required dependencies. You can either create it manually with your editor, or interactively using npm. Run the npm init command to create the file interactively:

      • npm init

      The command will ask for some information to get started. Fill in the values as shown in the example. Where the example shows no answer, leave the answer blank to accept the default value in parentheses:

      npm output

      package name: (addressbook) addressbook
      version: (1.0.0) 1.0.0
      description: Addressbook API and database
      entry point: (index.js) app.js
      test command:
      git repository: URL for your GitHub repository
      keywords:
      author: Sammy the Shark <sammy@example.com>
      license: (ISC)
      About to write to package.json:
      
      {
        "name": "addressbook",
        "version": "1.0.0",
        "description": "Addressbook API and database",
        "main": "app.js",
        "scripts": {
          "test": "echo \"Error: no test specified\" && exit 1"
        },
        "author": "",
        "license": "ISC"
      }
      
      Is this OK? (yes) yes

      Now you can start writing the code. The database is at the core of the service you’re developing. It’s essential to have a well-designed database model before writing any other components. Consequently, it makes sense to start with the database code.

      You don’t have to code all the bits of the application; Node.js has a large library of reusable modules. For instance, you don’t have to write any SQL queries if you have the Sequelize ORM module in the project. This module provides an interface that handles databases as JavaScript objects and methods. It can also create tables in your database. Sequelize needs the pg module to work with PostgreSQL.
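
      To get a sense of what this looks like, here is a hypothetical query written with Sequelize instead of raw SQL. The findAll() method and its where option are part of Sequelize’s query interface, and Person is the model you’ll define later in this step:

      // Roughly equivalent to: SELECT * FROM "People" WHERE "lastName" = 'Shark';
      db.Person.findAll({ where: { lastName: 'Shark' } })
          .then(people => console.log(people.length, 'matches'));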

      Install modules using the npm install command with the --save option, which tells npm to save the module in package.json. Execute this command to install both sequelize and pg:

      • npm install --save sequelize pg

      Create a new JavaScript file to hold the database code:

      • nano database.js

      Import the sequelize module by adding this line to the file:

      database.js

      const Sequelize = require('sequelize');
      
      . . .
      

      Then, below that line, initialize a sequelize object with the database connection parameters, which you’ll retrieve from the system environment. This keeps the credentials out of your code so you don’t accidentally share them when you push your code to GitHub. You can use process.env to access environment variables, and JavaScript’s || operator to set defaults for undefined variables:

      database.js

      . . .
      
      const sequelize = new Sequelize(process.env.DB_SCHEMA || 'postgres',
                                      process.env.DB_USER || 'postgres',
                                      process.env.DB_PASSWORD || '',
                                      {
                                          host: process.env.DB_HOST || 'localhost',
                                          port: process.env.DB_PORT || 5432,
                                          dialect: 'postgres',
                                          dialectOptions: {
                                              ssl: process.env.DB_SSL == "true"
                                          }
                                      });
      
      . . .
      

      Now define the Person model. To keep the example from getting too complex, you’ll only create two fields: firstName and lastName, both storing string values. Add the following code to define the model:

      database.js

      . . .
      
      const Person = sequelize.define('Person', {
          firstName: {
              type: Sequelize.STRING,
              allowNull: false
          },
          lastName: {
              type: Sequelize.STRING,
              allowNull: true
          },
      });
      
      . . .
      

      This defines the two fields, making firstName mandatory with allowNull: false. Sequelize’s model definition documentation shows the available data types and options.
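
      As a purely illustrative example of those options (this field is hypothetical and not part of this tutorial’s model), a column with a default value and a simple length validation might look like this:

      // Hypothetical field -- not used in this tutorial's Person model.
      nickname: {
          type: Sequelize.STRING,
          defaultValue: 'friend',   // used when no value is supplied
          validate: {
              len: [1, 50]          // value must be 1 to 50 characters long
          }
      }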

      Finally, export the sequelize object and the Person model so other modules can use them:

      database.js

      . . .
      
      module.exports = {
          sequelize: sequelize,
          Person: Person
      };
      

      It’s handy to have a table-creation script in a separate file that you can call at any time during development. These types of files are called migrations. Create a new file to hold this code:

      • nano migrate.js

      Add these lines to the file to import the database model you defined, and call the sync() function to initialize the database, which creates the table for your model:

      migrate.js

      var db = require('./database.js');
      db.sequelize.sync();
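
      Note that sync() only creates the table if it doesn’t already exist. During development, if you change the model and want to rebuild the table from scratch, Sequelize also accepts a force option. This is a development-only sketch; it drops the table and deletes all of its rows:

      // Development only: drops and recreates the table, destroying existing data.
      db.sequelize.sync({ force: true });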
      

      The application is looking for database connection information in system environment variables. Create a file called .env to hold those values, which you will load into the environment during development:

      • nano .env

      Add the following variable declarations to the file. Ensure that you set DB_HOST, DB_PORT, and DB_PASSWORD to those associated with your DigitalOcean PostgreSQL cluster:

      .env

      export DB_SCHEMA=addressbook_db
      export DB_USER=addressbook_user
      export DB_PASSWORD=your_db_user_password
      export DB_HOST=your_db_cluster_host
      export DB_PORT=your_db_cluster_port
      export DB_SSL=true
      export PORT=3000
      

      Save the file.

      Warning: never check environment files into source control. They usually have sensitive information.

      Since you defined a default .gitignore file when you created the repository, this file is already ignored.
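
      If you want to confirm that Git ignores the file, you can ask Git directly. When the file is ignored, git check-ignore prints the .gitignore rule that matches it:

      • git check-ignore -v .env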

      You are ready to initialize the database. Import the environment file and run migrate.js:

      • source ./.env
      • node migrate.js

      This creates the database table:

      Output

      Executing (default): CREATE TABLE IF NOT EXISTS "People" ("id" SERIAL , "firstName" VARCHAR(255) NOT NULL, "lastName" VARCHAR(255), "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, PRIMARY KEY ("id"));
      
      Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'People' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;

      The output shows two commands. The first one creates the People table as per your definition. The second command checks that the table was indeed created by looking it up in the PostgreSQL catalog.

      It’s good practice to create tests for your code. With tests, you can validate the code’s behavior. You can write a check for each function, method, or any other part of your system and verify that it works the way you’d expect, without having to test things manually.

      The jest testing framework is a great fit for writing tests against Node.js applications. Jest scans the files in the project for test files and executes them one at a time. Install Jest with the --save-dev option, which tells npm that the module is not required to run the program, but it is a dependency for developing the application:

      • npm install --save-dev jest

      You’ll write tests to verify that you can insert, read, and delete records from your database. These tests will verify that your database connection and permissions are configured properly, and will also provide some tests you can use in your CI/CD pipeline later.

      Create the database.test.js file:

      • nano database.test.js

      Add the following content. Start by importing the database code:

      database.test.js

      const db = require('./database');
      
      . . .
      

      To ensure the database is ready to use, call sync() inside the beforeAll function:

      database.test.js

      . . .
      
      beforeAll(async () => {
          await db.sequelize.sync();
      });
      
      . . .
      

      The first test creates a person record in the database. The sequelize library executes all queries asynchronously, which means it doesn’t wait for the results of the query. To make the test wait for results so you can verify them, you must use the async and await keywords. This test calls the create() method to insert a new row in the database. Use expect to compare the person.id column with 1. The test will fail if you get a different value:

      database.test.js

      . . .
      
      test('create person', async () => {
          expect.assertions(1);
          const person = await db.Person.create({
              id: 1,
              firstName: 'Sammy',
              lastName: 'Davis Jr.',
          });
          expect(person.id).toEqual(1);
      });
      
      . . .
      

      In the next test, use the findByPk() method to retrieve the row with id=1. Then, validate the firstName and lastName values. Once again, use async and await:

      database.test.js

      . . .
      
      test('get person', async () => {
          expect.assertions(2);
          const person = await db.Person.findByPk(1);
          expect(person.firstName).toEqual('Sammy');
          expect(person.lastName).toEqual('Davis Jr.');
      });
      
      . . .
      

      Finally, test removing a person from the database. The destroy() method deletes the person with id=1. To ensure that it worked, try retrieving the person a second time and checking that the returned value is null:

      database.test.js

      . . .
      
      test('delete person', async () => {
          expect.assertions(1);
          await db.Person.destroy({
              where: {
                  id: 1
              }
          });
          const person = await db.Person.findByPk(1);
          expect(person).toBeNull();
      });
      
      . . .
      

      Finally, add this code to close the connection to the database with close() once all tests have finished:

      database.test.js

      . . .
      
      afterAll(async () => {
          await db.sequelize.close();
      });
      

      Save the file.

      The jest command runs the test suite for your program, but you can also store commands in package.json. Open this file in your editor:

      • nano package.json

      Locate the scripts keyword and replace the existing test line (which was just a placeholder). The test command is jest:

      package.json
      
      . . .
      
        "scripts": {
          "test": "jest"
        },
      
      . . .
      

      Now you can call npm run test to invoke the test suite. This is a longer command than jest alone, but if you need to modify the jest command later, external services won’t have to change; they can continue calling npm run test.
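
      For example, if you later decided that Jest should run test files serially instead of in parallel (a hypothetical change, not needed for this tutorial), you would edit only the script, and every caller of npm run test would pick up the new behavior:

        "scripts": {
          "test": "jest --runInBand"
        },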

      Run the tests:

      • npm run test

      Then, check the results:

      Output

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): CREATE TABLE IF NOT EXISTS "People" ("id" SERIAL , "firstName" VARCHAR(255) NOT NULL, "lastName" VARCHAR(255), "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, PRIMARY KEY ("id"));
      
      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'People' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;
      
      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): INSERT INTO "People" ("id","firstName","lastName","createdAt","updatedAt") VALUES ($1,$2,$3,$4,$5) RETURNING *;
      
      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT "id", "firstName", "lastName", "createdAt", "updatedAt" FROM "People" AS "Person" WHERE "Person"."id" = 1;
      
      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): DELETE FROM "People" WHERE "id" = 1
      
      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT "id", "firstName", "lastName", "createdAt", "updatedAt" FROM "People" AS "Person" WHERE "Person"."id" = 1;
      
      PASS ./database.test.js
        ✓ create person (344ms)
        ✓ get person (173ms)
        ✓ delete person (323ms)
      
      Test Suites: 1 passed, 1 total
      Tests:       3 passed, 3 total
      Snapshots:   0 total
      Time:        5.315s
      Ran all test suites.

      With the database code tested, you can build the API service to manage the people in the address book.

      To serve HTTP requests, you’ll use the Express web framework. Install Express and save it as a dependency using npm install:

      • npm install --save express

      You’ll also need the body-parser module, which you’ll use to access the HTTP request body. Install this as a dependency as well:

      • npm install --save body-parser

      Create the main application file app.js:

      • nano app.js

      Import the express, body-parser, and database modules. Then create an instance of the express module called app to control and configure the service. You use app.use() to add features such as middleware. Use this to add the body-parser module so the application can read URL-encoded strings:

      app.js

      var express = require('express');
      var bodyParser = require('body-parser');
      var db = require('./database');
      var app = express();
      app.use(bodyParser.urlencoded({ extended: true }));
      
      . . .
      

      Next, add routes to the application. Routes are similar to buttons in an app or website; they trigger some action in your application. Routes link unique URLs to actions in the application. Each route will serve a specific path and support a different operation.

      The first route you’ll define handles GET requests for the /person/$ID path, which will display the database record for the person with the specified ID. Express automatically sets the value of the requested $ID in the req.params.id variable.

      The application must reply with the person data encoded as a JSON string. As you did in the database tests, use the findByPk() method to retrieve the person by id and reply to the request with HTTP status 200 (OK) and send the person record as JSON. Add the following code:

      app.js

      . . .
      
      app.get("/person/:id", function(req, res) {
          db.Person.findByPk(req.params.id)
              .then( person => {
                  res.status(200).send(JSON.stringify(person));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      Errors cause the code in catch() to execute. For instance, if the database is down, the connection will fail and the catch() branch runs instead. In case of trouble, the code sets the HTTP status to 500 (Internal Server Error) and sends the error message back to the user.

      Add another route to create a person in the database. This route will handle PUT requests and access the person’s data from the req.body. Use the create() method to insert a row in the database:

      app.js

      . . .
      
      app.put("/person", function(req, res) {
          db.Person.create({
              firstName: req.body.firstName,
              lastName: req.body.lastName,
              id: req.body.id
          })
              .then( person => {
                  res.status(200).send(JSON.stringify(person));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      Add another route to handle DELETE requests, which will remove records from the address book. First, use the ID to locate the record and then use the destroy method to remove it:

      app.js

      . . .
      
      app.delete("/person/:id", function(req, res) {
          db.Person.destroy({
              where: {
                  id: req.params.id
              }
          })
              .then( () => {
                  res.status(200).send();
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      And for convenience, add a route that retrieves all people in the database using the /all path:

      app.js

      . . .
      
      app.get("/all", function(req, res) {
          db.Person.findAll()
              .then( persons => {
                  res.status(200).send(JSON.stringify(persons));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      One last route remains: if the request did not match any of the previous routes, send status code 404 (Not Found):

      app.js

      . . .
      
      app.use(function(req, res) {
          res.status(404).send("404 - Not Found");
      });
      
      . . .
      

      Finally, add the listen() method, which starts up the service. If the environment variable PORT is defined, then the service listens on that port; otherwise, it defaults to port 3000:

      app.js

      . . .
      
      var server = app.listen(process.env.PORT || 3000, function() {
          console.log("app is running on port", server.address().port);
      });
      

      As you’ve learned, the package.json file lets you define various commands to run tests, start your apps, and other tasks, which often lets you run common commands with much less typing. Add a new command on package.json to start the application. Edit the file:

      • nano package.json

      Add the start command, so it looks like this:

      package.json

      . . .
      
        "scripts": {
          "test": "jest",
          "start": "node app.js"
        },
      
      . . .
      

      Don’t forget to add a comma to the previous line, as the scripts section needs its entries separated by commas.

      Save the file and start the application for the first time. First, load the environment file with source; this imports the variables into the session and makes them available to the application. Then, start the application with npm run start:

      • source ./.env
      • npm run start

      The app starts on port 3000:

      Output

      app is running on port 3000

      Open a browser and navigate to http://localhost:3000/all. You’ll see a page showing [].

      Switch back to your terminal and press CTRL-C to stop the application.

      Now is an excellent time to add code quality tests. Code quality tools, also known as linters, scan the project for issues in the code. Bad coding practices like leaving unused variables, not ending statements with a semicolon, or missing curly braces can cause bugs that are difficult to find.

      Install the jshint tool, a JavaScript linter, as a development dependency:

      • npm install --save-dev jshint

      Over the years, JavaScript has received numerous updates, features, and syntax changes. The language has been standardized by ECMA International under the name of “ECMAScript”. About once a year, ECMA releases a new version of ECMAScript with new features.

      By default, jshint assumes that your code is compatible with ES6 (ECMAScript Version 6), and will throw an error if it finds any keywords not supported in that version. You’ll want to find the version that is compatible with your code. If you look at the feature table for all the recent versions, you’ll find that the async/await keywords were not introduced until ES8. You used both keywords in the database test code, so that sets the minimum compatible version to ES8.

      To tell jshint the version you’re using, create a file called .jshintrc:

      • nano .jshintrc

      In the file, specify esversion. The .jshintrc file uses JSON, so create a new JSON object in the file:

      .jshintrc

      { "esversion": 8 }
      

      Save the file and exit the editor.

      Add a command to run jshint. Edit package.json:

      • nano package.json

      Add a lint command to your project in the scripts section of package.json. The command calls the lint tool against all the JavaScript files you created so far:

      package.json

      . . .
      
        "scripts": {
          "test": "jest",
          "start": "node app.js",
          "lint": "jshint app.js database*.js migrate.js"
        },
      
      . . .
      

      Now you can run the linter to find any issues:

      • npm run lint

      There should not be any error messages:

      Output

      > jshint app.js database*.js migrate.js

      If there are any errors, jshint will show the line that has the problem.
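
      For instance, if app.js were missing a semicolon, the output would look something like this (an illustrative example; the exact location and message depend on the problem jshint finds):

      Output

      app.js: line 4, col 22, Missing semicolon.

      1 error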

      You’ve completed the project and ensured it works. Add the files to the repository, commit, and push the changes:

      • git add *.js
      • git add package*.json
      • git add .jshintrc
      • git commit -m 'initial commit'
      • git push origin master

      Now you can configure Semaphore to test, build, and deploy the application, starting by configuring Semaphore with your DigitalOcean Personal Access Token and database credentials.

      Step 3 — Creating Secrets in Semaphore

      There is some information that doesn’t belong in a GitHub repository. Passwords and API tokens are good examples of this. You’ve stored this sensitive data in a separate file and loaded it into your environment. When using Semaphore, you can use Secrets to store sensitive data.

      You’ll create three secrets for this project:

      • Docker Hub: the username and password of your Docker Hub account.
      • DigitalOcean Personal Access Token: to deploy the application to your Kubernetes cluster.
      • Environment Variables: for database username and password connection parameters.

      To create the first secret, open your browser and log in to the Semaphore website. On the left navigation menu, click Secrets under the CONFIGURATION heading. Click the Create New Secret button.

      For Name of the Secret, enter dockerhub. Then under Environment Variables, create two environment variables:

      • DOCKER_USERNAME: your DockerHub username.
      • DOCKER_PASSWORD: your DockerHub password.

      Docker Hub Secret

      Click Save Changes.

      Create a second secret for your DigitalOcean Personal Access Token. Once again, click on Secrets on the left navigation menu, then on Create New Secret. Call this secret do-access-token and create an environment variable called DO_ACCESS_TOKEN with the value set to your Personal Access Token:

      DigitalOcean Token Secret

      Save the secret.

      For the next secret, instead of setting environment variables directly, you’ll upload the .env file from the project’s root.

      Create a new secret called env-production. Under the Files section, press the Upload file link to locate and upload your .env file, and tell Semaphore to place it at /home/semaphore/env-production.

      Environment Secret

      Note: Because the file is hidden, you may have trouble finding it on your computer. There is usually a menu item or a key combination to view hidden files, such as CTRL+H. If all else fails, you can try copying the file to a non-hidden name:

      • cp .env env

      Then upload the copy, and delete it afterwards:

      • rm env

      The environment variables are all configured. Now you can begin the Continuous Integration setup.

      Step 4 — Adding your Project to Semaphore

      In this step you will add your project to Semaphore and start the Continuous Integration (CI) pipeline.

      First, link your GitHub repository with Semaphore:

      1. Log in to your Semaphore account.
      2. Click the + icon next to PROJECTS.
      3. Click the Add Repository button next to your repository.

      Add Repository to Semaphore

      Now that Semaphore is connected, it will pick up any changes in the repository automatically.

      You are now ready to create the Continuous Integration pipeline for the application. A pipeline defines the path your code must travel to get built, tested, and deployed. The pipeline is automatically run each time there is a change in the GitHub repository.

      First, you should ensure that Semaphore uses the same version of Node.js you’ve been using during development. You can check which version is running on your machine:

      • node -v

      Output

      v10.16.0

      You can tell Semaphore which version of Node.js to use by creating a file called .nvmrc in your repository. Internally, Semaphore uses node version manager to switch between Node.js versions. Create the .nvmrc file and set the version to 10.16.0:

      • echo '10.16.0' > .nvmrc

      Semaphore pipelines go in the .semaphore directory. Create the directory:

      • mkdir .semaphore

      Create a new pipeline file. The initial pipeline is always called semaphore.yml. In this file, you’ll define all the steps required to build and test the application.

      • nano .semaphore/semaphore.yml

      Note: You are creating a file in the YAML format. You must preserve the leading spaces as shown in the tutorial.

      The first line must set the Semaphore file version; the current stable is v1.0. Also, the pipeline needs a name. Add these lines to your file:

      .semaphore/semaphore.yml

      version: v1.0
      name: Addressbook
      
      . . .
      

      Semaphore automatically provisions virtual machines to run the tasks. There are various machines to choose from. For the integration jobs, use the e1-standard-2 machine (2 CPUs, 4 GB RAM) along with an Ubuntu 18.04 OS image. Add these lines to the file:

      .semaphore/semaphore.yml

      . . .
      
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      Semaphore uses blocks to organize the tasks. Each block can have one or more jobs. All jobs in a block run in parallel, each one in an isolated machine. Semaphore waits for all jobs in a block to pass before starting the next one.
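
      Schematically, a block with two jobs looks like this. The following is a generic sketch to illustrate the structure, not part of this tutorial’s pipeline; both jobs would run at the same time, each on its own machine:

      blocks:
        - name: Example block
          task:
            jobs:
              - name: First job
                commands:
                  - echo "runs on one machine"
              - name: Second job
                commands:
                  - echo "runs in parallel on another"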

      Start by defining the first block, which installs all the JavaScript dependencies to test and run the application:

      .semaphore/semaphore.yml

      . . .
      
      blocks:
        - name: Install dependencies
          task:
      
      . . .
      

      You can define environment variables that are common for all jobs, like setting NODE_ENV to test, so Node.js knows this is a test environment. Add this code after task:

      .semaphore/semaphore.yml

      . . .
          task:
            env_vars:
              - name: NODE_ENV
                value: test
      
      . . .
      

      Commands in the prologue section are executed before each job in the block. It’s a convenient place to define setup tasks. You can use checkout to clone the GitHub repository. Then, nvm use activates the appropriate Node.js version you specified in .nvmrc. Add the prologue section:

      .semaphore/semaphore.yml

          task:
      . . .
      
            prologue:
              commands:
                - checkout
                - nvm use
      
      . . .
      

      Next, add this code to install the project’s dependencies. To speed up jobs, Semaphore provides the cache tool. You can run cache store to save the node_modules directory in Semaphore’s cache. cache automatically figures out which files and directories should be stored. The second time the job is executed, cache restore restores the directory.

      .semaphore/semaphore.yml

      . . .
      
            jobs:
              - name: npm install and cache
                commands:
                  - cache restore
                  - npm install
                  - cache store 
      
      . . .
      

      Add another block, which will run two jobs: one to run the lint check, and another to run the application’s test suite:

      .semaphore/semaphore.yml

      . . .
      
        - name: Tests
          task:
            env_vars:
              - name: NODE_ENV
                value: test
            prologue:
              commands:
                - checkout
                - nvm use
                - cache restore 
      
      . . .
      

      The prologue repeats the same commands as in the previous block and restores node_modules from the cache. Since this block will run tests, you set the NODE_ENV environment variable to test.

      Now add the jobs. The first job performs the code quality check with jshint:

      .semaphore/semaphore.yml

      . . .
      
            jobs:
              - name: Static test
                commands:
                  - npm run lint
      
      . . .
      

      The next job executes the unit tests. You’ll need a database to run them, as you don’t want to use your production database. Semaphore’s sem-service can start a local PostgreSQL database in the test environment that is completely isolated. The database is destroyed when the job ends. Start this service and run the tests:

      .semaphore/semaphore.yml

      . . .
      
              - name: Unit test
                commands:
                  - sem-service start postgres
                  - npm run test
      

      Save the .semaphore/semaphore.yml file.

      Now add and commit the changes to the GitHub repository:

      • git add .nvmrc
      • git add .semaphore/semaphore.yml
      • git commit -m "continuous integration pipeline"
      • git push origin master

      As soon as the code is pushed to GitHub, Semaphore starts the CI pipeline:

      Running Workflow

      You can click on the pipeline to show the blocks and jobs, and their output.

      Integration Pipeline

      Next you will create a new pipeline that builds a Docker image for the application.

      Step 5 — Building Docker Images for the Application

      A Docker image is the basic unit of a Kubernetes deployment. The image should have all the binaries, libraries, and code required to run the application. A Docker container is not a lightweight virtual machine, but it behaves like one. The Docker Hub registry contains hundreds of ready-to-use images, but we’re going to build our own.

      In this step, you’ll add a new pipeline to build a custom Docker image for your app and push it to Docker Hub.

      To build a custom image, create a Dockerfile:

      • nano Dockerfile

      The Dockerfile is a recipe to create the image. You can use the official Node.js distribution as a starting point instead of starting from scratch. Add this to your Dockerfile:

      Dockerfile

      FROM node:10.16.0-alpine
      
      . . .
      

      Then add a command which copies package.json and package-lock.json, and then installs the node modules inside the image:

      Dockerfile

      . . .
      
      COPY package*.json ./
      RUN npm install
      
      . . .
      

      Installing the dependencies first will speed up subsequent builds, as Docker will cache this step.

      Now add this command which copies all the application files in the project root into the image:

      Dockerfile

      . . .
      
      COPY *.js ./
      
      . . .
      

      Finally, EXPOSE specifies that the container listens for connections on port 3000, where the application is listening, and CMD sets the command that should run when the container starts. Add these lines to your file:

      Dockerfile

      . . .
      
      EXPOSE 3000
      CMD [ "npm", "run", "start" ]
      

      Save the file.

      With the Dockerfile complete, you can create a new pipeline so Semaphore can build the image for you when you push your code to GitHub. Create a new file called docker-build.yml:

      • nano .semaphore/docker-build.yml

      Start the pipeline with the same boilerplate as the CI pipeline, but with the name Docker build:

      .semaphore/docker-build.yml

      version: v1.0
      name: Docker build
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      This pipeline will have only one block and one job. In Step 3, you created a secret named dockerhub with your Docker Hub username and password. Here, you’ll import these values using the secrets keyword. Add this code:

      .semaphore/docker-build.yml

      . . .
      
      blocks:
        - name: Build
          task:
            secrets:
              - name: dockerhub
      
      . . .
      

      Docker images are stored in repositories. We’ll use the official Docker Hub registry, which allows an unlimited number of public images. Add these lines to check out the code from GitHub and use the docker login command to authenticate with Docker Hub:

      .semaphore/docker-build.yml

          task:
      . . .
      
            prologue:
              commands:
                - checkout
                - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
      
      . . .
      

      Each Docker image is fully identified by the combination of name and tag. The name usually corresponds to the product or software, and the tag corresponds to the particular version of the software. For example, node:10.16.0. When no tag is supplied, Docker defaults to the special latest tag. Hence, it’s considered good practice to use the latest tag to refer to the most current image.

      Add the following code to build the image and push it to Docker Hub:

      .semaphore/docker-build.yml

      . . .
      
            jobs:
            - name: Docker build
              commands:
                - docker pull "${DOCKER_USERNAME}/addressbook:latest" || true
                - docker build --cache-from "${DOCKER_USERNAME}/addressbook:latest" -t "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" .
                - docker push "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID"
      

      When Docker builds the image, it reuses parts of existing images to speed up the process. The first command tries to pull the latest image from Docker Hub so it may be reused. Semaphore stops the pipeline if any of the commands return a status code different than zero. For example, if the repository doesn’t have any latest image, as it won’t on the first try, the pipeline will stop. You can force Semaphore to ignore failed commands by appending || true to the command.

      The second command builds the image. To reference this particular image later, you can tag it with a unique string. Semaphore provides several environment variables for jobs. One of them, $SEMAPHORE_WORKFLOW_ID, is unique and shared among all the pipelines in the workflow. It’s handy for referencing this image later in the deployment.

      The third command pushes the image to Docker Hub.

      The build pipeline is ready, but Semaphore will not start it unless you connect it to the main CI pipeline. You can chain multiple pipelines to create complex, multi-branch workflows using promotions.

      Edit the main pipeline file .semaphore/semaphore.yml:

      • nano .semaphore/semaphore.yml

      Add the following lines at the end of the file:

      .semaphore/semaphore.yml

      . . .
      
      promotions:
        - name: Dockerize
          pipeline_file: docker-build.yml
          auto_promote_on:
            - result: passed
      

      auto_promote_on defines the condition to start the docker build pipeline. In this case, it runs when all jobs defined in the semaphore.yml file have passed.

      To test the new pipeline, you need to add, commit, and push all the modified files to GitHub:

      • git add Dockerfile
      • git add .semaphore/docker-build.yml
      • git add .semaphore/semaphore.yml
      • git commit -m "docker build pipeline"
      • git push origin master

      After the CI pipeline is complete, the Docker build pipeline starts.

      Build Pipeline

      When it finishes, you’ll see your new image in your Docker Hub repository.

      Your workflow now tests the code and builds the image. Next, you’ll create the final pipeline to deploy the application to your Kubernetes cluster.

      Step 6 — Setting up Continuous Deployment to Kubernetes

      The building block of a Kubernetes deployment is the pod. A pod is a group of containers that are managed as a single unit. The containers inside a pod start and stop in unison and always run on the same machine, sharing its resources. Each pod has an IP address. In this case, the pods will only have one container.

      Pods are ephemeral; they are created and destroyed frequently. You can’t tell which IP address is going to be assigned to each pod until it’s started. To solve this, you’ll use services, which have fixed public IP addresses so incoming connections can be load-balanced and forwarded to the pods.

      You could manage pods directly, but it’s better to let Kubernetes handle that by using a deployment. In this section, you will create a declarative manifest that describes the final desired state for your cluster. The manifest has two resources:

      • Deployment: starts the pods in the cluster nodes as required and keeps track of their status. Since in this tutorial we’re using a 3-node cluster, we’ll deploy 3 pods.
      • Service: acts as an entry point for our users. It listens for traffic on port 80 (HTTP) and forwards the connections to the pods.

      Create a manifest file called deployment.yml:

      • nano deployment.yml

      Start the manifest with the Deployment resource. Add the following contents to the new file to define the deployment:

      deployment.yml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: addressbook
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: addressbook
        template:
          metadata:
            labels:
              app: addressbook
          spec:
            containers:
              - name: addressbook
                image: ${DOCKER_USERNAME}/addressbook:${SEMAPHORE_WORKFLOW_ID}
                env:
                  - name: NODE_ENV
                    value: "production"
                  - name: PORT
                    value: "$PORT"
                  - name: DB_SCHEMA
                    value: "$DB_SCHEMA"
                  - name: DB_USER
                    value: "$DB_USER"
                  - name: DB_PASSWORD
                    value: "$DB_PASSWORD"
                  - name: DB_HOST
                    value: "$DB_HOST"
                  - name: DB_PORT
                    value: "$DB_PORT"
                  - name: DB_SSL
                    value: "$DB_SSL"
      
      
      . . .
      

      For each resource in the manifest, you need to set an apiVersion. For deployments, use apiVersion: apps/v1, a stable version. Then, tell Kubernetes that this resource is a Deployment with kind: Deployment. Each definition should have a name defined in metadata.name.

      In the spec section you tell Kubernetes what the desired final state is. This definition requests that Kubernetes should create 3 pods with replicas: 3.

      Labels are key-value pairs used to organize and cross-reference Kubernetes resources. You define labels with metadata.labels, and you can look for matching labels with selector.matchLabels. This is how you connect elements together.

      The key spec.template defines a model that Kubernetes will use to create each pod. Inside spec.template.metadata.labels you set one label for the pods: app: addressbook.

      With spec.selector.matchLabels you make the deployment manage any pods with the label app: addressbook. In this case you are making this deployment responsible for all the pods.

      Finally, you define the image that runs in the pods. In spec.template.spec.containers you set the image name. Kubernetes will pull the image from the registry as needed; in this case, it will pull from Docker Hub. You can also set environment variables for the containers, which is fortunate because you need to supply several values for the database connection.

      To keep the deployment manifest flexible, you’ll be relying on variables. The YAML format, however, doesn’t allow variables, so the file isn’t valid yet. You’ll solve that problem when you define the deployment pipeline for Semaphore.
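
      The deployment pipeline will use the envsubst utility (part of GNU gettext) for this expansion. As a standalone illustration of what envsubst does, assuming DB_USER is set in the environment, it reads text from standard input and replaces each $VARIABLE reference with that variable’s value:

      • export DB_USER=addressbook_user
      • echo 'user: "$DB_USER"' | envsubst

      Output

      user: "addressbook_user"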

      That’s it for the deployment. But this only defines the pods. You still need a service that will allow traffic to flow to your pods. You can add another Kubernetes resource in the same file as long as you use three hyphens (---) as a separator.

      Add the following code to define a load balancer service that connects to pods with the addressbook label:

      deployment.yml

      . . .
      
      ---
      
      apiVersion: v1
      kind: Service
      metadata:
        name: addressbook-lb
      spec:
        selector:
          app: addressbook
        type: LoadBalancer
        ports:
          - port: 80
            targetPort: 3000
      

      The load balancer will receive connections on port 80 and forward them to the pods’ port 3000 where the application is listening.

      Save the file.

      Now, create a deployment pipeline for Semaphore that will deploy the app using the manifest. Create a new file in the .semaphore directory:

      • nano .semaphore/deploy-k8s.yml

      Begin the pipeline as usual, specifying the version, name, and image:

      .semaphore/deploy-k8s.yml

      version: v1.0
      name: Deploy to Kubernetes
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      This pipeline will have two blocks. The first block deploys the application to the Kubernetes cluster.

      Define the block and import all the secrets:

      .semaphore/deploy-k8s.yml

      . . .
      
      blocks:
        - name: Deploy to Kubernetes
          task:
            secrets:
              - name: dockerhub
              - name: do-access-token
              - name: env-production
      
      . . .
      

      Store your DigitalOcean Kubernetes cluster name in an environment variable so you can reference it later:

      .semaphore/deploy-k8s.yml

      . . .
      
            env_vars:
              - name: CLUSTER_NAME
                value: addressbook-server
      
      . . .
      

      DigitalOcean Kubernetes clusters are managed with a combination of two programs: kubectl and doctl. The former is already included in Semaphore’s image, but the latter isn’t, so you need to install it. You can use the prologue section to do it.

      Add this prologue section:

      .semaphore/deploy-k8s.yml

      . . .
      
            prologue:
              commands:
                - wget https://github.com/digitalocean/doctl/releases/download/v1.20.0/doctl-1.20.0-linux-amd64.tar.gz
                - tar xf doctl-1.20.0-linux-amd64.tar.gz 
                - sudo cp doctl /usr/local/bin
                - doctl auth init --access-token $DO_ACCESS_TOKEN
                - doctl kubernetes cluster kubeconfig save "${CLUSTER_NAME}"
                - checkout
      
      . . .
      

      The first command downloads the official doctl release with wget. The next two commands decompress it with tar and copy the binary into /usr/local/bin. Once doctl is installed, it can be used to authenticate with the DigitalOcean API and request the Kubernetes config file for our cluster. The final checkout command clones the repository code, which completes the prologue.

      Next comes the final piece of our pipeline: deploying to the cluster.

      Remember that deployment.yml contains some environment variables, and YAML does not allow that. As a result, deployment.yml, in its current state, won’t work. To get around that, source the environment file to load the variables, then use the envsubst command to expand the variables in-place with the actual values. The result, a file called deploy.yml, is entirely valid YAML with the values inserted. With the file in place, you can start the deployment with kubectl apply:

      .semaphore/deploy-k8s.yml

      . . .
      
            jobs:
            - name: Deploy
              commands:
                - source $HOME/env-production
                - envsubst < deployment.yml | tee deploy.yml
                - kubectl apply -f deploy.yml
      
      . . .
      

      The second block adds the latest tag to the image on Docker Hub to denote that this is the most current version deployed. Repeat the Docker login steps, then pull, retag, and push to Docker Hub:

      .semaphore/deploy-k8s.yml

      . . .
      
        - name: Tag latest release
          task:
            secrets:
              - name: dockerhub
            prologue:
              commands:
                - checkout
                - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
            jobs:
            - name: docker tag latest
              commands:
                - docker pull "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" 
                - docker tag "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" "${DOCKER_USERNAME}/addressbook:latest"
                - docker push "${DOCKER_USERNAME}/addressbook:latest"
      

      Save the file.

      This pipeline performs the deployment, but it can only start if the Docker image was successfully generated and pushed to Docker Hub. As a result, you must connect the build and deployment pipelines with a promotion. Edit the Docker build pipeline to add it:

      • nano .semaphore/docker-build.yml

      Add the promotion to the end of the file:

      .semaphore/docker-build.yml

      . . .
      
      promotions:
        - name: Deploy to Kubernetes
          pipeline_file: deploy-k8s.yml
          auto_promote_on:
            - result: passed
      

      You are done setting up the CI/CD workflow.

      All that remains is pushing the modified files and letting Semaphore do the work. Add, commit, and push your repository’s changes:

      • git add .semaphore/deploy-k8s.yml
      • git add .semaphore/docker-build.yml
      • git add deployment.yml
      • git commit -m "kubernetes deploy pipeline"
      • git push origin master

      It’ll take a few minutes for the deployment to complete.

      Deploy Pipeline

      Let’s test the application next.

      Step 7 — Testing the Application

      At this point, the application is up and running. In this step, you’ll use curl to test the API endpoint.

      You’ll need to know the public IP that DigitalOcean has given to your cluster. Follow these steps to find it:

      1. Log in to your DigitalOcean account.
      2. Select the addressbook project.
      3. Go to Networking.
      4. Click on Load Balancers.
      5. Copy the IP address that is shown.

      Load Balancer IP
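
      Alternatively, if you have kubectl configured for this cluster on your local machine (for example, after running the same doctl kubernetes cluster kubeconfig save command the pipeline uses), you can read the same address from the EXTERNAL-IP column of the service:

      • kubectl get service addressbook-lb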

      Let’s check the /all route using curl:

      • curl -w "\n" YOUR_CLUSTER_IP/all

      The -w "\n" option makes curl print an additional newline at the end of the output, which keeps the response readable.

      Since there are no records in the database yet, you get an empty JSON array as the result:

      Output

      []

      Create a new person record by making a PUT request to the /person endpoint:

      • curl -w "\n" -X PUT \
      • -d "firstName=Sammy&lastName=the Shark" YOUR_CLUSTER_IP/person

      The API returns the JSON object for the person:

      Output

      { "id": 1, "firstName": "Sammy", "lastName": "the Shark", "updatedAt": "2019-07-04T23:51:00.548Z", "createdAt": "2019-07-04T23:51:00.548Z" }

      Create a second person:

      • curl -w "\n" -X PUT \
      • -d "firstName=Tommy&lastName=the Octopus" YOUR_CLUSTER_IP/person

      The output indicates that a second person was created:

      Output

      { "id": 2, "firstName": "Tommy", "lastName": "the Octopus", "updatedAt": "2019-07-04T23:52:08.724Z", "createdAt": "2019-07-04T23:52:08.724Z" }

      Now make a GET request to get the person with the id of 2:

      • curl -w "\n" YOUR_CLUSTER_IP/person/2

      The server replies with the data you requested:

      Output

      { "id": 2, "firstName": "Tommy", "lastName": "the Octopus", "createdAt": "2019-07-04T23:52:08.724Z", "updatedAt": "2019-07-04T23:52:08.724Z" }

      To delete the person, send a DELETE request:

      • curl -w "\n" -X DELETE YOUR_CLUSTER_IP/person/2

      No output is returned by this command.

      You should only have one person in your database, the one with the id of 1. Try getting /all again:

      • curl -w "\n" YOUR_CLUSTER_IP/all

      The server replies with an array of persons containing only one record:

      Output

      [ { "id": 1, "firstName": "Sammy", "lastName": "the Shark", "createdAt": "2019-07-04T23:51:00.548Z", "updatedAt": "2019-07-04T23:51:00.548Z" } ]

      At this point, only one person remains in the database. This completes the tests for all the endpoints in our application and marks the end of the tutorial.

      Conclusion

      In this tutorial, you wrote a complete Node.js application from scratch which used DigitalOcean’s managed PostgreSQL database service. You then used Semaphore’s CI/CD pipelines to fully automate a workflow that tested and built a container image, uploaded it to Docker Hub, and deployed it to DigitalOcean Kubernetes.

      To learn more about Kubernetes, you can read An Introduction to Kubernetes and the rest of DigitalOcean’s Kubernetes tutorials.

      Now that your application is deployed, you may consider adding a domain name, securing your database cluster, or setting up alerts for your database.




      Customizing Go Binaries with Build Tags


      Introduction

      In Go, a build tag, or a build constraint, is an identifier added to a piece of code that determines when the file should be included in a package during the build process. This allows you to build different versions of your Go application from the same source code and to toggle between them in a fast and organized manner. Many developers use build tags to improve the workflow of building cross-platform compatible applications, such as programs that require code changes to account for variances between different operating systems. Build tags are also used for integration testing, allowing you to quickly switch between the integrated code and the code with a mock service or stub, and for differing levels of feature sets within an application.

      Let’s take the problem of differing customer feature sets as an example. When writing some applications, you may want to control which features to include in the binary, such as an application that offers Free, Pro, and Enterprise levels. As the customer increases their subscription level in these applications, more features become unlocked and available. To solve this problem, you could maintain separate projects and try to keep them in sync with each other through the use of import statements. While this approach would work, over time it would become tedious and error-prone. An alternative approach would be to use build tags.

      In this article, you will use build tags in Go to generate different executable binaries that offer Free, Pro, and Enterprise feature sets of a sample application. Each will have a different set of features available, with the Free version being the default.

      Prerequisites

      To follow the example in this article, you will need:

      • Go installed on your local machine. You can set this up by following the How To Install Go and Set Up a Local Programming Environment tutorial for your operating system.

      Building the Free Version

      Let’s start by building the Free version of the application, as it will be the default when running go build without any build tags. Later on, we will use build tags to selectively add other parts to our program.

      In the src directory, create a folder with the name of your application. This tutorial will use app:

      • mkdir app

      Move into this folder:

      • cd app

      Next, create a new file named main.go using nano or your favorite text editor:

      • nano main.go

      Now, we’ll define the Free version of the application. Add in the following contents to main.go:

      main.go

      package main
      
      import "fmt"
      
      var features = []string{
        "Free Feature #1",
        "Free Feature #2",
      }
      
      func main() {
        for _, f := range features {
          fmt.Println(">", f)
        }
      }
      

      In this file, we created a program that declares a slice named features, which holds two strings that represent the features of our Free application. The main() function in the application uses a for loop to range through the features slice and print all of the features available to the screen.

      Save and exit the file. Now that this file is saved, we will no longer have to edit it for the rest of the article. Instead we will use build tags to change the features of the binaries we will build from it.

      Build and run the program:

      • go build
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1
      > Free Feature #2

      The program has printed out our two free features, completing the Free version of our app.

      So far, you created an application that has a very basic feature set. Next, you will build a way to add more features into the application at build time.

      Adding the Pro Features With go build

      We have so far avoided making changes to main.go, simulating a common production environment in which code needs to be added without changing and possibly breaking the main code. Since we can’t edit the main.go file, we’ll need to use another mechanism for injecting more features into the features slice using build tags.

      Let’s create a new file called pro.go that will use an init() function to append more features to the features slice:

      • nano pro.go

      Once the editor has opened the file, add the following lines:

      pro.go

      package main
      
      func init() {
        features = append(features,
          "Pro Feature #1",
          "Pro Feature #2",
        )
      }
      

      In this code, we used init() to run code before the main() function of our application, followed by append() to add the Pro features to the features slice. Save and exit the file.

      Compile the application using go build:

      • go build

      Since there are now two files in our current directory (pro.go and main.go), go build will create a binary from both of them. Execute this binary:

      • ./app

      This will give you the following feature set:

      Output

      > Free Feature #1
      > Free Feature #2
      > Pro Feature #1
      > Pro Feature #2

      The application now includes both the Pro and the Free features. However, this is not desirable: since there is no distinction between versions, the Free version now includes the features that are supposed to be available only in the Pro version. To fix this, you could include more code to manage the different tiers of the application, or you could use build tags to tell the Go toolchain which .go files to build and which to ignore. Let’s add build tags in the next step.

      Adding Build Tags

      You can now use build tags to distinguish the Pro version of your application from the Free version.

      Let’s start by examining what a build tag looks like:

      // +build tag_name
      

      By putting this line of code as the first line of your package and replacing tag_name with the name of your build tag, you will tag this package as code that can be selectively included in the final binary. Let’s see this in action by adding a build tag to the pro.go file to tell the go build command to ignore it unless the tag is specified. Open up the file in your text editor:

      • nano pro.go

      Then add the following highlighted line:

      pro.go

      // +build pro
      
      package main
      
      func init() {
        features = append(features,
          "Pro Feature #1",
          "Pro Feature #2",
        )
      }
      

      At the top of the pro.go file, we added // +build pro followed by a blank newline. This trailing newline is required, otherwise Go interprets this as a comment. Build tag declarations must also be at the very top of a .go file. Nothing, not even comments, can be above build tags.

      The +build declaration tells the go build command that this isn’t a comment, but instead is a build tag. The second part is the pro tag. By adding this tag at the top of the pro.go file, the go build command will now only include the pro.go file when the pro tag is present.

      Compile and run the application again:

      • go build
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1
      > Free Feature #2

      Since the pro.go file requires a pro tag to be present, the file is ignored and the application compiles without it.

      When running the go build command, we can use the -tags flag to conditionally include code in the compiled source by adding the tag itself as an argument. Let’s do this for the pro tag:

      • go build -tags pro
      • ./app

      This will output the following:

      Output

      > Free Feature #1
      > Free Feature #2
      > Pro Feature #1
      > Pro Feature #2

      Now we only get the extra features when we build the application using the pro build tag.

      This is fine if there are only two versions, but things get complicated when you add in more tags. To add in the Enterprise version of our app in the next step, we will use multiple build tags joined together with Boolean logic.

      Build Tag Boolean Logic

      When there are multiple build tags in a Go package, the tags interact with each other using Boolean logic. To demonstrate this, we will add the Enterprise level of our application using both the pro tag and the enterprise tag.

      In order to build an Enterprise binary, we will need to include the default features, the Pro-level features, and a new set of features for Enterprise. First, open an editor and create a new file, enterprise.go, that will add the new Enterprise features:

      • nano enterprise.go

      The contents of enterprise.go will look almost identical to pro.go but will contain new features. Add the following lines to the file:

      enterprise.go

      package main
      
      func init() {
        features = append(features,
          "Enterprise Feature #1",
          "Enterprise Feature #2",
        )
      }
      

      Save and exit the file.

      Currently the enterprise.go file does not have any build tags, and as you learned when you added pro.go, this means that these features will be added to the Free version when executing go build. For pro.go, you added // +build pro and a newline to the top of the file to tell go build that it should only be included when -tags pro is used. In that situation, you only needed one build tag to accomplish the goal. When adding the new Enterprise features, however, the Pro features must also be present.

      Let’s add support for the pro build tag to enterprise.go first. Open the file with your text editor:

      • nano enterprise.go

      Next add the build tag before the package main declaration and make sure to include a newline after the build tag:

      enterprise.go

      // +build pro
      
      package main
      
      func init() {
        features = append(features,
          "Enterprise Feature #1",
          "Enterprise Feature #2",
        )
      }
      

      Save and exit the file.

      Compile and run the application without any tags:

      • go build
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1
      > Free Feature #2

      The Enterprise features no longer show up in the Free version. Now let’s add the pro build tag and build and run the application again:

      • go build -tags pro
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1
      > Free Feature #2
      > Enterprise Feature #1
      > Enterprise Feature #2
      > Pro Feature #1
      > Pro Feature #2

      This is still not exactly what we need: the Enterprise features now show up when we try to build the Pro version. (Both files now carry the pro tag, and since go build processes files in alphabetical order, the init() function in enterprise.go runs before the one in pro.go, which is why the Enterprise features print first.) To solve this, we need to use another build tag. Unlike the pro tag, however, we now need to make sure both the pro and enterprise features are available.

      The Go build system accounts for this situation by allowing the use of some basic Boolean logic in the build tags system.

      Let’s open enterprise.go again:

      • nano enterprise.go

      Add another build tag, enterprise, on the same line as the pro tag:

      enterprise.go

      // +build pro enterprise
      
      package main
      
      func init() {
        features = append(features,
          "Enterprise Feature #1",
          "Enterprise Feature #2",
        )
      }
      

      Save and close the file.

      Now let’s compile and run the application with the new enterprise build tag.

      • go build -tags enterprise
      • ./app

      This will give the following:

      Output

      > Free Feature #1
      > Free Feature #2
      > Enterprise Feature #1
      > Enterprise Feature #2

      Now we have lost the Pro features. This is because when we put multiple build tags on the same line in a .go file, go build interprets them as using OR logic. With the addition of the line // +build pro enterprise, the enterprise.go file will be built if either the pro build tag or the enterprise build tag is present. We need to set up the build tags correctly to require both and use AND logic instead.

      Instead of putting both tags on the same line, if we put them on separate lines, then go build will interpret those tags using AND logic.

      Open enterprise.go once again and separate the build tags onto multiple lines:

      • nano enterprise.go

      enterprise.go

      // +build pro
      // +build enterprise
      
      package main
      
      func init() {
        features = append(features,
          "Enterprise Feature #1",
          "Enterprise Feature #2",
        )
      }
      

      Now compile and run the application with the new enterprise build tag.

      • go build -tags enterprise
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1
      > Free Feature #2

      Still not quite there: because an AND statement requires both elements to be true, we need to supply both the pro and enterprise build tags.

      Let’s try again:

      • go build -tags "enterprise pro"
      • ./app

      You’ll receive the following output:

      Output

      > Free Feature #1
      > Free Feature #2
      > Enterprise Feature #1
      > Enterprise Feature #2
      > Pro Feature #1
      > Pro Feature #2

      Now our application can be built from the same source tree in multiple ways, unlocking the features of the application accordingly.

      In this example, we used a new // +build tag to signify AND logic, but there are alternative ways to represent Boolean logic with build tags. The following table holds some examples of other syntactic formatting for build tags, along with their Boolean equivalent:

      Build Tag Syntax             Build Tag Sample            Boolean Statement
      Space-separated elements     // +build pro enterprise    pro OR enterprise
      Comma-separated elements     // +build pro,enterprise    pro AND enterprise
      Exclamation point elements   // +build !pro              NOT pro
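      As an illustration of the comma and exclamation point syntax, the following hypothetical file (not used elsewhere in this tutorial) would be compiled only when the pro tag is set and the enterprise tag is not:

      // +build pro,!enterprise
      
      package main
      
      // This file is included by builds such as `go build -tags pro`,
      // and skipped whenever the enterprise tag is also supplied.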

      Conclusion

      In this tutorial, you used build tags to control which of your code got compiled into the binary. First, you declared build tags and used them with go build, then you combined multiple tags with Boolean logic. You then built a program that represented the different feature sets of a Free, Pro, and Enterprise version, showing the powerful level of control that build tags can give you over your project.

      If you’d like to learn more about build tags, take a look at the Golang documentation on the subject, or continue to explore our How To Code in Go series.




      How To Build and Deploy Packages for Your FreeBSD Servers Using Buildbot and Poudriere


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      The FreeBSD ports and packages collection, hereafter called ports tree, is FreeBSD’s build system for external software. It offers a Makefile-based, consistent way of building packages. A port refers to the build recipe, that is, the Makefile and related files, while a package is the output of building one port into a binary (compressed) archive of the package files and its meta information.

      Manually building and installing a subset or all of the over 30,000 ports is possible with make install. However, the builds would run on one of your servers—not a clean environment. For production use cases, manual builds would also mean that each host needs the same revision of the ports tree, and needs to compile all packages for itself. This means repeated, error-prone work by humans and the servers. It is preferable to retrieve and use identical, pre-built binary packages on each host and serve them from a central, secure package repository.
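      For reference, a manual build of a single port from a checked-out tree looks like the following, using shells/bash purely as an example and assuming the tree lives at /usr/ports:

      • cd /usr/ports/shells/bash
      • make install clean

      Every host would have to repeat this compilation work, which is exactly what a central package repository avoids.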

      To achieve this, Poudriere is the standard tool on FreeBSD to build, test, and audit packages as well as maintain the package repositories. Each build is run isolated in a fresh jail, running the desired version of FreeBSD, and starting with no packages installed. Only the base system, plus any explicitly specified dependencies, are available to the clean build. Poudriere takes care of rebuilding packages when necessary as well as updating the package repository after a build has finished. The poudriere command line tool is central to administering different ports trees, FreeBSD versions, port build options, and lastly, running the builds.

      In this tutorial you’ll configure Poudriere, build a set of desired packages, set up HTTP-based package hosting, and automate the build using Buildbot as a continuous integration platform. Finally, you will securely access the packages from a client machine.

      Note: To cover production-like use cases, the tutorial examples use the quarterly stable branches of the ports tree. Staying on one such branch protects you from breaking changes and provides security and build fixes where necessary—if you regularly update the tree from upstream (Subversion, or its GitHub mirror). You can choose to stay on one branch for an extended period of time, depending on the pace at which your system updates can be handled by developer/infrastructure teams. The ports collection supports FreeBSD releases until they become end-of-life (EOL)—see Supported FreeBSD releases—so that OS and package updates can be handled independently. Alternatively, you could consider a local version control repository cloned from the upstream tree. That way, you can manage patches and only merge upstream changes at the time you desire.

      Prerequisites

      Before you begin this guide, you will need:

      • A server running FreeBSD 11.2. If you’re new to working with FreeBSD, you may find it helpful to customize this server by following our guide on How to Get Started with FreeBSD.
        Note: FreeBSD 12.0 currently has an issue with nested jails, which first needs to be fixed before 12.x can be used for this tutorial.
      • 10 GB free disk space or more to have enough capacity to store packages and logs.
      • A basic Buildbot setup by completing the How To Set Up Buildbot on FreeBSD tutorial.
      • Another server running FreeBSD, the same version, which you are going to use as a client to fetch and install the packages that you’re going to automatically build and host in a HTTP/HTTPS-based package repository.

      Step 1 — Installing Poudriere for Use in Buildbot Worker

      After completing the prerequisite tutorial, you’ll have a working Buildbot master and worker jail plus Nginx setup. You will build upon this existing setup in the following steps. In this first step, you’re going to install the build tool Poudriere inside the worker jail, since that is where the Buildbot worker process will trigger builds later on.

      Connect to your server hosting Buildbot and open a root shell in the worker jail with the following command:

      • sudo jexec buildbot-worker0 csh

      Install Poudriere as a package:

      • pkg install poudriere

      Then confirm installation by pressing y and then ENTER.

      Note: It is preferable to use the official FreeBSD package repository for installing Buildbot, Poudriere, and so on. If you build those tool packages yourself, you start off in a chicken-and-egg situation: wanting to install external software, but requiring Poudriere installed to get cleanly built packages. Since Poudriere is a very stable and backward-compatible tool, there is no harm in updating it regularly and independently from your production packages.

      If you followed the prerequisite tutorial, this is already the case and you can continue without following this note.

      You’ve successfully installed the latest Poudriere tool and dependencies. In the next several steps, you will go through preparations to configure Poudriere.

      Step 2 — Creating a Package Signing Key (Optional)

      It’s recommended to set up digital signatures for built packages in order to provide more security. Skip this step if you want to secure your installation later, or in a different way. Otherwise, let’s go ahead and create a key pair used to sign packages (using the private key) and verify packages (using the public part).

      Packages, by default, are built as .txz files, which are strongly compressed tarballs of the package contents. The compressed files’ checksums, together with serving the files via HTTP/HTTPS (TCP checksums), already provide some protection against corrupted data. Package contents typically comprise files and directories plus meta information such as the package name, version, and miscellaneous options. Files may even include setuid-able programs (as seen in the sudo package—though sudo is not built into FreeBSD), and the installation-time scripts run as root user. Installing from unverified sources therefore poses a security risk.

      Serving the packages over HTTPS alone cannot detect whether someone tampered with the packages on disk. Integrity and authenticity of your packages can be ensured by configuring Poudriere to sign the package repository with an RSA private key. Signed digests and the corresponding public key are thereby stored in the package repository’s digests.txz file. The required key pair (RSA private and public key) can be kept unchanged for a long time unless the private key was lost or compromised.

      In this step you’ll create the key pair where the builds run (worker jail) and download the public part for later use on package clients (discussed in a later step).

      Ensure you’re still in the worker jail root shell.

      Create a new RSA private key:

      • openssl genrsa -out /usr/local/etc/poudriere.key 4096

      The private key file only needs to be accessible by root—the user that runs Poudriere. Protect its access permissions:

      • chmod 0600 /usr/local/etc/poudriere.key

      Later, you’ll need the public key part available on clients for verifying package signatures. Let’s extract the public key now:

      • openssl rsa -in /usr/local/etc/poudriere.key -pubout -out /tmp/poudriere.pub

      Lastly, download the public key file from your own computer:

      • scp your-server:/usr/jails/buildbot-worker0/tmp/poudriere.pub /tmp/poudriere.pub

      This concludes the optional creation of a key pair for package signing. You will later configure the actual signing with Poudriere and use the downloaded public key file on clients for the verification.

      Another optional step follows: if you use the ZFS filesystem, Poudriere can make use of it to speed up builds. Otherwise, you can skip to Step 4 to configure Poudriere in order to get ready for running the first build.

      Step 3 — Setting Up ZFS (Optional)

      This step only applies if you run a FreeBSD system on top of the ZFS filesystem. For instance if you’re using a DigitalOcean Droplet the image is labeled 11.2 x64 zfs (for FreeBSD 11.2). In this step, you’re going to create the filesystems that Poudriere can use to create and manage jails faster, potentially speeding up your builds.

      You can find out whether you’re using ZFS by listing pools. Make sure you’re on the server’s shell, not inside a jail.

      Run the following command to list the zpools:

      • zpool list

      If any pool is available, it will print information about it:

      Output

      NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
      zroot   148G  94.4G  54.1G        -         -    66%    63%  1.00x  ONLINE  -

      Otherwise, if ZFS support is not available, the tool will print no pools available or failed to initialize ZFS library. This means your system is not using ZFS; in this case, skip to the next step. If you have decided to use another disk or storage type, such as the UFS filesystem, you can also move on to the next step.

      If you plan to use ZFS, remember the printed pool name on which you want to store build-related data. You should plan for several gigabytes of storage.

      ZFS is helpful for separating the various datasets of Poudriere, like build jails, ports trees, logs, packages, and other data. These are stored independently and as a result can be deleted quickly and reliably, freeing the space and leaving no traces behind.

      For Poudriere to make use of ZFS, you need to do three things: create a parent ZFS dataset, allow for the creation and deletion of ZFS datasets (which the Buildbot worker jail, or any other jail, by default cannot do), and edit Poudriere’s configuration accordingly.

      In the prerequisite tutorial, you configured the Buildbot worker jail in /etc/jail.buildbot-worker0.conf. Open this file with your preferred text editor and add the following highlighted lines to delegate a parent dataset to allow the jail to administer ZFS datasets beneath the parent. Remember to replace zroot with your desired pool name:

      • sudo ee /etc/jail.buildbot-worker0.conf

      /etc/jail.buildbot-worker0.conf

      buildbot-worker0 {
          host.hostname = buildbot-worker0.localdomain;
          ip4.addr = "lo1|10.0.0.3/24";
          path = "/usr/jails/buildbot-worker0";
          exec.start = "/bin/sh /etc/rc";
          exec.stop = "/bin/sh /etc/rc.shutdown";
          mount.devfs; # need /dev/*random for Python
          persist;
      
          exec.poststart = "/sbin/zfs jail buildbot-worker0 zroot/pdr/w0";
      }
      

      In this article we will store build-related data on the ZFS pool zroot—please adapt this ZFS-related configuration here and throughout the rest of the article if you chose a pool of a different name.

      After adding this content, save and exit the editor. If you’re using ee, do this by pressing CTRL+C, typing exit, and pressing ENTER.

      Create the parent ZFS dataset mentioned in the configuration file:

      • sudo zfs create zroot/pdr
      • sudo zfs create zroot/pdr/w0

      This deliberately assumes that you may want to add more workers in the future and therefore creates a sub-dataset for your first worker. The dataset name is short on purpose, since older versions of FreeBSD (before 12.0) had a mount name limit of 88 characters.

      In order for a jail to take control of a parent dataset and administer any children, the dataset must be marked with the following flag:

      • sudo zfs set jailed=on zroot/pdr/w0

      With the preconditions now met, the jail will start correctly with the new configuration:

      • sudo service jail restart buildbot-worker0
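      Optionally, you can confirm that the jail can now see and administer the delegated dataset by listing it from inside the jail:

      • sudo jexec buildbot-worker0 zfs list zroot/pdr/w0

      If the delegation is in effect, this prints the dataset; an error would suggest that the jailed property or the exec.poststart line has not been applied.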

      With these instructions, you successfully created the required filesystems—ZFS datasets—and allowed the jail to manage the parent dataset. In the next step, you will configure Poudriere, which involves specifying the chosen zpool and dataset used to store build-related data.

      Step 4 — Configuring Poudriere, the Build Jail, and the Ports Tree

      Until this point, you’ve installed Poudriere and optionally covered requirements for package signing and ZFS. For Poudriere to be able to run in a “jailed” fashion—that is, functioning correctly from within the Buildbot worker jail—you need to provide certain permissions to the jail. For example, if you use ZFS, you have already delegated a parent dataset for use and administration by the jail.

      Let’s first configure the loopback IP and all of the permissions, and then step through the respective meaning following the changes.

      Poudriere wants to start two build jails per build: one with loopback-only networking and one with internet access. Only build stages that are supposed to reach the internet will use the latter. For example, the fetch may download source tarballs, but the build phase is not allowed internet access. The existing configuration of the worker jail has ip4.addr = "lo1|10.0.0.3/24" that allows internet access. In order to allow Poudriere to assign a loopback address to freshly started build jails, the IP must also be passed to its parent (the worker jail). For this to work, please ensure you have applied the latest version of the firewall configuration file /usr/local/etc/ipfw.rules from the prerequisite tutorial, which will block the loopback interface lo0 from opening outgoing connections through NAT.

      Add the highlighted lines to your worker jail configuration:

      • sudo ee /etc/jail.buildbot-worker0.conf

      /etc/jail.buildbot-worker0.conf

      buildbot-worker0 {
          host.hostname = buildbot-worker0.localdomain;
          ip4.addr = "lo1|10.0.0.3/24";
          ip4.addr += "lo0|127.0.0.3";
          path = "/usr/jails/buildbot-worker0";
          exec.start = "/bin/sh /etc/rc";
          exec.stop = "/bin/sh /etc/rc.shutdown";
          mount.devfs; # need /dev/*random for Python
          persist;
      
          # If you followed the ZFS setup step, you have this line
          # already (keep it). For non-ZFS setup, this line must be absent.
          exec.poststart = "/sbin/zfs jail buildbot-worker0 zroot/pdr/w0";
      
          allow.chflags;
          allow.mount;
          allow.mount.devfs;
          allow.mount.nullfs;
          allow.mount.procfs;
          allow.mount.tmpfs;
          allow.mount.zfs; # only needed if you use ZFS
          allow.raw_sockets; # optional
          allow.socket_af; # optional
          allow.sysvipc; # optional
          children.max=16;
          enforce_statfs=1;
      }
      

      Here you’ve added the following (also see the jail(8) manpage):

      • ip4.addr += "lo0|127.0.0.3" adds another IPv4 address to the jail. You will later configure Poudriere’s LOIP4 variable in order to assign this loopback address to build jails that are not supposed to talk to the internet or other machines in your network, such as during the build phase. If you ever have a build that requires internet access during build, Poudriere supports a variable ALLOW_NETWORKING_PACKAGES as a workaround. However, it is preferable to follow best practice and perform downloads, and other internet-facing tasks earlier, in the fetch phase for which Poudriere permits internet access.
      • allow.chflags allows Poudriere to render certain system files like /bin/sh immutable in the build jail.
      • allow.mount and the other allow.mount.* options enable Poudriere to mount certain required filesystems into the build jails.
      • allow.raw_sockets which permits use of raw sockets, and allow.socket_af which permits use of any socket address family, are both applied to the internet-capable build jails. This is helpful so that you can run tools like ping in interactive mode, like when entering a build jail to debug problems.
      • allow.sysvipc is deprecated in favor of three separate settings sysvmsg/sysvsem/sysvshm to restrict jails to only see their own shared memory objects (via “SYS V” IPC primitives). However, Poudriere can only pass on allow.sysvipc to build jails because it cannot read the relevant sysctl information for the three separate parameters (as of FreeBSD 11.2). With this deprecated configuration, the jail could read shared memory of processes outside the jail. This is only relevant for certain software that depends on IPC features, like PostgreSQL, so chances are small for this to affect security. You can remove this configuration unless you depend on a port that requires it during build.
      • children.max=16 allows 16 sub-jails below the worker jail. You can raise this number later if you have a lot of CPUs and Poudriere tries to create more build jails than permitted. Each Poudriere build will try to create a reference jail and two build jails per “job”, and its default is to use the number of CPUs (as output by sysctl -n hw.ncpu) as the job count.
      • enforce_statfs=1 is required together with allow.mount in order to mount certain filesystems.

      Save and exit the configuration file.

      Restart the jail for its configuration to take effect immediately:

      • sudo service jail restart buildbot-worker0

      The respective kernel modules must be loaded so that Poudriere can perform mounts. Run the following commands to load the modules at boot time and immediately:

      • sudo sysrc -f /boot/loader.conf nullfs_load=YES
      • sudo kldload -n nullfs
      • sudo sysrc -f /boot/loader.conf tmpfs_load=YES
      • sudo kldload -n tmpfs

      You already installed the Poudriere package earlier, which has copied the sample file /usr/local/etc/poudriere.conf.sample to /usr/local/etc/poudriere.conf. Next, you will make edits to the configuration file. All possible configuration variables already exist in the sample, so uncomment or adapt the respective line in the file to set a variable to a certain value.

      For the following commands, please ensure you are still in a root shell in the worker jail:

      • sudo jexec buildbot-worker0 csh

      Open the file with the following command:

      • ee /usr/local/etc/poudriere.conf

      If you have decided to use ZFS, please fill in your desired zpool and parent dataset:

      /usr/local/etc/poudriere.conf (snippet)

      . . .
      # Poudriere can optionally use ZFS for its ports/jail storage. For
      # ZFS define ZPOOL, otherwise set NO_ZFS=yes
      #
      #### ZFS
      # The pool where poudriere will create all the filesystems it needs
      # poudriere will use ${ZPOOL}/${ZROOTFS} as its root
      #
      # You need at least 7GB of free space in this pool to have a working
      # poudriere.
      #
      ZPOOL=zroot
      
      ### NO ZFS
      # To not use ZFS, define NO_ZFS=yes
      #NO_ZFS=yes
      
      # root of the poudriere zfs filesystem, by default /poudriere
      ZROOTFS=/pdr/w0
      . . .
      

      Otherwise, if you decided against ZFS, please disable ZFS support:

      /usr/local/etc/poudriere.conf (snippet)

      . . .
      # Poudriere can optionally use ZFS for its ports/jail storage. For
      # ZFS define ZPOOL, otherwise set NO_ZFS=yes
      #
      #### ZFS
      # The pool where poudriere will create all the filesystems it needs
      # poudriere will use ${ZPOOL}/${ZROOTFS} as its root
      #
      # You need at least 7GB of free space in this pool to have a working
      # poudriere.
      #
      #ZPOOL=zroot
      
      ### NO ZFS
      # To not use ZFS, define NO_ZFS=yes
      NO_ZFS=yes
      
      # root of the poudriere zfs filesystem, by default /poudriere
      # ZROOTFS=/poudriere
      . . .
      

      You will later instruct Poudriere to download a FreeBSD base system and thereby bootstrap the first build jail. This requires specifying a download host, so add the following highlighted line:

      /usr/local/etc/poudriere.conf (snippet)

      . . .
      # the host where to download sets for the jails setup
      # You can specify here a host or an IP
      # replace _PROTO_ by http or ftp
      # replace _CHANGE_THIS_ by the hostname of the mirrors where you want to fetch
      # by default: ftp://ftp.freebsd.org
      #
      # Also note that every protocols supported by fetch(1) are supported here, even
      # file:///
      # Suggested: https://download.FreeBSD.org
      FREEBSD_HOST=https://download.FreeBSD.org
      

      Since Poudriere will run jailed, the mount name limit of 88 characters of FreeBSD versions before 12.0 is especially harmful, as the full path of the jail /usr/jails/buildbot-worker0 is part of each mount path. Exceeding the limit would fatally break the builds, so let’s take good care to reduce path lengths. Instead of the typical directory /usr/local/poudriere, you can use /pdr like the following:

      /usr/local/etc/poudriere.conf (snippet)

      . . .
      # The directory where poudriere will store jails and ports
      BASEFS=/pdr
      

      Now, create that directory:

      • mkdir /pdr

      Switch again to your editor of poudriere.conf:

      • ee /usr/local/etc/poudriere.conf

      Poudriere will mount a central directory for dist files (the source code tarballs for each port) while running builds so that all builders share the same cache. The default directory is:

      /usr/local/etc/poudriere.conf (snippet)

      . . .
      # If set the given directory will be used for the distfiles
      # This allows to share the distfiles between jails and ports tree
      # If this is "no", poudriere must be supplied a ports tree that already has
      # the required distfiles.
      DISTFILES_CACHE=/usr/ports/distfiles
      

      Now, create that directory:

      • mkdir -p /usr/ports/distfiles

      If you followed Step 2 and created a package repository signing key, please enter the editor again and specify it:

      • ee /usr/local/etc/poudriere.conf

      /usr/local/etc/poudriere.conf (snippet)

      . . .
      # Path to the RSA key to sign the PKG repo with. See pkg-repo(8)
      PKG_REPO_SIGNING_KEY=/usr/local/etc/poudriere.key
      

      Builds will run much faster if you cache C/C++ compiler and linker outputs for next time. The ports tree supports this directly by leveraging the tool ccache. Please enable it and create the respective cache directory if you can spare at least 5GB more space (the default cache size):

      /usr/local/etc/poudriere.conf (snippet)

      . . .
      # ccache support. Supply the path to your ccache cache directory.
      # It will be mounted into the jail and be shared among all jails.
      # It is recommended that extra ccache configuration be done with
      # ccache -o rather than from the environment.
      CCACHE_DIR=/var/cache/ccache
      

      Now, create that directory:

      • mkdir -p /var/cache/ccache

      Building and running Linux software is uncommon, so disable it until needed:

      • ee /usr/local/etc/poudriere.conf

      /usr/local/etc/poudriere.conf (snippet)

      . . .
      # Disable linux support
      NOLINUX=yes
      

      The jails should get a loopback address assigned, or Poudriere will warn about it. We can inherit the jail’s IP because it is on a loopback-only network interface (lo1). For this, please add the following line to the end of the configuration file:

      /usr/local/etc/poudriere.conf (snippet)

      LOIP4=127.0.0.3
      

      Save and exit the configuration file.

      For working builds, we need two more resources: a FreeBSD base system to use as the build jail template and an up-to-date ports tree. Choose the FreeBSD version you are targeting. In this tutorial, we will tell Poudriere to download FreeBSD 11.2 for the amd64 architecture. You can name the jail whatever you like, but a consistent naming scheme like 112amd64 is recommended. Also keep in mind the choice between quarterly, stable ports tree branches (here, we use 2019Q2) and the bleeding edge “head” branch, which might lead to breaking builds after updates every now and then. Note that the build jail cannot run a FreeBSD version newer than the host’s.

      Download and create the build jail:

      • poudriere jail -c -j 112amd64 -v 11.2-RELEASE -a amd64

      Lastly, let’s download the ports tree. The default download method is portsnap, which uses compressed snapshots of the tree without history information. Either Subversion or Git is preferable for merging upstream changes or contributing back. This is also important if you want to use a custom, self-hosted tree in a version control system. In the following command, please fill in the current year and quarter.

      If you want to start with the upstream, official ports tree:

      • poudriere ports -c -p 2019Q2 -m svn+https -B branches/2019Q2

      The method svn+https would sync from the FreeBSD Subversion host (viewable online here). If you plan to use an alternative source, read the following note, otherwise skip it.

      Note: As an alternative, the method git clones the tree from the mirror on GitHub by default.

      To use the “head” branch, replace the last parameter with -B head (for Subversion) or -B master (for Git).

      If you prefer to use your own Git repository, you will have to explicitly specify your repository URL and branch name. Let’s assume you want to name your tree customtree and use the branch custom:

      • poudriere ports -c -p customtree -m git -B custom -U https://github.com/AndiDog/freebsd-ports.git

      The example URL points to a fork of freebsd-ports on GitHub, but could be any Git or other supported type of repository to which the CI server has access.

      Available trees can be listed with poudriere ports -l, which outputs a listing like:

      Output

      PORTSTREE METHOD    TIMESTAMP           PATH
      2019Q2    svn+https 2019-04-20 19:23:19 /pdr/ports/2019Q2

      You’re now done setting up Poudriere’s configuration and resources. You’ve configured Poudriere with the required data to trigger the first builds and enabled the jail to create subjails. Next, you’re going to run the first build manually to verify that the setup is working.

      Step 5 — Running a Manual Test Build

      You can use the command poudriere bulk to build one or more packages along with all of their dependencies. After the first build of a package, Poudriere also automatically detects whether a rebuild is necessary, or otherwise leaves the existing package file untouched. While the bulk subcommand only builds packages, running a build using poudriere testport would also test the specified ports using the definition of “testing” given in the port’s Makefile. For the scope of this article, we’re only interested in providing packages for installation on clients, so we are using bulk builds.

      Ensure you’re still in a root shell of the worker jail where you have installed Poudriere. Later on, this will also be where the Buildbot worker process will run builds automatically.

      Run the build, filling in the placeholders with the build jail name and ports tree name you chose earlier:

      • poudriere bulk -j 112amd64 -p 2019Q2 ports-mgmt/pkg

      This builds the port ports-mgmt/pkg. Ports in the official tree are stored in a <category>/<name> hierarchy, and those paths (called package origins) are used to tell Poudriere which packages should be built. To start, we have chosen to build only the package manager pkg, which does not have any third-party dependencies and is therefore a good, quick check of the configuration. If everything runs fine, you’ll see output like this:

      Output

      [00:00:00] Creating the reference jail... done
      [00:00:06] Mounting system devices for 112amd64-2019Q2
      [00:00:06] Mounting ports/packages/distfiles
      [00:00:06] Using packages from previously failed build
      [00:00:06] Mounting ccache from: /var/cache/ccache
      [00:00:06] Mounting packages from: /pdr/data/packages/112amd64-2019Q2
      /etc/resolv.conf -> /pdr/data/.m/112amd64-2019Q2/ref/etc/resolv.conf
      [00:00:06] Starting jail 112amd64-2019Q2
      [00:00:07] Logs: /pdr/data/logs/bulk/112amd64-2019Q2/2019-04-20_19h35m00s
      [00:00:07] Loading MOVED for /pdr/data/.m/112amd64-2019Q2/ref/usr/ports
      [00:00:08] Ports supports: FLAVORS SELECTED_OPTIONS
      [00:00:08] Gathering ports metadata
      [00:00:08] Calculating ports order and dependencies
      [00:00:08] pkg package missing, skipping sanity
      [00:00:08] Skipping incremental rebuild and repository sanity checks
      [00:00:08] Cleaning the build queue
      [00:00:08] Sanity checking build queue
      [00:00:08] Processing PRIORITY_BOOST
      [00:00:08] Balancing pool
      [00:00:08] Recording filesystem state for prepkg... done
      [00:00:08] Building 1 packages using 1 builders
      [00:00:08] Starting/Cloning builders
      [00:00:14] Hit CTRL+t at any time to see build progress and stats
      [00:00:14] [01] [00:00:00] Building ports-mgmt/pkg | pkg-1.10.5_5
      [00:03:24] [01] [00:03:10] Finished ports-mgmt/pkg | pkg-1.10.5_5: Success
      [00:03:25] Stopping 1 builders
      [00:03:25] Creating pkg repository
      Creating repository in /tmp/packages: 100%
      Packing files for repository: 100%
      [00:03:25] Committing packages to repository
      [00:03:25] Removing old packages
      [00:03:25] Built ports: ports-mgmt/pkg
      [112amd64-2019Q2] [2019-04-20_19h35m00s] [committing:] Queued: 1 Built: 1 Failed: 0 Skipped: 0 Ignored: 0 Tobuild: 0  Time: 00:03:18
      [00:03:25] Logs: /pdr/data/logs/bulk/112amd64-2019Q2/2019-04-20_19h35m00s
      [00:03:25] Cleaning up
      [00:03:25] Unmounting file systems

      This output shows where packages will go after the build, and from where existing packages are taken if they don’t need a rebuild (here: /pdr/data/packages/112amd64-2019Q2). Also, the output shows an overview of running builds while Poudriere runs (you can press CTRL+T in an interactive shell to print the progress). In the final summary you’ll see that one package was built. You can view verbose build output in the log directory (/pdr/data/logs/bulk/112amd64-2019Q2/*).

      This output confirms a successful build. If Poudriere has built at least one package successfully, it will automatically commit it to the package repository. This means that packages are only available after all builds have finished, even if other packages failed to build. You now have a working package repository at /pdr/data/packages/112amd64-2019Q2 within the Buildbot worker jail.
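      The bulk subcommand also accepts several package origins at once, which is what the automated build in Step 7 will do. You could build the same set of ports manually, for example:

      • poudriere bulk -j 112amd64 -p 2019Q2 security/sudo shells/bash sysutils/tmux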

      You’ve completed all the configuration needed for working Poudriere builds and verified it with a manual build. You’ll see this same output later in the tutorial once you’ve automated the bulk build in Buildbot. In addition, a link to view the detailed logs will be accessible from the web interface. To achieve this, and to serve the package repository to clients, you’ll set up a web server next.

      Step 6 — Configuring Nginx to Serve the Poudriere Web Interface and Package Repository

      Poudriere provides several output artifacts that we want to host using a web server:

      • Package repositories are made available to clients so they can access them with the regular pkg update and pkg install commands, using HTTPS or HTTP as transport.
      • Detailed build logs are helpful for developers to debug problematic builds or to investigate build output. They are stored per package and per build—in the Poudriere output from the last step, you saw that logs are stored in one directory per build, labeled with date and time.
      • Poudriere’s built-in web interface is a small, single HTML page per build that uses WebSockets to regularly update the status shown on the page. This is helpful to get a better overview of how far a build is, which dependencies triggered other package builds to fail, and lastly as a replacement for the command line output, which only shows a summary at the end unless you specifically make it print the current build progress.

      The configuration change in Nginx is short, as only static files need to be served. Since you’ll serve them to the outside world, you’re now going to configure the existing Nginx instance on the server, outside the jails, to serve the mentioned files from paths within the worker jail.

      Please exit the jail shell since you’re now going to work on the server:

      Open an editor with the Nginx configuration /usr/local/etc/nginx/nginx.conf:

      • sudo ee /usr/local/etc/nginx/nginx.conf

      Add the following locations inside the server { block:

      /usr/local/etc/nginx/nginx.conf

      . . .
      http {
          . . .
          server {
              . . .
              location / {
                  root /usr/local/www/nginx;
                  index index.html index.htm;
              }
      
              # poudriere logs
              location ~ ^/logs(/(.*))?$ {
                  include mime.types;
                  types {
                      text/plain log;
                  }
      
                  alias /usr/jails/buildbot-worker0/pdr/data/logs/bulk$1;
                  index index.html index.htm;
                  autoindex on;
              }
      
              # poudriere packages
              location ~ ^/packages(/(.*))?$ {
                  alias /usr/jails/buildbot-worker0/pdr/data/packages$1;
                  index no-index-file-but-required-directive-to-list-dir-contents;
                  autoindex on;
              }
      
              location /buildbot/ {
                  proxy_pass http://10.0.0.2:8010/;
              }
      
              . . .
          }
      }
      . . .
      

      Save and close the Nginx configuration file. Then, reload the Nginx service:

      • sudo service nginx reload

      Let’s now check out the artifacts created by the first manual build. Open up your preferred web browser on your local machine to access the resources.

      The package repository is below https://your-domain/packages/ (or http://your-server-ip/packages/). You will find meta information in the root directory, e.g. 112amd64-2019Q2, and all built packages in the subdirectory All:

      Package repository listing

      Detailed build logs and Poudriere’s built-in web interface can be found below https://your-domain/logs/. Click through the directory hierarchy to reach the data of your previous manual build. In this example, you might end up on a URL like https://your-domain/logs/112amd64-2019Q2/latest/build.html.

      Poudriere web interface

      If you did not set up a domain name for your server, you will need to enter your server’s public IP address for these examples, e.g. http://your-server-ip/logs/.

      This concludes all manual setup to get working builds and have visibility into the output (packages and logs). Going forward, you will automate builds to achieve continuous integration.

      Step 7 — Setting Up a Buildbot Builder for Your Packages

      Your goal in this step is to automate bulk-package builds by executing Poudriere in the same way you already have manually—by adding to the existing Buildbot sample configuration. By the end of this step, Buildbot will trigger the package build whenever the chosen branch of the ports tree changes. In this tutorial’s examples, that would be the quarterly branch 2019Q2.

      All necessary changes are done in the Buildbot master configuration, so please open a root shell in the master jail:

      • sudo jexec buildbot-master csh

      First, a builder must be defined that describes the commands and actions performed to run a build. In the existing configuration /var/buildbot-master/master.cfg, you will find a section ####### BUILDERS—open an editor and replace the whole section until the next heading starting with ####### ..., with the following configuration:

      • ee /var/buildbot-master/master.cfg

      /var/buildbot-master/master.cfg (snippet)

      . . .
      ####### BUILDERS
      
      c['builders'] = []
      
      PORTS_TO_BUILD = {
          'security/sudo',
          'shells/bash',
          'sysutils/tmux',
      }
      
      
      # Custom classes
      class PoudriereLogLineObserver(util.LogLineObserver):
          _logsRe = re.compile(r'Logs: /pdr/data/logs/bulk(/[-_/0-9A-Za-z]+)$')
      
          def __init__(self):
              super().__init__()
              self._hadUrls = False
      
          def outLineReceived(self, line):
              if not self._hadUrls:
                  m = self._logsRe.search(line.strip())
                  if m:
                      poudriereUiUrl = f'''{re.sub('/buildbot/$', '', c['buildbotURL'])}/logs{m.group(1)}'''
                      self.step.addURL('Poudriere build', poudriereUiUrl)
                      self.step.addURL('Poudriere logs', poudriereUiUrl + '/logs/')
                      self._hadUrls = True
      
      
      class PoudriereCompileStep(steps.Compile):
          def __init__(self, *args, **kwargs):
              super().__init__(*args, **kwargs)
              self.addLogObserver('stdio', PoudriereLogLineObserver())
      
      
      # Poudriere bulk build
      bulkBuildFactory = util.BuildFactory()
      bulkBuildFactory.addSteps([
          steps.ShellCommand(
              name='update ports tree',
              command=['sudo', 'poudriere', 'ports', '-u', '-p', '2019Q2', '-v'],
              haltOnFailure=True,
          ),
          PoudriereCompileStep(
              name='make bulk',
              command=['sudo', 'poudriere', 'bulk', '-j', '112amd64', '-p', '2019Q2'] + list(sorted(PORTS_TO_BUILD)),
              haltOnFailure=True,
          ),
      ])
      c['builders'].append(util.BuilderConfig(name='bulk-112amd64-2019Q2',
                                              workernames=['worker0'],
                                              factory=bulkBuildFactory))
      . . .
      

      Note how this makes use of Buildbot’s extensibility: custom classes are used to observe and parse information from Poudriere’s log output. Namely, PoudriereLogLineObserver is added as “log observer”, i.e. gets called whenever a new log line is printed during the build. The class searches the logs for the log directory and converts that into hyperlinks. Those links will be displayed alongside the build step and take the user directly to Poudriere’s web interface and logs.

      In the first build step “update ports tree”, we use Poudriere’s built-in update command (ports -u) to pull the latest version of the ports tree. This will use the previously configured method automatically (for example SVN/Git). This way, you can be sure the packages are always built against the latest committed tree, which is especially helpful if you have your own versioned repository where you maintain software versions and patches.

      At the top, the set PORTS_TO_BUILD specifies which ports should be built. It is used in the steps of the build factory specified at the bottom of the block. The build factory is a template used to instantiate a build. Buildbot creates a unique build whenever one is triggered, and the build uses a copy of the steps that were defined for the build factory at the time. In this case, we configured exactly two steps:

      • Update the ports tree. Since this example uses the quarterly branch 2019Q2, it will not receive changes very often (typically only security and build fixes).
      • Run the bulk build using the same tree.

      To make the added code block work, please add a required import to the top of the file:

      /var/buildbot-master/master.cfg (snippet)

      # -*- python -*-
      # ex: set filetype=python:
      
      import re
      
      from buildbot.plugins import *
      

      The re library in Python implements regular expressions, a feature to search or replace parts of a string—the PoudriereLogLineObserver class uses it to search for a line Logs: /pdr/data/logs/... that mentions the log directory.
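      If regular expressions are new to you, here is a minimal standalone sketch (not part of master.cfg) showing what that search extracts; the sample line is copied from the bulk build output in Step 5:

      import re
      
      # The same pattern used by PoudriereLogLineObserver above
      logsRe = re.compile(r'Logs: /pdr/data/logs/bulk(/[-_/0-9A-Za-z]+)$')
      
      line = '[00:03:25] Logs: /pdr/data/logs/bulk/112amd64-2019Q2/2019-04-20_19h35m00s'
      m = logsRe.search(line.strip())
      if m:
          # Prints: /112amd64-2019Q2/2019-04-20_19h35m00s
          print(m.group(1))

      The captured group is exactly the path fragment that the observer appends to the Buildbot base URL to form the Poudriere web interface links.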

      The build commands use sudo to run certain commands. This is required because Poudriere needs superuser privileges when running a build—in order to create, manage, and destroy the build jails—and also the ports trees managed by Poudriere are created with the root user as owner. In the previous tutorial, we configured the user that runs the Buildbot worker process with sysrc buildbot_worker_uid=buildbot-worker. Hence, we want to allow the buildbot-worker user to run exactly the necessary commands as root, but not other commands (for security reasons). Let’s install the sudo program and configure it accordingly.

      This needs to be done on the worker jail, not the master. Please exit the master jail shell and enter the worker jail:

      • sudo jexec buildbot-worker0 csh

      Install the sudo package:

      Confirm installation with y and ENTER.

      On FreeBSD, the sudo package by default reads configuration files from /usr/local/etc/sudoers.d/. Open an editor to create a new configuration file:

      • env EDITOR=ee visudo /usr/local/etc/sudoers.d/buildbot-worker

      The use of visudo is intentional, since it will warn on syntax errors and allow fixing them instead of committing a bad configuration.

      Specify which commands the buildbot-worker user can run as root without requiring any password:

      /usr/local/etc/sudoers.d/buildbot-worker

      buildbot-worker ALL=(ALL) NOPASSWD: /usr/local/bin/poudriere bulk *
      buildbot-worker ALL=(ALL) NOPASSWD: /usr/local/bin/poudriere ports -u *
      

      Save the file.
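      Optionally, you can list which commands the buildbot-worker user is now allowed to run as root, to confirm the rules were applied:

      • sudo -l -U buildbot-worker

      Now switch back to the master jail for further required configuration of the Buildbot master: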

      • sudo jexec buildbot-master csh

      You just fulfilled the requirements to get the bulk build to work. But as mentioned, each build must be triggered to run. Buildbot uses the term scheduler for an object that defines when a build is triggered, and with which extra information, such as which branch has been changed. Please remove the existing section SCHEDULERS from the configuration file, and place the following content after the BUILDERS section, so that the code can use all existing builder names:

      • ee /var/buildbot-master/master.cfg

      /var/buildbot-master/master.cfg (snippet)

      . . .
      ####### SCHEDULERS
      
      c['schedulers'] = []
      
      # Forceful scheduler allowed for all builders
      c['schedulers'].append(schedulers.ForceScheduler(
          name='force',
          builderNames=[builder.name for builder in c['builders']]))
      
      # Watch ports tree for changes on given branch
      c['schedulers'].append(schedulers.SingleBranchScheduler(
          name='sched-bulk-112amd64-2019Q2',
          change_filter=util.ChangeFilter(project='freebsd-ports', branch='branches/2019Q2'),
          builderNames=['bulk-112amd64-2019Q2']))
      . . .
      

      This replaces the sample configuration so that a force button appears on every builder. And most importantly, it creates a scheduler that watches all changes pertaining to the given project/branch and triggers a build for each change. Yet, no such change events can occur—you first have to create a change source. Typically, those are version control systems like SVN or Git on which one can detect changes on a branch. Buildbot supports the most popular ones, so we can use its functionality to add our chosen upstream ports tree repository as source. Completely replace the section CHANGESOURCES with the following configuration:

      /var/buildbot-master/master.cfg (snippet)

      . . .
      ####### CHANGESOURCES
      
      c['change_source'] = []
      
      c['change_source'].append(changes.SVNPoller(
          'svn://svn.freebsd.org/ports/',
          project='freebsd-ports',
          split_file=util.svn.split_file_branches,
          svnbin='svnlite',
          pollInterval=4 * 3600))
      
      # Example for Git:
      # c['change_source'].append(changes.GitPoller(
      #     repourl='https://github.com/AndiDog/freebsd-ports.git',
      #     project='freebsd-ports',
      #     branches=['custom'],
      #     pollInterval=4 * 3600))
      . . .
      

      This polls the SVN repository every four hours on the Buildbot master, and any new (not seen before) changes are forwarded to matching schedulers which in turn would trigger builds that are eventually dispatched to run on our single Buildbot worker. The ports tree is very large, and at first run these pollers will download the full history (for Git, only the specified branches), which can take a few minutes and require significant space (several gigabytes).

      Apply the new configuration file by restarting Buildbot:

      • service buildbot restart

      In this example, you have used the upstream ports collection from svn://svn.freebsd.org/ports/ and builds are scheduled whenever the branch 2019Q2 changes. As noted before, quarterly branches are mostly stable and do not receive updates very often. Since you probably do not want to wait for such a change to come in before the build is triggered the first time, let’s run it once by hand.

      Open your Buildbot web interface (https://your-domain/buildbot/), navigate to Builds > Builders > bulk-112amd64-2019Q2. It will not show any builds yet.

      Bulk builder page – no builds yet

      Click the force button at the top-right and then Start Build. That will trigger the build using its default settings, i.e. reason, branch, and other values are not overridden. The “update ports tree” step might take a minute to run, and eventually the Poudriere build should also run through successfully. The web interface will show the build as successful.

      Successful build

Clicking one of the links (Poudriere build and Poudriere logs) takes you to the Poudriere web interface or the build logs for this specific build, respectively (as shown in Step 6). To see the full output of the poudriere bulk ... command, expand the make bulk step by clicking the arrow next to it, and then click stdio > view all … lines.

      Having completed the first build, the packages are now available, as configured in Nginx in Step 6. Head to https://your-domain/packages/ (or http://your-server-ip/packages/) in a browser and click through the package repository created by Poudriere. You can find the actual package files (*.txz) once you enter one of the repositories and navigate to the All/ subdirectory.

      List of package repositories

      Now that packages are available over HTTPS (or HTTP if you decided so) and built automatically on ports tree changes, you can configure one or more hosts to use those packages.

      Step 8 — Configuring Package Clients

      In this step, you need a second FreeBSD server and will set it up such that it can fetch and install the packages built on the CI server. We will call this second server the package client.

      SSH into the client host. Most remaining instructions in this section will be done on the client:
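Assuming an SSH host alias package-client for the client server (the same alias used with scp later in this step), the connection would look like this:

• ssh package-client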

      Create the directory for custom package repository configurations:

      • sudo mkdir -p /usr/local/etc/pkg/repos

      As root user, open an editor to create the file /usr/local/etc/pkg/repos/ci.conf, and specify how and from where to retrieve packages:

      • sudo ee /usr/local/etc/pkg/repos/ci.conf

      In case you chose package signing, use this content:

      /usr/local/etc/pkg/repos/ci.conf

      ci: {
          url: "https://your-domain/packages/112amd64-2019Q2",
          signature_type: "pubkey",
          pubkey: "/usr/local/etc/pkg/repos/ci.pub",
          enabled: yes
      }
      

      Alternatively, if you decided to go without package signing, disable signature checks as follows:

      /usr/local/etc/pkg/repos/ci.conf

      ci: {
          url: "https://your-domain/packages/112amd64-2019Q2",
          signature_type: "none",
          enabled: yes
      }
      

      Note: This note applies only if you followed Step 2 to create a package repository signing key. Please skip it otherwise.

      From your local machine, upload the public key to the package client:

      • scp /tmp/poudriere.pub package-client:/tmp/ci.pub

      Using the client shell again, move the key into place so it can verify the authenticity of packages:

      • sudo mv /tmp/ci.pub /usr/local/etc/pkg/repos/ci.pub

You have now configured and enabled the package repository. On a regular FreeBSD installation, however, the official package repository “FreeBSD” is enabled as well. Mixing installed packages from different sources is a surefire way to have your production software crash at some point due to incompatible software versions or differing ABI, API, or build options. All packages on a host should stem from the same source.

The default configuration of the official repository is stored in /etc/pkg/FreeBSD.conf. This file belongs to the base system and should not be touched. However, you can override its settings (in this case, to disable the repository altogether) by adding the respective flag in a configuration file under /usr/local/etc/pkg/repos, where your own repository is also configured. Create a new file /usr/local/etc/pkg/repos/FreeBSD.conf with an editor, and use the following content to disable the FreeBSD repository:

      • sudo ee /usr/local/etc/pkg/repos/FreeBSD.conf

      /usr/local/etc/pkg/repos/FreeBSD.conf

      FreeBSD: {
          enabled: no
      }
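
Once the package manager is bootstrapped later in this step, you can verify which repositories pkg considers enabled by printing its effective configuration, which lists all repositories at the end:

• pkg -vv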
      

If you are on a fully pristine package client host, no packages are installed yet and you can immediately begin using your own package repository. However, if even a single package was installed from another source, it is recommended that you uninstall those packages and start from scratch using your own source. The package manager pkg is itself installed as a package; to solve this chicken-and-egg problem, FreeBSD's base system ships with a small executable, /usr/sbin/pkg, which can bootstrap the package manager. That is, it downloads the pkg package and installs it as the very first package on the system. From that point on, the executable /usr/local/sbin/pkg from that package serves as the full-blown package manager.

      Run the following command to bootstrap pkg:
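• sudo pkg bootstrap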

In the output of pkg bootstrap, you should see that packages are taken from your own package repository, which you called ci in the configuration file. If you are using a package signing key, the output will also hint at the security verification.

      Output

The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
Bootstrapping pkg from https://your-domain/packages/112amd64-2019Q2, please wait...
Verifying signature with public key /usr/local/etc/pkg/repos/ci.pub... done
Installing pkg-1.10.5_5...
Extracting pkg-1.10.5_5: 100%

If you see this successful output, skip ahead to the next note block. However, if the package manager or other packages had already been installed from another source and you get this error:

      Output

      pkg already bootstrapped at /usr/local/sbin/pkg

      Then please follow the instructions in the note.

      Note – only if package manager was bootstrapped already:

You can list installed packages with pkg info. In this case, you should uninstall all of them, including pkg, and reinstall them later. To do that, first list the manually installed packages with pkg query -e "%a==0" "%n" and remember which of them you want to install again later. If, for instance, you use a shell that is not part of the base system (e.g. bash is an external package), you will want to reinstall it, or you might not be able to log in again.

      The following commands will remove all existing packages and the package manager, bootstrap the package manager again from your own package repository, and give an example of reinstalling your desired packages such as bash. Note though that you will only be able to install packages that you have built through the CI, i.e. listed in the Buildbot master configuration (variable PORTS_TO_BUILD).

      First, open a root shell before uninstalling the sudo package, or else you may not be able to gain superuser privileges anymore. Keep it open until you have bootstrapped pkg through the course of the tutorial and successfully reinstalled sudo:
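One way to do this, assuming your user is currently allowed to use sudo, is to start a shell as root:

• sudo sh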

      Uninstall all packages, including pkg:
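From the root shell, a forced delete of all packages would look like this (the -f flag is the assumption here: it forces removal of pkg itself, which would otherwise be refused):

• pkg delete -a -f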

      Bootstrap the package manager:
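Since pkg was just removed, this invokes the base system's /usr/sbin/pkg bootstrap utility (run without sudo, as you are still in the root shell):

• pkg bootstrap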

      Confirm to bootstrap the package manager by pressing y, followed by ENTER.

In the likely case that you set up your package host with a Let’s Encrypt certificate for HTTPS, you will run into a chicken-and-egg problem: your package host is not trusted, but you would need to install the package ca_root_nss (containing trustworthy root certificate authorities) in order to trust the Let’s Encrypt CA, and thereby also the server hosting your custom-built packages. The same problem arises if you used an internal CA (self-signed by you or your company). Certificate verification errors would result in output like this when bootstrapping the package manager:

      Output

The package management tool is not yet installed on your system.
Do you want to fetch and install it now? [y/N]: y
Bootstrapping pkg from https://example.com/packages/112amd64-2019Q2, please wait...
Certificate verification failed for /C=US/O=Let's Encrypt/CN=Let's Encrypt Authority X3
34389740104:error:14090086:SSL routines:ssl3_get_server_certificate:certificate verify failed:/usr/src/crypto/openssl/ssl/s3_clnt.c:1269:
[...]

      If you see this error, please follow the instructions in the note below. Otherwise, you are all set and can skip this part and continue after the note.

      Note – only if using HTTPS and certificate verification failed:

There is one straightforward workaround: trust the security of the package signing key and bootstrap pkg and install the ca_root_nss package over unencrypted HTTP. Since this is not always an option because of privacy concerns, blocked HTTP ports, and so on, you should prefer a more best-practice approach. The official FreeBSD repository is likewise served with a Let’s Encrypt certificate, so you cannot simply install the ca_root_nss package from there either. No matter which CA it is, it is recommended that you set up your package clients with a fixed set of HTTPS CAs to trust, which is exactly what the next few instructions achieve. We are going to assume this is for Let’s Encrypt, but the instructions work the same way for your own self-signed CA (you will need its certificate chain at hand).

      In your web browser, visit Let’s Encrypt’s certificate listing at https://letsencrypt.org/certificates/. Make sure the web site is trusted by the browser. Download the certificates under Root Certificates > Active > ISRG Root X1 (self-signed) and Intermediate Certificates > Active > Let’s Encrypt Authority X3 (Signed by ISRG Root X1) in PEM format to /tmp/root.pem and /tmp/intermediate.pem on your local computer, respectively.

      After the download has succeeded, concatenate the files into a certificate chain:

      • cat /tmp/intermediate.pem /tmp/root.pem >/tmp/letsencrypt-chain.pem
      • scp /tmp/letsencrypt-chain.pem package-client:/tmp/.

      Back in the shell of the package client, you now need to specify this chain of trust in the package manager configuration /usr/local/etc/pkg.conf so it gets used for TLS verification. Add these lines using an editor, and create the file if it does not exist yet:

      • sudo ee /usr/local/etc/pkg.conf

      /usr/local/etc/pkg.conf (snippet)

      pkg_env: {
          SSL_CA_CERT_FILE: "/usr/local/etc/pkg/repos/letsencrypt-chain.pem",
      }
      

      Move the CA chain into place:

      • sudo mv /tmp/letsencrypt-chain.pem /usr/local/etc/pkg/repos/.

      If you stayed in a root shell until now because the sudo package was removed, this command must be run without sudo. The same applies to the next command within this note.

      With this setting, you can try bootstrapping once again and should not get any more TLS errors. There is one small twist: the FreeBSD built-in /usr/sbin/pkg, which bootstraps the full package manager, does not honor the configured pkg_env setting, so we have to override the respective environment variable for this one time only, using the same value as configured:

      • sudo env SSL_CA_CERT_FILE=/usr/local/etc/pkg/repos/letsencrypt-chain.pem pkg bootstrap

      If you previously deleted existing packages, it’s a good time to reinstall essential tools now (e.g. sudo), plus any other desired packages.
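From the still-open root shell (sudo is not available again at this point), this could look like:

• pkg install sudo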

Then drop out of the root shell, if you are still in one:
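• exit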

In order to test whether everything works, install packages from the list specified in the Buildbot master config (variable PORTS_TO_BUILD). For example, the Bash shell, sudo, and tmux:

      • sudo pkg install bash sudo tmux

      Again, confirm installation by pressing y and then ENTER. The package installation should run through without any issues.

You can use pkg info to list which packages are currently installed (including dependencies, if any). To verify that no packages from other sources are installed, possibly causing clashes or incompatibilities, you can list installed packages with these details using pkg query "%n: autoinstalled=%a from repo=%R". Note that pkg itself will be shown as bootstrapped from unknown-repository; this is why you previously checked in the bootstrapping output that the package manager was also taken from your own package repository.

      In this last step, you configured access to the CI’s package repository on a client, optionally enabled package signature verification for security purposes, ensured that packages only come from a single source to avoid compatibility issues, bootstrapped the package manager pkg, and installed your desired packages as built by the CI.

      Conclusion

      In this tutorial, you have installed and configured Poudriere, automated running package builds, and configured secure access to the package repository from a client host, ending up with the latest built packages installed from a single, central source. The setup puts you in an excellent position to keep your servers consistent and up-to-date, and manage version upgrades of external software packages.

To further enhance your current setup, consider the following follow-up steps:

• Private access only: By default, Droplets have a public IP address on the internet, and Buildbot supports authentication but is unprotected by default. Consider restricting access to the web interfaces, for example with a firewall, a VPN, or Buildbot’s built-in authentication.
      • Alert on build problems: Check out how to set up Buildbot reporters to get started.
      • Keep ports tree up to date: In the examples from the tutorial, the quarterly branch 2019Q2 was used, but you should switch to a newer tree eventually or use your own version-controlled repository to apply desired patches.
      • Adding builds for own projects: The FreeBSD Porter’s Handbook explains how to write a build recipe (a port) if you want to build and install your internal software as FreeBSD packages.
      • Monitor outdated packages on clients: You can compare installed packages on a client with the latest available packages on the CI using the output of sudo pkg update -q && sudo pkg version -q --not-like "=" which prints all packages whose version does not exactly match. See the manpage of pkg-version for more details.
• Add a cleanup job: Over time, the Buildbot worker jail will fill up with old build log files, source tarballs, and possibly deprecated packages. Use the commands poudriere {logclean,distclean,pkgclean} to clean up (see the manpage of poudriere).


