
      How To Set Up a Continuous Deployment Pipeline with GitLab CI/CD on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      GitLab is an open source collaboration platform that provides powerful features beyond hosting a code repository. You can track issues, host packages and registries, maintain Wikis, set up continuous integration (CI) and continuous deployment (CD) pipelines, and more.

      In this tutorial you’ll build a continuous deployment pipeline with GitLab. You will configure the pipeline to build a Docker image, push it to the GitLab container registry, and deploy it to your server using SSH. The pipeline will run for each commit pushed to the repository.

      You will deploy a small, static web page, but the focus of this tutorial is configuring the CD pipeline. The static web page is only for demonstration purposes; you can apply the same pipeline configuration using other Docker images for the deployment as well.

      When you have finished this tutorial, you can visit http://your_server_IP in a browser for the results of the automatic deployment.

      Prerequisites

      To complete this tutorial, you will need:

      • A server running Ubuntu 18.04 with a non-root sudo user (referred to as sammy in this tutorial) and Docker installed. This server acts as both the deployment target and the host for the GitLab runner.
      • An account on a GitLab instance (for example, gitlab.com) where you can create projects, register runners, and use the container registry.

      Step 1 — Creating the GitLab Repository

      Let’s start by creating a GitLab project and adding an HTML file to it. You will later copy the HTML file into an Nginx Docker image, which in turn you’ll deploy to the server.

      Log in to your GitLab instance and click New project.

      The new project button in GitLab

      1. Give it a proper Project name.
      2. Optionally add a Project description.
      3. Make sure to set the Visibility Level to Private or Public depending on your requirements.
      4. Finally, click Create project.

      The new project form in GitLab

      You will be redirected to the Project’s overview page.

      Let’s create the HTML file. On your Project’s overview page, click New file.

      The new file button on the project overview page

      Set the File name to index.html and add the following HTML to the file body:

      index.html

      <html>
      <body>
      <h1>My Personal Website</h1>
      </body>
      </html>
      

      Click Commit changes at the bottom of the page to create the file.

      This HTML will produce a page with a single headline reading My Personal Website when opened in a browser.

      Dockerfiles are recipes used by Docker to build Docker images. Let’s create a Dockerfile to copy the HTML file into an Nginx image.

      Go back to the Project’s overview page, click the + button and select the New file option.

      New file option in the project's overview page listed in the plus button

      Set the File name to Dockerfile and add these instructions to the file body:

      Dockerfile

      FROM nginx:1.18
      COPY index.html /usr/share/nginx/html
      

      The FROM instruction specifies the image to inherit from—in this case the nginx:1.18 image. 1.18 is the image tag representing the Nginx version. The nginx:latest tag references the latest Nginx release, but that could break your application in the future, which is why fixed versions are recommended.

      The COPY instruction copies the index.html file to /usr/share/nginx/html in the Docker image. This is the directory where Nginx stores static HTML content.
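
      If you also keep a local clone of the repository and have Docker installed on your machine, you can optionally sanity-check the Dockerfile before committing it. The image and container names below (my-static-site) are placeholders, not part of the tutorial:

      • docker build -t my-static-site .
      • docker run --rm -d -p 8080:80 --name my-static-site-test my-static-site
      • curl http://localhost:8080
      • docker stop my-static-site-test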

      Click Commit changes at the bottom of the page to create the file.

      In the next step, you’ll configure a GitLab runner to keep control of who gets to execute the deployment job.

      Step 2 — Registering a GitLab Runner

      In order to keep track of the environments that will have contact with the SSH private key, you’ll register your server as a GitLab runner.

      In your deployment pipeline you want to log in to your server using SSH. To achieve this, you’ll store the SSH private key in a GitLab CI/CD variable (Step 5). The SSH private key is a very sensitive piece of data, because it is the entry ticket to your server. Usually, the private key never leaves the system it was generated on. In the usual case, you would generate an SSH key on your host machine, then authorize it on the server (that is, copy the public key to the server) in order to log in manually and perform the deployment routine.

      Here the situation changes slightly: You want to grant an autonomous authority (GitLab CI/CD) access to your server to automate the deployment routine. Therefore the private key needs to leave the system it was generated on and be given in trust to GitLab and other involved parties. You never want your private key to enter an environment that is not either controlled or trusted by you.

      Besides GitLab, the GitLab runner is yet another system that your private key will enter. For each pipeline, GitLab uses runners to perform the heavy work, that is, execute the jobs you have specified in the CI/CD configuration. That means the deployment job will ultimately be executed on a GitLab runner, hence the private key will be copied to the runner such that it can log in to the server using SSH.

      If you use unknown GitLab Runners (for example, shared runners) to execute the deployment job, then you’d be unaware of the systems getting in contact with the private key. Even though GitLab runners clean up all data after job execution, you can avoid sending the private key to unknown systems by registering your own server as a GitLab runner. The private key will then be copied to the server controlled by you.

      Start by logging in to your server as your sudo-enabled user:

      • ssh sammy@your_server_IP

      In order to install the gitlab-runner service, you’ll add the official GitLab repository. Download and inspect the install script:

      • curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh > script.deb.sh
      • less script.deb.sh

      Once you are satisfied with the safety of the script, run the installer:

      • sudo bash script.deb.sh

      It may not be obvious, but you have to enter your non-root user’s password to proceed. When you execute the previous command, the output will be like:

      Output

      [sudo] password for sammy:
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100  5945  100  5945    0     0   8742      0 --:--:-- --:--:-- --:--:--  8729

      When the curl command finishes, you will receive the following message:

      Output

      The repository is setup! You can now install packages.

      Next install the gitlab-runner service:

      • sudo apt install gitlab-runner

      Verify the installation by checking the service status:

      • systemctl status gitlab-runner

      You will have active (running) in the output:

      Output

      ● gitlab-runner.service - GitLab Runner
         Loaded: loaded (/etc/systemd/system/gitlab-runner.service; enabled; vendor preset: enabled)
         Active: active (running) since Mon 2020-06-01 09:01:49 UTC; 4s ago
       Main PID: 16653 (gitlab-runner)
          Tasks: 6 (limit: 1152)
         CGroup: /system.slice/gitlab-runner.service
                 └─16653 /usr/lib/gitlab-runner/gitlab-runner run --working-directory /home/gitlab-runner --config /etc/gitla

      To register the runner, you need to get the project token and the GitLab URL:

      1. In your GitLab project, navigate to Settings > CI/CD > Runners.
      2. In the Set up a specific Runner manually section, you’ll find the registration token and the GitLab URL. Copy both to a text editor; you’ll need them for the next command. They will be referred to as https://your_gitlab.com and project_token.

      The runners section in the ci/cd settings with the copy token button

      Back in your terminal, register the runner for your project:

      • sudo gitlab-runner register -n --url https://your_gitlab.com --registration-token project_token --executor docker --description "Deployment Runner" --docker-image "docker:stable" --tag-list deployment --docker-privileged

      The command options can be interpreted as follows:

      • -n executes the register command non-interactively (we specify all parameters as command options).
      • --url is the GitLab URL you copied from the runners page in GitLab.
      • --registration-token is the token you copied from the runners page in GitLab.
      • --executor is the executor type. docker executes each CI/CD job in a Docker container (see GitLab’s documentation on executors).
      • --description is the runner’s description, which will show up in GitLab.
      • --docker-image is the default Docker image to use in CI/CD jobs, if not explicitly specified.
      • --tag-list is a list of tags assigned to the runner. Tags can be used in a pipeline configuration to select specific runners for a CI/CD job. The deployment tag will allow you to refer to this specific runner to execute the deployment job.
      • --docker-privileged executes the Docker container created for each CI/CD job in privileged mode. A privileged container has access to all devices on the host machine and has nearly the same access to the host as processes running outside containers (see Docker’s documentation about runtime privilege and Linux capabilities). The reason for running in privileged mode is so you can use Docker-in-Docker (dind) to build a Docker image in your CI/CD pipeline. It is good practice to give a container the minimum requirements it needs. For you it is a requirement to run in privileged mode in order to use Docker-in-Docker. Be aware, you registered the runner for this specific project only, where you are in control of the commands being executed in the privileged container.

      After executing the gitlab-runner register command, you will receive the following output:

      Output

      Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

      Verify the registration process by going to Settings > CI/CD > Runners in GitLab, where the registered runner will show up.

      The registered runner in the runners section of the ci/cd settings

      In the next step you’ll create a deployment user.

      Step 3 — Creating a Deployment User

      You are going to create a non-sudo user that is dedicated for the deployment task, so that its power is limited and the deployment takes place in an isolated user space. You will later configure the CI/CD pipeline to log in to the server with that user.

      On your server, create a new user named deployer:

      • sudo adduser deployer

      You’ll be guided through the user creation process. Enter a strong password and optionally any further user information you want to specify. Finally confirm the user creation with Y.

      Add the user to the Docker group:

      • sudo usermod -aG docker deployer

      This permits deployer to execute the docker command, which is required to perform the deployment.
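
      As an optional sanity check, you can confirm the group membership took effect; the exact group list may differ on your system:

      • groups deployer

      docker should appear among the listed groups. Keep in mind that group changes only apply to new login sessions, so the deployer user has to log in (or be switched to) afresh before the docker command works.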

      In the next step you’ll create an SSH key to be able to log in to the server as deployer.

      Step 4 — Setting Up an SSH Key

      You are going to create an SSH key for the deployment user. GitLab CI/CD will later use the key to log in to the server and perform the deployment routine.

      Let’s start by switching to the newly created deployer user, for whom you’ll generate the SSH key:

      • su deployer

      You’ll be prompted for the deployer password to complete the user switch.

      Next, generate a 4096-bit SSH key. It is important to answer the questions of the ssh-keygen command correctly:

      1. First question: answer it with ENTER, which stores the key in the default location (the rest of this tutorial assumes the key is stored in the default location).
      2. Second question: configures a passphrase to protect the SSH private key (the key used for authentication). If you specify a passphrase, you’ll have to enter it each time the private key is used. In general, a passphrase adds another security layer to SSH keys, which is good practice: somebody in possession of the private key would also require the passphrase to use it. For the purposes of this tutorial, it is important that you have an empty passphrase, because the CI/CD pipeline will execute non-interactively and therefore does not allow you to enter a passphrase.

      To summarize, run the following command and confirm both questions with ENTER to create a 4096-bit SSH key and store it in the default location with an empty passphrase:

      • ssh-keygen -b 4096

      To authorize the SSH key for the deployer user, you need to append the public key to the authorized_keys file:

      • cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

      ~ is short for the user home in Linux. The cat program will print the contents of a file; here you use the >> operator to redirect the output of cat and append it to the authorized_keys file.
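
      If SSH later refuses the key-based login, overly permissive file modes are a common culprit. As an optional precaution, you can tighten the permissions (run as deployer) to values sshd accepts:

      • chmod 700 ~/.ssh
      • chmod 600 ~/.ssh/authorized_keys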

      In this step you have created an SSH key pair for the CI/CD pipeline to log in and deploy the application. Next you’ll store the private key in GitLab to make it accessible during the pipeline process.

      Step 5 — Storing the Private Key in a GitLab CI/CD Variable

      You are going to store the SSH private key in a GitLab CI/CD file variable, so that the pipeline can make use of the key to log in to the server.

      When GitLab creates a CI/CD pipeline, it will send all variables to the corresponding runner and the variables will be set as environment variables for the duration of the job. In particular, the values of file variables are stored in a file and the environment variable will contain the path to this file.

      While you’re in the variables section, you’ll also add a variable for the server IP and one for the server user, which tell the pipeline which server to deploy to and which user to log in as.

      Start by showing the SSH private key:

      • cat ~/.ssh/id_rsa

      Copy the output to your clipboard using CTRL+C. Make sure to copy everything including the BEGIN and END line:

      ~/.ssh/id_rsa

      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
      

      Now navigate to Settings > CI / CD > Variables in your GitLab project and click Add Variable. Fill out the form as follows:

      • Key: ID_RSA
      • Value: Paste your SSH private key from your clipboard with CTRL+V
      • Type: File
      • Environment Scope: All (default)
      • Protect variable: Checked
      • Mask variable: Unchecked

      Note: The variable can’t be masked because it does not meet the regular expression requirements (see GitLab’s documentation about masked variables). However, the private key will never appear in the console log, so masking is not necessary.

      A file containing the private key will be created on the runner for each CI/CD job and its path will be stored in the $ID_RSA environment variable.
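
      To make the distinction concrete, a job script could inspect the variable like this. The path mentioned in the comment is only illustrative; the actual location is chosen by the runner:

      example job script (illustrative)

      echo "$ID_RSA"    # prints the path of the temporary file holding the key, e.g. /builds/.../ID_RSA
      # cat "$ID_RSA"   # would print the key itself -- never do this in a real job log
      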

      Create another variable with your server IP. Click Add Variable and fill out the form as follows:

      • Key: SERVER_IP
      • Value: your_server_IP
      • Type: Variable
      • Environment scope: All (default)
      • Protect variable: Checked
      • Mask variable: Checked

      Finally, create a variable with the login user. Click Add Variable and fill out the form as follows:

      • Key: SERVER_USER
      • Value: deployer
      • Type: Variable
      • Environment scope: All (default)
      • Protect variable: Checked
      • Mask variable: Checked

      You have now stored the private key in a GitLab CI/CD variable, which makes the key available during pipeline execution. In the next step, you’re moving on to configuring the CI/CD pipeline.

      Step 6 — Configuring the .gitlab-ci.yml File

      You are going to configure the GitLab CI/CD pipeline. The pipeline will build a Docker image and push it to the container registry. GitLab provides a container registry for each project. You can explore the container registry by going to Packages & Registries > Container Registry in your GitLab project (read more in GitLab’s container registry documentation.) The final step in your pipeline is to log in to your server, pull the latest Docker image, remove the old container, and start a new container.

      Now you’re going to create the .gitlab-ci.yml file that contains the pipeline configuration. In GitLab, go to the Project overview page, click the + button and select New file. Then set the File name to .gitlab-ci.yml.

      (Alternatively you can clone the repository and make all following changes to .gitlab-ci.yml on your local machine, then commit and push to the remote repository.)

      To begin add the following:

      .gitlab-ci.yml

      stages:
        - publish
        - deploy
      

      Each job is assigned to a stage. Jobs assigned to the same stage run in parallel (if there are enough runners available). Stages will be executed in the order they were specified. Here, the publish stage will go first and the deploy stage second. Successive stages only start when the previous stage finished successfully (that is, all jobs have passed). Stage names can be chosen arbitrarily.

      When you want to combine this CD configuration with your existing CI pipeline, which tests and builds the app, you may want to add the publish and deploy stages after your existing stages, such that the deployment only takes place if the tests passed.
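
      A minimal sketch of such a combined configuration might order the stages as follows; the test stage and its job are placeholders for your existing CI setup:

      .gitlab-ci.yml (sketch)

      stages:
        - test
        - publish
        - deploy
      
      test:
        image: alpine:latest
        stage: test
        script:
          - echo "run your existing test suite here"
      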

      Following this, add this to your .gitlab-ci.yml file:

      .gitlab-ci.yml

      . . .
      variables:
        TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
        TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
      

      The variables section defines environment variables that will be available in the context of a job’s script section. These variables will be available as usual Linux environment variables; that is, you can reference them in the script by prefixing them with a dollar sign, such as $TAG_LATEST. GitLab creates some predefined variables for each job that provide context-specific information, such as the branch name or the commit hash the job is working on (read more about predefined variables). Here you compose two environment variables out of the following predefined variables:

      • CI_REGISTRY_IMAGE: Represents the URL of the container registry tied to the specific project. This URL depends on the GitLab instance. For example, registry URLs for gitlab.com projects follow the pattern: registry.gitlab.com/your_user/your_project. But since GitLab will provide this variable, you do not need to know the exact URL.
      • CI_COMMIT_REF_NAME: The branch or tag name for which the project is built.
      • CI_COMMIT_SHORT_SHA: The first eight characters of the commit revision for which the project is built.

      Both of the variables are composed of predefined variables and will be used to tag the Docker image.

      TAG_LATEST will add the latest tag to the image. This is a common strategy to provide a tag that always represents the latest release. For each deployment, the latest image in the container registry will be overwritten with the newly built Docker image.

      TAG_COMMIT, on the other hand, uses the first eight characters of the commit SHA being deployed as the image tag, thereby creating a unique Docker image for each commit. You will be able to trace the history of Docker images down to the granularity of Git commits. This is a common technique when doing continuous deployments, because it allows you to quickly deploy an older version of the code in case of a defective deployment.
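
      For illustration, on gitlab.com the two tags might resolve to values like the following; the user, project, and commit SHA are hypothetical:

      hypothetical resolved image tags

      registry.gitlab.com/sammy/my-personal-website/master:latest
      registry.gitlab.com/sammy/my-personal-website/master:a1b2c3d4
      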

      As you’ll explore in the coming steps, the process of rolling back a deployment to an older Git revision can be done directly in GitLab.

      $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME specifies the Docker image base name. According to GitLab’s documentation, a Docker image name has to follow this scheme:

      image name scheme

      <registry URL>/<namespace>/<project>/<image>

      $CI_REGISTRY_IMAGE represents the <registry URL>/<namespace>/<project> part and is mandatory because it is the project’s registry root. $CI_COMMIT_REF_NAME is optional but useful to host Docker images for different branches. In this tutorial you will only work with one branch, but it is good to build an extendable structure. In general, there are three levels of image repository names supported by GitLab:

      repository name levels

      registry.example.com/group/project:some-tag
      registry.example.com/group/project/image:latest
      registry.example.com/group/project/my/image:rc1

      For your TAG_COMMIT variable you used the second option, where image will be replaced with the branch name.

      Next, add the following to your .gitlab-ci.yml file:

      .gitlab-ci.yml

      . . .
      publish:
        image: docker:latest
        stage: publish
        services:
          - docker:dind
        script:
          - docker build -t $TAG_COMMIT -t $TAG_LATEST .
          - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
          - docker push $TAG_COMMIT
          - docker push $TAG_LATEST
      

      The publish section is the first job in your CI/CD configuration. Let’s break it down:

      • image is the Docker image to use for this job. The GitLab runner will create a Docker container for each job and execute the script within this container. The docker:latest image ensures that the docker command will be available.
      • stage assigns the job to the publish stage.
      • services specifies Docker-in-Docker—the dind service. This is the reason why you registered the GitLab runner in privileged mode.

      The script section of the publish job specifies the shell commands to execute for this job. The working directory will be set to the repository root when these commands are executed.

      • docker build ...: Builds the Docker image based on the Dockerfile and tags it with both image tags defined in the variables section.
      • docker login ...: Logs Docker in to the project’s container registry. You use the predefined variable $CI_BUILD_TOKEN as an authentication token. GitLab generates the token, and it stays valid for the job’s lifetime.
      • docker push ...: Pushes both image tags to the container registry.

      Following this, add the deploy job to your .gitlab-ci.yml:

      .gitlab-ci.yml

      . . .
      deploy:
        image: alpine:latest
        stage: deploy
        tags:
          - deployment
        script:
          - chmod og= $ID_RSA
          - apk update && apk add openssh-client
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 80:80 --name my-app $TAG_COMMIT"
      

      Alpine is a lightweight Linux distribution and is sufficient as a Docker image here. You assign the job to the deploy stage. The deployment tag ensures that the job will be executed on runners that are tagged deployment, such as the runner you configured in Step 2.

      The script section of the deploy job starts with two configurative commands:

      • chmod og= $ID_RSA: Revokes all permissions for group and others from the private key, such that only the owner can use it. This is a requirement; otherwise, SSH refuses to work with the private key.
      • apk update && apk add openssh-client: Updates Alpine’s package manager (apk) and installs the openssh-client, which provides the ssh command.

      Four consecutive ssh commands follow. The pattern for each is:

      ssh connect pattern for all deployment commands

      ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "command"

      In each ssh statement you are executing command on the remote server. To do so, you authenticate with your private key.

      The options are as follows:

      • -i stands for identity file and $ID_RSA is the GitLab variable containing the path to the private key file.
      • -o StrictHostKeyChecking=no bypasses the prompt asking whether or not you trust the remote host. This question cannot be answered in a non-interactive context such as the pipeline.
      • $SERVER_USER and $SERVER_IP are the GitLab variables you created in Step 5. They specify the remote host and login user for the SSH connection.
      • command will be executed on the remote host.

      The deployment ultimately takes place by executing these four commands on your server:

      1. docker login ...: Logs Docker in to the container registry.
      2. docker pull ...: Pulls the latest image from the container registry.
      3. docker container rm ...: Deletes the existing container if it exists. || true makes sure that the exit code is always successful, even if there was no container running by the name my-app. This guarantees a delete if exists routine without breaking the pipeline when the container does not exist (for example, for the first deployment).
      4. docker run ...: Starts a new container using the latest image from the registry. The container will be named my-app. Port 80 on the host will be bound to port 80 of the container (the order is -p host:container). -d starts the container in detached mode, otherwise the pipeline would be stuck waiting for the command to terminate.

      Note: It may seem odd to use SSH to run these commands on your server, considering the GitLab runner that executes the commands is the exact same server. Yet it is required, because the runner executes the commands in a Docker container, so without SSH you would deploy inside that container instead of on the server. One could argue that instead of using Docker as a runner executor, you could use the shell executor to run the commands on the host itself. But that would constrain your pipeline: the runner would have to be the same server as the one you want to deploy to. This is not a sustainable and extensible solution, because one day you may want to migrate the application to a different server or use a different runner server. In any case it makes sense to use SSH to execute the deployment commands, whether for technical or migration-related reasons.

      Let’s move on by adding this to the deployment job in your .gitlab-ci.yml:

      .gitlab-ci.yml

      . . .
      deploy:
      . . .
        environment:
          name: production
          url: http://your_server_IP
        only:
          - master
      

      GitLab environments allow you to control the deployments within GitLab. You can examine the environments in your GitLab project by going to Operations > Environments. If the pipeline did not finish yet, there will be no environment available, as no deployment took place so far.

      When a pipeline job defines an environment section, GitLab will create a deployment for the given environment (here production) each time the job successfully finishes. This allows you to trace all the deployments created by GitLab CI/CD. For each deployment you can see the related commit and the branch it was created for.

      There is also a re-deployment button available that allows you to roll back to an older version of the software. The URL that was specified in the environment section will be opened when clicking the View deployment button.

      The only section defines the names of branches and tags for which the job will run. By default, GitLab will start a pipeline for each push to the repository and run all jobs (provided that the .gitlab-ci.yml file exists). The only section is one option of restricting job execution to certain branches/tags. Here you want to execute the deployment job for the master branch only. To define more complex rules on whether a job should run or not, have a look at the rules syntax.
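
      For reference, an equivalent restriction expressed with the newer rules syntax might look like the following sketch. Note that rules replaces only; the two keywords cannot be combined in the same job:

      .gitlab-ci.yml (sketch)

      deploy:
        . . .
        rules:
          - if: '$CI_COMMIT_BRANCH == "master"'
      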

      Note: In October 2020, GitHub changed its naming convention for the default branch from master to main. Other providers such as GitLab and the developer community in general are starting to follow this approach. The term master branch is used in this tutorial to denote the default branch, for which you may have a different name.

      Your complete .gitlab-ci.yml file will look like the following:

      .gitlab-ci.yml

      stages:
        - publish
        - deploy
      
      variables:
        TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
        TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
      
      publish:
        image: docker:latest
        stage: publish
        services:
          - docker:dind
        script:
          - docker build -t $TAG_COMMIT -t $TAG_LATEST .
          - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
          - docker push $TAG_COMMIT
          - docker push $TAG_LATEST
      
      deploy:
        image: alpine:latest
        stage: deploy
        tags:
          - deployment
        script:
          - chmod og= $ID_RSA
          - apk update && apk add openssh-client
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 80:80 --name my-app $TAG_COMMIT"
        environment:
          name: production
          url: http://your_server_IP
        only:
          - master
      

      Finally click Commit changes at the bottom of the page in GitLab to create the .gitlab-ci.yml file. Alternatively, when you have cloned the Git repository locally, commit and push the file to the remote.

      You’ve created a GitLab CI/CD configuration for building a Docker image and deploying it to your server. In the next step you will validate the deployment.

      Step 7 — Validating the Deployment

      Now you’ll validate the deployment in various places of GitLab as well as on your server and in a browser.

      When a .gitlab-ci.yml file is pushed to the repository, GitLab will automatically detect it and start a CI/CD pipeline. At the time you created the .gitlab-ci.yml file, GitLab started the first pipeline.

      Go to CI/CD > Pipelines in your GitLab project to see the pipeline’s status. If the jobs are still running/pending, wait until they are complete. You will see a Passed pipeline with two green checkmarks, denoting that the publish and deploy jobs ran successfully.

      The pipeline overview page showing a passed pipeline

      Let’s examine the pipeline. Click the passed button in the Status column to open the pipeline’s overview page. You will get an overview of general information such as:

      • Execution duration of the whole pipeline.
      • For which commit and branch the pipeline was executed.
      • Related merge requests. If there is an open merge request for the branch in question, it will show up here.
      • All jobs executed in this pipeline as well as their status.

      Next click the deploy button to open the result page of the deploy job.

      The result page of the deploy job

      On the job result page you can see the shell output of the job’s script. This is the place to look when debugging a failed pipeline. In the right sidebar you’ll find the deployment tag you added to this job and see that it was executed on your Deployment Runner.

      If you scroll to the top of the page, you will find the This job is deployed to production message. GitLab recognizes that a deployment took place because of the job’s environment section. Click the production link to move over to the production environment.

      The production environment in GitLab

      You will have an overview of all production deployments. There has been only a single deployment so far. For each deployment there is a re-deploy button on the far right. A re-deployment will repeat the deploy job of that particular pipeline.

      Whether a re-deployment works as intended depends on the pipeline configuration, because it will do no more than repeat the deploy job under the same circumstances. Since you have configured the pipeline to deploy a Docker image tagged with the commit SHA, a re-deployment will work for your pipeline.

      Note: Your GitLab container registry may have an expiration policy. The expiration policy regularly removes older images and tags from the container registry. As a consequence, a deployment that is older than the expiration policy would fail to re-deploy, because the Docker image for this commit will have been removed from the registry. You can manage the expiration policy in Settings > CI/CD > Container Registry tag expiration policy. The expiration interval is usually set to something high, like 90 days. But when you run into the case of trying to deploy an image that has been removed from the registry due to the expiration policy, you can solve the problem by also re-running the publish job of that particular pipeline, which will re-create and push the image for the given commit to the registry.

      Next click the View deployment button, which will open http://your_server_IP in a browser and you should see the My Personal Website headline.

      Finally, check the deployed container on your server. Head over to your terminal and log in again if you have already disconnected (this works for both users, sammy and deployer):

      • ssh sammy@your_server_IP

      Now list the running containers:

      • docker container ls

      The output will list the my-app container:

      Output

      CONTAINER ID   IMAGE                                                                            COMMAND                  CREATED       STATUS       PORTS                NAMES
      5b64df4b37f8   registry.your_gitlab.com/your_gitlab_user/your_project/master:your_commit_sha   "nginx -g 'daemon of…"   4 hours ago   Up 4 hours   0.0.0.0:80->80/tcp   my-app

      Read the How To Install and Use Docker on Ubuntu 18.04 guide to learn more about managing Docker containers.

      You have now validated the deployment. In the next step, you will go through the process of rolling back a deployment.

      Step 8 — Rolling Back a Deployment

      Next you’ll update the web page, which will create a new deployment and then re-deploy the previous deployment using GitLab environments. This covers the use case of a deployment rollback in case of a defective deployment.

      Start by making a little change in the index.html file:

      1. In GitLab, go to the Project overview and open the index.html file.
      2. Click the Edit button to open the online editor.
      3. Change the file content to the following:

      index.html

      <html>
      <body>
      <h1>My Enhanced Personal Website</h1>
      </body>
      </html>
      

      Save the changes by clicking Commit changes at the bottom of the page.

      A new pipeline will be created to deploy the changes. In GitLab, go to CI/CD > Pipelines. When the pipeline has completed, you can open http://your_server_IP in a browser for the updated web page now showing My Enhanced Personal Website instead of My Personal Website.

      When you move over to Operations > Environments > production you will see the newly created deployment. Now click the re-deploy button of the initial, older deployment:

      A list of the deployments of the production environment in GitLab with emphasis on the re-deploy button of the first deployment

      Confirm the popup by clicking the Rollback button.

      The deploy job of that older pipeline will be restarted and you will be redirected to the job’s overview page. Wait for the job to finish, then open http://your_server_IP in a browser, where you’ll see the initial headline My Personal Website showing up again.

      Let’s summarize what you have achieved throughout this tutorial.

      Conclusion

      In this tutorial, you have configured a continuous deployment pipeline with GitLab CI/CD. You created a small web project consisting of an HTML file and a Dockerfile. Then you configured the .gitlab-ci.yml pipeline configuration to:

      1. Build the Docker image.
      2. Push the Docker image to the container registry.
      3. Log in to the server, pull the latest image, stop the current container, and start a new one.

      GitLab will now deploy the web page to your server for each push to the repository.

      Furthermore you have verified a deployment in GitLab and on your server. You have also created a second deployment and rolled back to the first deployment using GitLab environments, which demonstrates how you deal with defective deployments.

      At this point you have automated the whole deployment chain. You can now share code changes more frequently with the world and/or your customers. As a result, development cycles are likely to become shorter, as less time is required to gather feedback and publish the code changes.

      As a next step you could make your service accessible by a domain name and secure the communication with HTTPS, for which How To Use Traefik as a Reverse Proxy for Docker Containers is a good follow-up.




      How To Build and Deploy a Node.js Application To DigitalOcean Kubernetes Using Semaphore Continuous Integration and Delivery


      The author selected the Open Internet / Free Speech fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Kubernetes allows users to create resilient and scalable services with a single command. Like anything that sounds too good to be true, it has a catch: you must first prepare a suitable Docker image and thoroughly test it.

      Continuous Integration (CI) is the practice of testing the application on each update. Doing this manually is tedious and error-prone, but a CI platform runs the tests for you, catches errors early, and locates the point at which the errors were introduced. Release and deployment procedures are often complicated, time-consuming, and require a reliable build environment. With Continuous Delivery (CD) you can build and deploy your application on each update without human intervention.

      To automate the whole process, you’ll use Semaphore, a Continuous Integration and Delivery (CI/CD) platform.

      In this tutorial, you’ll build an address book API service with Node.js. The API exposes a simple RESTful API interface to create, delete, and find people in the database. You’ll use Git to push the code to GitHub. Then you’ll use Semaphore to test the application, build a Docker image, and deploy it to a DigitalOcean Kubernetes cluster. For the database, you’ll create a PostgreSQL cluster using DigitalOcean Managed Databases.

      Prerequisites

      Before reading on, ensure you have the following:

      • A DigitalOcean account and a Personal Access Token. Follow Create a Personal Access Token to set one up for your account.
      • A Docker Hub account.
      • A GitHub account.
      • A Semaphore account; you can sign up with your GitHub account.
      • A new GitHub repository called addressbook for the project. When creating the repository, select the Initialize this repository with a README checkbox and select Node in the Add .gitignore menu. Follow GitHub’s Create a Repo help page for more details.
      • Git installed on your local machine and set up to work with your GitHub account. If you are unfamiliar or need a refresher, consider reading the How to use Git reference guide.
      • curl installed on your local machine.
      • Node.js installed on your local machine. In this tutorial, you’ll use Node.js version 10.16.0.

      Step 1 — Creating the Database and the Kubernetes Cluster

      Start by provisioning the services that will power the application: the DigitalOcean Database Cluster and the DigitalOcean Kubernetes Cluster.

      Log in to your DigitalOcean account and create a project. A project lets you organize all the resources that make up the application. Call the project addressbook.

      Next, create a PostgreSQL cluster. The PostgreSQL database service will hold the application’s data. You can pick the latest version available. It should take a few minutes before the service is ready.

      Once the PostgreSQL service is ready, create a database and a user. Set the database name to addressbook_db and set the username to addressbook_user. Take note of the password that’s generated for your new user. Databases are PostgreSQL’s way of organizing data. Usually, each application has its own database, although there are no hard rules about this. The application will use the username and password to get access to the database so it can save and retrieve its data.

      Finally, create a Kubernetes Cluster. Choose the same region in which the database is running. Name the cluster addressbook-server and set the number of nodes to 3.

      While the nodes are provisioning, you can start building your application.

      Step 2 — Writing the Application

      Let’s build the address book application you’re going to deploy. To start, clone the GitHub repository you created in the prerequisites so you have a local copy of the .gitignore file GitHub created for you, and you’ll be able to commit your application code quickly without having to manually create a repository. Open your browser and go to your new GitHub repository. Click on the Clone or download button and copy the provided URL. Use Git to clone the empty repository to your machine:

      • git clone https://github.com/your_github_username/addressbook

      Enter the project directory:

      • cd addressbook

      With the repository cloned, you can start writing the app. You’ll build two components: a module that interacts with the database, and a module that provides the HTTP service. The database module will know how to save and retrieve persons from the address book database, and the HTTP module will receive requests and respond accordingly.

      While not strictly mandatory, it’s good practice to test your code while you write it, so you’ll also create a testing module. This is the planned layout for the application:

      • database.js: database module. It handles database operations.
      • app.js: the end user module and the main application. It provides an HTTP service for the users to connect to.
      • database.test.js: tests for the database module.

      In addition, you’ll want a package.json file for your project, which describes the project and its required dependencies. You can either create it manually with your editor, or interactively using npm. Run the npm init command to create the file interactively:

      • npm init

      The command will ask for some information to get started. Fill in the values as shown in the example. If you don’t see an answer listed, leave the answer blank, which uses the default value in parentheses:

      npm output

      package name: (addressbook) addressbook
      version: (1.0.0) 1.0.0
      description: Addressbook API and database
      entry point: (index.js) app.js
      test command:
      git repository: URL for your GitHub repository
      keywords:
      author: Sammy the Shark <sammy@example.com>
      license: (ISC)
      About to write to package.json:
      
      {
        "name": "addressbook",
        "version": "1.0.0",
        "description": "Addressbook API and database",
        "main": "app.js",
        "scripts": {
          "test": "echo \"Error: no test specified\" && exit 1"
        },
        "author": "",
        "license": "ISC"
      }
      
      Is this OK? (yes) yes

      Now you can start writing the code. The database is at the core of the service you’re developing. It’s essential to have a well-designed database model before writing any other components. Consequently, it makes sense to start with the database code.

      You don’t have to code all the bits of the application; Node.js has a large library of reusable modules. For instance, you don’t have to write any SQL queries if you have the Sequelize ORM module in the project. This module provides an interface that handles databases as JavaScript objects and methods. It can also create tables in your database. Sequelize needs the pg module to work with PostgreSQL.

      Install modules using the npm install command with the --save option, which tells npm to save the module in package.json. Execute this command to install both sequelize and pg:

      • npm install --save sequelize pg

      Create a new JavaScript file to hold the database code:

      Import the sequelize module by adding this line to the file:

      database.js

      const Sequelize = require('sequelize');
      
      . . .
      

      Then, below that line, initialize a sequelize object with the database connection parameters, which you’ll retrieve from the system environment. This keeps the credentials out of your code so you don’t accidentally share your credentials when you push your code to GitHub. You can use process.env to access environment variables, and JavaScript’s || operator to set defaults for undefined variables:

      database.js

      . . .
      
      const sequelize = new Sequelize(process.env.DB_SCHEMA || 'postgres',
                                      process.env.DB_USER || 'postgres',
                                      process.env.DB_PASSWORD || '',
                                      {
                                          host: process.env.DB_HOST || 'localhost',
                                          port: process.env.DB_PORT || 5432,
                                          dialect: 'postgres',
                                          dialectOptions: {
                                              ssl: process.env.DB_SSL == "true"
                                          }
                                      });
      
      . . .
      

      Now define the Person model. To keep the example from getting too complex, you’ll only create two fields: firstName and lastName, both storing string values. Add the following code to define the model:

      database.js

      . . .
      
      const Person = sequelize.define('Person', {
          firstName: {
              type: Sequelize.STRING,
              allowNull: false
          },
          lastName: {
              type: Sequelize.STRING,
              allowNull: true
          },
      });
      
      . . .
      

      This defines the two fields, making firstName mandatory with allowNull: false. Sequelize’s model definition documentation shows the available data types and options.

      Finally, export the sequelize object and the Person model so other modules can use them:

      database.js

      . . .
      
      module.exports = {
          sequelize: sequelize,
          Person: Person
      };
      

      It’s handy to have a table-creation script in a separate file that you can call at any time during development. These types of files are called migrations. Create a new file to hold this code:

      Add these lines to the file to import the database model you defined, and call the sync() function to initialize the database, which creates the table for your model:

      migrate.js

      var db = require('./database.js');
      db.sequelize.sync();
      

      The application is looking for database connection information in system environment variables. Create a file called .env to hold those values, which you will load into the environment during development:

      Add the following variable declarations to the file. Ensure that you set DB_HOST, DB_PORT, and DB_PASSWORD to those associated with your DigitalOcean PostgreSQL cluster:

      .env

      export DB_SCHEMA=addressbook_db
      export DB_USER=addressbook_user
      export DB_PASSWORD=your_db_user_password
      export DB_HOST=your_db_cluster_host
      export DB_PORT=your_db_cluster_port
      export DB_SSL=true
      export PORT=3000
      

      Save the file.

      Warning: never check environment files into source control. They usually have sensitive information.

      Since you defined a default .gitignore file when you created the repository, this file is already ignored.

      You are ready to initialize the database. Import the environment file and run migrate.js:

      • source ./.env
      • node migrate.js

      This creates the database table:

      Output

      Executing (default): CREATE TABLE IF NOT EXISTS "People" ("id" SERIAL , "firstName" VARCHAR(255) NOT NULL, "lastName" VARCHAR(255), "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, PRIMARY KEY ("id"));
      Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'People' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;

      The output shows two commands. The first one creates the People table as per your definition. The second command checks that the table was indeed created by looking it up in the PostgreSQL catalog.

      It’s good practice to create tests for your code. With tests, you can validate the code’s behavior. You can write a check for each function, method, or any other part of your system and verify that it works the way you’d expect, without having to test things manually.

      The jest testing framework is a great fit for writing tests against Node.js applications. Jest scans the files in the project for test files and executes them one at a time. Install Jest with the --save-dev option, which tells npm that the module is not required to run the program, but is a dependency for developing the application:

      • npm install --save-dev jest

      You’ll write tests to verify that you can insert, read, and delete records from your database. These tests will verify that your database connection and permissions are configured properly, and will also provide some tests you can use in your CI/CD pipeline later.

      Create the database.test.js file:

      Add the following content. Start by importing the database code:

      database.test.js

      const db = require('./database');
      
      . . .
      

      To ensure the database is ready to use, call sync() inside the beforeAll function:

      database.test.js

      . . .
      
      beforeAll(async () => {
          await db.sequelize.sync();
      });
      
      . . .
      

      The first test creates a person record in the database. The sequelize library executes all queries asynchronously, which means it doesn’t wait for the results of the query. To make the test wait for results so you can verify them, you must use the async and await keywords. This test calls the create() method to insert a new row in the database. Use expect to compare the person.id column with 1. The test will fail if you get a different value:

      database.test.js

      . . .
      
      test('create person', async () => {
          expect.assertions(1);
          const person = await db.Person.create({
              id: 1,
              firstName: 'Sammy',
              lastName: 'Davis Jr.',
              email: 'sammy@example.com'
          });
          expect(person.id).toEqual(1);
      });
      
      . . .
      

      In the next test, use the findByPk() method to retrieve the row with id=1. Then, validate the firstName and lastName values. Once again, use async and await:

      database.test.js

      . . .
      
      test('get person', async () => {
          expect.assertions(2);
          const person = await db.Person.findByPk(1);
          expect(person.firstName).toEqual('Sammy');
          expect(person.lastName).toEqual('Davis Jr.');
      });
      
      . . .
      

      Finally, test removing a person from the database. The destroy() method deletes the person with id=1. To ensure that it worked, try retrieving the person a second time and checking that the returned value is null:

      database.test.js

      . . .
      
      test('delete person', async () => {
          expect.assertions(1);
          await db.Person.destroy({
              where: {
                  id: 1
              }
          });
          const person = await db.Person.findByPk(1);
          expect(person).toBeNull();
      });
      
      . . .
      

      Finally, add this code to close the connection to the database with close() once all tests have finished:

      database.test.js

      . . .
      
      afterAll(async () => {
          await db.sequelize.close();
      });
      

      Save the file.

      The jest command runs the test suite for your program, but you can also store commands in package.json. Open this file in your editor:

      Locate the scripts keyword and replace the existing test line (which was just a placeholder). The test command is jest:

      . . .
      
        "scripts": {
          "test": "jest"
        },
      
      . . .
      

      Now you can call npm run test to invoke the test suite. This may be a longer command, but if you need to modify the jest command later, external services won’t have to change; they can continue calling npm run test.

      Run the tests:

      • npm run test

      Then, check the results:

      Output

      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): CREATE TABLE IF NOT EXISTS "People" ("id" SERIAL , "firstName" VARCHAR(255) NOT NULL, "lastName" VARCHAR(255), "createdAt" TIMESTAMP WITH TIME ZONE NOT NULL, "updatedAt" TIMESTAMP WITH TIME ZONE NOT NULL, PRIMARY KEY ("id"));
      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT i.relname AS name, ix.indisprimary AS primary, ix.indisunique AS unique, ix.indkey AS indkey, array_agg(a.attnum) as column_indexes, array_agg(a.attname) AS column_names, pg_get_indexdef(ix.indexrelid) AS definition FROM pg_class t, pg_class i, pg_index ix, pg_attribute a WHERE t.oid = ix.indrelid AND i.oid = ix.indexrelid AND a.attrelid = t.oid AND t.relkind = 'r' and t.relname = 'People' GROUP BY i.relname, ix.indexrelid, ix.indisprimary, ix.indisunique, ix.indkey ORDER BY i.relname;
      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): INSERT INTO "People" ("id","firstName","lastName","createdAt","updatedAt") VALUES ($1,$2,$3,$4,$5) RETURNING *;
      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT "id", "firstName", "lastName", "createdAt", "updatedAt" FROM "People" AS "Person" WHERE "Person"."id" = 1;
      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): DELETE FROM "People" WHERE "id" = 1
      console.log node_modules/sequelize/lib/sequelize.js:1176
        Executing (default): SELECT "id", "firstName", "lastName", "createdAt", "updatedAt" FROM "People" AS "Person" WHERE "Person"."id" = 1;
      
      PASS ./database.test.js
        ✓ create person (344ms)
        ✓ get person (173ms)
        ✓ delete person (323ms)
      
      Test Suites: 1 passed, 1 total
      Tests:       3 passed, 3 total
      Snapshots:   0 total
      Time:        5.315s
      Ran all test suites.

      With the database code tested, you can build the API service to manage the people in the address book.

      To serve HTTP requests, you’ll use the Express web framework. Install Express and save it as a dependency using npm install:

      • npm install --save express

      You’ll also need the body-parser module, which you’ll use to access the HTTP request body. Install this as a dependency as well:

      • npm install --save body-parser

      Create the main application file app.js:

      Import the express, body-parser, and database modules. Then create an instance of the express module called app to control and configure the service. You use app.use() to add features such as middleware. Use this to add the body-parser module so the application can read url-encoded strings:

      app.js

      var express = require('express');
      var bodyParser = require('body-parser');
      var db = require('./database');
      var app = express();
      app.use(bodyParser.urlencoded({ extended: true }));
      
      . . .
      

      Next, add routes to the application. Routes are similar to buttons in an app or website; they trigger some action in your application. Routes link unique URLs to actions in the application. Each route will serve a specific path and support a different operation.

      The first route you’ll define handles GET requests for the /person/$ID path, which will display the database record for the person with the specified ID. Express automatically sets the value of the requested $ID in the req.params.id variable.

      The application must reply with the person data encoded as a JSON string. As you did in the database tests, use the findByPk() method to retrieve the person by id and reply to the request with HTTP status 200 (OK) and send the person record as JSON. Add the following code:

      app.js

      . . .
      
      app.get("/person/:id", function(req, res) {
          db.Person.findByPk(req.params.id)
              .then( person => {
                  res.status(200).send(JSON.stringify(person));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      Errors cause the code in catch() to be executed. For instance, if the database is down, the connection will fail and the catch() branch runs instead. In that case, the route sets the HTTP status to 500 (Internal Server Error) and sends the error message back to the user.

      Add another route to create a person in the database. This route will handle PUT requests and access the person’s data from the req.body. Use the create() method to insert a row in the database:

      app.js

      . . .
      
      app.put("/person", function(req, res) {
          db.Person.create({
              firstName: req.body.firstName,
              lastName: req.body.lastName,
              id: req.body.id
          })
              .then( person => {
                  res.status(200).send(JSON.stringify(person));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      Add another route to handle DELETE requests, which will remove records from the address book. First, use the ID to locate the record and then use the destroy method to remove it:

      app.js

      . . .
      
      app.delete("/person/:id", function(req, res) {
          db.Person.destroy({
              where: {
                  id: req.params.id
              }
          })
              .then( () => {
                  res.status(200).send();
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      And for convenience, add a route that retrieves all people in the database using the /all path:

      app.js

      . . .
      
      app.get("/all", function(req, res) {
          db.Person.findAll()
              .then( persons => {
                  res.status(200).send(JSON.stringify(persons));
              })
              .catch( err => {
                  res.status(500).send(JSON.stringify(err));
              });
      });
      
      . . .
      

      One last route left. If the request did not match any of the previous routes, send status code 404 (Not Found):

      app.js

      . . .
      
      app.use(function(req, res) {
          res.status(404).send("404 - Not Found");
      });
      
      . . .
      

      Finally, add the listen() method, which starts up the service. If the environment variable PORT is defined, then the service listens in that port; otherwise, it defaults to port 3000:

      app.js

      . . .
      
      var server = app.listen(process.env.PORT || 3000, function() {
          console.log("app is running on port", server.address().port);
      });
      

      As you’ve learned, the package.json file lets you define various commands to run tests, start your apps, and other tasks, which often lets you run common commands with much less typing. Add a new command on package.json to start the application. Edit the file:

      Add the start command, so it looks like this:

      package.json

      . . .
      
        "scripts": {
          "test": "jest",
          "start": "node app.js"
        },
      
      . . .
      

      Don’t forget to add a comma to the previous line, as the scripts section needs its entries separated by commas.

      Save the file and start the application for the first time. First, load the environment file with source; this imports the variables into the session and makes them available to the application. Then, start the application with npm run start:

      • source ./.env
      • npm run start

      The app starts on port 3000:

      Output

      app is running on port 3000

      Open a browser and navigate to http://localhost:3000/all. You’ll see a page showing [].
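      While the application is still running, you can also exercise the other routes from a second terminal. For example, this curl command (the same style of request you'll use against the cluster later) creates a record through the PUT route:

      • curl -X PUT -d "firstName=Sammy&lastName=the Shark" localhost:3000/person

      Reloading http://localhost:3000/all in the browser would then show the new record.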

      Switch back to your terminal and press CTRL-C to stop the application.

      Now is an excellent time to add code quality tests. Code quality tools, also known as linters, scan the project for issues in the code. Bad coding practices like leaving unused variables, not ending statements with a semicolon, or missing curly braces can cause bugs that are difficult to find.

      Install the jshint tool, a JavaScript linter, as a development dependency:

      • npm install --save-dev jshint

      Over the years, JavaScript has received many updates, new features, and syntax changes. The language has been standardized by ECMA International under the name of “ECMAScript”. About once a year, ECMA releases a new version of ECMAScript with new features.

      By default, jshint expects an older version of ECMAScript and will throw an error if it finds any keywords that version doesn't support. You'll want to find the version that is compatible with your code. If you look at the feature table for the recent versions, you'll find that the async/await keywords were not introduced until ES8. You used both keywords in the database test code, so that sets the minimum compatible version to ES8.
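      As a short illustration (this snippet is hypothetical and not part of the project), jshint only accepts code like the following once esversion is set to 8 or higher, because async and await are ES8 keywords:

      async function findFirstPerson(db) {
          // await is an ES8 (ES2017) feature; lower esversion settings reject it
          const person = await db.Person.findByPk(1);
          return person;
      }
      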

      To tell jshint the version you’re using, create a file called .jshintrc:

      • nano .jshintrc

      In the file, specify esversion. The jshintrc file uses JSON, so create a new JSON object in the file:

      .jshintrc

      { "esversion": 8 }
      

      Save the file and exit the editor.

      Add a command to run jshint. Edit package.json:

      • nano package.json

      Add a lint command to your project in the scripts section of package.json. The command calls the lint tool against all the JavaScript files you created so far:

      package.json

      . . .
      
        "scripts": {
          "test": "jest",
          "start": "node app.js",
          "lint": "jshint app.js database*.js migrate.js"
        },
      
      . . .
      

      Now you can run the linter to find any issues:

      • npm run lint

      There should not be any error messages:

      Output

      > jshint app.js database*.js migrate.js

      If there are any errors, jshint will show the line that has the problem.

      You’ve completed the project and ensured it works. Add the files to the repository, commit, and push the changes:

      • git add *.js
      • git add package*.json
      • git add .jshintrc
      • git commit -m 'initial commit'
      • git push origin master

      Now you can configure Semaphore to test, build, and deploy the application, starting by configuring Semaphore with your DigitalOcean Personal Access Token and database credentials.

      Step 3 — Creating Secrets in Semaphore

      Some information doesn’t belong in a GitHub repository; passwords and API tokens are good examples. So far you’ve stored this sensitive data in a separate file and loaded it into your environment. With Semaphore, you can use Secrets to store sensitive data.

      You will create three kinds of secrets for this project:

      • Docker Hub: the username and password of your Docker Hub account.
      • DigitalOcean Personal Access Token: to deploy the application to your Kubernetes cluster.
      • Environment Variables: for database username and password connection parameters.

      To create the first secret, open your browser and log in to the Semaphore website. On the left navigation menu, click Secrets under the CONFIGURATION heading. Click the Create New Secret button.

      For Name of the Secret, enter dockerhub. Then under Environment Variables, create two environment variables:

      • DOCKER_USERNAME: your DockerHub username.
      • DOCKER_PASSWORD: your DockerHub password.

      Docker Hub Secret

      Click Save Changes.

      Create a second secret for your DigitalOcean Personal Access Token. Once again, click on Secrets on the left navigation menu, then on Create New Secret. Call this secret do-access-token and create an environment value called DO_ACCESS_TOKEN with the value set to your Personal Access Token:

      DigitalOcean Token Secret

      Save the secret.

      For the next secret, instead of setting environment variables directly, you’ll upload the .env file from the project’s root.

      Create a new secret called env-production. Under the Files section, press the Upload file link to locate and upload your .env file, and tell Semaphore to place it at /home/semaphore/env-production.

      Environment Secret

      Note: Because the file is hidden, you may have trouble finding it on your computer. There is usually a menu item or a key combination to view hidden files, such as CTRL+H. If all else fails, you can try copying the file with a non-hidden name:

      Then upload the file and rename it back:

      The environment variables are all configured. Now you can begin the Continuous Integration setup.

      Step 4 — Adding your Project to Semaphore

      In this step you will add your project to Semaphore and start the Continuous Integration (CI) pipeline.

      First, link your GitHub repository with Semaphore:

      1. Log in to your Semaphore account.
      2. Click the + icon next to PROJECTS.
      3. Click the Add Repository button next to your repository.

      Add Repository to Semaphore

      Now that Semaphore is connected, it will pick up any changes in the repository automatically.

      You are now ready to create the Continuous Integration pipeline for the application. A pipeline defines the path your code must travel to get built, tested, and deployed. The pipeline is automatically run each time there is a change in the GitHub repository.

      First, you should ensure that Semaphore uses the same version of Node.js you’ve been using during development. You can check which version is running on your machine:

      • node -v

      Output

      v10.16.0

      You can tell Semaphore which version of Node.js to use by creating a file called .nvmrc in your repository. Internally, Semaphore uses node version manager (nvm) to switch between Node.js versions. Create the .nvmrc file and set the version to 10.16.0:

      .nvmrc

      10.16.0
      

      Semaphore pipelines go in the .semaphore directory. Create the directory:

      • mkdir .semaphore

      Create a new pipeline file. The initial pipeline is always called semaphore.yml. In this file, you’ll define all the steps required to build and test the application.

      • nano .semaphore/semaphore.yml

      Note: You are creating a file in the YAML format. You must preserve the leading spaces as shown in the tutorial.

      The first line must set the Semaphore file version; the current stable is v1.0. Also, the pipeline needs a name. Add these lines to your file:

      .semaphore/semaphore.yml

      version: v1.0
      name: Addressbook
      
      . . .
      

      Semaphore automatically provisions virtual machines to run the tasks. There are various machines to choose from. For the integration jobs, use the e1-standard-2 machine type (2 CPUs, 4 GB RAM) along with an Ubuntu 18.04 OS image. Add these lines to the file:

      .semaphore/semaphore.yml

      . . .
      
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      Semaphore uses blocks to organize the tasks. Each block can have one or more jobs. All jobs in a block run in parallel, each one in an isolated machine. Semaphore waits for all jobs in a block to pass before starting the next one.
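      As a minimal sketch (not part of this project's pipeline), a block with two jobs that Semaphore runs in parallel could look like this:

      blocks:
        - name: Example block
          task:
            jobs:
              - name: Job A
                commands:
                  - echo "runs at the same time as Job B"
              - name: Job B
                commands:
                  - echo "runs at the same time as Job A"
      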

      Start by defining the first block, which installs all the JavaScript dependencies to test and run the application:

      .semaphore/semaphore.yml

      . . .
      
      blocks:
        - name: Install dependencies
          task:
      
      . . .
      

      You can define environment variables that are common for all jobs, like setting NODE_ENV to test, so Node.js knows this is a test environment. Add this code after task:

      .semaphore/semaphore.yml

      . . .
          task:
            env_vars:
              - name: NODE_ENV
                value: test
      
      . . .
      

      Commands in the prologue section are executed before each job in the block. It’s a convenient place to define setup tasks. You can use checkout to clone the GitHub repository. Then, nvm use activates the appropriate Node.js version you specified in .nvmrc. Add the prologue section:

      .semaphore/semaphore.yml

          task:
      . . .
      
            prologue:
              commands:
                - checkout
                - nvm use
      
      . . .
      

      Next add this code to install the project’s dependencies. To speed up jobs, Semaphore provides the cache tool. You can run cache store to save the node_modules directory in Semaphore’s cache; cache automatically figures out which files and directories should be stored. The second time the job is executed, cache restore restores the directory.

      .semaphore/semaphore.yml

      . . .
      
            jobs:
              - name: npm install and cache
                commands:
                  - cache restore
                  - npm install
                  - cache store 
      
      . . .
      

      Add another block, which will run two jobs: one to run the lint test, and another to run the application’s test suite.

      .semaphore/semaphore.yml

      . . .
      
        - name: Tests
          task:
            env_vars:
              - name: NODE_ENV
                value: test
            prologue:
              commands:
                - checkout
                - nvm use
                - cache restore 
      
      . . .
      

      The prologue repeats the same commands as in the previous block and restores node_modules from the cache. Since this block runs tests, you set the NODE_ENV environment variable to test.

      Now add the jobs. The first job performs the code quality check with jshint:

      .semaphore/semaphore.yml

      . . .
      
            jobs:
              - name: Static test
                commands:
                  - npm run lint
      
      . . .
      

      The next job executes the unit tests. You’ll need a database to run them, as you don’t want to use your production database. Semaphore’s sem-service can start a local PostgreSQL database in the test environment that is completely isolated. The database is destroyed when the job ends. Start this service and run the tests:

      .semaphore/semaphore.yml

      . . .
      
              - name: Unit test
                commands:
                  - sem-service start postgres
                  - npm run test
      

      Save the .semaphore/semaphore.yml file.

      Now add and commit the changes to the GitHub repository:

      • git add .nvmrc
      • git add .semaphore/semaphore.yml
      • git commit -m "continuous integration pipeline"
      • git push origin master

      As soon as the code is pushed to GitHub, Semaphore starts the CI pipeline:

      Running Workflow

      You can click on the pipeline to show the blocks and jobs, and their output.

      Integration Pipeline

      Next you will create a new pipeline that builds a Docker image for the application.

      Step 5 — Building Docker Images for the Application

      A Docker image is the basic unit of a Kubernetes deployment. The image should have all the binaries, libraries, and code required to run the application. A Docker container is not a lightweight virtual machine, but it behaves like one. The Docker Hub registry contains hundreds of ready-to-use images, but we’re going to build our own.

      In this step, you’ll add a new pipeline to build a custom Docker image for your app and push it to Docker Hub.

      To build a custom image, create a Dockerfile:

      • nano Dockerfile

      The Dockerfile is a recipe to create the image. You can use the official Node.js distribution as a starting point instead of starting from scratch. Add this to your Dockerfile:

      Dockerfile

      FROM node:10.16.0-alpine
      
      . . .
      

      Then add the commands that copy package.json and package-lock.json and install the Node modules inside the image:

      Dockerfile

      . . .
      
      COPY package*.json ./
      RUN npm install
      
      . . .
      

      Installing the dependencies first will speed up subsequent builds, as Docker will cache this step.

      Now add this command which copies all the application files in the project root into the image:

      Dockerfile

      . . .
      
      COPY *.js ./
      
      . . .
      

      Finally, EXPOSE specifies that the container listens for connections on port 3000, where the application is listening, and CMD sets the command that should run when the container starts. Add these lines to your file:

      Dockerfile

      . . .
      
      EXPOSE 3000
      CMD [ "npm", "run", "start" ]
      

      Save the file.
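      If you have Docker installed locally, you can optionally sanity-check the image before handing the build over to Semaphore. These commands are only a local check (addressbook-local is just a throwaway tag) and assume your dependencies are listed in package.json; the container should start and listen on port 3000, although database-backed routes will fail without the DB_* environment variables:

      • docker build -t addressbook-local .
      • docker run --rm -p 3000:3000 addressbook-local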

      With the Dockerfile complete, you can create a new pipeline so Semaphore can build the image for you when you push your code to GitHub. Create a new file called docker-build.yml:

      • nano .semaphore/docker-build.yml

      Start the pipeline with the same boilerplate as the CI pipeline, but with the name Docker build:

      .semaphore/docker-build.yml

      version: v1.0
      name: Docker build
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      This pipeline will have only one block and one job. In Step 3, you created a secret named dockerhub with your Docker Hub username and password. Here, you’ll import these values using the secrets keyword. Add this code:

      .semaphore/docker-build.yml

      . . .
      
      blocks:
        - name: Build
          task:
            secrets:
              - name: dockerhub
      
      . . .
      

      Docker images are stored in repositories. You’ll use the official Docker Hub registry, which allows an unlimited number of public repositories. Add these lines to check out the code from GitHub and use the docker login command to authenticate with Docker Hub:

      .semaphore/docker-build.yml

          task:
      . . .
      
            prologue:
              commands:
                - checkout
                - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
      
      . . .
      

      Each Docker image is fully identified by the combination of a name and a tag. The name usually corresponds to the product or software, and the tag corresponds to a particular version of the software, for example node:10.16.0. When no tag is supplied, Docker defaults to the special latest tag, so it’s considered good practice to keep latest pointing at the most current image.
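      For instance, these two commands (shown only as an illustration; you don't need to run them) reference the official Node.js image at different versions:

      • docker pull node:10.16.0-alpine
      • docker pull node

      The second command is equivalent to docker pull node:latest.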

      Add the following code to build the image and push it to Docker Hub:

      .semaphore/docker-build.yml

      . . .
      
            jobs:
            - name: Docker build
              commands:
                - docker pull "${DOCKER_USERNAME}/addressbook:latest" || true
                - docker build --cache-from "${DOCKER_USERNAME}/addressbook:latest" -t "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" .
                - docker push "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID"
      

      When Docker builds the image, it reuses parts of existing images to speed up the process. The first command tries to pull the latest image from Docker Hub so it may be reused. Semaphore stops the pipeline if any of the commands return a status code different than zero. For example, if the repository doesn’t have any latest image, as it won’t on the first try, the pipeline will stop. You can force Semaphore to ignore failed commands by appending || true to the command.

      The second command builds the image. To reference this particular image later, you can tag it with a unique string. Semaphore provides several environment variables for jobs. One of them, $SEMAPHORE_WORKFLOW_ID is unique and shared among all the pipelines in the workflow. It’s handy for referencing this image later in the deployment.

      The third command pushes the image to Docker Hub.

      The build pipeline is ready, but Semaphore will not start it unless you connect it to the main CI pipeline. You can chain multiple pipelines to create complex, multi-branch workflows using promotions.

      Edit the main pipeline file .semaphore/semaphore.yml:

      • nano .semaphore/semaphore.yml

      Add the following lines at the end of the file:

      .semaphore/semaphore.yml

      . . .
      
      promotions:
        - name: Dockerize
          pipeline_file: docker-build.yml
          auto_promote_on:
            - result: passed
      

      auto_promote_on defines the condition to start the docker build pipeline. In this case, it runs when all jobs defined in the semaphore.yml file have passed.

      To test the new pipeline, you need to add, commit, and push all the modified files to GitHub:

      • git add Dockerfile
      • git add .semaphore/docker-build.yml
      • git add .semaphore/semaphore.yml
      • git commit -m "docker build pipeline"
      • git push origin master

      After the CI pipeline is complete, the Docker build pipeline starts.

      Build Pipeline

      When it finishes, you’ll see your new image in your Docker Hub repository.

      You’ve got your build process testing and creating the image. Now you’ll create the final pipeline to deploy the application to your Kubernetes cluster.

      Step 6 — Setting up Continuous Deployment to Kubernetes

      The building block of a Kubernetes deployment is the pod. A pod is a group of containers that are managed as a single unit. The containers inside a pod start and stop in unison and always run on the same machine, sharing its resources. Each pod has an IP address. In this case, the pods will only have one container.

      Pods are ephemeral; they are created and destroyed frequently. You can’t tell which IP address is going to be assigned to each pod until it’s started. To solve this, you’ll use services, which have fixed public IP addresses so incoming connections can be load-balanced and forwarded to the pods.

      You could manage pods directly, but it’s better to let Kubernetes handle that by using a deployment. In this section, you will create a declarative manifest that describes the final desired state for your cluster. The manifest has two resources:

      • Deployment: starts the pods in the cluster nodes as required and keeps track of their status. Since in this tutorial we’re using a 3-node cluster, we’ll deploy 3 pods.
      • Service: acts as the entry point for your users. It listens for traffic on port 80 (HTTP) and forwards connections to the pods.

      Create a manifest file called deployment.yml:

      • nano deployment.yml

      Start the manifest with the Deployment resource. Add the following contents to the new file to define the deployment:

      deployment.yml

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: addressbook
      spec:
        replicas: 3
        selector:
          matchLabels:
            app: addressbook
        template:
          metadata:
            labels:
              app: addressbook
          spec:
            containers:
              - name: addressbook
                image: ${DOCKER_USERNAME}/addressbook:${SEMAPHORE_WORKFLOW_ID}
                env:
                  - name: NODE_ENV
                    value: "production"
                  - name: PORT
                    value: "$PORT"
                  - name: DB_SCHEMA
                    value: "$DB_SCHEMA"
                  - name: DB_USER
                    value: "$DB_USER"
                  - name: DB_PASSWORD
                    value: "$DB_PASSWORD"
                  - name: DB_HOST
                    value: "$DB_HOST"
                  - name: DB_PORT
                    value: "$DB_PORT"
                  - name: DB_SSL
                    value: "$DB_SSL"
      
      
      . . .
      

      For each resource in the manifest, you need to set an apiVersion. For deployments, use apiVersion: apps/v1, a stable version. Then, tell Kubernetes that this resource is a Deployment with kind: Deployment. Each definition should have a name defined in metadata.name.

      In the spec section you tell Kubernetes what the desired final state is. This definition requests that Kubernetes should create 3 pods with replicas: 3.

      Labels are key-value pairs used to organize and cross-reference Kubernetes resources. You define labels with metadata.labels, and you can look for matching labels with selector.matchLabels. This is how you connect elements together.

      The key spec.template defines a model that Kubernetes will use to create each pod. Inside spec.template.metadata.labels you set one label for the pods: app: addressbook.

      With spec.selector.matchLabels you make the deployment manage any pods with the label app: addressbook. In this case you are making this deployment responsible for all the pods.

      Finally, you define the image that runs in the pods. In spec.template.spec.containers you set the image name. Kubernetes will pull the image from the registry as needed; in this case, it will pull from Docker Hub. You can also set environment variables for the containers, which is fortunate because you need to supply several values for the database connection.

      To keep the deployment manifest flexible, you’ll be relying on variables. The YAML format, however, doesn’t allow variables, so the file isn’t valid yet. You’ll solve that problem when you define the deployment pipeline for Semaphore.
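      As a quick local illustration of what will happen later (the variable value here is made up), envsubst reads variables from the environment and substitutes them into any $VAR or ${VAR} placeholders in its input:

      • export DB_USER=example_user
      • echo 'user: "$DB_USER"' | envsubst

      This prints user: "example_user", which is exactly the treatment the placeholders in deployment.yml will receive in the deployment pipeline.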

      That’s it for the deployment. But this only defines the pods. You still need a service that will allow traffic to flow to your pods. You can add another Kubernetes resource in the same file as long as you use three hyphens (---) as a separator.

      Add the following code to define a load balancer service that connects to pods with the addressbook label:

      deployment.yml

      . . .
      
      ---
      
      apiVersion: v1
      kind: Service
      metadata:
        name: addressbook-lb
      spec:
        selector:
          app: addressbook
        type: LoadBalancer
        ports:
          - port: 80
            targetPort: 3000
      

      The load balancer will receive connections on port 80 and forward them to the pods’ port 3000 where the application is listening.
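      Once the deployment pipeline has run at the end of this tutorial, you can optionally inspect these resources from any machine that has kubectl configured for the cluster (this is only a verification sketch, not a required step):

      • kubectl get deployments,pods
      • kubectl get service addressbook-lb

      The service's EXTERNAL-IP column shows the load balancer address once DigitalOcean finishes provisioning it.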

      Save the file.

      Now, create a deployment pipeline for Semaphore that will deploy the app using the manifest. Create a new file in the .semaphore directory:

      • nano .semaphore/deploy-k8s.yml

      Begin the pipeline as usual, specifying the version, name, and image:

      .semaphore/deploy-k8s.yml

      version: v1.0
      name: Deploy to Kubernetes
      agent:
        machine:
          type: e1-standard-2
          os_image: ubuntu1804
      
      . . .
      

      This pipeline will have two blocks. The first block deploys the application to the Kubernetes cluster.

      Define the block and import all the secrets:

      .semaphore/deploy-k8s.yml

      . . .
      
      blocks:
        - name: Deploy to Kubernetes
          task:
            secrets:
              - name: dockerhub
              - name: do-access-token
              - name: env-production
      
      . . .
      

      Store your DigitalOcean Kubernetes cluster name in an environment variable so you can reference it later:

      .semaphore/deploy-k8s.yml

      . . .
      
            env_vars:
              - name: CLUSTER_NAME
                value: addressbook-server
      
      . . .
      

      DigitalOcean Kubernetes clusters are managed with a combination of two programs: kubectl and doctl. The former is already included in Semaphore’s image, but the latter isn’t, so you need to install it. You can use the prologue section to do it.

      Add this prologue section:

      .semaphore/deploy-k8s.yml

      . . .
      
            prologue:
              commands:
                - wget https://github.com/digitalocean/doctl/releases/download/v1.20.0/doctl-1.20.0-linux-amd64.tar.gz
                - tar xf doctl-1.20.0-linux-amd64.tar.gz 
                - sudo cp doctl /usr/local/bin
                - doctl auth init --access-token $DO_ACCESS_TOKEN
                - doctl kubernetes cluster kubeconfig save "${CLUSTER_NAME}"
                - checkout
      
      . . .
      

      The first command downloads the official doctl release with wget, the second decompresses it with tar, and the third copies the binary into /usr/local/bin. Once doctl is installed, the pipeline uses it to authenticate with the DigitalOcean API and retrieve the Kubernetes config file for your cluster. The final checkout command clones your code, which completes the prologue.

      Next comes the block’s job: deploying to the cluster.

      Remember that deployment.yml references several environment variables, and plain YAML has no way to expand them, so the file won’t work in its current state. To get around that, source the environment file to load the variables, then use the envsubst command to expand the variables in place with the actual values. The result, a file called deploy.yml, is entirely valid YAML with the values filled in. With the file in place, you can start the deployment with kubectl apply:

      .semaphore/deploy-k8s.yml

      . . .
      
            jobs:
            - name: Deploy
              commands:
                - source $HOME/env-production
                - envsubst < deployment.yml | tee deploy.yml
                - kubectl apply -f deploy.yml
      
      . . .
      

      The second block adds the latest tag to the image on Docker Hub to denote that this is the most current version deployed. Repeat the Docker login steps, then pull, retag, and push to Docker Hub:

      .semaphore/deploy-k8s.yml

      . . .
      
        - name: Tag latest release
          task:
            secrets:
              - name: dockerhub
            prologue:
              commands:
                - checkout
                - echo "${DOCKER_PASSWORD}" | docker login -u "${DOCKER_USERNAME}" --password-stdin
            jobs:
            - name: docker tag latest
              commands:
                - docker pull "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" 
                - docker tag "${DOCKER_USERNAME}/addressbook:$SEMAPHORE_WORKFLOW_ID" "${DOCKER_USERNAME}/addressbook:latest"
                - docker push "${DOCKER_USERNAME}/addressbook:latest"
      

      Save the file.

      This pipeline performs the deployment, but it can only start if the Docker image was successfully generated and pushed to Docker Hub. As a result, you must connect the build and deployment pipelines with a promotion. Edit the Docker build pipeline to add it:

      • nano .semaphore/docker-build.yml

      Add the promotion to the end of the file:

      .semaphore/docker-build.yml

      . . .
      
      promotions:
        - name: Deploy to Kubernetes
          pipeline_file: deploy-k8s.yml
          auto_promote_on:
            - result: passed
      

      You are done setting up the CI/CD workflow.

      All that remains is pushing the modified files and letting Semaphore do the work. Add, commit, and push your repository’s changes:

      • git add .semaphore/deploy-k8s.yml
      • git add .semaphore/docker-build.yml
      • git add deployment.yml
      • git commit -m "kubernetes deploy pipeline"
      • git push origin master

      It’ll take a few minutes for the deployment to complete.

      Deploy Pipeline

      Let’s test the application next.

      Step 7 — Testing the Application

      At this point, the application is up and running. In this step, you’ll use curl to test the API endpoint.

      You’ll need to know the public IP that DigitalOcean has given to your cluster. Follow these steps to find it:

      1. Log in to your DigitalOcean account.
      2. Select the addressbook project
      3. Go to Networking.
      4. Click on Load Balancers.
      5. The IP Address is shown. Copy the IP address.

      Load Balancer IP

      Let’s check the /all route using curl:

      • curl -w "\n" YOUR_CLUSTER_IP/all

      The -w "\n" option tells curl to print a newline after the response, which keeps the output readable.

      Since there are no records in the database yet, you get an empty JSON array as the result:

      Output

      []

      Create a new person record by making a PUT request to the /person endpoint:

      • curl -w "\n" -X PUT \
      • -d "firstName=Sammy&lastName=the Shark" YOUR_CLUSTER_IP/person

      The API returns the JSON object for the person:

      Output

      { "id": 1, "firstName": "Sammy", "lastName": "the Shark", "updatedAt": "2019-07-04T23:51:00.548Z", "createdAt": "2019-07-04T23:51:00.548Z" }

      Create a second person:

      • curl -w "\n" -X PUT \
      • -d "firstName=Tommy&lastName=the Octopus" YOUR_CLUSTER_IP/person

      The output indicates that a second person was created:

      Output

      { "id": 2, "firstName": "Tommy", "lastName": "the Octopus", "updatedAt": "2019-07-04T23:52:08.724Z", "createdAt": "2019-07-04T23:52:08.724Z" }

      Now make a GET request to get the person with the id of 2:

      • curl -w "\n" YOUR_CLUSTER_IP/person/2

      The server replies with the data you requested:

      Output

      { "id": 2, "firstName": "Tommy", "lastName": "the Octopus", "createdAt": "2019-07-04T23:52:08.724Z", "updatedAt": "2019-07-04T23:52:08.724Z" }

      To delete the person, send a DELETE request:

      • curl -w "\n" -X DELETE YOUR_CLUSTER_IP/person/2

      No output is returned by this command.

      You should only have one person in your database, the one with the id of 1. Try getting /all again:

      • curl -w "\n" YOUR_CLUSTER_IP/all

      The server replies with an array of persons containing only one record:

      Output

      [ { "id": 1, "firstName": "Sammy", "lastName": "the Shark", "createdAt": "2019-07-04T23:51:00.548Z", "updatedAt": "2019-07-04T23:51:00.548Z" } ]

      At this point, there’s only one person left in the database.

      This completes the tests for all the endpoints in our application and marks the end of the tutorial.

      Conclusion

      In this tutorial, you wrote a complete Node.js application from scratch which used DigitalOcean’s managed PostgreSQL database service. You then used Semaphore’s CI/CD pipelines to fully automate a workflow that tested and built a container image, uploaded it to Docker Hub, and deployed it to DigitalOcean Kubernetes.

      To learn more about Kubernetes, you can read An Introduction to Kubernetes and the rest of DigitalOcean’s Kubernetes tutorials.

      Now that your application is deployed, you may consider adding a domain name, securing your database cluster, or setting up alerts for your database.




      How To Implement Continuous Testing of Ansible Roles Using Molecule and Travis CI on Ubuntu 18.04


      The author selected the Mozilla Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Ansible is an agentless configuration management tool that uses YAML templates to define a list of tasks to be performed on hosts. In Ansible, roles are a collection of variables, tasks, files, templates and modules that are used together to perform a singular, complex function.

      Molecule is a tool for performing automated testing of Ansible roles, specifically designed to support the development of consistently well-written and maintained roles. Molecule’s unit tests allow developers to test roles simultaneously against multiple environments and under different parameters. It’s important that developers continuously run tests against code that often changes; this workflow ensures that roles continue to work as you update code libraries. Running Molecule using a continuous integration tool, like Travis CI, allows for tests to run continuously, ensuring that contributions to your code do not introduce breaking changes.

      In this tutorial, you will use a pre-made base role that installs and configures an Apache web server and a firewall on Ubuntu and CentOS servers. Then, you will initialize a Molecule scenario in that role to create tests and ensure that the role performs as intended in your target environments. After configuring Molecule, you will use Travis CI to continuously test your newly created role. Every time a change is made to your code, Travis CI will run molecule test to make sure that the role still performs correctly.

      Prerequisites

      Before you begin this tutorial, you will need:

      Step 1 — Forking the Base Role Repository

      You will be using a pre-made role called ansible-apache that installs Apache and configures a firewall on Debian- and Red Hat-based distributions. You will fork and use this role as a base and then build Molecule tests on top of it. Forking allows you to create a copy of a repository so you can make changes to it without tampering with the original project.

      Start by creating a fork of the ansible-apache role. Go to the ansible-apache repository and click on the Fork button.

      Once you have forked the repository, GitHub will lead you to your fork’s page. This will be a copy of the base repository, but on your own account.

      Click on the green Clone or Download button and you’ll see a box with Clone with HTTPS.

      Copy the URL shown for your repository. You’ll use this in the next step. The URL will be similar to this:

      https://github.com/username/ansible-apache.git
      

      You will replace username with your GitHub username.

      With your fork set up, you will clone it on your server and begin preparing your role in the next section.

      Step 2 — Preparing Your Role

      Having followed Step 1 of the prerequisite How To Test Ansible Roles with Molecule on Ubuntu 18.04, you will have Molecule and Ansible installed in a virtual environment. You will use this virtual environment for developing your new role.

      First, activate the virtual environment you created while following the prerequisites by running:

      • source my_env/bin/activate

      Run the following command to clone the repository using the URL you just copied in Step 1:

      • git clone https://github.com/username/ansible-apache.git

      Your output will look similar to the following:

      Output

      Cloning into 'ansible-apache'... remote: Enumerating objects: 16, done. remote: Total 16 (delta 0), reused 0 (delta 0), pack-reused 16 Unpacking objects: 100% (16/16), done.

      Move into the newly created directory:

      • cd ansible-apache

      The base role you've downloaded performs the following tasks (a simplified sketch of the OS-specific includes follows the list):

      • Includes variables: The role starts by including all the required variables according to the distribution of the host. Ansible uses variables to handle the disparities between different systems. Since you are using Ubuntu 18.04 and CentOS 7 as hosts, the role will recognize that the OS families are Debian and Red Hat respectively and include variables from vars/Debian.yml and vars/RedHat.yml.

      • Includes distribution-relevant tasks: These tasks include tasks/install-Debian.yml and tasks/install-RedHat.yml. Depending on the specified distribution, it installs the relevant packages. For Ubuntu, these packages are apache2 and ufw. For CentOS, these packages are httpd and firewalld.

      • Ensures latest index.html is present: This task copies over a template templates/index.html.j2 that Apache will use as the web server's home page.

      • Starts relevant services and enables them on boot: Starts and enables the required services installed as part of the first task. For CentOS, these services are httpd and firewalld, and for Ubuntu, they are apache2 and ufw.

      • Configures firewall to allow traffic: This includes either tasks/configure-Debian-firewall.yml or tasks/configure-RedHat-firewall.yml. Ansible configures either Firewalld or UFW as the firewall and whitelists the http service.
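      As a simplified sketch (not necessarily the role's exact code), OS-specific includes like the ones described above are commonly written with the ansible_os_family fact, which resolves to Debian or RedHat:

      - name: Include OS-family specific variables
        include_vars: "{{ ansible_os_family }}.yml"

      - name: Include OS-family specific install tasks
        include_tasks: "install-{{ ansible_os_family }}.yml"
      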

      Now that you have an understanding of how this role works, you will configure Molecule to test it. You will write test cases for these tasks that cover the changes they make.

      Step 3 — Writing Your Tests

      To check that your base role performs its tasks as intended, you will start a Molecule scenario, specify your target environments, and create three custom test files.

      Begin by initializing a Molecule scenario for this role using the following command:

      • molecule init scenario -r ansible-apache

      You will see the following output:

      Output

      --> Initializing new scenario default... Initialized scenario in /home/sammy/ansible-apache/molecule/default successfully.

      You will add CentOS and Ubuntu as your target environments by including them as platforms in your Molecule configuration file. To do this, edit the molecule.yml file using a text editor:

      • nano molecule/default/molecule.yml

      Add the following highlighted content to the Molecule configuration:

      ~/ansible-apache/molecule/default/molecule.yml

      ---
      dependency:
        name: galaxy
      driver:
        name: docker
      lint:
        name: yamllint
      platforms:
        - name: centos7
          image: milcom/centos7-systemd
          privileged: true
        - name: ubuntu18
          image: solita/ubuntu-systemd
          command: /sbin/init
          privileged: true
          volumes:
            - /lib/modules:/lib/modules:ro
      provisioner:
        name: ansible
        lint:
          name: ansible-lint
      scenario:
        name: default
      verifier:
        name: testinfra
        lint:
          name: flake8
      

      Here, you're specifying two target platforms that are launched in privileged mode since you're working with systemd services:

      • centos7 is the first platform and uses the milcom/centos7-systemd image.
      • ubuntu18 is the second platform and uses the solita/ubuntu-systemd image. In addition to using privileged mode and mounting the required kernel modules, you're running /sbin/init on launch to make sure iptables is up and running.

      Save and exit the file.

      For more information on running privileged containers visit the official Molecule documentation.

      Instead of using the default Molecule test file, you will be creating three custom test files, one for each target platform, and one file for writing tests that are common between all platforms. Start by deleting the scenario's default test file test_default.py with the following command:

      • rm molecule/default/tests/test_default.py

      You can now move on to creating the three custom test files, test_common.py, test_Debian.py, and test_RedHat.py for each of your target platforms.

      The first test file, test_common.py, will contain the common tests that each of the hosts will perform. Create and edit the common test file, test_common.py:

      • nano molecule/default/tests/test_common.py

      Add the following code to the file:

      ~/ansible-apache/molecule/default/tests/test_common.py

      import os
      import pytest
      
      import testinfra.utils.ansible_runner
      
      testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
          os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')
      
      
      @pytest.mark.parametrize('file, content', [
        ("/var/www/html/index.html", "Managed by Ansible")
      ])
      def test_files(host, file, content):
          file = host.file(file)
      
          assert file.exists
          assert file.contains(content)
      

      In your test_common.py file, you have imported the required libraries. You have also written a test called test_files(), which covers the only common task between distributions that your role performs: copying your template as the web server's homepage.

      The next test file, test_Debian.py, holds tests specific to Debian distributions. This test file will specifically target your Ubuntu platform.

      Create and edit the Ubuntu test file by running the following command:

      • nano molecule/default/tests/test_Debian.py

      You can now import the required libraries and define the ubuntu18 platform as the target host. Add the following code to the start of this file:

      ~/ansible-apache/molecule/default/tests/test_Debian.py

      import os
      import pytest
      
      import testinfra.utils.ansible_runner
      
      testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
          os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('ubuntu18')
      

      Then, in the same file, add the following code to define the test_pkg() test:

      ~/ansible-apache/molecule/default/tests/test_Debian.py

      ...
      @pytest.mark.parametrize('pkg', [
          'apache2',
          'ufw'
      ])
      def test_pkg(host, pkg):
          package = host.package(pkg)
      
          assert package.is_installed
      

      This test will check if apache2 and ufw packages are installed on the host.

      Note: When adding multiple tests to a Molecule test file, make sure there are two blank lines between each test or you'll get a syntax error from Molecule.

      To define the next test, test_svc(), add the following code under the test_pkg() test in your file:

      ~/ansible-apache/molecule/default/tests/test_Debian.py

      ...
      @pytest.mark.parametrize('svc', [
          'apache2',
          'ufw'
      ])
      def test_svc(host, svc):
          service = host.service(svc)
      
          assert service.is_running
          assert service.is_enabled
      

      test_svc() will check if the apache2 and ufw services are running and enabled.

      Finally you will add your last test, test_ufw_rules(), to the test_Debian.py file.

      Add this code under the test_svc() test in your file to define test_ufw_rules():

      ~/ansible-apache/molecule/default/tests/test_Debian.py

      ...
      @pytest.mark.parametrize('rule', [
          '-A ufw-user-input -p tcp -m tcp --dport 80 -j ACCEPT'
      ])
      def test_ufw_rules(host, rule):
          cmd = host.run('iptables -t filter -S')
      
          assert rule in cmd.stdout
      

      test_ufw_rules() will check that your firewall configuration permits traffic on the port used by the Apache service.

      With each of these tests added, your test_Debian.py file will look like this:

      ~/ansible-apache/molecule/default/tests/test_Debian.py

      import os
      import pytest
      
      import testinfra.utils.ansible_runner
      
      testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
          os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('ubuntu18')
      
      
      @pytest.mark.parametrize('pkg', [
          'apache2',
          'ufw'
      ])
      def test_pkg(host, pkg):
          package = host.package(pkg)
      
          assert package.is_installed
      
      
      @pytest.mark.parametrize('svc', [
          'apache2',
          'ufw'
      ])
      def test_svc(host, svc):
          service = host.service(svc)
      
          assert service.is_running
          assert service.is_enabled
      
      
      @pytest.mark.parametrize('rule', [
          '-A ufw-user-input -p tcp -m tcp --dport 80 -j ACCEPT'
      ])
      def test_ufw_rules(host, rule):
          cmd = host.run('iptables -t filter -S')
      
          assert rule in cmd.stdout
      

      The test_Debian.py file now includes the three tests: test_pkg(), test_svc(), and test_ufw_rules().

      Save and exit test_Debian.py.

      Next you'll create the test_RedHat.py test file, which will contain tests specific to Red Hat distributions to target your CentOS platform.

      Create and edit the CentOS test file, test_RedHat.py, by running the following command:

      • nano molecule/default/tests/test_RedHat.py

      Similarly to the Ubuntu test file, you will now write three tests to include in your test_RedHat.py file. Before adding the test code, you can import the required libraries and define the centos7 platform as the target host, by adding the following code to the beginning of your file:

      ~/ansible-apache/molecule/default/tests/test_RedHat.py

      import os
      import pytest
      
      import testinfra.utils.ansible_runner
      
      testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
          os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('centos7')
      

      Then, add the test_pkg() test, which will check if the httpd and firewalld packages are installed on the host.

      Following the code for your library imports, add the test_pkg() test to your file. (Again, remember to include two blank lines before each new test.)

      ~/ansible-apache/molecule/default/tests/test_RedHat.py

      ...
      @pytest.mark.parametrize('pkg', [
          'httpd',
          'firewalld'
      ])
      def test_pkg(host, pkg):
          package = host.package(pkg)
      
          assert package.is_installed
      

      Now, you can add the test_svc() test to ensure that httpd and firewalld services are running and enabled.

      Add the test_svc() code to your file following the test_pkg() test:

      ~/ansible-apache/molecule/default/tests/test_RedHat.py

      ...
      @pytest.mark.parametrize('svc', [
          'httpd',
          'firewalld'
      ])
      def test_svc(host, svc):
          service = host.service(svc)
      
          assert service.is_running
          assert service.is_enabled
      

      The final test in test_RedHat.py file will be test_firewalld(), which will check if Firewalld has the http service whitelisted.

      Add the test_firewalld() test to your file after the test_svc() code:

      ~/ansible-apache/molecule/default/tests/test_RedHat.py

      ...
      @pytest.mark.parametrize('file, content', [
          ("/etc/firewalld/zones/public.xml", "<service name="http"/>")
      ])
      def test_firewalld(host, file, content):
          file = host.file(file)
      
          assert file.exists
          assert file.contains(content)
      

      After importing the libraries and adding the three tests, your test_RedHat.py file will look like this:

      ~/ansible-apache/molecule/default/tests/test_RedHat.py

      import os
      import pytest
      
      import testinfra.utils.ansible_runner
      
      testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
          os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('centos7')
      
      
      @pytest.mark.parametrize('pkg', [
          'httpd',
          'firewalld'
      ])
      def test_pkg(host, pkg):
          package = host.package(pkg)
      
          assert package.is_installed
      
      
      @pytest.mark.parametrize('svc', [
          'httpd',
          'firewalld'
      ])
      def test_svc(host, svc):
          service = host.service(svc)
      
          assert service.is_running
          assert service.is_enabled
      
      
      @pytest.mark.parametrize('file, content', [
          ("/etc/firewalld/zones/public.xml", "<service name="http"/>")
      ])
      def test_firewalld(host, file, content):
          file = host.file(file)
      
          assert file.exists
          assert file.contains(content)
      

      Now that you've completed writing tests in all three files, test_common.py, test_Debian.py, and test_RedHat.py, your role is ready for testing. In the next step, you will use Molecule to run these tests against your newly configured role.

      Step 4 — Testing Against Your Role

      You will now execute your newly created tests against the base role ansible-apache using Molecule. To run your tests, use the following command:

      • molecule test

      You'll see the following output once Molecule has finished running all the tests:

      Output

      ...
      --> Scenario: 'default'
      --> Action: 'verify'
      --> Executing Testinfra tests found in /home/sammy/ansible-apache/molecule/default/tests/...
      ============================= test session starts ==============================
      platform linux -- Python 3.6.7, pytest-4.1.1, py-1.7.0, pluggy-0.8.1
      rootdir: /home/sammy/ansible-apache/molecule/default, inifile:
      plugins: testinfra-1.16.0
      collected 12 items

      tests/test_common.py ..                                                  [ 16%]
      tests/test_RedHat.py .....                                               [ 58%]
      tests/test_Debian.py .....                                               [100%]

      ========================== 12 passed in 80.70 seconds ==========================
      Verifier completed successfully.

      You'll see Verifier completed successfully in your output; this means that the verifier executed all of your tests and returned them successfully.

      Now that you've successfully completed the development of your role, you can commit your changes to Git and set up Travis CI for continuous testing.

      Step 5 — Using Git to Share Your Updated Role

      In this tutorial, so far, you have cloned a role called ansible-apache and added tests to it to make sure it works against Ubuntu and CentOS hosts. To share your updated role with the public, you must commit these changes and push them to your fork.

      Run the following command to add your modified files to the staging area:

      • git add .

      This command stages all the files that you have modified in the current directory so they can be committed.

      You also need to set your name and email address in the git config in order to commit successfully. You can do so using the following commands:

      • git config user.email "sammy@digitalocean.com"
      • git config user.name "John Doe"

      Commit the changed files to your repository:

      • git commit -m "Configured Molecule"

      You'll see the following output:

      Output

      [master b2d5a5c] Configured Molecule 8 files changed, 155 insertions(+), 1 deletion(-) create mode 100644 molecule/default/Dockerfile.j2 create mode 100644 molecule/default/INSTALL.rst create mode 100644 molecule/default/molecule.yml create mode 100644 molecule/default/playbook.yml create mode 100644 molecule/default/tests/test_Debian.py create mode 100644 molecule/default/tests/test_RedHat.py create mode 100644 molecule/default/tests/test_common.py

      This signifies that you have committed your changes successfully. Now, push these changes to your fork with the following command:

      • git push -u origin master

      You will see a prompt for your GitHub credentials. After entering these credentials, your code will be pushed to your repository and you'll see this output:

      Output

      Counting objects: 13, done. Compressing objects: 100% (12/12), done. Writing objects: 100% (13/13), 2.32 KiB | 2.32 MiB/s, done. Total 13 (delta 3), reused 0 (delta 0) remote: Resolving deltas: 100% (3/3), completed with 2 local objects. To https://github.com/username/ansible-apache.git 009d5d6..e4e6959 master -> master Branch 'master' set up to track remote branch 'master' from 'origin'.

      If you go to your fork's repository at github.com/username/ansible-apache, you'll see a new commit called Configured Molecule reflecting the changes you made in the files.

      Now, you can integrate Travis CI with your new repository so that any changes made to your role will automatically trigger Molecule tests. This will ensure that your role always works with Ubuntu and CentOS hosts.

      Step 6 — Integrating Travis CI

      In this step, you're going to integrate Travis CI into your workflow. Once enabled, any changes you push to your fork will trigger a Travis CI build. The purpose of this is to ensure Travis CI always runs molecule test whenever contributors make changes. If any breaking changes are made, Travis will declare the build status as such.

      Proceed to Travis CI to enable your repository. Navigate to your profile page where you can click the Activate button for GitHub.

      You can find further guidance on activating repositories in the Travis CI documentation.

      For Travis CI to work, you must create a configuration file containing instructions for it. To create the Travis configuration file, return to your server and run the following command:

      • nano .travis.yml

      To duplicate the environment you've created in this tutorial, you will specify parameters in the Travis configuration file. Add the following content to your file:

      ~/ansible-apache/.travis.yml

      ---
      language: python
      python:
        - "2.7"
        - "3.6"
      services:
        - docker
      install:
        - pip install molecule docker
      script:
        - molecule --version
        - ansible --version
        - molecule test
      

      The parameters you've specified in this file are:

      • language: When you specify Python as the language, the CI environment uses separate virtualenv instances for each Python version you specify under the python key.
      • python: Here, you're specifying that Travis will use both Python 2.7 and Python 3.6 to run your tests.
      • services: You need Docker to run tests in Molecule. You're specifying that Travis should ensure Docker is present in your CI environment.
      • install: Here, you're specifying preliminary installation steps that Travis CI will carry out in your virtualenv.
  • pip install molecule docker installs Molecule and the Python library for the Docker remote API; Ansible is installed automatically as a dependency of Molecule.
• script: This specifies the commands that Travis CI will run. In your file, you're specifying three steps:
        • molecule --version prints the Molecule version if Molecule has been successfully installed.
        • ansible --version prints the Ansible version if Ansible has been successfully installed.
        • molecule test finally runs your Molecule tests.

Including molecule --version and ansible --version in the script block makes it easier to diagnose a failed build caused by a missing or incompatible version of Molecule or Ansible.

      Once you've added the content to the Travis CI configuration file, save and exit .travis.yml.

Now, every time you push changes to your repository, Travis CI will automatically run a build based on this configuration file. If any of the commands in the script block fail, Travis CI will report the build as failed.
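
If you want to confirm that these steps pass before pushing, you can run the same commands locally from the role directory, assuming Molecule, Ansible, and Docker are already installed on your server from the earlier steps:

• cd ~/ansible-apache
• molecule --version
• ansible --version
• molecule test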

To make the build status easier to see, you can add a badge to the README of your role. Open the README.md file in a text editor, for example:

• nano ~/ansible-apache/README.md

      Add the following line to the README.md to display the build status:

      ~/ansible-apache/README.md

      [![Build Status](https://travis-ci.org/username/ansible-apache.svg?branch=master)](https://travis-ci.org/username/ansible-apache)
      

      Replace username with your GitHub username. Commit and push the changes to your repository as you did earlier.

      First, run the following command to add .travis.yml and README.md to the staging area:

      • git add .travis.yml README.md

      Now commit the changes to your repository by executing:

      • git commit -m "Configured Travis"

      Finally, push these changes to your fork with the following command:

      • git push -u origin master

If you navigate to your GitHub repository, you will see that the badge initially reports build: unknown.

The build status badge showing build: unknown

Within a few minutes, Travis CI will initiate a build, which you can monitor on the Travis CI website. Once the build succeeds, GitHub will show the passing status on your repository as well, using the badge you placed in your README file:

The build status badge showing build: passing

      You can access the complete details of the builds by going to the Travis CI website:

The build details on the Travis CI website

      Now that you've successfully set up Travis CI for your new role, you can continuously test and integrate changes to your Ansible roles.

      Conclusion

In this tutorial, you forked a GitHub role that installs and configures an Apache web server, and added Molecule integration by writing tests and configuring them to run against Docker containers running Ubuntu and CentOS. By pushing your updated role to GitHub, you have made it available to other users. Whenever contributors make changes to the role, Travis CI will automatically run Molecule to test it.

Once you're comfortable creating roles and testing them with Molecule, you can integrate with Ansible Galaxy so that your roles are published automatically after a successful build.
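
As a starting point, one approach Galaxy has documented is adding a webhook notification to the end of your existing Travis configuration so that Galaxy re-imports the role after each build. The endpoint below is an assumption you should verify against the current Galaxy documentation, and it presumes your role has already been imported into Galaxy with the Travis integration enabled there:

~/ansible-apache/.travis.yml

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/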


