
      How To Automate Deployment Using CircleCI and GitHub on Ubuntu 18.04


      The author selected International Medical Corps to receive a donation as part of the Write for DOnations program.

      Introduction

      Continuous Integration/Continuous Deployment (CI/CD) is a development practice that allows software teams to build, test, and deploy applications more easily and quickly on multiple platforms. CircleCI is a popular automation platform that allows you to build and maintain CI/CD workflows for your projects.

      Having continuous deployment is beneficial in many ways. It helps to standardize the deployment steps of an application and to protect it from unrecorded changes. It also helps avoid performing repetitive steps and lets you focus more on development. With CircleCI, you can have a single view across all your different deployment processes for development, testing, and production.

      In this tutorial, you’ll build a Node.js app locally and push it to GitHub. Following that, you’ll configure CircleCI to connect to a virtual private server (VPS) that’s running Ubuntu 18.04, and you’ll go through the steps to set up your code for auto-deployment on the VPS. By the end of the article, you will have a working CI/CD pipeline where CircleCI will pick up any code you push from your local environment to the GitHub repo and deploy it on your VPS.

      Prerequisites

      Before you get started, you’ll need to have the following:

      Step 1 — Creating a Local Node Project

      In this step, you’ll create a Node.js project locally that you will use during this tutorial as an example application. You’ll push this to a repo on GitHub later.

      Go ahead and run these commands on your local terminal so that you can set up a quick Node development environment.

      First, create a directory for the test project:

      • mkdir circleci-test

      Change into the new directory:

      • cd circleci-test

      Follow this up by initializing an npm environment to pull the dependencies if you have any. The -y flag will auto-accept every prompt thrown by npm init:

      • npm init -y

      For more information on npm, check out our How To Use Node.js Modules with npm and package.json tutorial.

      Next, create a basic server that serves Hello World! when someone accesses any route. Using a text editor, create a file called app.js in the root directory of the project. This tutorial will use nano:

      Add the following code to the app.js file:

      circleci-test/app.js

      const http = require('http');
      
      http.createServer(function (req, res) {
        res.write('Hello World!'); 
        res.end(); 
      }).listen(8080, '0.0.0.0'); 
      

      This sample server uses the http package to listen to any incoming requests on port 8080 and fires a request listener function that replies with the string Hello World!.

      Save and close the file.

      You can test this on your local machine by running the following command from the same directory in the terminal. This will create a Node process that runs the server (app.js):

      • node app.js

      Now visit the http://localhost:8080 URL in your browser. Your browser will render the string Hello World!. Once you have tested the app, stop the server by pressing CTRL+C on the same terminal where you started the Node process.
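
      If you prefer to stay in the terminal, you can also check the response with curl (an optional check, assuming curl is installed on your machine):

      • curl http://localhost:8080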

      You’ve now set up your sample application. In the next step, you will add a configuration file in the project so that CircleCI can use it for deployment.

      Step 2 — Adding a Config File to Your Project

      CircleCI executes workflows according to a configuration file in your project folder. In this step, you will create that file to define the deployment workflow.

      Create a folder called .circleci in the root directory of your project:

      Add a new file called config.yml in it:

      • nano .circleci/config.yml

      This will open a file with the YAML file extension. YAML is a language that is often used for configuration management.

      Add the following configuration to the new config.yml file:

      circleci-test/.circleci/config.yml

      version: 2.1
      
      # Define the jobs we want to run for this project
      jobs:
        pull-and-build:
          docker:
            - image: arvindr226/alpine-ssh
          steps:
            - checkout
            - run: ssh -oStrictHostKeyChecking=no -v $USER@$IP "./deploy.sh"
      
      # Orchestrate our job run sequence
      workflows:
        version: 2
        build-project:
          jobs:
            - pull-and-build:
                filters:
                  branches:
                    only:
                      - main
      

      Save this file and exit the text editor.

      This file tells the CircleCI pipeline the following:

      • There is a job called pull-and-build whose steps involve spinning up a Docker container, SSHing from it to the VPS, and then running the deploy.sh file.
      • The Docker container serves the purpose of creating a temporary environment for executing the commands mentioned in the steps section of the config. In this case, all you need to do is SSH into the VPS and run the deploy.sh script, so the environment needs to be lightweight but still allow the SSH command. The Docker image arvindr226/alpine-ssh is an Alpine Linux image that supports SSH.
      • deploy.sh is a file that you will create in the VPS. It will run every time as a part of the deployment process and will contain steps specific to your project.
      • In the workflows section, you inform CircleCI that it needs to perform this job based on some filters, which in this case is that only changes to the main branch will trigger this job.
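
      Optionally, if you have the CircleCI command line interface installed on your local machine, you can validate the syntax of this file before committing it. This check is not required for the tutorial:

      • circleci config validate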

      Next, you will commit and push these files to a GitHub repository. You will do this by running the following commands from the project directory.

      First, initialize the Node.js project directory as a git repo:

      • git init

      Go ahead and add the new changes to the git repo:

      • git add .

      Then commit the changes:

      • git commit -m "initial commit"

      If this is the first time committing, git will prompt you to run some git config commands to identify you.

      From your browser navigate to GitHub and log in with your GitHub account. Create a new repository called circleci-test without a README or license file. Once you’ve created the repository, return to the command line to push your local files to GitHub.

      To follow GitHub's convention, rename your branch to main with the following command:

      • git branch -M main

      Before you push the files for the first time, you need to add GitHub as a remote repository. Do that by running:

      • git remote add origin https://github.com/GitHub_username/circleci-test

      Follow this with the push command, which will transfer the files to GitHub:

      • git push -u origin main

      You have now pushed your code to GitHub. In the next step, you’ll create a new user in the VPS that will execute the steps in the pull-and-build part.

      Step 3 — Creating a New User for Deployment

      Now that you have the project ready, you will create a deployment user in the VPS.

      Connect to your VPS as your sudo user:

      • ssh your_username@your_server_ip

      Next, create a new user that doesn’t use a password for login using the useradd command.

      • sudo useradd -m -d /home/circleci -s /bin/bash circleci

      This command creates a new user on the system. The -m flag instructs the command to create a home directory at the path specified by the -d flag, and the -s flag sets the user's login shell, in this case /bin/bash.

      circleci will be the new deployment user in this case. For security purposes, you are not going to add this user to the sudo group, since the only job of this user is to create an SSH connection from the VPS to the CircleCI network and run the deploy.sh script.

      Make sure that the firewall on your VPS allows traffic on port 8080:

      • sudo ufw allow 8080

      You now need to create an SSH key, which the new user can use to log in. You are going to create an SSH key with no passphrase, or else CircleCI will not be able to decrypt it. You can find more information in the official CircleCI documentation. Also, CircleCI expects the format of the SSH keys to be in the PEM format, so you are going to enforce that while creating the key pair.

      Back on your local system, move to your home folder:

      • cd ~

      Then run the following command:

      • ssh-keygen -m PEM -t rsa -f .ssh/circleci

      This command creates an RSA key in the PEM format specified by the -m flag and with the key type specified by the -t flag. You also specify the -f flag to create a new key pair called circleci and circleci.pub. Specifying the name will avoid overwriting your existing id_rsa file.

      Print out the new public key:

      • cat ~/.ssh/circleci.pub

      This outputs the public key that you generated. You will need to register this public key in your VPS. Copy this to your clipboard.

      Back on the VPS, create a .ssh directory for the circleci user:

      • sudo mkdir /home/circleci/.ssh

      Here you’ll add the public key you copied from the local machine into a file called authorized_keys:

      • sudo nano /home/circleci/.ssh/authorized_keys

      Add the copied public key here, save the file, and exit the text editor.

      Give the circleci user its directory permissions so that it doesn’t run into permission issues during deployment.

      • sudo chown -R circleci:circleci /home/circleci

      Verify that you can log in as the new user by using the private key. Open a new terminal on your local system and run:

      • ssh circleci@your_server_ip -i ~/.ssh/circleci

      You will now be logged in as the circleci user on your VPS. This confirms that the SSH connection is successful. Next, you will connect your GitHub repo to CircleCI.

      Step 4 — Adding Your GitHub Project to CircleCI

      In this step, you’ll connect your GitHub account to your CircleCI account and add the circleci-test project for CI/CD. If you signed up with your GitHub account, then your GitHub will be automatically linked with your CircleCI account. If not, head over to https://circleci.com/account and connect it.

      To add your circleci-test project, navigate to your CircleCI project dashboard at https://app.circleci.com/projects/project-dashboard/github/your_username:

      CircleCI Projects Tab

      Here you will find all the projects from GitHub listed. Click on Set Up Project for the project circleci-test. This will bring you to the project setup page:

      Setting up a project in the CircleCI Interface

      You’ll now have the option to set the config for the project, which you have already set in the repo. Since this is already set up, choose the Use Existing Config option. This will bring up a popup box confirming that you want to build the pipeline:

      Popup confirming the config file for the CircleCI build

      From here, go ahead and click on Start Building. This will bring you to the circleci-test pipeline page. For now, this pipeline will fail. This is because you must first update the SSH keys for your project.

      Navigate to the project settings at https://app.circleci.com/settings/project/github/your_username/circleci-test and select the SSH keys section on the left.

      Retrieve the private key named circleci you created earlier from your local machine by running:

      • cat ~/.ssh/circleci

      Copy the output from this command.

      Under the Additional SSH Keys section, click on the Add SSH Key button.

      Adding SSH Keys section of the settings page

      This will open up a window asking you to enter the hostname and the SSH key. Enter a hostname of your choice, and add in the private SSH key that you copied from your local environment.

      CircleCI will now be able to log in as the new circleci user to the VPS using this key.

      The last step is to provide the username and IP of the VPS to CircleCI. In the same Project Settings page, go to the Environment Variables tab on the left:

      Environment Variables section of the settings page

      Add an environment variable named USER with a value of circleci, and another named IP with the IP address of your VPS (or the domain name of your VPS, if you have a DNS record).

      Once you’ve created these variables, you have completed the setup needed for CircleCI. Next, you will give the circleci user access to GitHub via SSH.

      Step 5 — Adding SSH Keys to GitHub

      You now need to provide a way for the circleci user to authenticate with GitHub so that it can perform git operations like git pull.

      To do this, you will create an SSH key for this user to authenticate against GitHub.

      Connect to the VPS as the circleci user:

      • ssh circleci@your_server_ip -i ~/.ssh/circleci

      Create a new SSH key pair with no passphrase, pressing ENTER to accept the defaults when prompted:

      • ssh-keygen -t rsa

      Then output the public key:

      • cat ~/.ssh/id_rsa.pub

      Copy the output, then head over to your circleci-test GitHub repo’s deploy key settings at https://github.com/your_username/circleci-test/settings/keys.

      Click on Add deploy key to add the copied public key. Fill the Title field with your desired name for the key, then add the copied public key in the Key field. Finally, click the Add key button to add the key to the repository.

      Now that the circleci user has access to your GitHub account, you’ll use this SSH authentication to set up your project.

      Step 6 — Setting Up the Project on the VPS

      To set up the project, you will clone the repo and perform its initial setup on the VPS as the circleci user.

      On your VPS, run the following command:

      • git clone git@github.com:your_username/circleci-test.git

      Navigate into it:

      • cd circleci-test

      First, install the dependencies:

      • npm install

      Now test the app out by running the server you built:

      • node app.js

      Head over to your browser and try the address http://your_vps_ip:8080. You will receive the output Hello World!.

      Stop this process with CTRL+C and use pm2 to run this app as a background process.

      Install pm2 so that you can run the Node app as an independent process:

      • npm install pm2 -g

      pm2 is a versatile process manager written in Node.js. Here it will help you keep the sample Node.js project running as an active process even after you log out of the server. You can read a bit more about this in the How To Set Up a Node.js Application for Production on Ubuntu 18.04 tutorial.

      Note: On some systems such as Ubuntu 18.04, installing an npm package globally can result in a permission error, which will interrupt the installation. Since it is a security best practice to avoid using sudo with npm install, you can instead resolve this by changing npm’s default directory. If you encounter an EACCES error, follow the instructions at the official npm documentation.
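
      As a sketch of that fix, following the approach in the npm documentation (note that the ~/.npm-global prefix shown here matches the PATH line used later in deploy.sh):

      • mkdir ~/.npm-global
      • npm config set prefix '~/.npm-global'
      • echo 'export PATH=~/.npm-global/bin:$PATH' >> ~/.profile
      • source ~/.profile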

      You can use the pm2 start command to run the app.js file as a Node process. You can name it app using the --name flag to identify it later:

      • pm2 start app.js --name "app"

      You will also need to provide the deployment instructions. These commands will run every time the circleci user deploys the code.

      Head back to the home directory since that will be the path the circleci user will land in during a successful login attempt:

      • cd ~

      Go ahead and create the deploy.sh file, which will contain the deploy instructions:

      • nano deploy.sh

      You will now use a Bash script to automate the deployment:

      deploy.sh

      #!/bin/bash
      
      #replace this with the path of your project on the VPS
      cd ~/circleci-test
      
      #pull from the branch
      git pull origin main
      
      # followed by instructions specific to your project that you used to do manually
      npm install
      export PATH=~/.npm-global/bin:$PATH
      source ~/.profile
      
      pm2 restart app
      

      This will automatically change the working directory to the project root, pull the code from GitHub, install the dependencies, then restart the app. Save and exit the file.

      Make this file executable by running:

      • chmod +x deploy.sh

      Now head back to your local machine and make a quick change to test it out.

      Open up your app.js file:

      • nano circleci-test/app.js

      Now update the res.write line as follows:

      circleci-test/app.js

      const http = require('http');
      
      http.createServer(function (req, res) {
        res.write('Foo Bar!');
        res.end();
      }).listen(8080, '0.0.0.0');
      

      Save the file and exit the text editor.

      Add this change and commit it:

      • git add .
      • git commit -m "modify app.js"

      Now push this to your main branch:

      • git push origin main

      This will trigger a new pipeline for deployment. Navigate to https://app.circleci.com/pipelines/github/your_username to view the pipeline in action.

      Once it’s successful, refresh the browser at http://your_vps_ip:8080. Foo Bar! will now render in your browser.

      Conclusion

      These are the steps to integrate CircleCI with your GitHub repository and a Linux-based VPS. You can modify deploy.sh to include instructions specific to your project.

      If you would like to learn more about CI/CD, check out our CI/CD topic page. For more on setting up workflows with CircleCI, head over to the CircleCI documentation.




      How To Set Up a Continuous Deployment Pipeline with GitLab CI/CD on Ubuntu 18.04


      The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      GitLab is an open source collaboration platform that provides powerful features beyond hosting a code repository. You can track issues, host packages and registries, maintain Wikis, set up continuous integration (CI) and continuous deployment (CD) pipelines, and more.

      In this tutorial you’ll build a continuous deployment pipeline with GitLab. You will configure the pipeline to build a Docker image, push it to the GitLab container registry, and deploy it to your server using SSH. The pipeline will run for each commit pushed to the repository.

      You will deploy a small, static web page, but the focus of this tutorial is configuring the CD pipeline. The static web page is only for demonstration purposes; you can apply the same pipeline configuration using other Docker images for the deployment as well.

      When you have finished this tutorial, you can visit http://your_server_IP in a browser for the results of the automatic deployment.

      Prerequisites

      To complete this tutorial, you will need:

      Step 1 — Creating the GitLab Repository

      Let’s start by creating a GitLab project and adding an HTML file to it. You will later copy the HTML file into an Nginx Docker image, which in turn you’ll deploy to the server.

      Log in to your GitLab instance and click New project.

      The new project button in GitLab

      1. Give it a proper Project name.
      2. Optionally add a Project description.
      3. Make sure to set the Visibility Level to Private or Public depending on your requirements.
      4. Finally, click Create project.

      The new project form in GitLab

      You will be redirected to the Project’s overview page.

      Let’s create the HTML file. On your Project’s overview page, click New file.

      The new file button on the project overview page

      Set the File name to index.html and add the following HTML to the file body:

      index.html

      <html>
      <body>
      <h1>My Personal Website</h1>
      </body>
      </html>
      

      Click Commit changes at the bottom of the page to create the file.

      This HTML will produce a page with a single headline showing My Personal Website when opened in a browser.

      Dockerfiles are recipes used by Docker to build Docker images. Let’s create a Dockerfile to copy the HTML file into an Nginx image.

      Go back to the Project’s overview page, click the + button and select the New file option.

      New file option in the project's overview page listed in the plus button

      Set the File name to Dockerfile and add these instructions to the file body:

      Dockerfile

      FROM nginx:1.18
      COPY index.html /usr/share/nginx/html
      

      The FROM instruction specifies the image to inherit from—in this case the nginx:1.18 image. 1.18 is the image tag representing the Nginx version. The nginx:latest tag references the latest Nginx release, but that could break your application in the future, which is why fixed versions are recommended.

      The COPY instruction copies the index.html file to /usr/share/nginx/html in the Docker image. This is the directory where Nginx stores static HTML content.
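
      If you also keep a local clone of this repository and have Docker installed, you can optionally verify the image on your machine before wiring up the pipeline. The image name my-page below is purely illustrative:

      • docker build -t my-page .
      • docker run --rm -d -p 8080:80 my-page

      Visiting http://localhost:8080 would then serve the index.html file through Nginx.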

      Click Commit changes at the bottom of the page to create the file.

      In the next step, you’ll configure a GitLab runner to keep control of who gets to execute the deployment job.

      Step 2 — Registering a GitLab Runner

      In order to keep track of the environments that will have contact with the SSH private key, you’ll register your server as a GitLab runner.

      In your deployment pipeline you want to log in to your server using SSH. To achieve this, you’ll store the SSH private key in a GitLab CI/CD variable (Step 5). The SSH private key is a very sensitive piece of data, because it is the entry ticket to your server. Usually, the private key never leaves the system it was generated on. In the usual case, you would generate an SSH key on your host machine, then authorize it on the server (that is, copy the public key to the server) in order to log in manually and perform the deployment routine.

      Here the situation changes slightly: You want to grant an autonomous authority (GitLab CI/CD) access to your server to automate the deployment routine. Therefore the private key needs to leave the system it was generated on and be given in trust to GitLab and other involved parties. You never want your private key to enter an environment that is not either controlled or trusted by you.

      Besides GitLab, the GitLab runner is yet another system that your private key will enter. For each pipeline, GitLab uses runners to perform the heavy work, that is, execute the jobs you have specified in the CI/CD configuration. That means the deployment job will ultimately be executed on a GitLab runner, hence the private key will be copied to the runner such that it can log in to the server using SSH.

      If you use unknown GitLab Runners (for example, shared runners) to execute the deployment job, then you’d be unaware of the systems getting in contact with the private key. Even though GitLab runners clean up all data after job execution, you can avoid sending the private key to unknown systems by registering your own server as a GitLab runner. The private key will then be copied to the server controlled by you.

      Start by logging in to your server:

      • ssh sammy@your_server_IP

      In order to install the gitlab-runner service, you’ll add the official GitLab repository. Download and inspect the install script:

      • curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh > script.deb.sh
      • less script.deb.sh

      Once you are satisfied with the safety of the script, run the installer:

      • sudo bash script.deb.sh

      It may not be obvious, but you have to enter your non-root user's password to proceed. When you execute the previous command, the output will be similar to:

      Output

      [sudo] password for sammy:
        % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                       Dload  Upload   Total   Spent    Left  Speed
      100  5945  100  5945    0     0   8742      0 --:--:-- --:--:-- --:--:--  8729

      When the curl command finishes, you will receive the following message:

      Output

      The repository is setup! You can now install packages.

      Next install the gitlab-runner service:

      • sudo apt install gitlab-runner

      Verify the installation by checking the service status:

      • systemctl status gitlab-runner

      You will see active (running) in the output:

      Output

      ● gitlab-runner.service - GitLab Runner
         Loaded: loaded (/etc/systemd/system/gitlab-runner.service; enabled; vendor preset: enabled)
         Active: active (running) since Mon 2020-06-01 09:01:49 UTC; 4s ago
       Main PID: 16653 (gitlab-runner)
          Tasks: 6 (limit: 1152)
         CGroup: /system.slice/gitlab-runner.service
                 └─16653 /usr/lib/gitlab-runner/gitlab-runner run --working-directory /home/gitlab-runner --config /etc/gitla

      To register the runner, you need to get the project token and the GitLab URL:

      1. In your GitLab project, navigate to Settings > CI/CD > Runners.
      2. In the Set up a specific Runner manually section, you’ll find the registration token and the GitLab URL. Copy both to a text editor; you’ll need them for the next command. They will be referred to as https://your_gitlab.com and project_token.

      The runners section in the ci/cd settings with the copy token button

      Back to your terminal, register the runner for your project:

      • sudo gitlab-runner register -n \
          --url https://your_gitlab.com \
          --registration-token project_token \
          --executor docker \
          --description "Deployment Runner" \
          --docker-image "docker:stable" \
          --tag-list deployment \
          --docker-privileged

      The command options can be interpreted as follows:

      • -n executes the register command non-interactively (we specify all parameters as command options).
      • --url is the GitLab URL you copied from the runners page in GitLab.
      • --registration-token is the token you copied from the runners page in GitLab.
      • --executor is the executor type. docker executes each CI/CD job in a Docker container (see GitLab’s documentation on executors).
      • --description is the runner’s description, which will show up in GitLab.
      • --docker-image is the default Docker image to use in CI/CD jobs, if not explicitly specified.
      • --tag-list is a list of tags assigned to the runner. Tags can be used in a pipeline configuration to select specific runners for a CI/CD job. The deployment tag will allow you to refer to this specific runner to execute the deployment job.
      • --docker-privileged executes the Docker container created for each CI/CD job in privileged mode. A privileged container has access to all devices on the host machine and has nearly the same access to the host as processes running outside containers (see Docker’s documentation about runtime privilege and Linux capabilities). The reason for running in privileged mode is so you can use Docker-in-Docker (dind) to build a Docker image in your CI/CD pipeline. It is good practice to give a container the minimum requirements it needs. For you it is a requirement to run in privileged mode in order to use Docker-in-Docker. Be aware, you registered the runner for this specific project only, where you are in control of the commands being executed in the privileged container.

      After executing the gitlab-runner register command, you will receive the following output:

      Output

      Runner registered successfully. Feel free to start it, but if it's running already the config should be automatically reloaded!

      Verify the registration process by going to Settings > CI/CD > Runners in GitLab, where the registered runner will show up.

      The registered runner in the runners section of the ci/cd settings

      In the next step you’ll create a deployment user.

      Step 3 — Creating a Deployment User

      You are going to create a non-sudo user that is dedicated for the deployment task, so that its power is limited and the deployment takes place in an isolated user space. You will later configure the CI/CD pipeline to log in to the server with that user.

      On your server, create a new user:

      • sudo adduser deployer

      You’ll be guided through the user creation process. Enter a strong password and optionally any further user information you want to specify. Finally confirm the user creation with Y.

      Add the user to the Docker group:

      • sudo usermod -aG docker deployer

      This permits deployer to execute the docker command, which is required to perform the deployment.

      In the next step you’ll create an SSH key to be able to log in to the server as deployer.

      Step 4 — Setting Up an SSH Key

      You are going to create an SSH key for the deployment user. GitLab CI/CD will later use the key to log in to the server and perform the deployment routine.

      Let’s start by switching to the newly created deployer user for whom you’ll generate the SSH key:

      You’ll be prompted for the deployer password to complete the user switch.

      Next, generate a 4096-bit SSH key. It is important to answer the questions of the ssh-keygen command correctly:

      1. First question: answer it with ENTER, which stores the key in the default location (the rest of this tutorial assumes the key is stored in the default location).
      2. Second question: configures a password to protect the SSH private key (the key used for authentication). If you specify a passphrase, you'll have to enter it each time the private key is used. In general, a passphrase adds another security layer to SSH keys, which is good practice. Somebody in possession of the private key would also require the passphrase to use the key. For the purposes of this tutorial, it is important that you have an empty passphrase, because the CI/CD pipeline will execute non-interactively and therefore does not allow you to enter a passphrase.

      To summarize, run the following command and confirm both questions with ENTER to create a 4096-bit SSH key and store it in the default location with an empty passphrase:

      • ssh-keygen -b 4096

      To authorize the SSH key for the deployer user, you need to append the public key to the authorized_keys file:

      • cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

      ~ is short for the user's home directory in Linux. The cat program prints the contents of a file; here you use the >> operator to redirect the output of cat and append it to the authorized_keys file.

      In this step you have created an SSH key pair for the CI/CD pipeline to log in and deploy the application. Next you’ll store the private key in GitLab to make it accessible during the pipeline process.

      Step 5 — Storing the Private Key in a GitLab CI/CD Variable

      You are going to store the SSH private key in a GitLab CI/CD file variable, so that the pipeline can make use of the key to log in to the server.

      When GitLab creates a CI/CD pipeline, it will send all variables to the corresponding runner and the variables will be set as environment variables for the duration of the job. In particular, the values of file variables are stored in a file and the environment variable will contain the path to this file.
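
      As a quick illustration, hypothetical script lines in a job could show the difference between the variable and the file it points to (these lines are examples and not part of this tutorial's pipeline):

      example script lines

      echo $ID_RSA    # prints a path, something like /builds/.../ID_RSA
      cat $ID_RSA     # prints the contents of the file at that path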

      While you’re in the variables section, you’ll also add a variable for the server IP and the server user, which will inform the pipeline about the destination server and user to log in.

      Start by showing the SSH private key:

      • cat ~/.ssh/id_rsa

      Copy the output to your clipboard using CTRL+C. Make sure to copy everything including the BEGIN and END line:

      ~/.ssh/id_rsa

      -----BEGIN RSA PRIVATE KEY-----
      ...
      -----END RSA PRIVATE KEY-----
      

      Now navigate to Settings > CI / CD > Variables in your GitLab project and click Add Variable. Fill out the form as follows:

      • Key: ID_RSA
      • Value: Paste your SSH private key from your clipboard with CTRL+V
      • Type: File
      • Environment Scope: All (default)
      • Protect variable: Checked
      • Mask variable: Unchecked

      Note: The variable can't be masked because it does not meet the regular expression requirements (see GitLab's documentation about masked variables). However, the private key will never appear in the console log, so masking it is not necessary.

      A file containing the private key will be created on the runner for each CI/CD job and its path will be stored in the $ID_RSA environment variable.

      Create another variable with your server IP. Click Add Variable and fill out the form as follows:

      • Key: SERVER_IP
      • Value: your_server_IP
      • Type: Variable
      • Environment scope: All (default)
      • Protect variable: Checked
      • Mask variable: Checked

      Finally, create a variable with the login user. Click Add Variable and fill out the form as follows:

      • Key: SERVER_USER
      • Value: deployer
      • Type: Variable
      • Environment scope: All (default)
      • Protect variable: Checked
      • Mask variable: Checked

      You have now stored the private key in a GitLab CI/CD variable, which makes the key available during pipeline execution. In the next step, you’re moving on to configuring the CI/CD pipeline.

      Step 6 — Configuring the .gitlab-ci.yml File

      You are going to configure the GitLab CI/CD pipeline. The pipeline will build a Docker image and push it to the container registry. GitLab provides a container registry for each project. You can explore the container registry by going to Packages & Registries > Container Registry in your GitLab project (read more in GitLab’s container registry documentation.) The final step in your pipeline is to log in to your server, pull the latest Docker image, remove the old container, and start a new container.

      Now you’re going to create the .gitlab-ci.yml file that contains the pipeline configuration. In GitLab, go to the Project overview page, click the + button and select New file. Then set the File name to .gitlab-ci.yml.

      (Alternatively you can clone the repository and make all following changes to .gitlab-ci.yml on your local machine, then commit and push to the remote repository.)

      To begin, add the following:

      .gitlab-ci.yml

      stages:
        - publish
        - deploy
      

      Each job is assigned to a stage. Jobs assigned to the same stage run in parallel (if there are enough runners available). Stages will be executed in the order they were specified. Here, the publish stage will go first and the deploy stage second. Successive stages only start when the previous stage finished successfully (that is, all jobs have passed). Stage names can be chosen arbitrarily.

      When you want to combine this CD configuration with your existing CI pipeline, which tests and builds the app, you may want to add the publish and deploy stages after your existing stages, such that the deployment only takes place if the tests passed.

      Following this, add this to your .gitlab-ci.yml file:

      .gitlab-ci.yml

      . . .
      variables:
        TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
        TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
      

      The variables section defines environment variables that will be available in the context of a job’s script section. These variables will be available as usual Linux environment variables; that is, you can reference them in the script by prefixing them with a dollar sign, such as $TAG_LATEST. GitLab creates some predefined variables for each job that provide context-specific information, such as the branch name or the commit hash the job is working on (read more about predefined variables). Here you compose two environment variables out of the following predefined variables:

      • CI_REGISTRY_IMAGE: Represents the URL of the container registry tied to the specific project. This URL depends on the GitLab instance. For example, registry URLs for gitlab.com projects follow the pattern: registry.gitlab.com/your_user/your_project. But since GitLab will provide this variable, you do not need to know the exact URL.
      • CI_COMMIT_REF_NAME: The branch or tag name for which the project is built.
      • CI_COMMIT_SHORT_SHA: The first eight characters of the commit revision for which the project is built.

      Both of the variables are composed of predefined variables and will be used to tag the Docker image.

      TAG_LATEST will add the latest tag to the image. This is a common strategy to provide a tag that always represents the latest release. For each deployment, the latest image will be overwritten in the container registry with the newly built Docker image.

      TAG_COMMIT, on the other hand, uses the first eight characters of the commit SHA being deployed as the image tag, thereby creating a unique Docker image for each commit. You will be able to trace the history of Docker images down to the granularity of Git commits. This is a common technique when doing continuous deployments, because it allows you to quickly deploy an older version of the code in case of a defective deployment.

      As you’ll explore in the coming steps, the process of rolling back a deployment to an older Git revision can be done directly in GitLab.

      $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME specifies the Docker image base name. According to GitLab’s documentation, a Docker image name has to follow this scheme:

      image name scheme

      <registry URL>/<namespace>/<project>/<image>

      $CI_REGISTRY_IMAGE represents the <registry URL>/<namespace>/<project> part and is mandatory because it is the project’s registry root. $CI_COMMIT_REF_NAME is optional but useful to host Docker images for different branches. In this tutorial you will only work with one branch, but it is good to build an extendable structure. In general, there are three levels of image repository names supported by GitLab:

      repository name levels

      registry.example.com/group/project:some-tag
      registry.example.com/group/project/image:latest
      registry.example.com/group/project/my/image:rc1

      For your TAG_COMMIT variable you used the second option, where image will be replaced with the branch name.
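
      For example, assuming a project hosted on gitlab.com and a commit with short SHA abc12345 pushed to master, the two variables would expand to something like the following (illustrative values):

      example image tags

      registry.gitlab.com/your_user/your_project/master:latest
      registry.gitlab.com/your_user/your_project/master:abc12345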

      Next, add the following to your .gitlab-ci.yml file:

      .gitlab-ci.yml

      . . .
      publish:
        image: docker:latest
        stage: publish
        services:
          - docker:dind
        script:
          - docker build -t $TAG_COMMIT -t $TAG_LATEST .
          - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
          - docker push $TAG_COMMIT
          - docker push $TAG_LATEST
      

      The publish section is the first job in your CI/CD configuration. Let’s break it down:

      • image is the Docker image to use for this job. The GitLab runner will create a Docker container for each job and execute the script within this container. The docker:latest image ensures that the docker command will be available.
      • stage assigns the job to the publish stage.
      • services specifies Docker-in-Docker—the dind service. This is the reason why you registered the GitLab runner in privileged mode.

      The script section of the publish job specifies the shell commands to execute for this job. The working directory will be set to the repository root when these commands are executed.

      • docker build ...: Builds the Docker image based on the Dockerfile and tags it with both the commit and latest tags defined in the variables section.
      • docker login ...: Logs Docker in to the project’s container registry. You use the predefined variable $CI_BUILD_TOKEN as an authentication token; GitLab generates the token, which stays valid for the job’s lifetime.
      • docker push ...: Pushes both image tags to the container registry.

      Following this, add the deploy job to your .gitlab-ci.yml:

      .gitlab-ci.yml

      . . .
      deploy:
        image: alpine:latest
        stage: deploy
        tags:
          - deployment
        script:
          - chmod og= $ID_RSA
          - apk update && apk add openssh-client
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 80:80 --name my-app $TAG_COMMIT"
      

      Alpine is a lightweight Linux distribution and is sufficient as a Docker image here. You assign the job to the deploy stage. The deployment tag ensures that the job will be executed on runners that are tagged deployment, such as the runner you configured in Step 2.

      The script section of the deploy job starts with two preparatory commands:

      • chmod og= $ID_RSA: Revokes all permissions for group and others from the private key, such that only the owner can use it. This is a requirement, otherwise SSH refuses to work with the private key.
      • apk update && apk add openssh-client: Updates Alpine’s package manager (apk) and installs the openssh-client, which provides the ssh command.

      Four consecutive ssh commands follow. The pattern for each is:

      ssh connect pattern for all deployment commands

      ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "command"

      In each ssh statement you are executing command on the remote server. To do so, you authenticate with your private key.

      The options are as follows:

      • -i stands for identity file and $ID_RSA is the GitLab variable containing the path to the private key file.
      • -o StrictHostKeyChecking=no bypasses the question of whether or not you trust the remote host. This question cannot be answered in a non-interactive context such as the pipeline.
      • $SERVER_USER and $SERVER_IP are the GitLab variables you created in Step 5. They specify the remote host and login user for the SSH connection.
      • command will be executed on the remote host.

      The deployment ultimately takes place by executing these four commands on your server:

      1. docker login ...: Logs Docker in to the container registry.
      2. docker pull ...: Pulls the latest image from the container registry.
      3. docker container rm ...: Deletes the existing container if it exists. || true makes sure that the exit code is always successful, even if there was no container running by the name my-app. This guarantees a delete if exists routine without breaking the pipeline when the container does not exist (for example, for the first deployment).
      4. docker run ...: Starts a new container using the latest image from the registry. The container will be named my-app. Port 80 on the host will be bound to port 80 of the container (the order is -p host:container). -d starts the container in detached mode, otherwise the pipeline would be stuck waiting for the command to terminate.

      Note: It may seem odd to use SSH to run these commands on your server, considering the GitLab runner that executes the commands is the exact same server. Yet it is required, because the runner executes the commands in a Docker container; you would deploy inside the container instead of on the server if you executed the commands without SSH. One could argue that instead of using Docker as a runner executor, you could use the shell executor to run the commands on the host itself. But that would constrain your pipeline: the runner would have to be the same server as the one you want to deploy to. This is not a sustainable and extensible solution, because one day you may want to migrate the application to a different server or use a different runner server. In any case it makes sense to use SSH to execute the deployment commands, whether for technical or migration-related reasons.

      Let’s move on by adding this to the deployment job in your .gitlab-ci.yml:

      .gitlab-ci.yml

      . . .
      deploy:
      . . .
        environment:
          name: production
          url: http://your_server_IP
        only:
          - master
      

      GitLab environments allow you to control the deployments within GitLab. You can examine the environments in your GitLab project by going to Operations > Environments. If the pipeline did not finish yet, there will be no environment available, as no deployment took place so far.

      When a pipeline job defines an environment section, GitLab will create a deployment for the given environment (here production) each time the job successfully finishes. This allows you to trace all the deployments created by GitLab CI/CD. For each deployment you can see the related commit and the branch it was created for.

      There is also a button available for re-deployment that allows you to roll back to an older version of the software. The URL that was specified in the environment section will be opened when clicking the View deployment button.

      The only section defines the names of branches and tags for which the job will run. By default, GitLab will start a pipeline for each push to the repository and run all jobs (provided that the .gitlab-ci.yml file exists). The only section is one option of restricting job execution to certain branches/tags. Here you want to execute the deployment job for the master branch only. To define more complex rules on whether a job should run or not, have a look at the rules syntax.
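
      For illustration, a hypothetical equivalent of this only section using the rules syntax could look like the following; this tutorial sticks with only, so the snippet is not part of the pipeline:

      .gitlab-ci.yml (illustrative alternative)

      deploy:
      . . .
        rules:
          - if: '$CI_COMMIT_BRANCH == "master"'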

      Note: In October 2020, GitHub changed its naming convention for the default branch from master to main. Other providers such as GitLab and the developer community in general are starting to follow this approach. The term master branch is used in this tutorial to denote the default branch, for which you may have a different name.

      Your complete .gitlab-ci.yml file will look like the following:

      .gitlab-ci.yml

      stages:
        - publish
        - deploy
      
      variables:
        TAG_LATEST: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:latest
        TAG_COMMIT: $CI_REGISTRY_IMAGE/$CI_COMMIT_REF_NAME:$CI_COMMIT_SHORT_SHA
      
      publish:
        image: docker:latest
        stage: publish
        services:
          - docker:dind
        script:
          - docker build -t $TAG_COMMIT -t $TAG_LATEST .
          - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY
          - docker push $TAG_COMMIT
          - docker push $TAG_LATEST
      
      deploy:
        image: alpine:latest
        stage: deploy
        tags:
          - deployment
        script:
          - chmod og= $ID_RSA
          - apk update && apk add openssh-client
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN $CI_REGISTRY"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker pull $TAG_COMMIT"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker container rm -f my-app || true"
          - ssh -i $ID_RSA -o StrictHostKeyChecking=no $SERVER_USER@$SERVER_IP "docker run -d -p 80:80 --name my-app $TAG_COMMIT"
        environment:
          name: production
          url: http://your_server_IP
        only:
          - master
      

      Finally click Commit changes at the bottom of the page in GitLab to create the .gitlab-ci.yml file. Alternatively, when you have cloned the Git repository locally, commit and push the file to the remote.

      You’ve created a GitLab CI/CD configuration for building a Docker image and deploying it to your server. In the next step you will validate the deployment.

      Step 7 — Validating the Deployment

      Now you’ll validate the deployment in various places in GitLab, as well as on your server and in a browser.

      When a .gitlab-ci.yml file is pushed to the repository, GitLab will automatically detect it and start a CI/CD pipeline. At the time you created the .gitlab-ci.yml file, GitLab started the first pipeline.

      Go to CI/CD > Pipelines in your GitLab project to see the pipeline’s status. If the jobs are still running/pending, wait until they are complete. You will see a Passed pipeline with two green checkmarks, denoting that the publish and deploy job ran successfully.

      The pipeline overview page showing a passed pipeline

      Let’s examine the pipeline. Click the passed button in the Status column to open the pipeline’s overview page. You will get an overview of general information such as:

      • Execution duration of the whole pipeline.
      • For which commit and branch the pipeline was executed.
      • Related merge requests. If there is an open merge request for the branch, it will show up here.
      • All jobs executed in this pipeline as well as their status.

      Next click the deploy button to open the result page of the deploy job.

      The result page of the deploy job

      On the job result page you can see the shell output of the job’s script. This is the place to look when debugging a failed pipeline. In the right sidebar you’ll find the deployment tag you added to this job, and that it was executed on your Deployment Runner.

      If you scroll to the top of the page, you will find the This job is deployed to production message. GitLab recognizes that a deployment took place because of the job’s environment section. Click the production link to move over to the production environment.

      The production environment in GitLab

      You will have an overview of all production deployments. There was only a single deployment so far. For each deployment there is a re-deploy button on the far right. A re-deployment will repeat the deploy job of that particular pipeline.

      Whether a re-deployment works as intended depends on the pipeline configuration, because it will not do more than repeat the deploy job under the same circumstances. Since you have configured the pipeline to deploy a Docker image using the commit SHA as a tag, a re-deployment will work for your pipeline.

      Note: Your GitLab container registry may have an expiration policy. The expiration policy regularly removes older images and tags from the container registry. As a consequence, a deployment that is older than the expiration policy would fail to re-deploy, because the Docker image for this commit will have been removed from the registry. You can manage the expiration policy in Settings > CI/CD > Container Registry tag expiration policy. The expiration interval is usually set to something high, like 90 days. If you do run into the case of trying to deploy an image that has been removed from the registry due to the expiration policy, you can solve the problem by re-running the publish job of that particular pipeline as well, which will re-create and push the image for the given commit to the registry.

      Next click the View deployment button, which will open http://your_server_IP in a browser and you should see the My Personal Website headline.

      Finally, check the deployed container on your server. Head over to your terminal and make sure to log in again if you have disconnected already (it works for both users, sammy and deployer):

      • ssh sammy@your_server_IP

      Now list the running containers:

      • docker container ls

      This will list the my-app container:

      Output

      CONTAINER ID   IMAGE                                                                           COMMAND                  CREATED       STATUS       PORTS                NAMES
      5b64df4b37f8   registry.your_gitlab.com/your_gitlab_user/your_project/master:your_commit_sha   "nginx -g 'daemon of…"   4 hours ago   Up 4 hours   0.0.0.0:80->80/tcp   my-app

      Read the How To Install and Use Docker on Ubuntu 18.04 guide to learn more about managing Docker containers.

      You have now validated the deployment. In the next step, you will go through the process of rolling back a deployment.

      Step 8 — Rolling Back a Deployment

      Next you’ll update the web page, which will create a new deployment and then re-deploy the previous deployment using GitLab environments. This covers the use case of a deployment rollback in case of a defective deployment.

      Start by making a little change in the index.html file:

      1. In GitLab, go to the Project overview and open the index.html file.
      2. Click the Edit button to open the online editor.
      3. Change the file content to the following:

      index.html

      <html>
      <body>
      <h1>My Enhanced Personal Website</h1>
      </body>
      </html>
      

      Save the changes by clicking Commit changes at the bottom of the page.

      A new pipeline will be created to deploy the changes. In GitLab, go to CI/CD > Pipelines. When the pipeline has completed, you can open http://your_server_IP in a browser for the updated web page now showing My Enhanced Personal Website instead of My Personal Website.

      When you move over to Operations > Environments > production you will see the newly created deployment. Now click the re-deploy button of the initial, older deployment:

      A list of the deployments of the production environment in GitLab with emphasis on the re-deploy button of the first deployment

      Confirm the popup by clicking the Rollback button.

      The deploy job of that older pipeline will be restarted and you will be redirected to the job’s overview page. Wait for the job to finish, then open http://your_server_IP in a browser, where you’ll see the initial headline My Personal Website showing up again.

      Let’s summarize what you have achieved throughout this tutorial.

      Conclusion

      In this tutorial, you have configured a continuous deployment pipeline with GitLab CI/CD. You created a small web project consisting of an HTML file and a Dockerfile. Then you configured the .gitlab-ci.yml pipeline configuration to:

      1. Build the Docker image.
      2. Push the Docker image to the container registry.
      3. Log in to the server, pull the latest image, stop the current container, and start a new one.

      GitLab will now deploy the web page to your server for each push to the repository.

      Furthermore you have verified a deployment in GitLab and on your server. You have also created a second deployment and rolled back to the first deployment using GitLab environments, which demonstrates how you deal with defective deployments.

      At this point you have automated the whole deployment chain. You can now share code changes more frequently with the world and/or your customers. As a result, development cycles are likely to become shorter, as less time is required to gather feedback and publish the code changes.

      As a next step you could make your service accessible by a domain name and secure the communication with HTTPS for which How To Use Traefik as a Reverse Proxy for Docker Containers is a good follow up.




      How To Test Your Ansible Deployment with InSpec and Kitchen


      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      InSpec is an open-source auditing and automated testing framework used to describe and test regulatory concerns, recommendations, or requirements. It is designed to be human-readable and platform-agnostic. Developers can work with InSpec locally or use SSH, WinRM, or Docker to run tests, so it is unnecessary to install any packages on the infrastructure being tested.

      Although you can run InSpec tests directly against your servers, there is potential for human error that could cause problems on your infrastructure. To avoid this scenario, developers can use Kitchen to create a virtual machine and install an operating system of their choice on the machines where the tests run. Kitchen is a test runner, or test automation tool, that allows you to test infrastructure code on one or more isolated platforms. It also supports many testing frameworks and is flexible, with a driver plugin architecture for platforms such as Vagrant, AWS, DigitalOcean, Docker, LXC containers, and more.

      In this tutorial, you’ll write tests for your Ansible playbooks running on an Ubuntu 18.04 DigitalOcean Droplet. You will use Kitchen as the test runner and InSpec to write the tests. By the end of this tutorial, you will be able to test the deployment of your Ansible playbook.

      Prerequisites

      Before beginning this guide, you will need a DigitalOcean account in addition to the following:

      Step 1 — Setting Up and Initializing Kitchen

      You installed ChefDK as part of the prerequisites, which comes bundled with kitchen. In this step, you’ll configure Kitchen to communicate with DigitalOcean.

      Before initializing Kitchen, you’ll create and move into a project directory. In this tutorial, we’ll call it ansible_testing_dir.

      Run the following command to create the directory:

      • mkdir ~/ansible_testing_dir

      E então passe para ele:

      Using gem, install the kitchen-digitalocean package on your local machine. This allows you to tell Kitchen to use the DigitalOcean driver when running tests:

      • gem install kitchen-digitalocean

      In your project directory, run the kitchen init command, specifying ansible_playbook as the provisioner and digitalocean as the driver when initializing Kitchen:

      • kitchen init --provisioner=ansible_playbook --driver=digitalocean

      You'll see the following output:

      Output

      create kitchen.yml
      create chefignore
      create test/integration/default

      This created the following in your project directory:

      • test/integration/default is the directory where you'll save your test files.

      • chefignore is the file you would use to ensure certain files aren't uploaded to the Chef Infra Server, but you won't be using it in this tutorial.

      • kitchen.yml is the file that describes your testing configuration: what you want to test and the target platforms.

      Now, you need to export your DigitalOcean credentials as environment variables so that you can create Droplets from your CLI. First, start with your DigitalOcean access token by running the following command:

      • export DIGITALOCEAN_ACCESS_TOKEN="YOUR_DIGITALOCEAN_ACCESS_TOKEN"

      You also need to get your SSH key ID number; note that YOUR_DIGITALOCEAN_SSH_KEY_ID must be the numeric ID of your SSH key, not the symbolic name. Using the DigitalOcean API, you can get the numeric IDs of your keys with the following command:

      • curl -X GET https://api.digitalocean.com/v2/account/keys -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN"

      After this command, you'll see a list of your SSH keys and related metadata. Read through the output to find the correct key and identify its ID number:

      Output

      ... {"id":your-numeric-ID,"fingerprint":"fingerprint","public_key":"ssh-rsa your-ssh-key","name":"your-ssh-key-name"} ...

      Note: If you'd like to make your output more readable when retrieving your numeric IDs, you can find and download jq for your operating system from the jq download page. You can then run the previous command piped to jq like this:

      • curl -X GET https://api.digitalocean.com/v2/account/keys -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN" | jq

      You'll see your SSH key information formatted similarly to:

      Output

      {
        "ssh_keys": [
          {
            "id": YOUR_SSH_KEY_ID,
            "fingerprint": "2f:d0:16:6b",
            "public_key": "ssh-rsa AAAAB3NzaC1yc2 example@example.local",
            "name": "sannikay"
          }
        ]
      }

      Once you've identified your numeric SSH key IDs, export them with the following command:

      • export DIGITALOCEAN_SSH_KEY_IDS="YOUR_DIGITALOCEAN_SSH_KEY_ID"

      You've initialized Kitchen and set up the environment variables for your DigitalOcean credentials. Now you'll create and run tests against your Droplets directly from the command line.

      Step 2 — Creating the Ansible Playbook

      In this step, you'll create a playbook and roles that set up Nginx and Node.js on the Droplet that Kitchen creates in the next step. Your tests will run against the playbook to ensure that the conditions specified in the playbook are met.

      To begin, create a roles directory for the Nginx and Node.js roles:

      • mkdir -p roles/{nginx,nodejs}/tasks

      This will create a directory structure like the following:

      roles
      ├── nginx
      │   └── tasks
      └── nodejs
          └── tasks
      

      Now, create a main.yml file in the roles/nginx/tasks directory using your preferred editor:

      • nano roles/nginx/tasks/main.yml

      In this file, create a task that sets up and starts Nginx by adding the following content:

      roles/nginx/tasks/main.yml

      ---
      - name: Update cache repositories and install Nginx
        apt:
          name: nginx
          update_cache: yes
      
      - name: Change nginx directory permission
        file:
          path: /etc/nginx/nginx.conf
          mode: 0750
      
      - name: start nginx
        service:
          name: nginx
          state: started
      

      Once you've added the content, save and exit the file.

      In roles/nginx/tasks/main.yml, you define a task that updates your Droplet's cache repository, which is the equivalent of running the apt update command manually on a server. This task also changes the permissions of the Nginx configuration file and starts the Nginx service.

      You'll also create a main.yml file in roles/nodejs/tasks to define a task that sets up Node.js:

      • nano roles/nodejs/tasks/main.yml

      Add the following tasks to this file:

      roles/nodejs/tasks/main.yml

      ---
      - name: Update caches repository
        apt:
          update_cache: yes
      
      - name: Add gpg key for NodeJS LTS
        apt_key:
          url: "https://deb.nodesource.com/gpgkey/nodesource.gpg.key"
          state: present
      
      - name: Add the NodeJS LTS repo
        apt_repository:
          repo: "deb https://deb.nodesource.com/node_{{ NODEJS_VERSION }}.x {{ ansible_distribution_release }} main"
          state: present
          update_cache: yes
      
      - name: Install Node.js
        apt:
          name: nodejs
          state: present
      
      

      Save and exit the file once you're done.

      In roles/nodejs/tasks/main.yml, you first define a task that updates your Droplet's cache repository. Then, in the next task, you add the GPG key for Node.js, which serves as a means of verifying the authenticity of the Node.js apt repository. The final two tasks add the Node.js apt repository and install it.

      Now you'll define your Ansible configurations, such as variables, the order in which you want your roles to run, and superuser privilege settings. To do this, you'll create a file called playbook.yml, which serves as an entry point for Kitchen. When you run your tests, Kitchen starts from your playbook.yml file and looks for the roles to run, which are your roles/nginx/tasks/main.yml and roles/nodejs/tasks/main.yml files.

      Run the following command to create playbook.yml:
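
      • nano playbook.yml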

      Add the following content to the file:

      ansible_testing_dir/playbook.yml

      ---
      - hosts: all
        become: true
        remote_user: ubuntu
        vars:
          NODEJS_VERSION: 8
      

      Save and exit the file.

      You've created the Ansible playbook roles against which you'll run your tests to ensure the conditions specified in the playbook are met.

      Step 3 — Writing Your InSpec Tests

      In this step, you'll write tests to check whether Node.js is installed on your Droplet. Before writing your test, let's look at the format of a sample InSpec test. As with many testing frameworks, InSpec code resembles natural language. InSpec has two main components: the subject to examine and the expected state of that subject:

      block A

      describe '<entity>' do
        it { <expectation> }
      end
      

      In block A, the do and end keywords define a block. The describe keyword is commonly known as a test suite, which contains test cases. The it keyword is used to define the test cases.

      <entity> is the subject you want to examine, such as a package name, service, file, or network port. <expectation> specifies the desired result or expected state, for example, Nginx should be installed or should have a specific version. You can check the InSpec DSL documentation to learn more about the InSpec language.
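
      For example, filling in block A with a concrete subject and expectation, this illustrative check (not part of this tutorial's test file) uses InSpec's port resource to assert that a service is listening on port 80:

      describe port(80) do
        it { should be_listening }
      end
      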

      Here is another example of an InSpec test block:

      block B

      control 'Can be anything unique' do  
        impact 0.7                         
        title 'A human-readable title'     
        desc  'An optional description'
        describe '<entity>' do             
          it { <expectation> }
        end
      end
      

      The difference between block A and block B is the control block. The control block is used as a means of regulatory control, recommendation, or requirement. A control block has a name (usually a unique ID), metadata such as desc, title, and impact, and finally groups related describe blocks that implement the checks.

      desc, title, and impact define metadata that fully describe the importance of the control and its purpose, with a succinct and complete description. impact defines a numeric value ranging from 0.0 to 1.0, where 0.0 to <0.01 is classified as no impact, 0.01 to <0.4 as low impact, 0.4 to <0.7 as medium impact, 0.7 to <0.9 as high impact, and 0.9 to 1.0 as a critical control.

      Now, let's implement a test. Using the block A syntax, you'll use InSpec's package resource to test whether Node.js is installed on the system. You'll create a file called sample.rb in your test/integration/default directory for your tests.

      Create sample.rb:

      • nano test/integration/default/sample.rb

      Add the following to your file:

      test/integration/default/sample.rb

      describe package('nodejs') do
        it { should be_installed }
      end
      

      Here your test uses the package resource to check whether the nodejs package is installed.

      Save and exit the file once you're done.

      To run this test, you need to edit kitchen.yml to specify the playbook you created earlier and to add your configurations.

      Open your kitchen.yml file:

      • nano ansible_testing_dir/kitchen.yml

      Replace the contents of kitchen.yml with the following:

      ansible_testing_dir/kitchen.yml

      ---
      driver:
        name: digitalocean
      
      provisioner:
        name: ansible_playbook
        hosts: test-kitchen
        playbook: ./playbook.yml
      
      verifier:
        name: inspec
      
      platforms:
        - name: ubuntu-18
          driver_config:
            ssh_key: PATH_TO_YOUR_PRIVATE_SSH_KEY
            tags:
              - inspec-testing
            region: fra1
            size: 1gb
            private_networking: false
          verifier:
            inspec_tests:
              - test/integration/default
      suites:
        - name: default
      
      

      The platform options include the following:

      • name: The image you're using.
      • driver_config: Your DigitalOcean Droplet configuration. You're specifying the following options for driver_config:

        • ssh_key: The path to YOUR_PRIVATE_SSH_KEY. Your YOUR_PRIVATE_SSH_KEY is located in the directory you specified when creating your SSH key.
        • tags: The tags associated with your Droplet.
        • region: The region where you want your Droplet hosted.
        • size: The amount of memory you want your Droplet to have.
      • verifier: This defines that the project contains InSpec tests.

        • The inspec_tests part specifies that the tests exist in the project's test/integration/default directory.

      Note that name and region use abbreviations. You can check the test-kitchen documentation for the abbreviations you can use.

      Once you've added your configuration, save and exit the file.

      Run the kitchen test command to run the test. This will check whether Node.js is installed; it will purposefully fail, because you don't currently have the Node.js role in your playbook.yml file:
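
      • kitchen test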

      You'll see output similar to the following:

      Output: failing test results

      -----> Starting Kitchen (v1.24.0)
      -----> Cleaning up any prior instances of <default-ubuntu-18>
      -----> Destroying <default-ubuntu-18>...
             DigitalOcean instance <145268853> destroyed.
             Finished destroying <default-ubuntu-18> (0m2.63s).
      -----> Testing <default-ubuntu-18>
      -----> Creating <default-ubuntu-18>...
             DigitalOcean instance <145273424> created.
             Waiting for SSH service on 138.68.97.146:22, retrying in 3 seconds
             [SSH] Established
             (ssh ready)
             Finished creating <default-ubuntu-18> (0m51.74s).
      -----> Converging <default-ubuntu-18>...
      $$$$$$ Running legacy converge for 'Digitalocean' Driver
      -----> Installing Chef Omnibus to install busser to run tests
      
      PLAY [all] *********************************************************************
      
      TASK [Gathering Facts] *********************************************************
      ok: [localhost]
      
      PLAY RECAP *********************************************************************
      localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
      
             Downloading files from <default-ubuntu-18>
             Finished converging <default-ubuntu-18> (0m55.05s).
      -----> Setting up <default-ubuntu-18>...
      $$$$$$ Running legacy setup for 'Digitalocean' Driver
             Finished setting up <default-ubuntu-18> (0m0.00s).
      -----> Verifying <default-ubuntu-18>...
             Loaded tests from {:path=>". ansible_testing_dir.test.integration.default"}
      
      Profile: tests from {:path=>"ansible_testing_dir/test/integration/default"} (tests from {:path=>"ansible_testing_dir.test.integration.default"})
      Version: (not specified)
      Target:  ssh://root@138.68.97.146:22
      
        System Package nodejs
           ×  should be installed
           expected that System Package nodejs is installed
      
      Test Summary: 0 successful, 1 failure, 0 skipped
      >>>>>> ------Exception-------
      >>>>>> Class: Kitchen::ActionFailed
      >>>>>> Message: 1 actions failed.
      >>>>>>     Verify failed on instance <default-ubuntu-18>. Please see .kitchen/logs/default-ubuntu-18.log for more details
      >>>>>> ----------------------
      >>>>>> Please see .kitchen/logs/kitchen.log for more details
      >>>>>> Also try running `kitchen diagnose --all` for configuration
      
      4.54s user 1.77s system 5% cpu 2:02.33 total

      The output tells you that your test is failing because you don't have Node.js installed on the Droplet that you provisioned with Kitchen. You'll fix your test by adding the nodejs role to your playbook.yml file and running the test again.

      Edit the playbook.yml file to include the nodejs role:
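
      • nano playbook.yml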

      Add the following highlighted lines to your file:

      ansible_testing_dir/playbook.yml

      ---
      - hosts: all
        become: true
        remote_user: ubuntu
        vars:
          NODEJS_VERSION: 8
      
        roles:
          - nodejs
      

      Save and close the file.

      Now, rerun the test using the kitchen test command:
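
      • kitchen test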

      You'll see the following output:

      Output

      ......
      Target:  ssh://root@46.101.248.71:22
      
        System Package nodejs
           ✔  should be installed
      
      Test Summary: 1 successful, 0 failures, 0 skipped
             Finished verifying <default-ubuntu-18> (0m4.89s).
      -----> Destroying <default-ubuntu-18>...
             DigitalOcean instance <145512952> destroyed.
             Finished destroying <default-ubuntu-18> (0m2.23s).
             Finished testing <default-ubuntu-18> (2m49.78s).
      -----> Kitchen is finished. (2m55.14s)
      4.86s user 1.77s system 3% cpu 2:56.58 total

      Your test now passes because you have Node.js installed via the nodejs role.

      Here is a summary of what Kitchen does in the Test Action:

      • Destroys the Droplet if it exists
      • Creates the Droplet
      • Converges the Droplet
      • Verifies the Droplet with InSpec
      • Destroys the Droplet

      Kitchen will abort the run on your Droplet if it encounters any issues. This means that if your Ansible playbook fails, InSpec won't run and your Droplet won't be destroyed. This lets you inspect the state of the instance and fix any issues. The behavior of the final destroy action can be overridden if desired. Check the CLI help for the --destroy flag by running the kitchen help test command:
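
      • kitchen help test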

      You've written your first tests and run them against your playbook, with one failing instance before fixing the issue. Next, you'll extend your test file.

      Step 4 — Adding Test Cases

      In this step, you'll add more test cases to your test file to check whether the Nginx modules are installed on your Droplet and whether the configuration file has the correct permissions.

      Edit your sample.rb file to add more test cases:

      • nano test/integration/default/sample.rb

      Add the following test cases to the end of the file:

      test/integration/default/sample.rb

      . . .
      control 'nginx-modules' do
        impact 1.0
        title 'NGINX modules'
        desc 'The required NGINX modules should be installed.'
        describe nginx do
          its('modules') { should include 'http_ssl' }
          its('modules') { should include 'stream_ssl' }
          its('modules') { should include 'mail_ssl' }
        end
      end
      
      control 'nginx-conf' do
        impact 1.0
        title 'NGINX configuration'
        desc 'The NGINX config file should be owned by root, be writable only by the owner, and not be readable, writable, or executable by others.'
        describe file('/etc/nginx/nginx.conf') do
          it { should be_owned_by 'root' }
          it { should be_grouped_into 'root' }
          it { should_not be_readable.by('others') }
          it { should_not be_writable.by('others') }
          it { should_not be_executable.by('others') }
        end
      end
      

      These test cases check whether the nginx-modules on your Droplet include http_ssl, stream_ssl, and mail_ssl. You're also checking the permissions on the /etc/nginx/nginx.conf file.

      You're using the it and its keywords to define your tests. The its keyword is used only to access properties of resources. For example, modules is a property of nginx.
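
      As an illustrative sketch (not part of this tutorial's test file), its can also match against other resource properties, such as the nginx resource's version, here compared with InSpec's cmp matcher:

      describe nginx do
        its('version') { should cmp >= '1.14.0' }
      end
      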

      Save and exit the file after adding the test cases.

      Now run the kitchen test command to test again:
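
      • kitchen test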

      You'll see the following output:

      Output

      ...
      Target:  ssh://root@104.248.131.111:22
      
        ↺  nginx-modules: NGINX modules
           ↺  The `nginx` binary not found in the path provided.
        ×  nginx-conf: NGINX configuration (2 failed)
           ×  File /etc/nginx/nginx.conf should be owned by "root"
           expected `File /etc/nginx/nginx.conf.owned_by?("root")` to return true, got false
           ×  File /etc/nginx/nginx.conf should be grouped into "root"
           expected `File /etc/nginx/nginx.conf.grouped_into?("root")` to return true, got false
           ✔  File /etc/nginx/nginx.conf should not be readable by others
           ✔  File /etc/nginx/nginx.conf should not be writable by others
           ✔  File /etc/nginx/nginx.conf should not be executable by others
      
        System Package nodejs
           ✔  should be installed
      
      Profile Summary: 0 successful controls, 1 control failure, 1 control skipped
      Test Summary: 4 successful, 2 failures, 1 skipped

      You'll see that some of the tests are failing. You'll fix them by adding the nginx role to your playbook file and rerunning the test. The failing tests check for Nginx modules and file permissions that aren't currently present on your server.

      Open your playbook.yml file:

      • nano ansible_testing_dir/playbook.yml

      Add the following highlighted line to your roles:

      ansible_testing_dir/playbook.yml

      ---
      - hosts: all
        become: true
        remote_user: ubuntu
        vars:
          NODEJS_VERSION: 8
      
        roles:
          - nodejs
          - nginx
      

      Save and close the file when you're done.

      Next, run your tests again:
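
      • kitchen test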

      You'll see the following output:

      Output

      ...
      Target:  ssh://root@104.248.131.111:22
      
        ✔  nginx-modules: NGINX modules
           ✔  Nginx Environment modules should include "http_ssl"
           ✔  Nginx Environment modules should include "stream_ssl"
           ✔  Nginx Environment modules should include "mail_ssl"
        ✔  nginx-conf: NGINX configuration
           ✔  File /etc/nginx/nginx.conf should be owned by "root"
           ✔  File /etc/nginx/nginx.conf should be grouped into "root"
           ✔  File /etc/nginx/nginx.conf should not be readable by others
           ✔  File /etc/nginx/nginx.conf should not be writable by others
           ✔  File /etc/nginx/nginx.conf should not be executable by others
      
        System Package nodejs
           ✔  should be installed
      
      Profile Summary: 2 successful controls, 0 control failures, 0 controls skipped
      Test Summary: 9 successful, 0 failures, 0 skipped

      After adding the nginx role to the playbook, all your tests now pass. The output shows that the http_ssl, stream_ssl, and mail_ssl modules are installed on your Droplet and that the correct permissions are set on the configuration file.

      When you're finished, or no longer need your Droplet, you can destroy it after running your tests with the kitchen destroy command:
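
      • kitchen destroy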

      After this command, you'll see output similar to:

      Output

      -----> Starting Kitchen (v1.24.0)
      -----> Destroying <default-ubuntu-18>...
             Finished destroying <default-ubuntu-18> (0m0.00s).
      -----> Kitchen is finished. (0m5.07s)
      3.79s user 1.50s system 82% cpu 6.432 total

      You've written tests for your playbook, run the tests, and fixed the failing tests to ensure that all the tests pass. You're now set up to create a virtual environment, write tests for your Ansible playbook, and run your tests in the virtual environment using Kitchen.

      Conclusion

      You now have a flexible foundation for testing your Ansible deployment, which lets you test your playbooks before running them on a live server. You can also package your tests into a profile. You can use profiles to share your tests via GitHub or the Chef Supermarket and run them easily on a live server.

      For more comprehensive details on InSpec and Kitchen, refer to the official InSpec documentation and the official Kitchen documentation.


