      Business Continuity and Disaster Recovery Basics: Testing 101


      “Luck is what happens when preparation meets opportunity.” – Seneca

      As I covered in another blog post, the first step to any effective business continuity and disaster recovery program is crafting a thoughtful, achievable plan.

      But having a great business continuity and disaster recovery plan on paper doesn’t mean that the work is done. After all, how do you evaluate the efficacy of your plan or make adjustments before you actually need it? The answer: by putting it to the test.

      Disaster Recovery Plan Testing

      I am fond of saying that managed services are a three-legged stool made up of technology, people and processes. If you lose any one leg, the stool falls over. And since an IT department is essentially offering managed services to the wider organization, IT management should think in terms of the same triad.

      Let’s break it down:

      • Technology: the tool or set of tools to be used
      • People: trained, knowledgeable staff to operate the technology
      • Processes: the written instructions for the people to follow when operating the technology. (See another blog I wrote for more information: “6 Processes You Need to Mature Your Managed Services.”)

      For a disaster recovery scenario, you need to test the stool to make sure that each leg is ready and that the people know what to do when the time comes. One useful tool for this is a tabletop exercise (TTX). The purpose of the TTX is to simply get people thinking about what technology they touch and what processes are already in place to support their tasks.

      Tabletop Exercise Steps

      Let’s walk through the stages of a typical TTX.

      No. 1: Develop a Narrative

      Write a quick narrative for the disaster. Start off assuming all your staff are available, and then work through threats that you may have already identified. Some examples:

      • Over the weekend, a train derailed, spilling hazardous materials. The fire department has evacuated an area that includes your headquarters, which contains important servers.
      • Just 10 minutes ago, your firm’s servers were all struck by a ransomware attack.
      • Heavy rains have occurred, and the server room in the basement is starting to flood.

      Now, some questions and prompts for your staff:

      • What should we do?
      • How do we communicate during this?
      • How do we continue to support the business?
      • What are you doing? Show me! (Pointing isn’t usually polite, but this might be a time to do so.)
      • How do we communicate the event to clients, customers, users, etc.?

      Going through the exercise, you’ll likely find that certain recovery processes are not properly documented or even completely missing. For example, your network administrator might not have a written recovery process. Have them and any other relevant staff produce and formalize the process, ready to be shared at the next TTX.

      Continue this way for all the role-players until your team can successfully work through the scenario. You will want to thoroughly test people’s roles, whether in networking, operating systems, applications, end-user access or any other area.

      No. 2: Insert Some Realism

      Unfortunately, we have all seen emergency situations and scenarios, such as the 9/11 terrorist attacks, where key personnel are either missing, incapacitated or even deceased. In less unhappy scenarios, some staff might not be able to tend to work since their home or family was affected by the disaster. For the purposes of a TTX, you can simply designate someone as being on vacation and unreachable, then have them sit out.

      Ask:

      • Who picks up their duties?
      • Does the replacement know where to find the documentation?
      • Can the replacement read and understand the written documentation?

      No. 3: “DIVE, DIVE, DIVE!”—Always Be Prepared

      Just like a submarine commander might call a crash dive drill at the most inopportune time, call a TTX drill on your own team to test the plan. For this, someone might actually be on vacation. Use that to your advantage to make sure that the whole team knows how to step in and how to communicate throughout the drill. You might even plan the drill to coincide with a key player’s vacation for added realism.

      No. 4: Break Away From the Table

      Once you’ve executed your tabletop exercise, it’s time to do a real test! Have your team actually work through all of the steps of the process to fail over to the recovery site.

      Again, you will want to test that the servers and applications can all be brought up in the recovery environment. To prevent data islands, make certain that users can successfully access your applications at the recovery site from wherever they would operate during a disaster. Here are some questions for user access testing:

      • Can users reach the replica site over the internet/VPN?
      • Can users use remote desktop protocol (RDP) to connect to servers in the replica environment?
      • If users in an office were displaced, could they reach the replica site from home using an SSL VPN?
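These access checks lend themselves to light automation. Below is a minimal, hypothetical sketch (the addresses and ports are placeholders, not part of any INAP tooling) that probes whether a TCP service such as an RDP or VPN endpoint answers from wherever a user is sitting:

```python
import socket


def port_reachable(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == "__main__":
    # Placeholder replica-site endpoints; substitute your own hosts and ports.
    checks = [
        ("192.0.2.10", 443),   # SSL VPN portal
        ("192.0.2.10", 3389),  # RDP to a recovery server
    ]
    for host, port in checks:
        state = "reachable" if port_reachable(host, port) else "UNREACHABLE"
        print(f"{host}:{port} is {state}")
```

Running a probe like this from each location users would work from during a disaster turns the questions above into a repeatable checklist.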

      No. 5: Bring in a Trusted Service Partner

      The help an IT service provider offers doesn’t have to stop with managing your Disaster Recovery as a Service infrastructure or environment. With every INAP DRaaS solution, you get white glove onboarding and periodic testing to make sure that your plans are as robust as you need them to be. Between scheduled tests, you can also test your failover at will, taking your staff beyond tabletop exercises to evaluate their ability to recover the environment on their own. Staying prepared to handle disaster is a continuous process, and we can be there every step of the way to guide you through it.


      Paul Painter
      • Director, Solution Architecture


      Paul Painter is Director, Solution Architecture. He manages the central U.S. region, with his team supporting sales by providing quality presales engineering and optimizing customer onboarding processes.




      How To Implement Continuous Testing of Ansible Roles Using Molecule and Travis CI on Ubuntu 18.04


      The author selected the Mozilla Foundation to receive a donation as part of the Write for DOnations program.

      Introduction

      Ansible is an agentless configuration management tool that uses YAML templates to define a list of tasks to be performed on hosts. In Ansible, roles are a collection of variables, tasks, files, templates and modules that are used together to perform a singular, complex function.

      Molecule is a tool for performing automated testing of Ansible roles, specifically designed to support the development of consistently well-written and maintained roles. Molecule’s unit tests allow developers to test roles simultaneously against multiple environments and under different parameters. It’s important that developers continuously run tests against code that often changes; this workflow ensures that roles continue to work as you update code libraries. Running Molecule using a continuous integration tool, like Travis CI, allows for tests to run continuously, ensuring that contributions to your code do not introduce breaking changes.

      In this tutorial, you will use a pre-made base role that installs and configures an Apache web server and a firewall on Ubuntu and CentOS servers. Then, you will initialize a Molecule scenario in that role to create tests and ensure that the role performs as intended in your target environments. After configuring Molecule, you will use Travis CI to continuously test your newly created role. Every time a change is made to your code, Travis CI will run molecule test to make sure that the role still performs correctly.

      Prerequisites

      Before you begin this tutorial, you will need:

      Step 1 — Forking the Base Role Repository

      You will be using a pre-made role called ansible-apache that installs Apache and configures a firewall on Debian- and Red Hat-based distributions. You will fork and use this role as a base and then build Molecule tests on top of it. Forking allows you to create a copy of a repository so you can make changes to it without tampering with the original project.

      Start by creating a fork of the ansible-apache role. Go to the ansible-apache repository and click on the Fork button.

      Once you have forked the repository, GitHub will lead you to your fork’s page. This will be a copy of the base repository, but on your own account.

      Click on the green Clone or Download button and you’ll see a box with Clone with HTTPS.

      Copy the URL shown for your repository. You’ll use this in the next step. The URL will be similar to this:

      https://github.com/username/ansible-apache.git
      

      You will replace username with your GitHub username.

      With your fork set up, you will clone it on your server and begin preparing your role in the next section.

      Step 2 — Preparing Your Role

      Having followed Step 1 of the prerequisite How To Test Ansible Roles with Molecule on Ubuntu 18.04, you will have Molecule and Ansible installed in a virtual environment. You will use this virtual environment for developing your new role.

      First, activate the virtual environment you created while following the prerequisites by running:

      • source my_env/bin/activate

      Run the following command to clone the repository using the URL you just copied in Step 1:

      • git clone https://github.com/username/ansible-apache.git

      Your output will look similar to the following:

      Output

      Cloning into 'ansible-apache'...
      remote: Enumerating objects: 16, done.
      remote: Total 16 (delta 0), reused 0 (delta 0), pack-reused 16
      Unpacking objects: 100% (16/16), done.

      Move into the newly created directory:

      • cd ansible-apache
      The base role you've downloaded performs the following tasks:

      • Includes variables: The role starts by including all the required variables according to the distribution of the host. Ansible uses variables to handle the disparities between different systems. Since you are using Ubuntu 18.04 and CentOS 7 as hosts, the role will recognize that the OS families are Debian and Red Hat respectively and include variables from vars/Debian.yml and vars/RedHat.yml.

      • Includes distribution-relevant tasks: These tasks include tasks/install-Debian.yml and tasks/install-RedHat.yml. Depending on the specified distribution, it installs the relevant packages. For Ubuntu, these packages are apache2 and ufw. For CentOS, these packages are httpd and firewalld.

      • Ensures latest index.html is present: This task copies over a template templates/index.html.j2 that Apache will use as the web server's home page.

      • Starts relevant services and enables them on boot: Starts and enables the required services installed as part of the first task. For CentOS, these services are httpd and firewalld, and for Ubuntu, they are apache2 and ufw.

      • Configures firewall to allow traffic: This includes either tasks/configure-Debian-firewall.yml or tasks/configure-RedHat-firewall.yml. Ansible configures either Firewalld or UFW as the firewall and whitelists the http service.
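The steps above map onto a small set of task files inside the role. As a rough sketch (a simplified, hypothetical tasks/main.yml, not the role's verbatim source), the OS-family dispatch looks something like this:

```yaml
---
# Hypothetical sketch of the role's OS-family dispatch, not verbatim source.
- name: Include OS-family specific variables
  include_vars: "{{ ansible_os_family }}.yml"  # vars/Debian.yml or vars/RedHat.yml

- name: Run OS-family specific install tasks
  include_tasks: "install-{{ ansible_os_family }}.yml"

- name: Configure the OS-family specific firewall
  include_tasks: "configure-{{ ansible_os_family }}-firewall.yml"
```

Because ansible_os_family resolves to Debian on Ubuntu and RedHat on CentOS, a single task list serves both platforms.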

      Now that you have an understanding of how this role works, you will configure Molecule to test it. You will write test cases for these tasks that cover the changes they make.

      Step 3 — Writing Your Tests

      To check that your base role performs its tasks as intended, you will start a Molecule scenario, specify your target environments, and create three custom test files.

      Begin by initializing a Molecule scenario for this role using the following command:

      • molecule init scenario -r ansible-apache

      You will see the following output:

      Output

      --> Initializing new scenario default...
      Initialized scenario in /home/sammy/ansible-apache/molecule/default successfully.

      You will add CentOS and Ubuntu as your target environments by including them as platforms in your Molecule configuration file. To do this, edit the molecule.yml file using a text editor:

      • nano molecule/default/molecule.yml

      Add the following highlighted content to the Molecule configuration:

      ~/ansible-apache/molecule/default/molecule.yml

      ---
      dependency:
        name: galaxy
      driver:
        name: docker
      lint:
        name: yamllint
      platforms:
        - name: centos7
          image: milcom/centos7-systemd
          privileged: true
        - name: ubuntu18
          image: solita/ubuntu-systemd
          command: /sbin/init
          privileged: true
          volumes:
            - /lib/modules:/lib/modules:ro
      provisioner:
        name: ansible
        lint:
          name: ansible-lint
      scenario:
        name: default
      verifier:
        name: testinfra
        lint:
          name: flake8
      

      Here, you're specifying two target platforms that are launched in privileged mode since you're working with systemd services:

      • centos7 is the first platform and uses the milcom/centos7-systemd image.
      • ubuntu18 is the second platform and uses the solita/ubuntu-systemd image. In addition to using privileged mode and mounting the required kernel modules, you're running /sbin/init on launch to make sure iptables is up and running.

      Save and exit the file.

      For more information on running privileged containers, visit the official Molecule documentation.

      Instead of using the default Molecule test file, you will create three custom test files: one for each target platform, and one for tests that are common to all platforms. Start by deleting the scenario's default test file, test_default.py, with the following command:

      • rm molecule/default/tests/test_default.py

      You can now move on to creating the three custom test files: test_common.py, test_Debian.py, and test_RedHat.py.

      The first test file, test_common.py, will contain the common tests that each of the hosts will perform. Create and edit the common test file, test_common.py:

      • nano molecule/default/tests/test_common.py

      Add the following code to the file:

      ~/ansible-apache/molecule/default/tests/test_common.py

      import os
      import pytest
      
      import testinfra.utils.ansible_runner
      
      testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
          os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('all')
      
      
      @pytest.mark.parametrize('file, content', [
        ("/var/www/html/index.html", "Managed by Ansible")
      ])
      def test_files(host, file, content):
          file = host.file(file)
      
          assert file.exists
          assert file.contains(content)
      

      In your test_common.py file, you have imported the required libraries. You have also written a test called test_files(), which covers the only task common to both distributions that your role performs: copying your template as the web server's home page.

      The next test file, test_Debian.py, holds tests specific to Debian distributions. This test file will specifically target your Ubuntu platform.

      Create and edit the Ubuntu test file by running the following command:

      • nano molecule/default/tests/test_Debian.py

      You can now import the required libraries and define the ubuntu18 platform as the target host. Add the following code to the start of this file:

      ~/ansible-apache/molecule/default/tests/test_Debian.py

      import os
      import pytest
      
      import testinfra.utils.ansible_runner
      
      testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
          os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('ubuntu18')
      

      Then, in the same file, add the following code to define the test_pkg() test:

      ~/ansible-apache/molecule/default/tests/test_Debian.py

      ...
      @pytest.mark.parametrize('pkg', [
          'apache2',
          'ufw'
      ])
      def test_pkg(host, pkg):
          package = host.package(pkg)
      
          assert package.is_installed
      

      This test will check that the apache2 and ufw packages are installed on the host.

      Note: When adding multiple tests to a Molecule test file, make sure there are two blank lines between each test, or the flake8 lint step will flag an error.

      To define the next test, test_svc(), add the following code under the test_pkg() test in your file:

      ~/ansible-apache/molecule/default/tests/test_Debian.py

      ...
      @pytest.mark.parametrize('svc', [
          'apache2',
          'ufw'
      ])
      def test_svc(host, svc):
          service = host.service(svc)
      
          assert service.is_running
          assert service.is_enabled
      

      test_svc() will check if the apache2 and ufw services are running and enabled.

      Finally, you will add your last test, test_ufw_rules(), to the test_Debian.py file.

      Add this code under the test_svc() test in your file to define test_ufw_rules():

      ~/ansible-apache/molecule/default/tests/test_Debian.py

      ...
      @pytest.mark.parametrize('rule', [
          '-A ufw-user-input -p tcp -m tcp --dport 80 -j ACCEPT'
      ])
      def test_ufw_rules(host, rule):
          cmd = host.run('iptables -t filter -S')
      
          assert rule in cmd.stdout
      

      test_ufw_rules() will check that your firewall configuration permits traffic on the port used by the Apache service.

      With each of these tests added, your test_Debian.py file will look like this:

      ~/ansible-apache/molecule/default/tests/test_Debian.py

      import os
      import pytest
      
      import testinfra.utils.ansible_runner
      
      testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
          os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('ubuntu18')
      
      
      @pytest.mark.parametrize('pkg', [
          'apache2',
          'ufw'
      ])
      def test_pkg(host, pkg):
          package = host.package(pkg)
      
          assert package.is_installed
      
      
      @pytest.mark.parametrize('svc', [
          'apache2',
          'ufw'
      ])
      def test_svc(host, svc):
          service = host.service(svc)
      
          assert service.is_running
          assert service.is_enabled
      
      
      @pytest.mark.parametrize('rule', [
          '-A ufw-user-input -p tcp -m tcp --dport 80 -j ACCEPT'
      ])
      def test_ufw_rules(host, rule):
          cmd = host.run('iptables -t filter -S')
      
          assert rule in cmd.stdout
      

      The test_Debian.py file now includes the three tests: test_pkg(), test_svc(), and test_ufw_rules().

      Save and exit test_Debian.py.

      Next you'll create the test_RedHat.py test file, which will contain tests specific to Red Hat distributions to target your CentOS platform.

      Create and edit the CentOS test file, test_RedHat.py, by running the following command:

      • nano molecule/default/tests/test_RedHat.py

      As with the Ubuntu test file, you will now write three tests to include in your test_RedHat.py file. Before adding the test code, import the required libraries and define the centos7 platform as the target host by adding the following code to the beginning of your file:

      ~/ansible-apache/molecule/default/tests/test_RedHat.py

      import os
      import pytest
      
      import testinfra.utils.ansible_runner
      
      testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
          os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('centos7')
      

      Then, add the test_pkg() test, which will check if the httpd and firewalld packages are installed on the host.

      Following the code for your library imports, add the test_pkg() test to your file. (Again, remember to include two blank lines before each new test.)

      ~/ansible-apache/molecule/default/tests/test_RedHat.py

      ...
      @pytest.mark.parametrize('pkg', [
          'httpd',
          'firewalld'
      ])
      def test_pkg(host, pkg):
          package = host.package(pkg)
      
          assert package.is_installed
      

      Now, you can add the test_svc() test to ensure that httpd and firewalld services are running and enabled.

      Add the test_svc() code to your file following the test_pkg() test:

      ~/ansible-apache/molecule/default/tests/test_RedHat.py

      ...
      @pytest.mark.parametrize('svc', [
          'httpd',
          'firewalld'
      ])
      def test_svc(host, svc):
          service = host.service(svc)
      
          assert service.is_running
          assert service.is_enabled
      

      The final test in the test_RedHat.py file will be test_firewalld(), which will check whether Firewalld has the http service whitelisted.

      Add the test_firewalld() test to your file after the test_svc() code:

      ~/ansible-apache/molecule/default/tests/test_RedHat.py

      ...
      @pytest.mark.parametrize('file, content', [
          ("/etc/firewalld/zones/public.xml", '<service name="http"/>')
      ])
      def test_firewalld(host, file, content):
          file = host.file(file)
      
          assert file.exists
          assert file.contains(content)
      

      After importing the libraries and adding the three tests, your test_RedHat.py file will look like this:

      ~/ansible-apache/molecule/default/tests/test_RedHat.py

      import os
      import pytest
      
      import testinfra.utils.ansible_runner
      
      testinfra_hosts = testinfra.utils.ansible_runner.AnsibleRunner(
          os.environ['MOLECULE_INVENTORY_FILE']).get_hosts('centos7')
      
      
      @pytest.mark.parametrize('pkg', [
          'httpd',
          'firewalld'
      ])
      def test_pkg(host, pkg):
          package = host.package(pkg)
      
          assert package.is_installed
      
      
      @pytest.mark.parametrize('svc', [
          'httpd',
          'firewalld'
      ])
      def test_svc(host, svc):
          service = host.service(svc)
      
          assert service.is_running
          assert service.is_enabled
      
      
      @pytest.mark.parametrize('file, content', [
          ("/etc/firewalld/zones/public.xml", '<service name="http"/>')
      ])
      def test_firewalld(host, file, content):
          file = host.file(file)
      
          assert file.exists
          assert file.contains(content)
      

      Now that you've completed writing tests in all three files, test_common.py, test_Debian.py, and test_RedHat.py, your role is ready for testing. In the next step, you will use Molecule to run these tests against your newly configured role.

      Step 4 — Testing Against Your Role

      You will now execute your newly created tests against the base role ansible-apache using Molecule. To run your tests, use the following command:

      • molecule test
      You'll see the following output once Molecule has finished running all the tests:

      Output

      ...
      --> Scenario: 'default'
      --> Action: 'verify'
      --> Executing Testinfra tests found in /home/sammy/ansible-apache/molecule/default/tests/...
      ============================= test session starts ==============================
      platform linux -- Python 3.6.7, pytest-4.1.1, py-1.7.0, pluggy-0.8.1
      rootdir: /home/sammy/ansible-apache/molecule/default, inifile:
      plugins: testinfra-1.16.0
      collected 12 items

      tests/test_common.py ..                                                  [ 16%]
      tests/test_RedHat.py .....                                               [ 58%]
      tests/test_Debian.py .....                                               [100%]

      ========================== 12 passed in 80.70 seconds ==========================
      Verifier completed successfully.

      You'll see Verifier completed successfully in your output; this means that the verifier executed all of your tests and returned them successfully.

      Now that you've successfully completed the development of your role, you can commit your changes to Git and set up Travis CI for continuous testing.

      Step 5 — Using Git to Share Your Updated Role

      In this tutorial, so far, you have cloned a role called ansible-apache and added tests to it to make sure it works against Ubuntu and CentOS hosts. To share your updated role with the public, you must commit these changes and push them to your fork.

      Run the following command to stage the changes you've made:

      • git add .

      This command will add all the files that you have modified in the current directory to the staging area.

      You also need to set your name and email address in the git config in order to commit successfully. You can do so using the following commands:

      • git config user.email "sammy@digitalocean.com"
      • git config user.name "John Doe"

      Commit the changed files to your repository:

      • git commit -m "Configured Molecule"

      You'll see the following output:

      Output

      [master b2d5a5c] Configured Molecule
       8 files changed, 155 insertions(+), 1 deletion(-)
       create mode 100644 molecule/default/Dockerfile.j2
       create mode 100644 molecule/default/INSTALL.rst
       create mode 100644 molecule/default/molecule.yml
       create mode 100644 molecule/default/playbook.yml
       create mode 100644 molecule/default/tests/test_Debian.py
       create mode 100644 molecule/default/tests/test_RedHat.py
       create mode 100644 molecule/default/tests/test_common.py

      This signifies that you have committed your changes successfully. Now, push these changes to your fork with the following command:

      • git push -u origin master

      You will see a prompt for your GitHub credentials. After entering these credentials, your code will be pushed to your repository and you'll see this output:

      Output

      Counting objects: 13, done.
      Compressing objects: 100% (12/12), done.
      Writing objects: 100% (13/13), 2.32 KiB | 2.32 MiB/s, done.
      Total 13 (delta 3), reused 0 (delta 0)
      remote: Resolving deltas: 100% (3/3), completed with 2 local objects.
      To https://github.com/username/ansible-apache.git
         009d5d6..e4e6959  master -> master
      Branch 'master' set up to track remote branch 'master' from 'origin'.

      If you go to your fork's repository at github.com/username/ansible-apache, you'll see a new commit called Configured Molecule reflecting the changes you made in the files.

      Now, you can integrate Travis CI with your new repository so that any changes made to your role will automatically trigger Molecule tests. This will ensure that your role always works with Ubuntu and CentOS hosts.

      Step 6 — Integrating Travis CI

      In this step, you're going to integrate Travis CI into your workflow. Once enabled, any changes you push to your fork will trigger a Travis CI build. The purpose of this is to ensure Travis CI always runs molecule test whenever contributors make changes. If any breaking changes are made, Travis will declare the build status as such.

      Proceed to Travis CI to enable your repository. Navigate to your profile page where you can click the Activate button for GitHub.

      You can find further guidance here on activating repositories in Travis CI.

      For Travis CI to work, you must create a configuration file containing instructions for it. To create the Travis configuration file, return to your server and open a new file named .travis.yml:

      • nano .travis.yml

      To duplicate the environment you've created in this tutorial, you will specify parameters in the Travis configuration file. Add the following content to your file:

      ~/ansible-apache/.travis.yml

      ---
      language: python
      python:
        - "2.7"
        - "3.6"
      services:
        - docker
      install:
        - pip install molecule docker
      script:
        - molecule --version
        - ansible --version
        - molecule test
      

      The parameters you've specified in this file are:

      • language: When you specify Python as the language, the CI environment uses separate virtualenv instances for each Python version you specify under the python key.
      • python: Here, you're specifying that Travis will use both Python 2.7 and Python 3.6 to run your tests.
      • services: You need Docker to run tests in Molecule. You're specifying that Travis should ensure Docker is present in your CI environment.
      • install: Here, you're specifying preliminary installation steps that Travis CI will carry out in your virtualenv.
        • pip install molecule docker installs Molecule along with the Python library for the Docker remote API (Ansible is installed as a dependency of Molecule).
      • script: This is to specify the steps that Travis CI needs to carry out. In your file, you're specifying three steps:
        • molecule --version prints the Molecule version if Molecule has been successfully installed.
        • ansible --version prints the Ansible version if Ansible has been successfully installed.
        • molecule test finally runs your Molecule tests.

      The reason you specify molecule --version and ansible --version is to record the installed versions in the build log, making it easier to diagnose failures caused by an Ansible or Molecule version mismatch.

      Once you've added the content to the Travis CI configuration file, save and exit .travis.yml.

      Now, every time you push any changes to your repository, Travis CI will automatically run a build based on the above configuration file. If any of the commands in the script block fail, Travis CI will report the build status as such.

      To make it easier to see the build status, you can add a badge indicating the build status to the README of your role. Open the README.md file using a text editor:

      • nano README.md

      Add the following line to the README.md to display the build status:

      ~/ansible-apache/README.md

      [![Build Status](https://travis-ci.org/username/ansible-apache.svg?branch=master)](https://travis-ci.org/username/ansible-apache)
      

      Replace username with your GitHub username. Commit and push the changes to your repository as you did earlier.

      First, run the following command to add .travis.yml and README.md to the staging area:

      • git add .travis.yml README.md

      Now commit the changes to your repository by executing:

      • git commit -m "Configured Travis"

      Finally, push these changes to your fork with the following command:

      • git push -u origin master

      If you navigate over to your GitHub repository, you will see that it initially reports build: unknown.

      [Screenshot: build status badge showing "build: unknown"]

      Within a few minutes, Travis will initiate a build that you can monitor at the Travis CI website. Once the build is a success, GitHub will report the status as such on your repository as well — using the badge you've placed in your README file:

      [Screenshot: build status badge showing "build: passing"]

      You can access the complete details of the builds by going to the Travis CI website:

      [Screenshot: build details on the Travis CI website]

      Now that you've successfully set up Travis CI for your new role, you can continuously test and integrate changes to your Ansible roles.

      Conclusion

      In this tutorial, you forked a role that installs and configures an Apache web server from GitHub and added integrations for Molecule by writing tests and configuring these tests to work on Docker containers running Ubuntu and CentOS. By pushing your newly created role to GitHub, you have allowed other users to access your role. When there are changes to your role by contributors, Travis CI will automatically run Molecule to test your role.

      Once you're comfortable with the creation of roles and testing them with Molecule, you can integrate this with Ansible Galaxy so that roles are automatically pushed once the build is successful.




      Use Buildbot for Software Testing on Ubuntu 18.04


      Updated by Linode. Written by Tyler Langlois.


      Buildbot is an open source system for testing software projects. In this guide, you will set up a Linode as a Buildbot server to use as a continuous integration platform to test code. Similarly to hosted solutions like Travis CI, Buildbot is an automated testing platform that can watch for code changes, test a project’s code, and send notifications regarding build failures.

      Before you Begin

      1. Familiarize yourself with Linode’s Getting Started guide and complete the steps for deploying and setting up a Linode running Ubuntu 18.04, including setting the hostname and timezone.

      2. This guide uses sudo wherever possible. Complete the sections of our Securing Your Server guide to create a standard user account, harden SSH access and remove unnecessary network services.

      3. Ensure your system is up to date:

        sudo apt update && sudo apt upgrade
        
      4. Complete the Add DNS Records steps to register a domain name that will point to your Linode instance hosting Buildbot.

        Note

        Replace each instance of example.com in this guide with your Buildbot site’s domain name.

      5. Your Buildbot site will serve its content over HTTPS, so you will need to obtain an SSL/TLS certificate. Use Certbot to request and download a free certificate from Let’s Encrypt.

        sudo apt install software-properties-common
        sudo add-apt-repository ppa:certbot/certbot
        sudo apt update
        sudo apt install certbot
        sudo certbot certonly --standalone -d example.com
        

        These commands will download a certificate to /etc/letsencrypt/live/example.com/ on your Linode.


      Install Buildbot

      Install the Buildbot Master

      Since Buildbot is provided as an Ubuntu package, install the software from the official Ubuntu repositories.

      1. Install the buildbot package along with pip3, which will be used to install additional Python packages:

        sudo apt-get install -y buildbot python3-pip
        
      2. Install the required Buildbot Python packages:

        sudo pip3 install buildbot-www buildbot-waterfall-view buildbot-console-view buildbot-grid-view
        
      3. The buildbot package sets up several file paths and services to run persistently on your host. In order to create a new configuration for a Buildbot master, enter the directory for Buildbot master configurations and create a new master called ci (for “continuous integration”).

        cd /var/lib/buildbot/masters
        sudo -u buildbot -- buildbot create-master ci
        

        The generated master configuration file’s location is /var/lib/buildbot/masters/ci/master.cfg.sample.

      4. Make a copy of the default configuration to the path that Buildbot expects for its configuration file:

        sudo cp ci/master.cfg.sample ci/master.cfg
        
      5. Change the permissions for this configuration file so that the buildbot user owns it:

        sudo chown buildbot:buildbot ci/master.cfg
        

      Configure the Buildbot Master

      In order to secure and customize Buildbot, you will change a few settings in the master configuration file before using the application. The master configuration file’s location is /var/lib/buildbot/masters/ci/master.cfg.

      Buildbot represents a number of concepts in the master build configuration file. Open this file in your preferred text editor and browse through it; note that the configuration is written in Python rather than a markup language like YAML.
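Because master.cfg is ordinary Python, configuration values can be built with normal Python constructs. The snippet below is a minimal illustration of the file's overall shape, not a complete, runnable master.cfg (the real file also imports from buildbot.plugins and defines workers, schedulers, and builders); the title and URL values echo this guide's examples.

```python
# master.cfg assigns everything into a single dict named BuildmasterConfig.
c = BuildmasterConfig = {}

# Values are plain Python, so they can be computed, read from files, etc.
c['title'] = "My CI"
c['titleURL'] = "https://example.com"
c['workers'] = []                         # Worker objects are appended here
c['protocols'] = {'pb': {'port': 9989}}   # default master/worker protocol port
```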

      1. Generate a random string to serve as the password that workers will use to authenticate against the Buildbot master. This is accomplished by using openssl to create a random sequence of characters.

        openssl rand -hex 16
        <a random string>
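If you prefer Python to openssl, the standard library's secrets module produces an equivalent token. This is a hedged alternative to the command above, not one of the guide's original steps:

```python
import secrets

# Equivalent to `openssl rand -hex 16`: 16 random bytes,
# hex-encoded into a 32-character string.
worker_password = secrets.token_hex(16)
print(worker_password)
```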
        
      2. Update the following line in the master.cfg file and replace pass with the randomly-generated password:

        /var/lib/buildbot/masters/ci/master.cfg
        ...
        # The 'workers' list defines the set of recognized workers. Each element is
        # a Worker object, specifying a unique worker name and password.  The same
        # worker name and password must be configured on the worker.
        c['workers'] = [worker.Worker("example-worker", "pass")]
        ...
            
      3. Uncomment the c['title'] and c['titleURL'] lines. If desired, change the name of the Buildbot installation by updating the value of c['title']. Replace the c['titleURL'] value with the URL of your Buildbot instance. In the example, the URL value is replaced with example.com.

        /var/lib/buildbot/masters/ci/master.cfg
        ...
        c['title'] = "My CI"
        c['titleURL'] = "https://example.com"
        ...
            
      4. Uncomment the c['buildbotURL'] line and replace the URL value with your Buildbot instance’s URL:

        /var/lib/buildbot/masters/ci/master.cfg
        ...
        c['buildbotURL'] = "https://example.com/"
        ...
            

        These options assume that you will use a custom domain secured with Let’s Encrypt certificates from certbot as outlined in the Before You Begin section of this guide.

      5. Uncomment the web interface configuration lines and keep the default options:

        /var/lib/buildbot/masters/ci/master.cfg
        ...
        c['www'] = dict(port=8010,
                        plugins=dict(waterfall_view={}, console_view={}, grid_view={}))
        ...
            
      6. By default, Buildbot does not require people to authenticate in order to access control features in the web UI. To secure Buildbot, you will need to configure an authentication plugin.

        Configure users for the Buildbot master web interface. Add the following lines below the web interface configuration lines and replace the myusername and password values with the ones you would like to use.

        /var/lib/buildbot/masters/ci/master.cfg
        ...
        c['www'] = dict(port=8010,
                        plugins=dict(waterfall_view={}, console_view={}, grid_view={}))
        
        # user configurations
        c['www']['authz'] = util.Authz(
                allowRules = [
                    util.AnyEndpointMatcher(role="admins")
                ],
                roleMatchers = [
                    util.RolesFromUsername(roles=['admins'], usernames=['myusername'])
                ]
        )
        c['www']['auth'] = util.UserPasswordAuth([('myusername','password')])
        ...
            
      7. Buildbot supports building repositories based on GitHub activity. This is done with a GitHub webhook. Generate a random string to serve as a webhook secret token to validate payloads.

        openssl rand -hex 16
        <a random string>
        
      8. Configure Buildbot to recognize GitHub webhooks as a change source. Add the following snippet to the end of the master.cfg file and replace webhook_secret with the random string generated in the previous step.

        /var/lib/buildbot/masters/ci/master.cfg
        c['www']['change_hook_dialects'] = {
            'github': {
                'secret': 'webhook_secret',
            }
        }
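The secret lets Buildbot verify that webhook payloads really came from GitHub: GitHub signs each request body with HMAC-SHA256 using the shared secret and sends the result in the X-Hub-Signature-256 header. A minimal sketch of that verification follows; it is illustrative only, since Buildbot's github dialect performs this check for you, and the sample secret and payload here are made up:

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Return True if signature_header matches the HMAC-SHA256 of payload."""
    expected = "sha256=" + hmac.new(
        secret.encode(), payload, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, signature_header)

secret = "webhook_secret"
payload = b'{"ref": "refs/heads/linode-tutorial-demo"}'
good = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()

print(verify_signature(secret, payload, good))      # True
print(verify_signature(secret, b"tampered", good))  # False
```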
            
      9. Finally, start the Buildbot master. This command will start the Buildbot process and persist it across reboots.

        sudo systemctl enable --now buildmaster@ci.service
        

      Set up the Buildbot Master Web Interface

      Buildbot is now running and listening on HTTP without encryption. To secure the connection, install NGINX to terminate SSL and reverse proxy traffic to the Buildbot master process.

      These steps install NGINX Mainline on Ubuntu from NGINX Inc’s official repository. For other distributions, see the NGINX admin guide. For information on configuring NGINX for production environments, see our Getting Started with NGINX series.

      1. Open /etc/apt/sources.list in a text editor and add the following line to the bottom. Replace CODENAME in this example with the codename of your Ubuntu release. For example, for Ubuntu 18.04, named Bionic Beaver, insert bionic in place of CODENAME below:

        /etc/apt/sources.list
        deb http://nginx.org/packages/mainline/ubuntu/ CODENAME nginx
      2. Import the repository’s package signing key and add it to apt:

        sudo wget http://nginx.org/keys/nginx_signing.key
        sudo apt-key add nginx_signing.key
        
      3. Install NGINX:

        sudo apt update
        sudo apt install nginx
        
      4. Ensure NGINX is running and enabled to start automatically on reboot:

        sudo systemctl start nginx
        sudo systemctl enable nginx
        

      Now that NGINX is installed, configure NGINX to talk to the local Buildbot port. NGINX will listen for SSL traffic using the Let’s Encrypt certificate for your domain.

      1. Create your site’s NGINX configuration file. Ensure that you replace the configuration file’s name example.com.conf with your domain name. Replace all instances of example.com with your Buildbot instance’s URL.

        /etc/nginx/conf.d/example.com.conf
        server {
          # Enable SSL and http2
          listen 443 ssl http2 default_server;
        
          server_name example.com;
        
          root html;
          index index.html index.htm;
        
          ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
          ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        
          # put a one day session timeout for websockets to stay longer
          ssl_session_cache      shared:SSL:10m;
          ssl_session_timeout  1440m;
        
          ssl_protocols TLSv1.2 TLSv1.3;
          ssl_ciphers ECDHE-RSA-AES256-GCM-SHA512:DHE-RSA-AES256-GCM-SHA512:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384;
          ssl_prefer_server_ciphers   on;
        
          # force https
          add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;";
        
          proxy_set_header HOST $host;
          proxy_set_header X-Real-IP $remote_addr;
          proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          proxy_set_header X-Forwarded-Proto  $scheme;
          proxy_set_header X-Forwarded-Server  $host;
          proxy_set_header X-Forwarded-Host  $host;
        
          location / {
              proxy_pass http://127.0.0.1:8010/;
          }
          location /sse/ {
              # proxy buffering will prevent sse to work
              proxy_buffering off;
              proxy_pass http://127.0.0.1:8010/sse/;
          }
          location /ws {
              proxy_http_version 1.1;
              proxy_set_header Upgrade $http_upgrade;
              proxy_set_header Connection "upgrade";
              proxy_pass http://127.0.0.1:8010/ws;
              # raise the proxy timeout for the websocket
              proxy_read_timeout 6000s;
          }
        }
      2. Disable NGINX’s default configuration file:

        sudo mv /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.disabled
        
      3. Restart NGINX to apply the Buildbot reverse proxy configuration:

        sudo systemctl restart nginx
        
      4. Navigate to your Buildbot instance’s URL over HTTPS. You will see the Buildbot homepage:

        Buildbot Landing Page

        Your continuous integration test server is now up and running.

      5. Ensure that you can log into your Buildbot instance with the admin credentials you created in the Configure the Buildbot Master section. Click on the top-right dropdown menu entitled Anonymous, then click Login. A Sign In modal will appear. Enter your credentials to log in to Buildbot as the admin user.

      Install the Buildbot Worker

      In order for Buildbot to execute test builds, the Buildbot master requires a worker. The following steps will set up a worker on the same host as the master.

      1. Install the buildbot-slave Ubuntu package:

        sudo apt-get install -y buildbot-slave
        
      2. Navigate to the directory which will store the Buildbot worker configurations:

        cd /var/lib/buildbot/workers
        
      3. Create the configuration directory for the Buildbot worker. Replace example-worker and my-worker-password with the values used for the c['workers'] configuration in the master.cfg file.

        sudo -u buildbot -- buildbot-worker create-worker default localhost example-worker my-worker-password
        
      4. The Buildbot worker is ready to connect to the Buildbot master. Enable the worker process.

        sudo systemctl enable --now buildbot-worker@default.service
        

        Confirm that the worker has connected by going to your Buildbot site and navigating to Builds -> Workers in the sidebar menu:

        Buildbot Workers Page

      Configuring Builds

      Now that Buildbot is installed, you can configure it to run builds. In this tutorial, we will use a fork of the Linode Guides and Tutorials repository (linode/docs) to illustrate how to use Buildbot as a system to run tests against a repository.

      Configuring GitHub

      Before creating the build configuration, fork the linode/docs repository into your GitHub account. This is the repository that tests will be run against. The repository will also need a webhook configured to send push or pull request events to Buildbot.

      Note

      The actions you take to fork, add a webhook to, and push changes to your fork of linode/docs will not affect the parent (or upstream) repository, so you can safely experiment with it. Any changes you make to branches of your fork will remain separate until you submit a pull request to the original linode/docs repository.

      Forking and Configuring the Repository

      1. Log in to your GitHub account and navigate to https://github.com/linode/docs. Click the Fork button:

        GitHub Fork Button

      2. Choose the account to fork the repository into (typically just your username). GitHub will bring you to the page for your own fork of the linode/docs repository.

        Select Settings to browse your fork’s settings:

        GitHub Fork Settings

        Then, select Webhooks from the sidebar:

        GitHub Webhook Settings

      3. Click on the Add webhook button. There are several fields to populate:

        • Under Payload URL enter the domain name for your Buildbot server with the change hook URL path appended to it: https://example.com/change_hook/github.
        • Leave the default value for Content type: application/x-www-form-urlencoded.
        • Under the Secret field, enter the secret value for the c['www']['change_hook_dialects'] option you configured in the master.cfg file.
        • Leave Enable SSL Verification selected.
        • For the Which events would you like to trigger this webhook? option, select Let me select individual events and ensure that only the Pushes and Pull requests boxes are checked.
        • Leave Active selected to indicate that GitHub should be configured to send webhooks to Buildbot.
      4. Click on the Add webhook button to save your settings.

        GitHub will return your browser to the list of webhooks for your repository. After configuring a new webhook, GitHub will send a test webhook to the configured payload URL. To indicate whether GitHub was able to send a webhook without errors, it adds a checkmark to the webhook item:

        GitHub Webhook Success

        GitHub will now send any new pushes made to your fork to your instance of Buildbot for testing.

      Build Prerequisites

      This guide runs builds as a simple process on the Buildbot worker; however, it is possible to execute builds within a Docker container, if desired. Consult the official Buildbot documentation for more information on configuring a Docker setup.

      Most software projects will define several prerequisites and tests for a project build. The Linode Guides and Tutorials repository defines several different tests to run for each build. This example will use one test defined in a Python script named blueberry.py, which checks for broken links, missing images, and more. This test’s dependencies can be installed via pip in a virtualenv.

      On your Linode, install the packages necessary to permit the worker to use a Python virtualenv to create a sandbox during the build.

      sudo apt-get install -y build-essential python3-dev python3-venv
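As an optional sanity check (an aside, not one of the guide's original steps), you can confirm from Python that the venv module the builds rely on works on the worker host:

```python
import os
import tempfile
import venv

# Create a throwaway virtualenv (without pip, to keep it fast) purely to
# prove that virtualenv creation works on this host.
with tempfile.TemporaryDirectory() as tmp:
    env_dir = os.path.join(tmp, "env")
    venv.EnvBuilder(with_pip=False).create(env_dir)
    # venv writes a pyvenv.cfg marker file into every environment it creates
    created = os.path.isfile(os.path.join(env_dir, "pyvenv.cfg"))

print("venv OK" if created else "venv missing")
```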
      

      Writing Builds

      The /var/lib/buildbot/masters/ci/master.cfg file contains options to configure builds. The specific sections in the file that include these configurations are the following:

      • WORKERS: defines the worker executors the master will connect to in order to run builds.
      • SCHEDULERS: specifies how to react to incoming changes.
      • BUILDERS: outlines the steps and build tests to run.

      Because the worker has already been configured and connected to the Buildbot master, the only settings necessary to define a custom build are the SCHEDULERS and BUILDERS.

      1. Add the following lines to the end of the /var/lib/buildbot/masters/ci/master.cfg file to define the custom build. Ensure you replace my-username and my-git-repo-name with the values for your own GitHub fork of the linode/docs repository and example-worker with the name of your Buildbot instance’s worker:

        /var/lib/buildbot/masters/ci/master.cfg
        docs_blueberry_test = util.BuildFactory()
        # Clone the repository
        docs_blueberry_test.addStep(
            steps.Git(
        repourl='https://github.com/my-username/my-git-repo-name.git',
                mode='incremental'))
        # Create virtualenv
        docs_blueberry_test.addStep(
            steps.ShellCommand(
                command=["python3", "-m", "venv", ".venv"]))
        # Install test dependencies
        docs_blueberry_test.addStep(
            steps.ShellCommand(
                command=["./.venv/bin/pip", "install", "-r", "ci/requirements.txt"]))
        # Run tests
        docs_blueberry_test.addStep(
            steps.ShellCommand(
                command=["./.venv/bin/python3", "ci/blueberry.py"]))
        # Add the BuildFactory configuration to the master
        c['builders'].append(
            util.BuilderConfig(name="linode-docs",
              workernames=["example-worker"],
              factory=docs_blueberry_test))
            

        The configuration code does the following:

        • A new Build Factory is instantiated. Build Factories define how builds are run.
        • Then, instructions are added to the Build Factory. The Build Factory clones the GitHub fork of the linode/docs repository.
        • Next, a Python virtualenv is set up. This ensures that the dependencies and libraries used for testing are kept separate, in a dedicated sandbox, from the Python libraries on the worker machine.
        • The necessary Python packages used in testing are then installed into the build’s virtualenv.
        • Finally, the blueberry.py testing script is run using the python3 executable from the virtualenv sandbox.
        • The defined Build Factory is then added to the configuration for the master.
      2. Define a simple scheduler to build any branch that is pushed to the GitHub repository. Add the following lines to the end of the master.cfg file:

        /var/lib/buildbot/masters/ci/master.cfg
        ...
        c['schedulers'].append(schedulers.AnyBranchScheduler(
            name="build-docs",
            builderNames=["linode-docs"]))
            

        This code instructs the Buildbot master to create a scheduler that builds any branch for the linode-docs builder. This scheduler will be invoked by the change hook defined for GitHub, which is triggered by the GitHub webhook configured in the GitHub interface.

      3. Restart the Buildbot master now that the custom scheduler and builder have been defined:

        sudo systemctl restart buildmaster@ci.service
        

      Running Builds

      Navigate to your Buildbot site to view the Builder and Scheduler created in the previous section. In the sidebar, click on Builds -> Builders. You will see linode-docs listed under the Builder Name heading:

      Buildbot Custom Builder

      A new build can be started for the linode-docs builder. Recall that the GitHub webhook configuration for your fork of linode/docs is set to call Buildbot upon any push or pull request event. To demonstrate how this works:

      1. Clone your fork of the linode/docs repository on your local machine (do not run the following commands on your Buildbot server) and navigate into the cloned repository. Replace username and repository with your own fork’s values:

        git clone https://github.com/username/repository.git
        cd repository
        
      2. Like many git repositories, linode/docs changes often. To ensure that the remaining instructions work as expected, start at a specific revision in the code that is in a known state. Check out revision 76cd31a5271b41ff5a80dee2137dcb5e76296b93:

        git checkout 76cd31a5271b41ff5a80dee2137dcb5e76296b93
        
      3. Create a branch starting at this revision, which is where you will create dummy commits to test your Buildbot master:

        git checkout -b linode-tutorial-demo
        
      4. Create an empty commit so that you have something to push to your fork:

        git commit --allow-empty -m 'Buildbot test'
        
      5. Push your branch to your forked remote GitHub repository:

        git push --set-upstream origin linode-tutorial-demo
        
      6. Navigate to your Buildbot site and go to your running builds. The Home button on the sidebar displays currently executing builds.

        Buildbot running Builds

      7. Click on the running build to view more details. The build will display each step along with logging output:

        Buildbot Build Page

        Each step of the build process can be followed as the build progresses. While the build is running, click on a step to view standard output logs. A successful build will complete each step with an exit code of 0.

        Your Buildbot host will now actively build pushes to any branch or any pull requests to your repository.

      Features to Explore

      Now that you have a simple build configuration for your Buildbot instance, you can continue to add features to your CI server. Some useful functions that Buildbot supports include:

      • Reporters, which can notify you about build failures over IRC, GitHub comments, or email.
      • Workers that execute builds in Docker containers or in temporary cloud instances instead of static hosts.
      • Web server features, including the ability to generate badges for your repository indicating the current build status of the project.

      More Information

      You may wish to consult the following resources for additional information on this topic. While these are provided in the hope that they will be useful, please note that we cannot vouch for the accuracy or timeliness of externally hosted materials.


      This guide is published under a CC BY-ND 4.0 license.


