
      How To Test a Node.js Module with Mocha and Assert


      The author selected the Open Internet/Free Speech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      Testing is an integral part of software development. It’s common for programmers to run code that tests their application as they make changes in order to confirm it’s behaving as they’d like. With the right test setup, this process can even be automated, saving a lot of time. Running tests consistently after writing new code ensures that new changes don’t break pre-existing features. This gives the developer confidence in their code base, especially when it gets deployed to production so users can interact with it.

      A test framework structures the way we create test cases. Mocha is a popular JavaScript test framework that organizes our test cases and runs them for us. However, Mocha does not verify our code’s behavior. To compare values in a test, we can use the Node.js assert module.

      In this article, you’ll write tests for a Node.js TODO list module. You will set up and use the Mocha test framework to structure your tests. Then you’ll use the Node.js assert module to create the tests themselves. In this sense, you will be using Mocha as a plan builder, and assert to implement the plan.

Prerequisites

To complete this tutorial, you will need Node.js and npm installed on your development machine, since the tests are run with npm's CLI.

      Step 1 — Writing a Node Module

      Let’s begin this article by writing the Node.js module we’d like to test. This module will manage a list of TODO items. Using this module, we will be able to list all the TODOs that we are keeping track of, add new items, and mark some as complete. Additionally, we’ll be able to export a list of TODO items to a CSV file. If you’d like a refresher on writing Node.js modules, you can read our article on How To Create a Node.js Module.

      First, we need to set up the coding environment. Create a folder with the name of your project in your terminal. This tutorial will use the name todos:
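• mkdir todos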

      Then enter that folder:
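• cd todos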

      Now initialize npm, since we’ll be using its CLI functionality to run the tests later:
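• npm init -y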

      We only have one dependency, Mocha, which we will use to organize and run our tests. To download and install Mocha, use the following:

• npm i --save-dev mocha

      We install Mocha as a dev dependency, as it’s not required by the module in a production setting. If you would like to learn more about Node.js packages or npm, check out our guide on How To Use Node.js Modules with npm and package.json.

      Finally, let’s create our file that will contain our module’s code:
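• touch index.js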

      With that, we’re ready to create our module. Open index.js in a text editor like nano:
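• nano index.js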

      Let’s begin by defining the Todos class. This class contains all the functions that we need to manage our TODO list. Add the following lines of code to index.js:

      todos/index.js

      class Todos {
          constructor() {
              this.todos = [];
          }
      }
      
      module.exports = Todos;
      

We begin the file by creating a Todos class. Its constructor() function takes no arguments, so we don’t need to provide any values to instantiate an object of this class. All we do when we initialize a Todos object is create a todos property that’s an empty array.

The module.exports line allows other Node.js modules to require our Todos class. Without explicitly exporting the class, the test file that we will create later would not be able to use it.

      Let’s add a function to return the array of todos we have stored. Write in the following highlighted lines:

      todos/index.js

      class Todos {
          constructor() {
              this.todos = [];
          }
      
          list() {
              return [...this.todos];
          }
      }
      
      module.exports = Todos;
      

Our list() function returns a copy of the array that’s used by the class. It makes the copy using JavaScript’s spread syntax. We make a copy of the array so that changes the user makes to the array returned by list() do not affect the array used by the Todos object.

Note: JavaScript arrays are reference types. This means that for any variable assignment to an array or function invocation with an array as a parameter, JavaScript refers to the original array that was created. For example, if we have an array with three items called x, and create a new variable y such that y = x, y and x both refer to the same thing. Any changes we make to the array with y impact variable x and vice versa.
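To make this concrete, here is a quick snippet you can paste into the Node.js REPL; it shows the shared reference, and the independent copy that spread syntax creates:

const x = ['a', 'b', 'c'];
const y = x;           // y refers to the same array as x
y.push('d');
console.log(x);        // prints [ 'a', 'b', 'c', 'd' ], changed through y

const copy = [...x];   // spread syntax creates a new array
copy.push('e');
console.log(x);        // still [ 'a', 'b', 'c', 'd' ], unchanged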

      Now let’s write the add() function, which adds a new TODO item:

      todos/index.js

      class Todos {
          constructor() {
              this.todos = [];
          }
      
          list() {
              return [...this.todos];
          }
      
          add(title) {
              let todo = {
                  title: title,
                  completed: false,
              }
      
              this.todos.push(todo);
          }
      }
      
      module.exports = Todos;
      

      Our add() function takes a string, and places it in a new JavaScript object’s title property. The new object also has a completed property, which is set to false by default. We then add this new object to our array of TODOs.

      Important functionality in a TODO manager is to mark items as completed. For this implementation, we will loop through our todos array to find the TODO item the user is searching for. If one is found, we’ll mark it as completed. If none is found, we’ll throw an error.

      Add the complete() function like this:

      todos/index.js

      class Todos {
          constructor() {
              this.todos = [];
          }
      
          list() {
              return [...this.todos];
          }
      
          add(title) {
              let todo = {
                  title: title,
                  completed: false,
              }
      
              this.todos.push(todo);
          }
      
          complete(title) {
              let todoFound = false;
              this.todos.forEach((todo) => {
                  if (todo.title === title) {
                      todo.completed = true;
                      todoFound = true;
                      return;
                  }
              });
      
              if (!todoFound) {
                  throw new Error(`No TODO was found with the title: "${title}"`);
              }
          }
      }
      
      module.exports = Todos;
      

      Save the file and exit from the text editor.

      We now have a basic TODO manager that we can experiment with. Next, let’s manually test our code to see if the application is working.

      Step 2 — Manually Testing the Code

      In this step, we will run our code’s functions and observe the output to ensure it matches our expectations. This is called manual testing. It’s likely the most common testing methodology programmers apply. Although we will automate our testing later with Mocha, we will first manually test our code to give a better sense of how manual testing differs from testing frameworks.

      Let’s add two TODO items to our app and mark one as complete. Start the Node.js REPL in the same folder as the index.js file:
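• node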

      You will see the > prompt in the REPL that tells us we can enter JavaScript code. Type the following at the prompt:

      • const Todos = require('./index');

      With require(), we load the TODOs module into a Todos variable. Recall that our module returns the Todos class by default.

      Now, let’s instantiate an object for that class. In the REPL, add this line of code:

      • const todos = new Todos();

      We can use the todos object to verify our implementation works. Let’s add our first TODO item:
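• todos.add("run code");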

      So far we have not seen any output in our terminal. Let’s verify that we’ve stored our "run code" TODO item by getting a list of all our TODOs:
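• todos.list();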

      You will see this output in your REPL:

      Output

      [ { title: 'run code', completed: false } ]

      This is the expected result: We have one TODO item in our array of TODOs, and it’s not completed by default.

      Let’s add another TODO item:

      • todos.add("test everything");

      Mark the first TODO item as completed:

      • todos.complete("run code");

      Our todos object will now be managing two items: "run code" and "test everything". The "run code" TODO will be completed as well. Let’s confirm this by calling list() once again:
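• todos.list();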

      The REPL will output:

      Output

[
  { title: 'run code', completed: true },
  { title: 'test everything', completed: false }
]

      Now, exit the REPL with the following:
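• .exit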

      We’ve confirmed that our module behaves as we expect it to. While we didn’t put our code in a test file or use a testing library, we did test our code manually. Unfortunately, this form of testing becomes time consuming to do every time we make a change. Next, let’s use automated testing in Node.js and see if we can solve this problem with the Mocha testing framework.

      Step 3 — Writing Your First Test with Mocha and Assert

      In the last step, we manually tested our application. This will work for individual use cases, but as our module scales, this method becomes less viable. As we test new features, we must be certain that the added functionality has not created problems in the old functionality. We would like to test each feature over again for every change in the code, but doing this by hand would take a lot of effort and would be prone to error.

      A more efficient practice would be to set up automated tests. These are scripted tests written like any other code block. We run our functions with defined inputs and inspect their effects to ensure they behave as we expect. As our codebase grows, so will our automated tests. When we write new tests alongside the features, we can verify the entire module still works—all without having to remember how to use each function every time.

      In this tutorial, we’re using the Mocha testing framework with the Node.js assert module. Let’s get some hands-on experience to see how they work together.

      To begin, create a new file to store our test code:
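• touch index.test.js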

      Now use your preferred text editor to open the test file. You can use nano like before:
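• nano index.test.js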

      In the first line of the text file, we will load the TODOs module like we did in the Node.js shell. We will then load the assert module for when we write our tests. Add the following lines:

      todos/index.test.js

      const Todos = require('./index');
      const assert = require('assert').strict;
      

      The strict property of the assert module will allow us to use special equality tests that are recommended by Node.js and are good for future-proofing, since they account for more use cases.
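For example, the legacy (non-strict) assertion methods compare with loose equality (==), which can mask type mismatches; the strict variants do not. A quick comparison:

const assert = require('assert').strict;
const legacy = require('assert');

legacy.equal(1, '1');        // passes, because == coerces the string to a number
assert.strictEqual(1, '1');  // throws AssertionError, because 1 !== '1'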

      Before we go into writing tests, let’s discuss how Mocha organizes our code. Tests structured in Mocha usually follow this template:

      describe([String with Test Group Name], function() {
          it([String with Test Name], function() {
              [Test Code]
          });
      });
      

Notice two key functions: describe() and it(). The describe() function is used to group similar tests. It’s not required for Mocha to run tests, but grouping tests makes our test code easier to maintain. It’s recommended that you group your tests so that related ones are easy to update together.

The it() function contains our test code. This is where we interact with our module’s functions and use the assert library. Many it() functions can be defined within a describe() function.

      Our goal in this section is to use Mocha and assert to automate our manual test. We’ll do this step-by-step, beginning with our describe block. Add the following to your file after the module lines:

      todos/index.test.js

      ...
      describe("integration test", function() {
      });
      

      With this code block, we’ve created a grouping for our integrated tests. Unit tests would test one function at a time. Integration tests verify how well functions within or across modules work together. When Mocha runs our test, all the tests within that describe block will run under the "integration test" group.

      Let’s add an it() function so we can begin testing our module’s code:

      todos/index.test.js

      ...
      describe("integration test", function() {
          it("should be able to add and complete TODOs", function() {
          });
      });
      

      Notice how descriptive we made the test’s name. If anyone runs our test, it will be immediately clear what’s passing or failing. A well-tested application is typically a well-documented application, and tests can sometimes be an effective kind of documentation.

      For our first test, we will create a new Todos object and verify it has no items in it:

      todos/index.test.js

      ...
      describe("integration test", function() {
          it("should be able to add and complete TODOs", function() {
              let todos = new Todos();
              assert.notStrictEqual(todos.list().length, 1);
          });
      });
      

In the first new line of code, we instantiate a new Todos object as we would in the Node.js REPL or another module. In the second new line, we use the assert module.

      From the assert module we use the notStrictEqual() method. This function takes two parameters: the value that we want to test (called the actual value) and the value we expect to get (called the expected value). If both arguments are the same, notStrictEqual() throws an error to fail the test.

      Save and exit from index.test.js.

This assertion will pass for now, since the length should be 0, which isn’t 1. Let’s confirm this by running Mocha. To do this, we need to modify our package.json file. Open your package.json file with your text editor:
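• nano package.json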

      Now, in your scripts property, change it so it looks like this:

      todos/package.json

      ...
      "scripts": {
          "test": "mocha index.test.js"
      },
      ...
      

      We have just changed the behavior of npm’s CLI test command. When we run npm test, npm will review the command we just entered in package.json. It will look for the Mocha library in our node_modules folder and run the mocha command with our test file.

      Save and exit package.json.

      Let’s see what happens when we run our test. In your terminal, enter:
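• npm test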

      The command will produce the following output:

      Output

> todos@1.0.0 test your_file_path/todos
> mocha index.test.js

  integration test
    ✓ should be able to add and complete TODOs

  1 passing (16ms)

      This output first shows us which group of tests it is about to run. For every individual test within a group, the test case is indented. We see our test name as we described it in the it() function. The tick at the left side of the test case indicates that the test passed.

      At the bottom, we get a summary of all our tests. In our case, our one test is passing and was completed in 16ms (the time varies from computer to computer).

Our testing has started off successfully. However, the current test case can allow for false positives. A false positive is a test case that passes when it should fail.

      We currently check that the length of the array is not equal to 1. Let’s modify the test so that this condition holds true when it should not. Add the following lines to index.test.js:

      todos/index.test.js

      ...
      describe("integration test", function() {
          it("should be able to add and complete TODOs", function() {
              let todos = new Todos();
              todos.add("get up from bed");
              todos.add("make up bed");
              assert.notStrictEqual(todos.list().length, 1);
          });
      });
      

      Save and exit the file.

      We added two TODO items. Let’s run the test to see what happens:
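• npm test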

      This will give the following:

      Output

...
  integration test
    ✓ should be able to add and complete TODOs

  1 passing (8ms)

This passes as expected, as the length is 2, which isn’t equal to 1. However, it defeats the original purpose of having that first test. The first test is meant to confirm that we start with a blank state. A better test will confirm that in all cases.

      Let’s change the test so it only passes if we have absolutely no TODOs in store. Make the following changes to index.test.js:

      todos/index.test.js

      ...
      describe("integration test", function() {
          it("should be able to add and complete TODOs", function() {
              let todos = new Todos();
              todos.add("get up from bed");
              todos.add("make up bed");
              assert.strictEqual(todos.list().length, 0);
          });
      });
      

You changed notStrictEqual() to strictEqual(), a function that checks for equality between its actual and expected arguments. Strict equality will fail if the arguments are not exactly the same.

      Save and exit, then run the test so we can see what happens:
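• npm test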

      This time, the output will show an error:

      Output

...
  integration test
    1) should be able to add and complete TODOs

  0 passing (16ms)
  1 failing

  1) integration test
       should be able to add and complete TODOs:

      AssertionError [ERR_ASSERTION]: Input A expected to strictly equal input B:
      + expected - actual

      - 2
      + 0

      at Context.<anonymous> (index.test.js:9:10)

npm ERR! Test failed. See above for more details.

      This text will be useful for us to debug why the test failed. Notice that since the test failed there was no tick at the beginning of the test case.

Our test summary is no longer at the bottom of the output, but appears right after the list of test cases:

      ...
      0 passing (29ms)
        1 failing
      ...
      

      The remaining output provides us with data about our failing tests. First, we see what test case has failed:

      ...
1) integration test
             should be able to add and complete TODOs:
      ...
      

      Then, we see why our test failed:

...
      AssertionError [ERR_ASSERTION]: Input A expected to strictly equal input B:
      + expected - actual

      - 2
      + 0

      at Context.<anonymous> (index.test.js:9:10)
...
      

      An AssertionError is thrown when strictEqual() fails. We see that the expected value, 0, is different from the actual value, 2.

We then see the line in our test file where the code fails. In this case, it’s line 9.

Now, we’ve seen for ourselves that our test will fail if we expect incorrect values. Let’s change our test case back to its correct value. First, open the file:
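• nano index.test.js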

      Then take out the todos.add lines so that your code looks like the following:

      todos/index.test.js

      ...
      describe("integration test", function () {
          it("should be able to add and complete TODOs", function () {
              let todos = new Todos();
              assert.strictEqual(todos.list().length, 0);
          });
      });
      

      Save and exit the file.

      Run it once more to confirm that it passes without any potential false-positives:
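• npm test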

      The output will be as follows:

      Output

...
  integration test
    ✓ should be able to add and complete TODOs

  1 passing (15ms)

      We’ve now improved our test’s resiliency quite a bit. Let’s move forward with our integration test. The next step is to add a new TODO item to index.test.js:

      todos/index.test.js

      ...
      describe("integration test", function() {
          it("should be able to add and complete TODOs", function() {
              let todos = new Todos();
              assert.strictEqual(todos.list().length, 0);
      
              todos.add("run code");
              assert.strictEqual(todos.list().length, 1);
              assert.deepStrictEqual(todos.list(), [{title: "run code", completed: false}]);
          });
      });
      

      After using the add() function, we confirm that we now have one TODO being managed by our todos object with strictEqual(). Our next test confirms the data in the todos with deepStrictEqual(). The deepStrictEqual() function recursively tests that our expected and actual objects have the same properties. In this case, it tests that the arrays we expect both have a JavaScript object within them. It then checks that their JavaScript objects have the same properties, that is, that both their title properties are "run code" and both their completed properties are false.
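To see why the deep variant is needed here, note that strictEqual() compares objects by reference, while deepStrictEqual() compares them by their contents. A small illustration:

const assert = require('assert').strict;

const a = { title: "run code", completed: false };
const b = { title: "run code", completed: false };

assert.deepStrictEqual(a, b);  // passes: same properties and values
assert.strictEqual(a, b);      // throws: a and b are distinct objects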

      We then complete the remaining tests using these two equality checks as needed by adding the following highlighted lines:

      todos/index.test.js

      ...
      describe("integration test", function() {
          it("should be able to add and complete TODOs", function() {
              let todos = new Todos();
              assert.strictEqual(todos.list().length, 0);
      
              todos.add("run code");
              assert.strictEqual(todos.list().length, 1);
              assert.deepStrictEqual(todos.list(), [{title: "run code", completed: false}]);
      
              todos.add("test everything");
              assert.strictEqual(todos.list().length, 2);
              assert.deepStrictEqual(todos.list(),
                  [
                      { title: "run code", completed: false },
                      { title: "test everything", completed: false }
                  ]
              );
      
              todos.complete("run code");
              assert.deepStrictEqual(todos.list(),
                  [
                      { title: "run code", completed: true },
                      { title: "test everything", completed: false }
                  ]
        );
    });
      });
      

      Save and exit the file.

Our test now mimics our manual test. With these programmatic tests, we no longer need to inspect the output manually; if the tests pass when we run them, we know the code behaves as expected. You typically want to test every aspect of use to make sure the code is tested properly.

      Let’s run our test with npm test once more to get this familiar output:

      Output

...
  integration test
    ✓ should be able to add and complete TODOs

  1 passing (9ms)

You’ve now set up an integration test with the Mocha framework and the assert library.

Let’s consider a situation where we’ve shared our module with some other developers and they’re now giving us feedback. A good portion of our users would like the complete() function to throw an error if no TODOs have been added yet. Let’s add this functionality to our complete() function.

      Open index.js in your text editor:
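• nano index.js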

      Add the following to the function:

      todos/index.js

      ...
      complete(title) {
          if (this.todos.length === 0) {
              throw new Error("You have no TODOs stored. Why don't you add one first?");
          }
      
    let todoFound = false;
          this.todos.forEach((todo) => {
              if (todo.title === title) {
                  todo.completed = true;
                  todoFound = true;
                  return;
              }
          });
      
          if (!todoFound) {
              throw new Error(`No TODO was found with the title: "${title}"`);
          }
      }
      ...
      

      Save and exit the file.

Now let’s add a new test for this new feature. We want to verify that if we call complete() on a Todos object that has no items, it will throw our special error.

      Go back into index.test.js:
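• nano index.test.js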

      At the end of the file, add the following code:

      todos/index.test.js

      ...
      describe("complete()", function() {
          it("should fail if there are no TODOs", function() {
              let todos = new Todos();
              const expectedError = new Error("You have no TODOs stored. Why don't you add one first?");
      
              assert.throws(() => {
                  todos.complete("doesn't exist");
              }, expectedError);
          });
      });
      

      We use describe() and it() like before. Our test begins with creating a new todos object. We then define the error we are expecting to receive when we call the complete() function.

      Next, we use the throws() function of the assert module. This function was created so we can verify the errors that are thrown in our code. Its first argument is a function that contains the code that throws the error. The second argument is the error we are expecting to receive.

      In your terminal, run the tests with npm test once again and you will now see the following output:

      Output

...
  integration test
    ✓ should be able to add and complete TODOs

  complete()
    ✓ should fail if there are no TODOs

  2 passing (25ms)

This output highlights the benefit of automated testing with Mocha and assert. Because our tests are scripted, every time we run npm test, we verify that all our tests are passing. We did not need to manually check whether the other code still works; we know that it does because our existing tests still passed.

      So far, our tests have verified the results of synchronous code. Let’s see how we would need to adapt our newfound testing habits to work with asynchronous code.

      Step 4 — Testing Asynchronous Code

      One of the features we want in our TODO module is a CSV export feature. This will print all the TODOs we have in store along with the completed status to a file. This requires that we use the fs module—a built-in Node.js module for working with the file system.

      Writing to a file is an asynchronous operation. There are many ways to write to a file in Node.js. We can use callbacks, Promises, or the async/await keywords. In this section, we’ll look at how we write tests for those different methods.

      Callbacks

      A callback function is one used as an argument to an asynchronous function. It is called when the asynchronous operation is completed.

      Let’s add a function to our Todos class called saveToFile(). This function will build a string by looping through all our TODO items and writing that string to a file.

      Open your index.js file:
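• nano index.js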

      In this file, add the following highlighted code:

      todos/index.js

      const fs = require('fs');
      
      class Todos {
          constructor() {
              this.todos = [];
          }
      
          list() {
              return [...this.todos];
          }
      
          add(title) {
              let todo = {
                  title: title,
                  completed: false,
              }
              this.todos.push(todo);
          }
      
          complete(title) {
              if (this.todos.length === 0) {
                  throw new Error("You have no TODOs stored. Why don't you add one first?");
              }
      
        let todoFound = false;
              this.todos.forEach((todo) => {
                  if (todo.title === title) {
                      todo.completed = true;
                      todoFound = true;
                      return;
                  }
              });
      
              if (!todoFound) {
                  throw new Error(`No TODO was found with the title: "${title}"`);
              }
          }
      
          saveToFile(callback) {
        let fileContents = 'Title,Completed\n';
              this.todos.forEach((todo) => {
            fileContents += `${todo.title},${todo.completed}\n`;
              });
      
              fs.writeFile('todos.csv', fileContents, callback);
          }
      }
      
      module.exports = Todos;
      

We first import the fs module at the top of the file. Then we add our new saveToFile() function, which takes a callback that will be invoked once the file write operation is complete. Inside saveToFile(), we create a fileContents variable that stores the entire string we want saved to the file. It’s initialized with the CSV’s headers. We then loop through each TODO item with the internal array’s forEach() method, appending the title and completed properties of each todo object.

      Finally, we use the fs module to write the file with the writeFile() function. Our first argument is the file name: todos.csv. The second is the contents of the file, in this case, our fileContents variable. Our last argument is our callback function, which handles any file writing errors.

      Save and exit the file.

      Let’s now write a test for our saveToFile() function. Our test will do two things: confirm that the file exists in the first place, and then verify that it has the right contents.

      Open the index.test.js file:
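• nano index.test.js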

Let’s begin by loading the fs module at the top of the file, as we’ll use it to help test our results:

      todos/index.test.js

      const Todos = require('./index');
      const assert = require('assert').strict;
      const fs = require('fs');
      ...
      

      Now, at the end of the file let’s add our new test case:

      todos/index.test.js

      ...
      describe("saveToFile()", function() {
          it("should save a single TODO", function(done) {
              let todos = new Todos();
              todos.add("save a CSV");
              todos.saveToFile((err) => {
                  assert.strictEqual(fs.existsSync('todos.csv'), true);
            let expectedFileContents = "Title,Completed\nsave a CSV,false\n";
                  let content = fs.readFileSync("todos.csv").toString();
                  assert.strictEqual(content, expectedFileContents);
                  done(err);
              });
          });
      });
      

      Like before, we use describe() to group our test separately from the others as it involves new functionality. The it() function is slightly different from our other ones. Usually, the callback function we use has no arguments. This time, we have done as an argument. We need this argument when testing functions with callbacks. The done() callback function is used by Mocha to tell it when an asynchronous function is completed.

      All callback functions being tested in Mocha must call the done() callback. If not, Mocha would never know when the function was complete and would be stuck waiting for a signal.

Continuing, we create our Todos instance and add a single item to it. We then call the saveToFile() function, with a callback that captures a file writing error. Note how our test for this function resides in the callback. If our test code were outside the callback, it would likely fail, since it would run before the file write completed.

      In our callback function, we first check that our file exists:

      todos/index.test.js

      ...
      assert.strictEqual(fs.existsSync('todos.csv'), true);
      ...
      

      The fs.existsSync() function returns true if the file path in its argument exists, false otherwise.

Note: The fs module’s functions are asynchronous by default. However, key functions also have synchronous counterparts, whose names end in "Sync". Using the synchronous functions keeps this test simpler, as we don’t have to nest more asynchronous code to verify the results.

      We then create a variable to store our expected value:

      todos/index.test.js

      ...
let expectedFileContents = "Title,Completed\nsave a CSV,false\n";
      ...
      

      We use readFileSync() of the fs module to read the file synchronously:

      todos/index.test.js

      ...
      let content = fs.readFileSync("todos.csv").toString();
      ...
      

      We now provide readFileSync() with the right path for the file: todos.csv. As readFileSync() returns a Buffer object, which stores binary data, we use its toString() method so we can compare its value with the string we expect to have saved.

      Like before, we use the assert module’s strictEqual to do a comparison:

      todos/index.test.js

      ...
      assert.strictEqual(content, expectedFileContents);
      ...
      

      We end our test by calling the done() callback, ensuring that Mocha knows to stop testing that case:

      todos/index.test.js

      ...
      done(err);
      ...
      

      We provide the err object to done() so Mocha can fail the test in the case an error occurred.

      Save and exit from index.test.js.

      Let’s run this test with npm test like before. Your console will display this output:

      Output

...
  integration test
    ✓ should be able to add and complete TODOs

  complete()
    ✓ should fail if there are no TODOs

  saveToFile()
    ✓ should save a single TODO

  3 passing (15ms)

      You’ve now tested your first asynchronous function with Mocha using callbacks. But at the time of writing this tutorial, Promises are more prevalent than callbacks in new Node.js code, as explained in our How To Write Asynchronous Code in Node.js article. Next, let’s learn how we can test them with Mocha as well.

      Promises

      A Promise is a JavaScript object that will eventually return a value. When a Promise is successful, it is resolved. When it encounters an error, it is rejected.

      Let’s modify the saveToFile() function so that it uses Promises instead of callbacks. Open up index.js:
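• nano index.js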

      First, we need to change how the fs module is loaded. In your index.js file, change the require() statement at the top of the file to look like this:

      todos/index.js

      ...
      const fs = require('fs').promises;
      ...
      

      We just imported the fs module that uses Promises instead of callbacks. Now, we need to make some changes to saveToFile() so that it works with Promises instead.

      In your text editor, make the following changes to the saveToFile() function to remove the callbacks:

      todos/index.js

      ...
      saveToFile() {
    let fileContents = 'Title,Completed\n';
          this.todos.forEach((todo) => {
        fileContents += `${todo.title},${todo.completed}\n`;
          });
      
          return fs.writeFile('todos.csv', fileContents);
      }
      ...
      

      The first difference is that our function no longer accepts any arguments. With Promises we don’t need a callback function. The second change concerns how the file is written. We now return the result of the writeFile() promise.

      Save and close out of index.js.

      Let’s now adapt our test so that it works with Promises. Open up index.test.js:
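• nano index.test.js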

      Change the saveToFile() test to this:

todos/index.test.js

      ...
      describe("saveToFile()", function() {
          it("should save a single TODO", function() {
              let todos = new Todos();
              todos.add("save a CSV");
              return todos.saveToFile().then(() => {
                  assert.strictEqual(fs.existsSync('todos.csv'), true);
            let expectedFileContents = "Title,Completed\nsave a CSV,false\n";
                  let content = fs.readFileSync("todos.csv").toString();
                  assert.strictEqual(content, expectedFileContents);
              });
          });
      });
      

The first change we need to make is to remove the done() callback from the test function’s arguments. If Mocha passes the done() argument, it needs to be called, or Mocha will fail the test with a timeout error like this:

      1) saveToFile()
             should save a single TODO:
           Error: Timeout of 2000ms exceeded. For async tests and hooks, ensure "done()" is called; if returning a Promise, ensure it resolves. (/home/ubuntu/todos/index.test.js)
            at listOnTimeout (internal/timers.js:536:17)
            at processTimers (internal/timers.js:480:7)
      

      When testing Promises, do not include the done() callback in it().

      To test our promise, we need to put our assertion code in the then() function. Notice that we return this promise in the test, and we don’t have a catch() function to catch when the Promise is rejected.

      We return the promise so that any errors that are thrown in the then() function are bubbled up to the it() function. If the errors are not bubbled up, Mocha will not fail the test case. When testing Promises, you need to use return on the Promise being tested. If not, you run the risk of getting a false-positive.

      We also omit the catch() clause because Mocha can detect when a promise is rejected. If rejected, it automatically fails the test.

Now that we have our test in place, save and exit the file, then run Mocha with npm test to confirm we get a successful result:

      Output

...
  integration test
    ✓ should be able to add and complete TODOs

  complete()
    ✓ should fail if there are no TODOs

  saveToFile()
    ✓ should save a single TODO

  3 passing (18ms)

      We’ve changed our code and test to use Promises, and now we know for sure that it works. But the most recent asynchronous patterns use async/await keywords so we don’t have to create multiple then() functions to handle successful results. Let’s see how we can test with async/await.

      async/await

      The async/await keywords make working with Promises less verbose. Once we define a function as asynchronous with the async keyword, we can get any future results in that function with the await keyword. This way we can use Promises without having to use the then() or catch() functions.

      We can simplify our saveToFile() test that’s promise based with async/await. In your text editor, make these minor edits to the saveToFile() test in index.test.js:

      todos/index.test.js

      ...
      describe("saveToFile()", function() {
          it("should save a single TODO", async function() {
              let todos = new Todos();
              todos.add("save a CSV");
              await todos.saveToFile();
      
              assert.strictEqual(fs.existsSync('todos.csv'), true);
        let expectedFileContents = "Title,Completed\nsave a CSV,false\n";
              let content = fs.readFileSync("todos.csv").toString();
              assert.strictEqual(content, expectedFileContents);
          });
      });
      

The first change is that the function used by the it() function now has the async keyword when it’s defined. This allows us to use the await keyword inside its body.

The second change is found when we call saveToFile(). The await keyword is used before it is called. Now Node.js knows to wait until the returned Promise resolves before continuing the test.

      Our function code is easier to read now that we moved the code that was in the then() function to the it() function’s body. Running this code with npm test produces this output:

      Output

...
  integration test
    ✓ should be able to add and complete TODOs

  complete()
    ✓ should fail if there are no TODOs

  saveToFile()
    ✓ should save a single TODO

  3 passing (30ms)

We can now test asynchronous functions using any of the three asynchronous paradigms, as appropriate.

      We have covered a lot of ground with testing synchronous and asynchronous code with Mocha. Next, let’s dive in a bit deeper to some other functionality that Mocha offers to improve our testing experience, particularly how hooks can change test environments.

      Step 5 — Using Hooks to Improve Test Cases

      Hooks are a useful feature of Mocha that allows us to configure the environment before and after a test. We typically add hooks within a describe() function block, as they contain setup and teardown logic specific to some test cases.

      Mocha provides four hooks that we can use in our tests:

      • before: This hook is run once before the first test begins.
      • beforeEach: This hook is run before every test case.
      • after: This hook is run once after the last test case is complete.
      • afterEach: This hook is run after every test case.

      When we test a function or feature multiple times, hooks come in handy as they allow us to separate the test’s setup code (like creating the todos object) from the test’s assertion code.
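As a minimal sketch of the order in which these hooks fire, consider the following test file; the group name and log messages here are ours for illustration, not part of the tutorial’s module:

describe("hook order", function () {
    before(() => console.log("before: runs once, before the first test"));
    beforeEach(() => console.log("beforeEach: runs before every test"));
    afterEach(() => console.log("afterEach: runs after every test"));
    after(() => console.log("after: runs once, after the last test"));

    it("first test", function () {});
    it("second test", function () {});
});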

      To see the value of hooks, let’s add more tests to our saveToFile() test block.

While we have confirmed that we can save our TODO items to a file, we only saved one item. Furthermore, the item was not marked as completed. Let’s add more tests to be sure that the various aspects of our module work.

First, let’s add a second test to confirm that our file is saved correctly when we have completed a TODO item. Open your index.test.js file in your text editor:
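• nano index.test.js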

      Change the last test to the following:

      todos/index.test.js

      ...
      describe("saveToFile()", function () {
          it("should save a single TODO", async function () {
              let todos = new Todos();
              todos.add("save a CSV");
              await todos.saveToFile();
      
              assert.strictEqual(fs.existsSync('todos.csv'), true);
        let expectedFileContents = "Title,Completed\nsave a CSV,false\n";
              let content = fs.readFileSync("todos.csv").toString();
              assert.strictEqual(content, expectedFileContents);
          });
      
          it("should save a single TODO that's completed", async function () {
              let todos = new Todos();
              todos.add("save a CSV");
              todos.complete("save a CSV");
              await todos.saveToFile();
      
              assert.strictEqual(fs.existsSync('todos.csv'), true);
        let expectedFileContents = "Title,Completed\nsave a CSV,true\n";
              let content = fs.readFileSync("todos.csv").toString();
              assert.strictEqual(content, expectedFileContents);
          });
      });
      

The test is similar to what we had before. The key differences are that we call the complete() function before we call saveToFile(), and that our expectedFileContents now has true instead of false for the completed column’s value.

      Save and exit the file.

      Let’s run our new test, and all the others, with npm test:

      This will give the following:

      Output

...
  integration test
    ✓ should be able to add and complete TODOs

  complete()
    ✓ should fail if there are no TODOs

  saveToFile()
    ✓ should save a single TODO
    ✓ should save a single TODO that's completed

  4 passing (26ms)

It works as expected. There is, however, room for improvement. Both tests have to instantiate a Todos object at the beginning, which quickly becomes repetitive and wasteful as we add more test cases. Also, each time we run the test, it creates a file. This can be mistaken for real output by someone less familiar with the module. It would be nice if we cleaned up our output files after testing.

      Let’s make these improvements using test hooks. We’ll use the beforeEach() hook to set up our test fixture of TODO items. A test fixture is any consistent state used in a test. In our case, our test fixture is a new todos object that has one TODO item added to it already. We will then use afterEach() to remove the file created by the test.

      In index.test.js, make the following changes to your last test for saveToFile():

      todos/index.test.js

      ...
      describe("saveToFile()", function () {
          beforeEach(function () {
              this.todos = new Todos();
              this.todos.add("save a CSV");
          });
      
          afterEach(function () {
              if (fs.existsSync("todos.csv")) {
                  fs.unlinkSync("todos.csv");
              }
          });
      
          it("should save a single TODO without error", async function () {
              await this.todos.saveToFile();
      
              assert.strictEqual(fs.existsSync("todos.csv"), true);
        let expectedFileContents = "Title,Completed\nsave a CSV,false\n";
              let content = fs.readFileSync("todos.csv").toString();
              assert.strictEqual(content, expectedFileContents);
          });
      
          it("should save a single TODO that's completed", async function () {
              this.todos.complete("save a CSV");
              await this.todos.saveToFile();
      
              assert.strictEqual(fs.existsSync('todos.csv'), true);
        let expectedFileContents = "Title,Completed\nsave a CSV,true\n";
              let content = fs.readFileSync("todos.csv").toString();
              assert.strictEqual(content, expectedFileContents);
          });
      });
      

      Let’s break down all the changes we’ve made. We added a beforeEach() block to the test block:

      todos/index.test.js

      ...
      beforeEach(function () {
          this.todos = new Todos();
          this.todos.add("save a CSV");
      });
      ...
      

      These two lines of code create a new Todos object that will be available in each of our tests. With Mocha, the this object in beforeEach() refers to the same this object in it(). this is the same for every code block inside the describe() block. For more information on this, see our tutorial Understanding This, Bind, Call, and Apply in JavaScript.

      This powerful context sharing is why we can quickly create test fixtures that work for both of our tests.

      We then clean up our CSV file in the afterEach() function:

      todos/index.test.js

      ...
      afterEach(function () {
          if (fs.existsSync("todos.csv")) {
              fs.unlinkSync("todos.csv");
          }
      });
      ...
      

      If our test failed, then it may not have created a file. That’s why we check if the file exists before we use the unlinkSync() function to delete it.

The remaining changes switch the reference from todos, which was previously created in the it() function, to this.todos, which is available in the Mocha context. We also deleted the lines that previously instantiated todos in the individual test cases.

      Now, let’s run this file to confirm our tests still work. Enter npm test in your terminal to get:

      Output

...
  integration test
    ✓ should be able to add and complete TODOs

  complete()
    ✓ should fail if there are no TODOs

  saveToFile()
    ✓ should save a single TODO without error
    ✓ should save a single TODO that's completed

  4 passing (20ms)

The results are the same, and as a benefit, we have slightly reduced the setup time for new tests of the saveToFile() function and eliminated the leftover CSV file.

      Conclusion

      In this tutorial, you wrote a Node.js module to manage TODO items and tested the code manually using the Node.js REPL. You then created a test file and used the Mocha framework to run automated tests. With the assert module, you were able to verify that your code works. You also tested synchronous and asynchronous functions with Mocha. Finally, you created hooks with Mocha that make writing multiple related test cases much more readable and maintainable.

      Equipped with this understanding, challenge yourself to write tests for new Node.js modules that you are creating. Can you think about the inputs and outputs of your function and write your test before you write your code?

      If you would like more information about the Mocha testing framework, check out the official Mocha documentation. If you’d like to continue learning Node.js, you can return to the How To Code in Node.js series page.




      How To Install and Use Radamsa to Fuzz Test Programs and Network Services on Ubuntu 18.04


      The author selected the Electronic Frontier Foundation Inc to receive a donation as part of the Write for DOnations program.

      Introduction

      Security threats are continually becoming more sophisticated, so developers and systems administrators need to take a proactive approach in defending and testing the security of their applications.

      A common method for testing the security of client applications or network services is fuzzing, which involves repeatedly sending invalid or malformed data to the application and analyzing its response. This is useful to help test how resilient and robust the application is to unexpected input, which may include corrupted data or actual attacks.

      Radamsa is an open-source fuzzing tool that can generate test cases based on user-specified input data. Radamsa is fully scriptable, and so far has been successful in finding vulnerabilities in real-world applications, such as Gzip.

      In this tutorial, you will install and use Radamsa to fuzz test command-line and network-based applications using your own test cases.

      Warning: Radamsa is a penetration testing tool which may allow you to identify vulnerabilities or weaknesses in certain systems or applications. You must not use vulnerabilities found with Radamsa for any form of reckless behavior, harm, or malicious exploitation. Vulnerabilities should be ethically reported to the maintainer of the affected application, and not disclosed publicly without explicit permission.

      Prerequisites

      Before you begin this guide you’ll need the following:

      • One Ubuntu 18.04 server set up by following the Initial Server Setup with Ubuntu 18.04, including a sudo non-root user and enabled firewall to block non-essential ports.
      • A command-line or network-based application that you wish to test, for example Gzip, Tcpdump, Bind, Apache, jq, or any other application of your choice. As an example for the purposes of this tutorial, we’ll use jq.

      Warning: Radamsa can cause applications or systems to run unstably or crash, so only run Radamsa in an environment where you are prepared for this, such as a dedicated server. Please also ensure that you have explicit written permission from the owner of a system before conducting fuzz testing against it.

      Once you have these ready, log in to your server as your non-root user to begin.

      Step 1 — Installing Radamsa

      Firstly, you will download and compile Radamsa in order to begin using it on your system. The Radamsa source code is available in the official repository on GitLab.

      Begin by updating the local package index to reflect any new upstream changes:
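• sudo apt update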

      Then, install the gcc, git, make, and wget packages needed to compile the source code into an executable binary:

      • sudo apt install gcc git make wget

      After confirming the installation, apt will download and install the specified packages and all of their required dependencies.

      Next, you’ll download a copy of the source code for Radamsa by cloning it from the repository hosted on GitLab:

      • git clone https://gitlab.com/akihe/radamsa.git

      This will create a directory called radamsa, containing the source code for the application. Move into the directory to begin compiling the code:
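• cd radamsa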

      Next, you can start the compilation process using make:
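• make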

      Finally, you can install the compiled Radamsa binary to your $PATH:
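• sudo make install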

      Once this is complete, you can check the installed version to make sure that everything is working:
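• radamsa --version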

      Your output will look similar to the following:

      Output

      Radamsa 0.6

      If you see a radamsa: command not found error, double-check that all required dependencies were installed and that there were no errors during compilation.

      Now that you’ve installed Radamsa, you can begin to generate some sample test cases to understand how Radamsa works and what it can be used for.

      Step 2 — Generating Fuzzing Test Cases

      Now that Radamsa has been installed, you can use it to generate some fuzzing test cases.

      A test case is a piece of data that will be used as input to the program that you are testing. For example, if you are fuzz testing an archiving program such as Gzip, a test case may be a file archive that you are attempting to decompress.

      Note: Radamsa will manipulate input data in a wide variety of unexpected ways, including extreme repetition, bit flips, control character injection, and so on. This may cause your terminal session to break or become unstable, so be aware of this before proceeding.

      Firstly, pass a simple piece of text to Radamsa to see what happens:

      • echo "Hello, world!" | radamsa

      This will manipulate (or fuzz) the inputted data and output a test case, for example:

      Output

      Hello,, world!

      In this case, Radamsa added an extra comma between Hello and world. This may not seem like a significant change, but in some applications this may cause the data to be interpreted incorrectly.

      Let’s try again by running the same command. You’ll see different output:
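• echo "Hello, world!" | radamsa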

      Output

      Hello, '''''''wor'd!

      This time, multiple single quotes (') were inserted into the string, including one that overwrote the l in world. This particular test case is more likely to result in problems for an application, as single/double quotes are often used to separate different pieces of data in a list.

      Let’s try one more time:
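• echo "Hello, world!" | radamsa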

      Output

Hello, $+$PATH\u0000`xcalc`world!

      In this case, Radamsa inserted a shell injection string, which will be useful to test for command injection vulnerabilities in the application that you are testing.

      You’ve used Radamsa to fuzz an input string and produce a series of test cases. Next, you will use Radamsa to fuzz a command-line application.

      Step 3 — Fuzzing a Command-line Application

      In this step, you’ll use Radamsa to fuzz a command-line application and report on any crashes that occur.

      The exact technique for fuzzing each program varies massively, and different methods will be most effective for different programs. However, in this tutorial we will use the example of jq, which is a command-line program for processing JSON data.

      You may use any other similar program as long as it follows the general principle of taking some form of structured or unstructured data, doing something with it, and then outputting a result. For instance this example would also work with Gzip, Grep, bc, tr, and so on.

      If you don’t already have jq installed, you can install it using apt:
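• sudo apt install jq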

      jq will now be installed.

      To begin fuzzing, create a sample JSON file that you’ll use as the input to Radamsa:
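• nano test.json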

      Then, add the following sample JSON data to the file:

      test.json

      {
        "test": "test",
        "array": [
          "item1: foo",
          "item2: bar"
        ]
      }
      

      You can parse this file using jq if you wish to check that the JSON syntax is valid:
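• jq . test.json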

      If the JSON is valid, jq will output the file. Otherwise, it will display an error, which you can use to correct the syntax where required.

      Next, fuzz the test JSON file using Radamsa and then pass it to jq. This will cause jq to read the fuzzed/manipulated test case, rather than the original valid JSON data:
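• radamsa test.json | jq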

      If Radamsa fuzzes the JSON data in a way that it is still syntactically valid, jq will output the data, but with whatever changes Radamsa made to it.

      Alternatively, if Radamsa causes the JSON data to become invalid, jq will display a relevant error. For example:

      Output

      parse error: Expected separator between values at line 5, column 16

      The alternate outcome would be that jq is unable to correctly handle the fuzzed data, causing it to crash or misbehave. This is what you’re really looking for with fuzzing, as this may be indicative of a security vulnerability such as a buffer overflow or command injection.

      In order to more efficiently test for vulnerabilities like this, a Bash script can be used to automate the fuzzing process, including generating test cases, passing them to the target program and capturing any relevant output.

      Create a file named jq-fuzz.sh:
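• nano jq-fuzz.sh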

      The exact script content will vary depending on the type of program that you’re fuzzing and the input data, but in the case of jq and other similar programs, the following script suffices.

      Copy the script into your jq-fuzz.sh file:

      jq-fuzz.sh

      #!/bin/bash
      while true; do
        radamsa test.json > input.txt
        jq . input.txt > /dev/null 2>&1
        if [ $? -gt 127 ]; then
    cp input.txt crash-`date +%s.%N`.txt
          echo "Crash found!"
        fi
      done
      

This script contains a while loop that makes its contents run repeatedly. Each time the script loops, Radamsa will generate a test case based on test.json and save it to input.txt.

      The input.txt test case will then be run through jq, with all standard and error output redirected to /dev/null to avoid filling up the terminal screen.

Finally, the exit value of jq is checked. An exit value greater than 127 indicates a fatal termination (a crash), in which case the input data is saved for later review in a file named crash- followed by the current time in Unix seconds and nanoseconds.
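
To see what such an exit value looks like, here is a quick illustration you can run in any shell (it is not part of the tutorial's files): a process killed by a signal exits with 128 plus the signal number, so a segmentation fault (signal 11) yields 139, which is greater than 127 and would be treated as a crash by the script:

• sh -c 'kill -SEGV $$'; echo $?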

      Mark the script as executable and set it running in order to begin automatically fuzz testing jq:

      • chmod +x jq-fuzz.sh
      • ./jq-fuzz.sh

      You can issue CTRL+C at any time to terminate the script. You can then check whether any crashes have been found by using ls to display a directory listing containing any crash files that have been created.

      You may wish to improve your JSON input data since using a more complex input file is likely to improve the quality of your fuzzing results. Avoid using a large file or one that contains a lot of repeated data—an ideal input file is one that is small in size, yet still contains as many ‘complex’ elements as possible. For example, a good input file will contain samples of data stored in all formats, including strings, integers, booleans, lists, and objects, as well as nested data where possible.
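
For example, a small but varied input file along the following lines (an illustrative sample, not part of the tutorial's files) would exercise more of jq's parser than the minimal test.json used above:

{
  "string": "foo",
  "integer": 42,
  "float": 3.14,
  "boolean": true,
  "null_value": null,
  "list": [1, "two", false],
  "nested": { "object": { "key": "value" } }
}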

      You’ve used Radamsa to fuzz a command-line application. Next, you’ll use Radamsa to fuzz requests to network services.

      Step 4 — Fuzzing Requests to Network Services

      Radamsa can also be used to fuzz network services, either acting as a network client or server. In this step, you’ll use Radamsa to fuzz a network service, with Radamsa acting as the client.

The purpose of fuzzing network services is to test how resilient a particular network service is to clients sending it malformed and/or malicious data. Many network services, such as web servers or DNS servers, are usually exposed to the internet, making them a common target for attackers. A network service that is not sufficiently resistant to receiving malformed data may crash or, even worse, fail in an open state, allowing attackers to read sensitive data such as encryption keys or user data.

The specific technique for fuzzing network services varies enormously depending on the network service in question; however, in this example we will use Radamsa to fuzz a basic web server serving static HTML content.

      Firstly, you need to set up the web server to use for testing. You can do this using the built-in development server that comes with the php-cli package. You’ll also need curl in order to test your web server.

      If you don’t have php-cli and/or curl installed, you can install them using apt:

      • sudo apt install php-cli curl

      Next, create a directory to store your web server files in and move into it:
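
• mkdir ~/www
• cd ~/www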

Then, create an HTML file containing some sample text:
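
• nano index.html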

      Add the following to the file:

      index.html

      <h1>Hello, world!</h1>
      

You can now run your PHP web server. You'll need to be able to view the web server log while it runs, so open another terminal session and SSH to your server for this:

      • cd ~/www
      • php -S localhost:8080

      This will output something similar to the following:

      Output

PHP 7.2.24-0ubuntu0.18.04.1 Development Server started at Wed Jan  1 16:06:41 2020
Listening on http://localhost:8080
Document root is /home/user/www
Press Ctrl-C to quit.

      You can now switch back to your original terminal session and test that the web server is working using curl:
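
• curl localhost:8080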

      This will output the sample index.html file that you created earlier:

      Output

      <h1>Hello, world!</h1>

      Your web server only needs to be accessible locally, so you should not open any ports on your firewall for it.

      Now that you’ve set up your test web server, you can begin to fuzz test it using Radamsa.

      First, you’ll need to create a sample HTTP request to use as the input data for Radamsa. Create a new file to store this in:
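
• nano http-request.txt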

      Then, copy the following sample HTTP request into the file:

      http-request.txt

      GET / HTTP/1.1
      Host: localhost:8080
      User-Agent: test
      Accept: */*
      

      Next, you can use Radamsa to submit this HTTP request to your local web server. In order to do this, you’ll need to use Radamsa as a TCP client, which can be done by specifying an IP address and port to connect to:

      • radamsa -o 127.0.0.1:8080 http-request.txt

      Note: Be aware that using Radamsa as a TCP client will potentially cause malformed/malicious data to be transmitted over the network. This may break things, so be very careful to only access networks that you are authorized to test, or preferably, stick to using the localhost (127.0.0.1) address.

      Finally, if you view the outputted logs for your local web server, you’ll see that it has received the requests, but most likely not processed them as they were invalid/malformed.

      The outputted logs will be visible in your second terminal window:

      Output

[Wed Jan  1 16:26:49 2020] 127.0.0.1:49334 Invalid request (Unexpected EOF)
[Wed Jan  1 16:28:04 2020] 127.0.0.1:49336 Invalid request (Malformed HTTP request)
[Wed Jan  1 16:28:05 2020] 127.0.0.1:49338 Invalid request (Malformed HTTP request)
[Wed Jan  1 16:28:07 2020] 127.0.0.1:49340 Invalid request (Unexpected EOF)
[Wed Jan  1 16:28:08 2020] 127.0.0.1:49342 Invalid request (Malformed HTTP request)

      For optimal results and to ensure that crashes are recorded, you may wish to write an automation script similar to the one used in Step 3. You should also consider using a more complex input file, which may contain additions such as extra HTTP headers.
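
A minimal sketch of such a script follows, assuming the web server is still running locally on port 8080 and that nc (netcat) is available to deliver the raw requests; the file name http-fuzz.sh and the curl-based health check are illustrative choices, not part of the original tutorial:

http-fuzz.sh

#!/bin/bash
while true; do
  # Generate a fuzzed request, keeping a copy so a crashing input can be reviewed
  radamsa http-request.txt > request.txt
  # Deliver the raw fuzzed request to the local web server
  nc -q 1 127.0.0.1 8080 < request.txt > /dev/null 2>&1
  # Health check: if the server no longer answers, assume the last request crashed it
  if ! curl -s -o /dev/null http://127.0.0.1:8080; then
    cp request.txt crash-$(date +%s.%N).txt
    echo "Crash found!"
    break
  fi
done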

      You’ve fuzzed a network service using Radamsa acting as a TCP client. Next, you will fuzz a network client with Radamsa acting as a server.

      Step 5 — Fuzzing Network Client Applications

      In this step, you will use Radamsa to fuzz test a network client application. This is achieved by intercepting responses from a network service and fuzzing them before they are received by the client.

      The purpose of this kind of fuzzing is to test how resilient network client applications are to receiving malformed or malicious data from network services. For example, testing a web browser (client) receiving malformed HTML from a web server (network service), or testing a DNS client receiving malformed DNS responses from a DNS server.

As was the case with fuzzing command-line applications or network services, the exact technique for fuzzing each network client application varies considerably; however, in this example you will use whois, a simple TCP-based send/receive application.

      The whois application is used to make requests to WHOIS servers and receive WHOIS records as responses. WHOIS operates over TCP port 43 in clear text, making it a good candidate for network-based fuzz testing.

      If you don’t already have whois available, you can install it using apt:
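
• sudo apt install whois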

      First, you’ll need to acquire a sample whois response to use as your input data. You can do this by making a whois request and saving the output to a file. You can use any domain you wish here as you’re testing the whois program locally using sample data:

      • whois example.com > whois.txt

      Next, you’ll need to set up Radamsa as a server that serves fuzzed versions of this whois response. You’ll need to be able to continue using your terminal once Radamsa is running in server mode, so it is recommended to open another terminal session and SSH connection to your server for this:

      • radamsa -o :4343 whois.txt -n inf

      Radamsa will now be running in TCP server mode, and will serve a fuzzed version of whois.txt each time a connection is made to the server, no matter what request data is received.

      You can now proceed to testing the whois client application. You’ll need to make a normal whois request for any domain of your choice (it doesn’t have to be the same one that the sample data is for), but with whois pointed to your local Radamsa server:

      • whois -h localhost:4343 example.com

      The response will be your sample data, but fuzzed by Radamsa. You can continue to make requests to the local server as long as Radamsa is running, and it will serve a different fuzzed response each time.

      As with fuzzing network services, to improve the efficiency of this network client fuzz testing and ensure that any crashes are captured, you may wish to write an automation script similar to the one used in Step 3.
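
A rough sketch of such a script, assuming the Radamsa server from earlier in this step is still running on port 4343, might look like the following; the file name whois-fuzz.sh is illustrative, and as in Step 3, an exit value above 127 is treated as a crash:

whois-fuzz.sh

#!/bin/bash
while true; do
  # Request a fuzzed response from the local Radamsa server, saving a copy
  whois -h localhost:4343 example.com > response.txt 2>&1
  # Exit values above 127 indicate whois was killed by a signal (a crash)
  if [ $? -gt 127 ]; then
    cp response.txt crash-$(date +%s.%N).txt
    echo "Crash found!"
  fi
done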

      In this final step, you used Radamsa to conduct fuzz testing of a network client application.

      Conclusion

      In this article you set up Radamsa and used it to fuzz a command-line application, a network service, and a network client. You now have the foundational knowledge required to fuzz test your own applications, hopefully with the result of improving their robustness and resistance to attack.

If you wish to explore Radamsa further, you may wish to review the Radamsa README file in detail, as it contains further technical information and examples of how the tool can be used.

You may also wish to check out some other fuzzing tools, such as American Fuzzy Lop (AFL), an advanced fuzzing tool designed for testing binary applications at extremely high speed and accuracy.




      How To Test Your Ansible Deployment with InSpec and Kitchen


      The author selected the Diversity in Tech Fund to receive a donation as part of the Write for DOnations program.

      Introduction

      InSpec is an open-source auditing and automated testing framework used to describe and test for regulatory concerns, recommendations, or requirements. It is designed to be human-readable and platform-agnostic. Developers can work with InSpec locally or using SSH, WinRM, or Docker to run testing, so it’s unnecessary to install any packages on the infrastructure that is being tested.

      Although with InSpec you can run tests directly on your servers, there is a potential for human error that could cause issues in your infrastructure. To avoid this scenario, developers can use Kitchen to create a virtual machine and install an OS of their choice on the machines where tests are running. Kitchen is a test runner, or test automation tool, that allows you to test infrastructure code on one or more isolated platforms. It also supports many testing frameworks and is flexible with a driver plugin architecture for various platforms such as Vagrant, AWS, DigitalOcean, Docker, LXC containers, etc.

      In this tutorial, you’ll write tests for your Ansible playbooks running on a DigitalOcean Ubuntu 18.04 Droplet. You’ll use Kitchen as the test-runner and InSpec for writing the tests. By the end of this tutorial, you’ll be able to test your Ansible playbook deployment.

      Prerequisites

      Before you begin with this guide, you’ll need a DigitalOcean account in addition to the following:

      Step 1 — Setting Up and Initializing Kitchen

As part of the prerequisites you installed ChefDK, which comes packaged with Kitchen. In this step, you'll set up Kitchen to communicate with DigitalOcean.

      Before initializing Kitchen, you’ll create and move into a project directory. In this tutorial, we’ll call it ansible_testing_dir.

      Run the following command to create the directory:

      • mkdir ~/ansible_testing_dir

      And then move into it:
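
• cd ~/ansible_testing_dir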

Using gem, install the kitchen-digitalocean package on your local machine. This allows you to tell Kitchen to use the DigitalOcean driver when running tests:

      • gem install kitchen-digitalocean

      Within the project directory, you’ll run the kitchen init command specifying ansible_playbook as the provisioner and digitalocean as the driver when initializing Kitchen:

      • kitchen init --provisioner=ansible_playbook --driver=digitalocean

      You’ll see the following output:

      Output

create kitchen.yml
create chefignore
create test/integration/default

      This has created the following within your project directory:

      • test/integration/default is the directory to which you’ll save your test files.

      • chefignore is the file you would use to ensure certain files are not uploaded to the Chef Infra Server, but you won’t be using it in this tutorial.

      • kitchen.yml is the file that describes your testing configuration: what you want to test and the target platforms.

      Now, you need to export your DigitalOcean credentials as environment variables to have access to create Droplets from your CLI. First, start with your DigitalOcean access token by running the following command:

      • export DIGITALOCEAN_ACCESS_TOKEN="YOUR_DIGITALOCEAN_ACCESS_TOKEN"

      You also need to get your SSH Key ID number; note that YOUR_DIGITALOCEAN_SSH_KEY_IDS must be the numeric ID of your SSH key, not the symbolic name. Using the DigitalOcean API, you can get the numeric ID of your keys with the following command:

      • curl -X GET https://api.digitalocean.com/v2/account/keys -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN"

      From this command you’ll see a list of your SSH Keys and related metadata. Read through the output to find the correct key and identify the ID number within the output:

      Output

      ... {"id":your-ID-number,"fingerprint":"fingerprint","public_key":"ssh-rsa your-ssh-key","name":"your-ssh-key-name" ...

Note: If you would like to make your output more readable to obtain your numeric IDs, you can find and download jq based on your OS on the jq download page. Now, you can run the previous command piped into jq as follows:

      • curl -X GET https://api.digitalocean.com/v2/account/keys -H "Authorization: Bearer $DIGITALOCEAN_ACCESS_TOKEN" | jq

      You’ll see your SSH Key information formatted similarly to:

      Output

      { "ssh_keys": [ { "id": YOUR_SSH_KEY_ID, "fingerprint": "2f:d0:16:6b", "public_key": "ssh-rsa AAAAB3NzaC1yc2 example@example.local", "name": "sannikay" } ], }

      Once you’ve identified your SSH numeric IDs, export them with the following command:

      • export DIGITALOCEAN_SSH_KEY_IDS="YOUR_DIGITALOCEAN_SSH_KEY_ID"

      You’ve initialized kitchen and set up the environment variables for your DigitalOcean credentials. Now you’ll move on to create and run tests on your DigitalOcean Droplets directly from the command line.

      Step 2 — Creating the Ansible Playbook

      In this step, you’ll create a playbook and roles that set up Nginx and Node.js on the Droplet created by kitchen in the next step. Your tests will be run against the playbook to ensure the conditions specified in the playbook are met.

      To begin, create a roles directory for both the Nginx and Node.js roles:

      • mkdir -p roles/{nginx,nodejs}/tasks

      This will create a directory structure as follows:

      roles
      ├── nginx
      │   └── tasks
      └── nodejs
          └── tasks
      

      Now, create a main.yml file in the roles/nginx/tasks directory using your preferred editor:

      • nano roles/nginx/tasks/main.yml

      In this file, create a task that sets up and starts Nginx by adding the following content:

      roles/nginx/tasks/main.yml

      ---
      - name: Update cache repositories and install Nginx
        apt:
          name: nginx
          update_cache: yes
      
      - name: Change nginx directory permission
        file:
          path: /etc/nginx/nginx.conf
          mode: 0750
      
      - name: start nginx
        service:
          name: nginx
          state: started
      

      Once you’ve added the content, save and exit the file.

In roles/nginx/tasks/main.yml, you define a task that updates the cache repository of your Droplet, which is the equivalent of running the apt update command manually on a server. This task also changes the Nginx configuration file permissions and starts the Nginx service.

      You are also going to create a main.yml file in roles/nodejs/tasks to define a task that sets up Node.js:

      • nano roles/nodejs/tasks/main.yml

      Add the following tasks to this file:

      roles/nodejs/tasks/main.yml

      ---
- name: Update cache repository
        apt:
          update_cache: yes
      
      - name: Add gpg key for NodeJS LTS
        apt_key:
          url: "https://deb.nodesource.com/gpgkey/nodesource.gpg.key"
          state: present
      
      - name: Add the NodeJS LTS repo
        apt_repository:
          repo: "deb https://deb.nodesource.com/node_{{ NODEJS_VERSION }}.x {{ ansible_distribution_release }} main"
          state: present
          update_cache: yes
      
      - name: Install Node.js
        apt:
          name: nodejs
          state: present
      
      

      Save and exit the file when you’re finished.

      In roles/nodejs/tasks/main.yml, you first define a task that will update the cache repository of your Droplet. Then with the next task you add the GPG key for Node.js that serves as a means of verifying the authenticity of the Node.js apt repository. The final two tasks add the Node.js apt repository and install Node.js.

      Now you’ll define your Ansible configurations, such as variables, the order in which you want your roles to run, and super user privilege settings. To do this, you’ll create a file named playbook.yml, which serves as an entry point for Kitchen. When you run your tests, Kitchen starts from your playbook.yml file and looks for the roles to run, which are your roles/nginx/tasks/main.yml and roles/nodejs/tasks/main.yml files.

      Run the following command to create playbook.yml:
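
• nano playbook.yml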

      Add the following content to the file:

      ansible_testing_dir/playbook.yml

      ---
       - hosts: all
         become: true
         remote_user: ubuntu
         vars:
          NODEJS_VERSION: 8
      

      Save and exit the file.

      You’ve created the Ansible playbook roles that you’ll be running your tests against to ensure conditions specified in the playbook are met.

      Step 3 — Writing Your InSpec Tests

      In this step, you’ll write tests to check if Node.js is installed on your Droplet. Before writing your test, let’s look at the format of an example InSpec test. As with many test frameworks, InSpec code resembles a natural language. InSpec has two main components, the subject to examine and the subject’s expected state:

      block A

      describe '<entity>' do
        it { <expectation> }
      end
      

In block A, the keywords do and end define a block. A describe block is commonly known as a test suite, which contains test cases; the it keyword defines an individual test case.

      <entity> is the subject you want to examine, for example, a package name, service, file, or network port. The <expectation> specifies the desired result or expected state, for example, Nginx should be installed or should have a specific version. You can check the InSpec DSL documentation to learn more about the InSpec language.

      Another example InSpec test block:

      block B

      control 'Can be anything unique' do  
        impact 0.7                         
        title 'A human-readable title'     
        desc  'An optional description'
        describe '<entity>' do             
          it { <expectation> }
        end
      end
      

The difference between block A and block B is the control block. The control block is used as a means of regulatory control, recommendation, or requirement. A control block has a name (usually a unique ID), metadata such as desc, title, and impact, and finally groups together related describe blocks that implement the checks.

desc, title, and impact define metadata that describe the importance of the control and its purpose with a succinct and complete description. impact defines a numeric value that ranges from 0.0 to 1.0 and classifies the control's severity:

• 0.0 to <0.01: no impact
• 0.01 to <0.4: low impact
• 0.4 to <0.7: medium impact
• 0.7 to <0.9: high impact
• 0.9 to 1.0: critical
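
As a concrete illustration of block B, the following hypothetical control (not part of this tutorial's test suite) checks that the SSH service is installed and running, and classifies it as critical:

control 'ssh-service' do
  impact 1.0                  # critical
  title 'SSH service'
  desc  'The SSH service should be installed and running.'
  describe service('ssh') do  # the service is named 'ssh' on Ubuntu
    it { should be_installed }
    it { should be_running }
  end
end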

Now you'll implement a test. Using the syntax of block A, you'll use InSpec's package resource to test whether Node.js is installed on the system. You'll create a file named sample.rb in your test/integration/default directory for your tests.

      Create sample.rb:

      • nano test/integration/default/sample.rb

      Add the following to your file:

      test/integration/default/sample.rb

      describe package('nodejs') do
        it { should be_installed }
      end
      

Here your test uses the package resource to check that Node.js is installed.

      Save and exit the file when you’re finished.

      To run this test, you need to edit kitchen.yml to specify the playbook you created earlier and to add to your configurations.

      Open your kitchen.yml file:

      • nano ansible_testing_dir/kitchen.yml

      Replace the content of kitchen.yml with the following:

      ansible_testing_dir/kitchen.yml

      ---
      driver:
        name: digitalocean
      
      provisioner:
        name: ansible_playbook
        hosts: test-kitchen
        playbook: ./playbook.yml
      
      verifier:
        name: inspec
      
      platforms:
        - name: ubuntu-18
          driver_config:
            ssh_key: PATH_TO_YOUR_PRIVATE_SSH_KEY
            tags:
              - inspec-testing
            region: fra1
            size: 1gb
            private_networking: false
          verifier:
            inspec_tests:
              - test/integration/default
      suites:
        - name: default
      
      

      The platform options include the following:

      • name: The image you’re using.
      • driver_config: Your DigitalOcean Droplet configuration. You’re specifying the following options for the driver_config:

  • ssh_key: The path to your private SSH key (YOUR_PRIVATE_SSH_KEY), which is located in the directory you specified when creating your SSH key.
        • tags: The tags associated with your Droplet.
        • region: The region where you want your Droplet to be hosted.
        • size: The memory you want your Droplet to have.
      • verifier: This defines that the project contains InSpec tests.

        • The inspec_tests part specifies that the tests exist under the project test/integration/default directory.

Note that the name and region use abbreviations. You can check the test-kitchen documentation for the abbreviations you can use.

      Once you’ve added your configuration, save and exit the file.

      Run the kitchen test command to run the test. This will check to see if Node.js is installed—this will purposefully fail because you don’t currently have the Node.js role in your playbook.yml file:
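
• kitchen test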

      You’ll see output similar to the following:

      Output: failing test results

-----> Starting Kitchen (v1.24.0)
-----> Cleaning up any prior instances of <default-ubuntu-18>
-----> Destroying <default-ubuntu-18>...
       DigitalOcean instance <145268853> destroyed.
       Finished destroying <default-ubuntu-18> (0m2.63s).
-----> Testing <default-ubuntu-18>
-----> Creating <default-ubuntu-18>...
       DigitalOcean instance <145273424> created.
       Waiting for SSH service on 138.68.97.146:22, retrying in 3 seconds
       [SSH] Established
       (ssh ready)
       Finished creating <default-ubuntu-18> (0m51.74s).
-----> Converging <default-ubuntu-18>...
$$$$$$ Running legacy converge for 'Digitalocean' Driver
-----> Installing Chef Omnibus to install busser to run tests

PLAY [all] *********************************************************************

TASK [Gathering Facts] *********************************************************
ok: [localhost]

PLAY RECAP *********************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

       Downloading files from <default-ubuntu-18>
       Finished converging <default-ubuntu-18> (0m55.05s).
-----> Setting up <default-ubuntu-18>...
$$$$$$ Running legacy setup for 'Digitalocean' Driver
       Finished setting up <default-ubuntu-18> (0m0.00s).
-----> Verifying <default-ubuntu-18>...
       Loaded tests from {:path=>".ansible_testing_dir.test.integration.default"}

Profile: tests from {:path=>"ansible_testing_dir/test/integration/default"} (tests from {:path=>"ansible_testing_dir.test.integration.default"})
Version: (not specified)
Target:  ssh://root@138.68.97.146:22

  System Package nodejs
     ×  should be installed
     expected that System Package nodejs is installed

Test Summary: 0 successful, 1 failure, 0 skipped
>>>>>> ------Exception-------
>>>>>> Class: Kitchen::ActionFailed
>>>>>> Message: 1 actions failed.
>>>>>>     Verify failed on instance <default-ubuntu-18>. Please see .kitchen/logs/default-ubuntu-18.log for more details
>>>>>> ----------------------
>>>>>> Please see .kitchen/logs/kitchen.log for more details
>>>>>> Also try running `kitchen diagnose --all` for configuration

4.54s user 1.77s system 5% cpu 2:02.33 total

      The output notes that your test is failing because you don’t have Node.js installed on the Droplet you provisioned with kitchen. You’ll fix your test by adding the nodejs role to your playbook.yml file and run the test again.

      Edit the playbook.yml file to include the nodejs role:

      Add the following highlighted lines to your file:

      ansible_testing_dir/playbook.yml

      ---
       - hosts: all
         become: true
         remote_user: ubuntu
         vars:
          NODEJS_VERSION: 8
      
         roles:
          - nodejs
      

      Save and close the file.

      Now, you’ll rerun the test using the kitchen test command:
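
• kitchen test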

      You’ll see the following output:

      Output

......
Target:  ssh://root@46.101.248.71:22

  System Package nodejs
     ✔  should be installed

Test Summary: 1 successful, 0 failures, 0 skipped
       Finished verifying <default-ubuntu-18> (0m4.89s).
-----> Destroying <default-ubuntu-18>...
       DigitalOcean instance <145512952> destroyed.
       Finished destroying <default-ubuntu-18> (0m2.23s).
       Finished testing <default-ubuntu-18> (2m49.78s).
-----> Kitchen is finished. (2m55.14s)
4.86s user 1.77s system 3% cpu 2:56.58 total

      Your test now passes because you have Node.js installed using the nodejs role.

      Here is a summary of what Kitchen is doing in the Test Action:

      • Destroys the Droplet if it exists
      • Creates the Droplet
      • Converges the Droplet
      • Verifies the Droplet with InSpec
      • Destroys the Droplet

      Kitchen will abort the run on your Droplet if it encounters any issues. This means if your Ansible playbook fails, InSpec won’t run and your Droplet won’t be destroyed. This gives you a chance to inspect the state of the instance and fix any issues. The behavior of the final destroy action can be overridden if desired. Check out the CLI help for the --destroy flag by running the kitchen help test command.
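
For example, if you want to keep the Droplet around for inspection even after a fully passing run, you can override the destroy behavior; the never value shown here assumes a Kitchen version that supports the passing/always/never options for this flag:

• kitchen test --destroy=never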

      You’ve written your first tests and run them against your playbook with one instance failing before fixing the issue. Next you’ll extend your test file.

      Step 4 — Adding Test Cases

In this step, you'll add more test cases to your test file to check that the required Nginx modules are installed on your Droplet and that the configuration file has the right permissions.

      Edit your sample.rb file to add more test cases:

      • nano test/integration/default/sample.rb

      Add the following test cases to the end of the file:

      test/integration/default/sample.rb

      . . .
      control 'nginx-modules' do
        impact 1.0
        title 'NGINX modules'
        desc 'The required NGINX modules should be installed.'
        describe nginx do
          its('modules') { should include 'http_ssl' }
          its('modules') { should include 'stream_ssl' }
          its('modules') { should include 'mail_ssl' }
        end
      end
      
      control 'nginx-conf' do
        impact 1.0
        title 'NGINX configuration'
  desc  'The NGINX config file should be owned by root, be writable only by the owner, and not be readable, writable, or executable by others.'
        describe file('/etc/nginx/nginx.conf') do
          it { should be_owned_by 'root' }
          it { should be_grouped_into 'root' }
          it { should_not be_readable.by('others') }
          it { should_not be_writable.by('others') }
          it { should_not be_executable.by('others') }
        end
      end
      

These test cases check that the Nginx modules on your Droplet include http_ssl, stream_ssl, and mail_ssl. You also check that the /etc/nginx/nginx.conf file has the correct permissions.

      You are using both the it and its keywords to define your test. The keyword its is only used to access properties of the resources. For example, modules is a property of nginx.

      Save and exit the file once you’ve added the test cases.

      Now run the kitchen test command to test again:
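
• kitchen test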

      You’ll see the following output:

      Output

...
Target:  ssh://root@104.248.131.111:22

  ↺  nginx-modules: NGINX modules
     ↺  The `nginx` binary not found in the path provided.
  ×  nginx-conf: NGINX configuration (2 failed)
     ×  File /etc/nginx/nginx.conf should be owned by "root"
     expected `File /etc/nginx/nginx.conf.owned_by?("root")` to return true, got false
     ×  File /etc/nginx/nginx.conf should be grouped into "root"
     expected `File /etc/nginx/nginx.conf.grouped_into?("root")` to return true, got false
     ✔  File /etc/nginx/nginx.conf should not be readable by others
     ✔  File /etc/nginx/nginx.conf should not be writable by others
     ✔  File /etc/nginx/nginx.conf should not be executable by others

  System Package nodejs
     ✔  should be installed

Profile Summary: 0 successful controls, 1 control failure, 1 control skipped
Test Summary: 4 successful, 2 failures, 1 skipped

      You’ll see that some of the tests are failing. You’re going to fix those by adding the nginx role to your playbook file and rerunning the test. In the failing test, you’re checking for nginx modules and file permissions that are currently not present on your server.

      Open your playbook.yml file:

      • nano ansible_testing_dir/playbook.yml

      Add the following highlighted line to your roles:

      ansible_testing_dir/playbook.yml

---
- hosts: all
  become: true
  remote_user: ubuntu
  vars:
    NODEJS_VERSION: 8

  roles:
    - nodejs
    - nginx


      Save and close the file when you’re finished.

      Then run your tests again:
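
• kitchen test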

      You’ll see the following output:

      Output

...
Target:  ssh://root@104.248.131.111:22

  ✔  nginx-modules: NGINX modules
     ✔  Nginx Environment modules should include "http_ssl"
     ✔  Nginx Environment modules should include "stream_ssl"
     ✔  Nginx Environment modules should include "mail_ssl"
  ✔  nginx-conf: NGINX configuration
     ✔  File /etc/nginx/nginx.conf should be owned by "root"
     ✔  File /etc/nginx/nginx.conf should be grouped into "root"
     ✔  File /etc/nginx/nginx.conf should not be readable by others
     ✔  File /etc/nginx/nginx.conf should not be writable by others
     ✔  File /etc/nginx/nginx.conf should not be executable by others

  System Package nodejs
     ✔  should be installed

Profile Summary: 2 successful controls, 0 control failures, 0 controls skipped
Test Summary: 9 successful, 0 failures, 0 skipped

      After adding the nginx role to the playbook all your tests now pass. The output shows that the http_ssl, stream_ssl, and mail_ssl modules are installed on your Droplet and the right permissions are set for the configuration file.

Once you're finished, or when you no longer need your Droplet, you can delete it by running the kitchen destroy command:
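
• kitchen destroy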

      Following this command you’ll see output similar to:

      Output

-----> Starting Kitchen (v1.24.0)
-----> Destroying <default-ubuntu-18>...
       Finished destroying <default-ubuntu-18> (0m0.00s).
-----> Kitchen is finished. (0m5.07s)
3.79s user 1.50s system 82% cpu 6.432 total

      You’ve written tests for your playbook, run the tests, and fixed the failing tests to ensure all the tests are passing. You’re now set up to create a virtual environment, write tests for your Ansible Playbook, and run your test on the virtual environment using Kitchen.

      Conclusion

You now have a flexible foundation for testing your Ansible deployment, which allows you to test your playbooks before running them on a live server. You can also package your tests into a profile. You can use profiles to share your tests through GitHub or the Chef Supermarket and easily run them on a live server.

      For more comprehensive details on InSpec and Kitchen, refer to the official InSpec documentation and the official Kitchen documentation.


