      Application Security Testing Tools


      Application security testing tools help you build applications that are less vulnerable to attacks by automating security testing, and by verifying your applications are secured against known vulnerabilities.

      In this guide, you learn what application security testing is; why you need application security tools; what types of tools exist; and what best practices your organization can use in deploying them.

      What Is Application Security Testing?

      Application Security Testing (AST) is the process of making code more resistant to attack by verifying the absence of known vulnerabilities. Applying security testing practices to all areas of your application’s stack and software development life-cycle can decrease the risk of an incident. Security testing began with manual source code reviews, but that’s no longer feasible in most cases.

      Automated testing with AST tools is a necessity today, for several reasons. These include the complexity of applications, especially web-based and mobile software; the frequent use of third-party components; time-to-market pressures; and the seemingly infinite universe of known attacks.

      The Importance of Security Testing

      You can never completely eliminate risk for your application, but you can use AST tools to greatly reduce that risk. It’s much less difficult and less expensive to detect and fix security flaws early in the development cycle than it is in production.

      Security testing tools also keep you current because they’re regularly updated to check for the latest known vulnerabilities. This is especially important considering that 2021 saw a record number of zero-day vulnerabilities.

      Compared with time-consuming code reviews and conventional unit and system tests, AST tools provide much more speed and convenience. AST tools also classify and triage test results, helping you quickly identify the most serious vulnerabilities.

      Because they automate testing, software security tools scale well, and ensure repeatable results. AST tools also extend the breadth of security coverage by checking for new classes of vulnerabilities you previously might not have considered. Depending on your industry, there may be cases where you must perform security testing for regulatory and compliance reasons. And perhaps most important of all, AST tools help you think the way attackers do.

      Unlike source code reviews, AST tools work at every stage of an application’s lifecycle. This extends security testing throughout your organization, regardless of whether you’re on a development, devops, or IT management team.

      Types of Application Security Testing

      Static Application Security Testing

      Static application security testing (SAST) tools examine code to detect possible vulnerabilities. SAST tools are a form of white-box testing. In the white-box model, a test tool has access to all aspects of an application’s structure, including its architecture and source code. Armed with this inside knowledge, SAST tools can spot design flaws, identify logic problems, and verify code correctness. These tools may optionally perform negative testing as well, offering illegal values to test input validation and exception handling.

      SAST tools run automated scanning of source code, byte code, or compiled binaries, or some combination of these. The central tenet of all SAST tools is that they examine code at rest. Because SAST tools use a white-box model, they can analyze virtually any aspect of software, including individual functions, classes, and entire applications.
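      The core idea of scanning code at rest is easy to sketch. The toy scanner below is purely illustrative (the pattern names, regexes, and sample snippet are invented for this example); real SAST tools parse code into an abstract syntax tree and track data flow rather than matching regular expressions:

```typescript
// Toy static scanner: flags source lines that match patterns for
// known-dangerous constructs. Real SAST tools build an AST and follow
// data flow; this only illustrates the "examine code at rest" idea.
interface Finding {
  line: number;
  pattern: string;
}

const RISKY_PATTERNS: { name: string; regex: RegExp }[] = [
  { name: "eval-call", regex: /\beval\s*\(/ },       // arbitrary code execution
  { name: "string-sql", regex: /SELECT .*\+/i },     // string-built SQL query
  { name: "hardcoded-secret", regex: /password\s*=\s*["']/i },
];

function scanSource(source: string): Finding[] {
  const findings: Finding[] = [];
  source.split("\n").forEach((text, i) => {
    for (const { name, regex } of RISKY_PATTERNS) {
      if (regex.test(text)) {
        findings.push({ line: i + 1, pattern: name });
      }
    }
  });
  return findings;
}

const sample = `const q = "SELECT * FROM users WHERE id=" + id;\nconst ok = 1 + 1;\neval(userInput);`;
// Two findings: string-built SQL on line 1, eval on line 3.
console.log(scanSource(sample));
```

      Even this crude approach suggests why SAST scales so well: the same checks run identically over any amount of code at rest.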

      Most AST tools, including SAST products, compare code against libraries of known vulnerabilities such as the Common Vulnerabilities and Exposures (CVE) list or VulnDB. A SAST tool that checks for vulnerabilities in this way might search for coding errors that could lead to privilege escalation, memory leaks, buffer overflows, and other faults.

      Example SAST products include AppScan Source, Checkmarx SAST, Coverity SAST, Klocwork, and the open-source Insider and LGTM projects.

      Dynamic Application Security Testing

      Dynamic application security testing (DAST) tools examine applications while they’re running. In contrast to SAST tools, DAST takes a “black-box” approach, where the test tool has no visibility into application architecture or coding. Instead, DAST tools must discover vulnerabilities through externally observable means.

      One popular technique employed by DAST tools is the use of fuzzing. This is the practice of deliberately providing software with unexpected or illegal values, often at high rates and/or in high volumes.

      Consider the example of network routing software. A fuzzing tool might bombard routing software with illegal and constantly iterating values for every field in the IP header of every packet. Fuzzing tests often expose memory leaks or trigger hangs and reboots. They represent an excellent way to detect problems relatively early in development.
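      To make the idea concrete, here is a minimal fuzzing sketch in TypeScript. The parseTtl function is a hypothetical stand-in for code under test; a real fuzzer, such as those built into DAST products, would also watch for hangs, crashes, and memory growth:

```typescript
// Minimal fuzzing sketch: hammer a target function with boundary and
// random inputs, and record every input it rejects or crashes on.
// parseTtl is a hypothetical stand-in for real code under test.
function parseTtl(raw: string): number {
  const n = Number(raw);
  if (Number.isNaN(n)) throw new Error(`not a number: ${raw}`);
  if (n < 0 || n > 255) throw new Error(`TTL out of range: ${n}`);
  return n;
}

function fuzz(target: (input: string) => unknown, iterations: number): string[] {
  const failures: string[] = [];
  const boundary = ["", "-1", "256", "NaN", "999999999999", "0x41", "\u0000"];
  for (let i = 0; i < iterations; i++) {
    // Mix deliberate boundary cases with random byte-ish strings.
    const input = i < boundary.length
      ? boundary[i]
      : String.fromCharCode(...Array.from({ length: 8 },
          () => Math.floor(Math.random() * 256)));
    try {
      target(input);
    } catch {
      failures.push(input); // a real fuzzer would also log hangs and crashes
    }
  }
  return failures;
}

const crashing = fuzz(parseTtl, 100);
console.log(`${crashing.length} inputs rejected or crashed`);
```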

      Examples of DAST tools include Acunetix, AppSpider, Checkmarx AST, GitLab, InsightAppSec, StackHawk, and Veracode.

      As with SAST tools, most DAST products check software integrity against a known set of vulnerabilities and exposures. An interesting, but less common, method is to use a so-called anomaly-based approach, where a test tool monitors application traffic to determine a normal baseline, and then logs behavior outside that baseline.

      Project Ava represents an example of the anomaly-based approach.
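      The baseline-then-deviation idea behind anomaly detection can be sketched in a few lines. The example below learns a mean and standard deviation from observed request rates and flags samples more than three standard deviations out; real tools model far richer signals, and the traffic numbers here are invented:

```typescript
// Anomaly-based monitoring sketch: learn a baseline from observed
// request rates, then flag values far outside it.
function baseline(samples: number[]): { mean: number; std: number } {
  const mean = samples.reduce((a, b) => a + b, 0) / samples.length;
  const variance =
    samples.reduce((a, b) => a + (b - mean) ** 2, 0) / samples.length;
  return { mean, std: Math.sqrt(variance) };
}

function isAnomalous(value: number, b: { mean: number; std: number }, k = 3): boolean {
  return Math.abs(value - b.mean) > k * b.std;
}

// Requests per minute observed during normal operation (invented figures):
const normalTraffic = [98, 102, 101, 99, 100, 103, 97];
const b = baseline(normalTraffic);
console.log(isAnomalous(100, b)); // false: within the learned baseline
console.log(isAnomalous(450, b)); // true: likely a scan or flood
```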

      While DAST tools work with any type of software, a subset of tools focuses on web application testing. These tools may use some combination of SQL injection (described in detail below), spoofing, cross-site scripting attacks, URL manipulation, password cracking, and other web-specific vulnerabilities.

      Example products include Detectify, Invicti, Nessus, PortSwigger, and the OWASP Zed Attack Proxy (ZAP).

      SQL Injection Testing

      SQL injection test tools exist as a standalone category because injection attacks are so common, especially against web-based applications. SQL injection attacks work by inserting, or “injecting”, data into SQL queries to compromise a target database.

      For example, a successful SQL injection attack modifies a database by adding, updating, or deleting fields. It may expose personally identifiable information (PII) such as credit-card numbers or medical records. In some cases, SQL injection attacks also send commands to the underlying operating system.
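      A short sketch shows why injection is so easy to introduce. The query below is built by string concatenation, so attacker-controlled input becomes part of the SQL itself (illustration only; no real database or driver is involved):

```typescript
// How an injection slips in: the query is built by string concatenation,
// so attacker-controlled input becomes part of the SQL statement itself.
function unsafeQuery(username: string): string {
  return `SELECT * FROM users WHERE name = '${username}'`;
}

// Classic payload: closes the string literal and appends a tautology,
// so the WHERE clause matches every row in the table.
const payload = "' OR '1'='1";
console.log(unsafeQuery(payload));
// SELECT * FROM users WHERE name = '' OR '1'='1'

// The standard fix is a parameterized query: the driver sends the SQL
// and the values separately, so input can never change query structure.
// With a typical driver API (illustrative, not a specific library):
//   db.query("SELECT * FROM users WHERE name = ?", [username]);
```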

      Because SQL injection attacks are so common, numerous tools exist to automate testing of this class of vulnerabilities. Some examples include SQLMap, jSQL Injection, and BBQSQL. Another open-source tool, NoSQLMap, automates testing of code-injection vulnerabilities in NoSQL databases such as CouchDB and MongoDB.

      Software Composition Analysis

      Software composition analysis (SCA) tools examine every component and library used by an application, including third-party software. SCA test tools help detect problems in the open-source components or libraries found in the vast majority of networked applications.

      SCA testing uses a hybrid of SAST and DAST approaches. One caveat with SCA tools (and indeed, with any AST tool that uses a set of known vulnerabilities) is that they cannot detect problems they don’t know about. For example, SCA tools cannot detect problems in proprietary libraries developed in-house. Still, SCA tools are invaluable not only to identify vulnerabilities but also for risk management and license compliance needs.

      Vendors of SCA tools include Contrast Security, Fossa, and Revenera.

      Mobile Application Security Testing

      As the name suggests, mobile application security testing (MAST) tools look specifically for vulnerabilities in software built for mobile devices. Attackers may target a mobile device’s operating system, or its applications, or both. Some tools focus on apps on mobile devices, while others test back-end services such as cloud platforms and databases.

      Some examples of MAST tools include Fortify on Demand, NowSecure, and the open-source MobSF project.

      Runtime Application Self-Protection

      Runtime application self-protection (RASP) tools work in production settings by analyzing application traffic and user behavior. RASP uses a hybrid of SAST and DAST approaches, analyzing both source code and live binaries to identify attacks as they happen, and block attacks in real time. For example, a RASP tool may identify an attack that targets a specific API, and then block access to that API. RASP tools also log attempted exploits to external security information and event management (SIEM) systems, allowing for real-time notification.

      Example products include Fortify, Imperva, Signal Sciences, and Sqreen.

      Security Testing Best Practices

      The list below includes five ways that you can make optimal use of AST tools.

      • Shift left. Even with modern software development practices, it’s still common for security testing to begin well after initial coding starts. This is often due to development and test teams working in separate silos. It’s far safer and more efficient to integrate security testing into every development phase – that is, to shift left on project timelines. By shifting left you can reduce bug count, increase code quality, and lessen the chance of discovering critical issues later on during deployment. Security testers should be involved in initial planning, and should be an integral part of any development plan.

      • Don’t trust third-party code. Virtually all networked applications today include third-party components. As a famous comic wryly observed, modern infrastructure today might well depend on “a project some random person in Nebraska has been thanklessly maintaining since 2003.” There are many excellent third-party components available, but the onus is on development teams to ensure any outsourced code is free from known vulnerabilities and kept up to date. SCA tools should be an essential part of any AST toolkit.

      • Integrate patch management into CI/CD processes. With the proliferation of zero-day vulnerabilities, it’s no longer sufficient to task IT managers with patch management, the practice of continually updating software to guard against newly discovered attack vectors. Certainly patch management is important in production settings, but it’s also critical in earlier stages of the software lifecycle. Continuous integration and continuous delivery (CI/CD) teams need to include patching as part of their development processes, and ensure vulnerabilities are mitigated as soon as they’re discovered. This is particularly true when incorporating third-party components such as open-source libraries; those also need to be patched as soon as those projects announce fixes for known vulnerabilities.

      • Think negative thoughts. Especially in early-stage unit testing, it’s all too common to design tests that merely verify a component works as intended. Attackers don’t think this way, and neither should developers. Negative testing – presenting applications with unexpected values – should be part of every test plan.

      • Use all the tools. Information security depends on defense in depth, the concept of employing multiple safeguards to ensure no one component’s failure leads to compromise. In an AST context, this means integrating multiple types of security testing tools into the development process. As noted above, a wide variety of tools is available. Developers, devops teams, and IT managers can greatly improve code security by learning to use these tools, and by implementing them throughout the application lifecycle.
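      As a small illustration of the negative-testing practice from the list above, the sketch below checks that a validator rejects bad input rather than merely accepting good input (validateAge and expectThrows are hypothetical names invented for this example):

```typescript
// Negative-test sketch: verify a validator *rejects* bad input, not
// just that it accepts good input. validateAge is a hypothetical example.
function validateAge(raw: unknown): number {
  if (typeof raw !== "number" || !Number.isInteger(raw)) {
    throw new TypeError("age must be an integer");
  }
  if (raw < 0 || raw > 150) {
    throw new RangeError("age out of range");
  }
  return raw;
}

// Tiny helper: fail loudly if the call does NOT throw.
function expectThrows(fn: () => unknown, label: string): void {
  try {
    fn();
  } catch {
    return; // rejected as expected
  }
  throw new Error(`negative test failed: ${label} was accepted`);
}

// Positive case: the component works as intended.
if (validateAge(42) !== 42) throw new Error("positive case failed");

// Negative cases: the inputs an attacker would try first.
expectThrows(() => validateAge(-1), "negative number");
expectThrows(() => validateAge(3.14), "non-integer");
expectThrows(() => validateAge("42"), "string instead of number");
expectThrows(() => validateAge(10_000), "absurdly large value");
console.log("all negative tests passed");
```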

      Conclusion

      To reduce the risk of malicious attacks on your applications, it’s important to use application security testing tools to mitigate any vulnerabilities. This guide covered some of the most important areas of AST, like static application security testing, dynamic application security testing, and SQL injection testing. These areas help cover security throughout an application’s technology stack and the software development lifecycle. See the security basics section of our documentation library to learn more about security best practices in information technology.




      Disaster Recovery (DR) Testing: The Why, What and Who


      The importance of disaster recovery (DR) and business continuity plans can’t be overstated. Here on the ThinkIT blog, we’ve covered how to get started, the basics of making a plan, table-top exercises to help your staff test your DR plan and more. In this piece, I’ll explore the importance of DR testing—why it’s so important, what elements need to be considered and whether you should handle testing on your own or outsource it to a third-party DRaaS provider. We’ll also review the options for running a failover test.

      Why DR Testing: Imagine the Worst-Case Scenario

      Those of us in the DR business on 9/11 remember the devastating, tragic story of Cantor Fitzgerald, a financial services firm that lost 656 of their 960 employees that morning. The company occupied floors 101-105 of the north tower at the World Trade Center. At the time, I’m sure IT disaster recovery and business continuity were not top of mind for those involved in the painful tragedy.

      From an IT and business continuity standpoint, however, Cantor Fitzgerald was able to get systems online 48 hours after the attacks. They used a DR company at that time called Comdisco and were able to get remaining employees up and running answering phones, emails and stock trading within five days of the attacks. Today, Cantor Fitzgerald is still in business with about 10,000 employees. The company also followed through on CEO Howard Lutnick’s promises that were made to families and surviving employees.

      What: DR Testing Essential Elements

      How did Cantor Fitzgerald survive? They had a true, tested and documented DR and business continuity plan. A combination of internal experts and assets and a third-party vendor helped the company create and test a detailed, scripted recovery plan and methodology.

      Let’s briefly explore two key elements that need to be considered in DR testing processes, whether you choose to handle testing yourself, outsource it to a third-party or use a combination of both.

      Recovery Point Objectives

      Recovery Point Objectives (RPOs) define how much data loss you can tolerate, and drive how you preserve the company’s critical data with point-in-time backups or real-time replication to off-site media, like tape or online storage. Determine the impact of data loss in time (how long since the last good backup) and money (how many transactions and how much revenue will be lost because of it), and preserve your data accordingly. This is the “easier” element of DR testing, although nothing in IT or DR is exactly easy.
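      A back-of-the-envelope sketch shows how the time and money impacts combine; the figures below are invented purely for illustration:

```typescript
// Worst-case RPO cost sketch: if backups run every N hours, the
// worst-case data loss window is N hours, and the revenue inside that
// window is what justifies (or not) paying for tighter replication.
function worstCaseLossUsd(
  backupIntervalHours: number,
  transactionsPerHour: number,
  revenuePerTransactionUsd: number
): number {
  return backupIntervalHours * transactionsPerHour * revenuePerTransactionUsd;
}

// Nightly backups, 500 transactions/hour, $20 average order (invented):
console.log(worstCaseLossUsd(24, 500, 20)); // 240000
```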

      Recovery Time Objective

      The tougher challenge in testing is determining the Recovery Time Objective (RTO): how long it will take to restore enough functionality to keep the business running, and how quickly employees, vendors and customers can be tracked and connected in those DR systems. Once that has been established, the next important step is to determine how long it will take to get all that data back into the production environment and re-connect people once the disaster is over.

      Who: DIY and Third-Party DR Considerations

      One of the more logical IT initiatives to throw in the cloud or hand over to a third-party vendor is disaster recovery. After all, the old joke from CIOs down to IT Directors is “DR is #4 on my top three ‘To-Do List’ right now.”

      Personally, however, I would caution against giving all the keys away to a third-party provider. An outside provider is less likely to have the passion for your business that a proud employee has, and they will not know the intricacies of the business, such as revenue drivers and the IT applications.

      On the other side, choosing a good third-party vendor and solution will save you time and probably money, preventing you from buying and owning two environments. A third-party DRaaS provider can also manage the infrastructure behind the scenes, allowing your team to focus on production, customers and vendors.

      My advice? Choose a solution provider that allows some co-management with your team. You should work hand in hand with your third-party DR vendor. Make them an extension of your team and manage them as you would any employee or critical application in your environment. Leverage them for what they are best at, while at the same time holding yourself and your organization accountable for your vendor’s participation in and seamless execution of your DR plan.

      Running a Failover Test

      You have many options of what and how to test. Some companies will do a full-blown failover of the entire environment, while others test subsets of their environment in a crawl, walk, run methodology. In either case, I find it most effective to work with a provider to isolate your test environment from the replication environment. That way, you can continue to replicate valuable information while you are testing. In the event of a disaster while you are testing, you will still be able to achieve your required RPO/RTOs.

      The test environment should be validated with transaction and remote connectivity from users and departments as if the production data center is no longer accessible. Here you will find (and actually may hope to find) holes in your plan and be able to document improvements and changes from your previous test. A DR Plan is an ever-improving, ever-evolving, living, breathing document.

      Failing or not having a perfect DR test is not necessarily a bad thing. You are of course striving for a perfect test leading to a perfect failover in a real disaster. But the idea of testing is to find holes in your plan, update changes in your DR Plan since the previous test and continue to improve your recovery plan.

      You should definitely choose a third-party DR provider who bundles in test time—either one or two tests per year at a minimum—and who also provides documentation and run books back to you following a test. You both need to be in sync should a disaster strike and to make the next test a success. There is nothing worse than re-inventing the wheel with your third-party provider at the start of every new test.

      Always test, keep your DR systems in sync with your critical production systems, and test again. Disasters don’t occur very often, but when they do, the effects can be devastating. Be ready.

      Explore INAP Disaster Recovery as a Service.


      Carleton Hall






      Testing Angular with Jasmine and Karma (Part 1)


      Our goal

      In this tutorial we will be building and testing an employee directory for a fictional company. This directory will have a view to show all of our users along with another view to serve as a profile page for individual users. Within this part of the tutorial we’ll focus on building the service and its tests that will be used for these users.

      In following tutorials, we’ll populate the user profile page with an image of the user’s favorite Pokemon using the Pokeapi and learn how to test services that make HTTP requests.

      What you should know

      The primary focus for this tutorial is testing so my assumption is that you’re comfortable working with TypeScript and Angular applications. As a result of this I won’t be taking the time to explain what a service is and how it’s used. Instead, I’ll provide you with code as we work through our tests.

      Why Test?

      From personal experience, tests are the best way to prevent software defects. I’ve been on many teams in the past where a small piece of code is updated and the developer manually opens their browser or Postman to verify that it still works. This approach (manual QA) is begging for a disaster.

      Tests are the best way to prevent software defects.

      As features and codebases grow, manual QA becomes more expensive, time consuming, and error prone. If a feature or function is removed does every developer remember all of its potential side-effects? Are all developers manually testing in the same way? Probably not.

      The reason we test our code is to verify that it behaves as we expect it to. As a result of this process you’ll find you have better feature documentation for yourself and other developers as well as a design aid for your APIs.

      Why Karma?

      Karma grew directly out of the AngularJS team’s struggle to test their own framework features with existing tools. As a result, they built Karma and carried it over to Angular as the default test runner for applications created with the Angular CLI.

      In addition to playing nicely with Angular, it also provides flexibility for you to tailor Karma to your workflow. This includes the option to test your code on various browsers and devices such as phones, tablets, and even a PS3 like the YouTube team.

      Karma also provides you options to replace Jasmine with other testing frameworks such as Mocha and QUnit or integrate with various continuous integration services like Jenkins, TravisCI, or CircleCI.

      Unless you add some additional configuration, your typical interaction with Karma will be to run ng test in a terminal window.

      Why Jasmine?

      Jasmine is a behavior-driven development framework for testing JavaScript code that plays very well with Karma. Similar to Karma, it’s also the recommended testing framework within the Angular documentation as it’s setup for you with the Angular CLI. Jasmine is also dependency free and doesn’t require a DOM.

      As far as features go, I love that Jasmine has almost everything I need for testing built into it. The most notable example would be spies. A spy allows us to “spy” on a function and track attributes about it such as whether or not it was called, how many times it was called, and with which arguments it was called. With a framework like Mocha, spies are not built-in and would require pairing it with a separate library like Sinon.js.
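      To see what a spy is actually tracking, here is a simplified hand-rolled version in TypeScript. This is a teaching sketch, not Jasmine’s real implementation; in actual specs you would use jasmine.createSpy or spyOn:

```typescript
// What a spy does under the hood (simplified): wrap a function, record
// each call's arguments, and force a canned return value.
interface Spy<A extends unknown[], R> {
  (...args: A): R;
  calls: A[]; // one entry per invocation, holding that call's arguments
}

function makeSpy<A extends unknown[], R>(returnValue: R): Spy<A, R> {
  const spy = ((...args: A): R => {
    spy.calls.push(args); // track "was it called, and with what?"
    return returnValue;   // the .and.returnValue(...) behavior
  }) as Spy<A, R>;
  spy.calls = [];
  return spy;
}

// Usage mirrors Jasmine's spyOn(...).and.returnValue(...):
const fetchUser = makeSpy<[string], { name: string }>({ name: "Jane" });
fetchUser("user-1");
fetchUser("user-2");
console.log(fetchUser.calls.length); // 2 -- how many times was it called?
console.log(fetchUser.calls[0][0]); // "user-1" -- with which arguments?
```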

      The good news is that the switching cost between testing frameworks is relatively low, with differences in syntax as small as Jasmine’s toEqual() and Chai’s to.equal() (Chai being the assertion library commonly paired with Mocha).

      A Simple Test Example

      Imagine you had an alien servant named Adder who follows you everywhere you go. Other than being a cute alien companion Adder can really only do one thing, add two numbers together.

      To verify Adder’s ability to add two numbers we could generate a set of test cases to see if Adder provides us the correct answer.

      Within Jasmine, this would begin with what’s referred to as a “suite” which groups a related set of tests by calling the function describe.

      // A Jasmine suite
      describe('Adder', () => {
      
      });
      

      From here we could provide Adder with a set of test cases such as two positive numbers (2, 4), a positive number and a zero (3, 0), a positive number and a negative number (5, -2), and so on.

      Within Jasmine, these are referred to as “specs” which we create by calling the function it, passing it a string to describe the functionality that’s being tested.

      describe('Adder', () => {
        // A jasmine spec
        it('should be able to add two whole numbers', () => {
          expect(Adder.add(2, 2)).toEqual(4);
        });
      
        it('should be able to add a whole number and a negative number', () => {
          expect(Adder.add(2, -1)).toEqual(1);
        });
      
        it('should be able to add a whole number and a zero', () => {
          expect(Adder.add(2, 0)).toEqual(2);
        });
      });
      

      Within each spec we call expect and provide it what is referred to as an “actual”—the call site of our actual code. After the expectation, or expect, is the chained “matcher” function, such as toEqual, which the testing developer provides with the expected output of the code being tested.

      There are many other matchers available to us other than toEqual. You can see a full list within Jasmine’s documentation.

      Our tests aren’t concerned with how Adder arrives at the answer. We only care about the answer Adder provides us. For all we know, this may be Adder’s implementation of add.

      function add(first, second) {
        if (true) { // why?
          if (true) { // why??
            if (1 === 1) { // why?!?1
              return first + second;
            }
          }
        }
      }
      

      In other words, we only care that Adder behaves as expected—we have no concern for Adder’s implementation.

      This is what makes a practice like test-driven development (TDD) so powerful. You can first write a test for a function and its expected behavior and get it to pass. Then, once it’s passing, you can refactor your function to a different implementation and if the test is still passing, you know your function is still behaving as specified within your tests even with a different implementation. Adder’s add function would be a good example!

      Angular setup

      We’ll begin by creating our new application using the Angular CLI.

      ng new angular-testing --routing
      

      Since we’ll have multiple views in this application we use the --routing flag so the CLI automatically generates a routing module for us.

      From here we can verify everything is working correctly by moving into the new angular-testing directory and running the application.

      cd angular-testing
      ng serve -o
      

      You can also verify the application’s tests are currently in a passing state.

      ng test
      

      Adding a home page

      Before creating a service to populate our home page with users, we’ll start by creating the home page.

      ng g component home
      

      Now that our component has been created, we can update our routing module’s (app-routing.module.ts) root path to HomeComponent.

      import { NgModule } from '@angular/core';
      import { RouterModule, Routes } from '@angular/router';
      import { HomeComponent } from './home/home.component';
      
      const routes: Routes = [
        { path: '', component: HomeComponent }
      ];
      
      @NgModule({
        imports: [RouterModule.forRoot(routes)],
        exports: [RouterModule]
      })
      export class AppRoutingModule { }
      

      Run the application if it isn’t already and you should now see “home works!” below the default template in app.component.html which was created by the CLI.

      Removing AppComponent tests

      Since we no longer need the default contents of AppComponent, let’s update it by removing some unnecessary code.

      First, remove everything in app.component.html so that only the router-outlet directive remains.

      <router-outlet></router-outlet>
      

      Within app.component.ts, you can also remove the title property.

      import { Component } from '@angular/core';
      
      @Component({
        selector: 'app-root',
        templateUrl: './app.component.html',
        styleUrls: ['./app.component.css']
      })
      export class AppComponent { }
      

      Finally, you can update the tests in app.component.spec.ts by removing the two tests for some of the contents that were previously in app.component.html.

      import { async, TestBed } from '@angular/core/testing';
      import { RouterTestingModule } from '@angular/router/testing';
      import { AppComponent } from './app.component';
      describe('AppComponent', () => {
        beforeEach(async(() => {
          TestBed.configureTestingModule({
            imports: [
              RouterTestingModule
            ],
            declarations: [
              AppComponent
            ],
          }).compileComponents();
        }));
        it('should create the app', async(() => {
          const fixture = TestBed.createComponent(AppComponent);
          const app = fixture.debugElement.componentInstance;
          expect(app).toBeTruthy();
        }));
      });
      

      Testing an Angular service

      Now that our home page is set up we can work on creating a service to populate this page with our directory of employees.

      ng g service services/users/users
      

      Here we’ve created our users service within a new services/users directory to keep our services away from the default app directory, which can get messy quickly.

      Now that our service is created, we can make a few small changes to the test file services/users/users.service.spec.ts.

      I personally find injecting dependencies within it() to be a bit repetitive and harder to read as it’s done in the default scaffolding for our test file as shown below:

      it('should be created', inject([UsersService], (service: UsersService) => {
        expect(service).toBeTruthy();
      }));
      

      With a few minor changes, we can move this into the beforeEach removing the duplication from each it.

      import { TestBed } from '@angular/core/testing';
      import { UsersService } from './users.service';
      
      describe('UsersService', () => {
        let usersService: UsersService; // Add this
      
        beforeEach(() => {
          TestBed.configureTestingModule({
            providers: [UsersService]
          });
      
          usersService = TestBed.get(UsersService); // Add this
        });
      
        it('should be created', () => { // Remove inject()
          expect(usersService).toBeTruthy();
        });
      });
      

      In the code above, TestBed.configureTestingModule({}) sets up the service we want to test with UsersService set in providers. We then inject the service into our test suite using TestBed.get() with the service we want to test as the argument. We set the return value to our local usersService variable which will allow us to interact with this service within our tests just as we would within a component.

      Now that our test setup is restructured, we can add a test for an all method which will return a collection of users.

      import { of } from 'rxjs'; // Add import
      
      describe('UsersService', () => {
        ...
      
        it('should be created', () => {
          expect(usersService).toBeTruthy();
        });
      
        // Add tests for all() method
        describe('all', () => {
          it('should return a collection of users', () => {
            const userResponse = [
              {
                id: '1',
                name: 'Jane',
                role: 'Designer',
                pokemon: 'Blastoise'
              },
              {
                id: '2',
                name: 'Bob',
                role: 'Developer',
                pokemon: 'Charizard'
              }
            ];
            let response;
            spyOn(usersService, 'all').and.returnValue(of(userResponse));
      
            usersService.all().subscribe(res => {
              response = res;
            });
      
            expect(response).toEqual(userResponse);
          });
        });
      });
      

      Here we add a test for the expectation that all will return a collection of users. We declare a userResponse variable set to a mocked response from our service method. Then we use the spyOn() method to spy on usersService.all and chain .returnValue() to return our mocked userResponse variable, wrapping it with of() so the value is returned as an observable.

      With our spy set, we call our service method as we would within a component, subscribe to the observable, and set its return value to response.

      Finally, we add our expectation that response will be set to the return value of the service call, userResponse.

      Why mock?

      At this point many people ask, “Why are we mocking the response?” Why did we provide our test with a return value, userResponse, that we created ourselves, manually setting what’s being returned from our service? Shouldn’t the service call return the real response, whether that’s a hard-coded set of users or the result of an HTTP request?

      This is a perfectly reasonable question, and one that can be hard to wrap your head around when you first begin testing. I find this concept is easiest to illustrate with a real-world example.

      Imagine you own a restaurant and it’s the night before opening day. You gather everyone you’ve hired for a “test run” of the restaurant. You invite a few friends to come in and pretend they’re customers who will sit down and order a meal.

      No dishes will actually be served in your test run. You’ve already worked with your cooks and are satisfied they can make the dishes correctly. In this test run you want to test the transition from the customer ordering their dish, to the waiter sending that order to the kitchen, to the waiter fulfilling the kitchen’s response to the customer. That response from the kitchen may be one of a few options.

      1. The meal is ready.
      2. The meal is delayed.
      3. The meal cannot be made. We ran out of ingredients for the dish.

      If the meal is ready, the waiter delivers the meal to the customer. However, in the event that a meal is late or cannot be made, the waiter will have to go back to the customer, apologize, and potentially ask for a second dish.

      In our test run, it wouldn’t make sense to actually cook the meals when what we want to test is the front end’s (waiter’s) ability to fulfill the requests received from the backend (kitchen). More importantly, if we wanted to test that our waiters could apologize to customers when a meal is delayed or cannot be made, we would literally have to wait until our cooks were too slow or we ran out of ingredients before those cases could be confirmed. For this reason, we “mock” the backend (kitchen) and hand the waiters whatever scenario we want to test.

      Similarly, in code, we don’t actually hit the API when we’re testing various scenarios. We mock the response we expect to receive and verify that our application handles that response accordingly. Just like our kitchen example, if we were testing our application’s ability to handle a failed API call, we would literally have to wait for our API to fail before we could verify that case is handled, a scenario that hopefully won’t happen very often!

      Adding users

      To get this test to pass, we need to implement the service method in users.service.ts.

      First, we’ll start by adding our imports and a collection of employees to the service.

      import { Injectable } from '@angular/core';
      import { Observable, of } from 'rxjs'; // Add imports
      
      @Injectable({
        providedIn: 'root'
      })
      export class UsersService {
        users: Array<object> = [  // Add users collection
          {
            id: '1',
            name: 'Jane',
            role: 'Designer',
            pokemon: 'Blastoise'
          },
          {
            id: '2',
            name: 'Bob',
            role: 'Developer',
            pokemon: 'Charizard'
          },
          {
            id: '3',
            name: 'Jim',
            role: 'Developer',
            pokemon: 'Venusaur'
          },
          {
            id: '4',
            name: 'Adam',
            role: 'Designer',
            pokemon: 'Yoshi'
          }
        ];
      
        constructor() { }
      }
      

      Then, just below our constructor, we can implement all.

      all(): Observable<Array<object>> {
        return of(this.users);
      }
      

      Run ng test again and you should now see all tests passing, including the new test for our service method.

      Add users to the home page

      Now that our service method is ready to use, we can work towards populating our home page with these users.

      First, we’ll update index.html with Bulma to help us with some styling.

      <!doctype html>
      <html lang="en">
      <head>
        <meta charset="utf-8">
        <title>AngularTesting</title>
        <base href="/">
      
        <meta name="viewport" content="width=device-width, initial-scale=1">
        <link rel="icon" type="image/x-icon" href="favicon.ico">
        <!--Add these-->
        <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.7.1/css/bulma.min.css">
        <script defer src="https://use.fontawesome.com/releases/v5.1.0/js/all.js"></script>
      </head>
      <body>
        <app-root></app-root>
      </body>
      </html>
      

      Then within home/home.component.ts we can add a call to our new service.

      import { Component, OnInit } from '@angular/core';
      import { UsersService } from '../services/users/users.service';
      
      @Component({
        selector: 'app-home',
        templateUrl: './home.component.html',
        styleUrls: ['./home.component.css']
      })
      export class HomeComponent implements OnInit {
        users;
      
        constructor(private usersService: UsersService) { }
      
        ngOnInit() {
          this.usersService.all().subscribe(res => {
            this.users = res;
          });
        }
      
      }
      

      First we import our service and inject it into our component’s constructor. Then we add a call to the service method within ngOnInit and set the return value to our component’s users property.

      To display these users to the view, update the template in home/home.component.html.

      <section class="section is-small">
        <div class="container">
          <div class="columns">
            <div class="column" *ngFor="let user of users">
              <div class="box">
                <div class="content">
                  <p class="has-text-centered is-size-5">{{user.name}}</p>
                  <ul>
                    <li><strong>Role:</strong> {{user.role}}</li>
                    <li><strong>Pokemon:</strong> {{user.pokemon}}</li>
                  </ul>
                </div>
              </div>
            </div>
          </div>
        </div>
      </section>
      

      Now when you run ng serve and view the home page, you should see the users displayed within Bulma boxes.

      Finding a single user

      Now that our users are being populated into our home page, we’ll add one more service method for finding a single user that will be used for the user profile pages.

      First we’ll add the tests for our new service method.

      describe('all', () => {
        ...
      });
      
      describe('findOne', () => {
        it('should return a single user', () => {
          const userResponse = {
            id: '2',
            name: 'Bob',
            role: 'Developer',
            pokemon: 'Charizard'
          };
          let response;
          spyOn(usersService, 'findOne').and.returnValue(of(userResponse));
      
          usersService.findOne('2').subscribe(res => {
            response = res;
          });
      
          expect(response).toEqual(userResponse);
        });
      });
      

      Here we add a test for the expectation that findOne will return a single user. We declare a userResponse variable set to a mocked response from our service method: a single object from the collection of users.

      We then create a spy for usersService.findOne and return our mocked userResponse variable. With our spy set, we call our service method and set its return value to response.

      Finally, we add our assertion that response will be set to the return value of the service call, userResponse.

      To get this test to pass, we can add the following implementation to users.service.ts.

      all(): Observable<Array<object>> {
        return of(this.users);
      }
      
      findOne(id: string): Observable<object> {
        const user = this.users.find((u: any) => {
          return u.id === id;
        });
        return of(user);
      }
      

      Now when you run ng test you should see all of the tests in a passing state.

      Conclusion

      At this point I hope the benefits of testing, and the reasons for writing tests, are starting to become a bit more clear. Personally, I put off testing for the longest time, primarily because I didn’t understand the why behind tests and because resources for testing were limited.

      What we’ve created in this tutorial isn’t the most visually impressive application but it’s a step in the right direction.

      In the next tutorial, we’ll create the user profile page and a service to retrieve a Pokemon image using the Pokeapi. We’ll learn how to test service methods that make HTTP requests and how to test components.

      If you want the tests to display in a more readable format within your terminal, there’s an npm package for this.

      First, install the package.

      npm install karma-spec-reporter --save-dev
      

      Once that’s finished, open src/karma.conf.js, add the new package to plugins, and update the progress value within reporters to spec.

      module.exports = function (config) {
        config.set({
          basePath: '',
          frameworks: ['jasmine', '@angular-devkit/build-angular'],
          plugins: [
            require('karma-jasmine'),
            require('karma-chrome-launcher'),
            require('karma-jasmine-html-reporter'),
            require('karma-coverage-istanbul-reporter'),
            require('@angular-devkit/build-angular/plugins/karma'),
            require('karma-spec-reporter') // Add this
          ],
          client: {
            clearContext: false // leave Jasmine Spec Runner output visible in browser
          },
          coverageIstanbulReporter: {
            dir: require('path').join(__dirname, '../coverage'),
            reports: ['html', 'lcovonly'],
            fixWebpackSourcePaths: true
          },
          reporters: ['spec', 'kjhtml'], // Update progress to spec
          port: 9876,
          colors: true,
          logLevel: config.LOG_INFO,
          autoWatch: true,
          browsers: ['Chrome'],
          singleRun: false
        });
      };
      

      Now when you run ng test you should have a more visually appealing output for your test suite.


