Introduction to testing C++ code

Introduction to testing

Overview

Teaching: 25 min
Exercises: 5 min
Questions
  • Why test my software?

  • How can I test my software?

  • How much testing is ‘enough’?

Objectives
  • Appreciate the benefits of testing research software

  • Understand what testing can and can’t achieve

  • Describe various approaches to testing, and relevant trade-offs

  • Understand the concept of test coverage, and how it relates to software quality and sustainability

  • Appreciate the benefits of test automation

Why Test?

There are a number of compelling reasons to properly test research code:

Whilst testing might seem like an intimidating topic, the chances are you're already doing testing in some form. No matter their level of experience, no programmer ever just sits down, writes some code, is perfectly confident that it works, and proceeds to use it straight away in research. In practice, development is more piecemeal: you generally think about a simple input and the expected output, then write some simple code that works. Then, iteratively, you think about more complicated example inputs and outputs and flesh out the code until those work as well.

When developers talk about testing, all this means is formalising the above process and making it automatically repeatable on demand.

This has numerous advantages over a more ad hoc approach:

As you're performing checks on your code anyway, it's worth putting in the time to formalise your tests and take advantage of the above.

A Hypothetical Scenario

Your supervisor has tasked you with implementing an obscure statistical method to use for some data analysis. Wanting to avoid unnecessary work you check online to see if an implementation exists. Success! Another researcher has already implemented and published the code.

You move to hit the download button, but a worrying thought occurs. How do you know this code is right? You don’t know the author or their level of programming skill. Why should you trust the code?

Now turn this question on its head. Why should your colleagues or supervisor trust any implementation of the method that you write? Why should you trust work you did a year ago? What about a reviewer for a paper?

This scenario illustrates the sociological value of automated testing. If published code has tests then you have instant assurance that its authors have invested time in checking the correctness of their code. You can even see exactly the tests they’ve tried and add your own if you’re not satisfied. Conversely, any code that lacks tests should be viewed with suspicion as you have no record of what quality assurance steps have been taken.

Types of Testing

Once testing is formalised, the inevitable consequence is that different types of testing emerge for different purposes. In this course we will focus on unit tests and integration tests, but you might hear about:

The jargon here can be intimidating, but you do not need to worry about most of these for this course.

You can get more information about most of these types of testing, as well as about several general techniques used when writing tests, in Software Testing Fundamentals.

Unit Testing

This is the main type of testing we will be dealing with in this course. Unit testing refers to taking a component of a program and testing it in isolation. Generally this means testing an individual class or function.

For this kind of testing to make sense, or even just to work, your code needs to be modular, written in small, independent components that can be easily unit tested. Therefore, a side effect of writing unit tests is that it forces you, in a sense, to improve the quality and sustainability of your code because, otherwise, it will not be testable!

Integration Testing

Integration testing goes a step further and tests whether multiple components, working as a group, behave as expected. Integration tests are typically designed to expose faults in the interaction between the different units, e.g. an inconsistent number or type of inputs/outputs, a wrong structure for these, inconsistent physical units, etc.

Testing Done Right

It’s important to be clear about what software tests can provide and what they can’t. Unfortunately it isn’t possible to write tests that completely guarantee that your code is bug free or provides a one hundred percent faithful implementation of a particular model. In fact it’s perfectly possible to write an impressive looking collection of tests that have very little value at all. What should be the aim therefore when developing software tests?

In practice this is difficult to define universally, but one useful mantra is that good tests thoroughly exercise critical code. One way to achieve this is to design test examples of increasing complexity that cover the most general cases the unit should encounter. Also try to consider special or edge cases that your function needs to handle explicitly.

A useful quantitative metric to consider is test coverage. Using additional tools it is possible to determine, on a line-by-line basis, the proportion of a codebase that is being exercised by its tests. This can be useful to ensure, for instance, that all logical branching points within the code are being used by the test inputs.

Testing and Coverage

Consider the following C++ function:

/*Returns the n'th term of the Fibonacci sequence.*/
int recursive_fibonacci(int n)
{
    if (n <= 1) {
        return n;
    } else {
        return recursive_fibonacci(n - 1) + recursive_fibonacci(n - 2);
    }
}

Try to think up some test cases of increasing complexity; there are four distinct cases worth considering. What input value would you use for each case and what output value would you expect? Which lines of code will be exercised by each test case? How many cases would be required to reach 100% coverage?

For convenience, some initial terms from the Fibonacci sequence are given below: 0, 1, 1, 2, 3, 5, 8, 13, 21

Solution

Case 1 - Use either 0 or 1 as input

  • Correct output: Same as the input
  • Coverage: First section of the if-block
  • Reason: This represents the simplest possible test for the function. The value of this test is that it exercises only the special case tested for by the if-block.

Case 2 - Use a value > 1 as input

  • Correct output: The appropriate value from the Fibonacci sequence
  • Coverage: All of the code
  • Reason: This is a more fully fledged case that is representative of the majority of the possible range of input values for the function. It covers not only the special case represented by the first if-block but the general case where recursion is invoked.

Case 3 - Use a negative value as input

  • Correct output: Depends…
  • Coverage: First section of the if-block
  • Reason: This represents the case of a possible input to the function that is outside of its intended usage. At the moment the function will just return the input value, but whether this is the correct behaviour depends on the wider context in which it will be used. It might be better for this type of input value to throw an exception, however. The value of this test case is that it encourages you to think about this scenario and what the behaviour should be. It also demonstrates to others that you’ve considered this scenario and the function behaviour is as intended.

Case 4 - Use a non-integer input e.g. 3.5

In C++, the parameter is declared as int, so the compiler will implicitly convert a non-integer argument such as 3.5 (truncating it to 3), usually with a warning if warnings are enabled. The function itself therefore never sees a non-integer value, so there is one less case to test. In other programming languages like Python, however, a non-integer can reach the function, in which case some sort of defensive programming should be put in place to handle it.

Summary

The importance of automated testing for software development is difficult to overstate. As testing on some level is always carried out there is relatively low cost in formalising the process and much to be gained. The rest of this course will focus on how to carry out unit testing.

Key Points

  • Testing is the standard approach to software quality assurance

  • Testing helps to ensure that code performs its intended function: well-tested code is likely to be more reliable, correct and flexible

  • Good tests thoroughly exercise critical code

  • Code without any tests should arouse suspicion, but it is entirely possible to write a comprehensive but practically worthless test suite

  • Testing can contribute to performance, security and long-term stability as the size of the codebase and its network of contributors grows

  • Testing can ensure that software has been installed correctly, is portable to new platforms, and is compatible with new versions of its dependencies

  • In the context of research software, testing can be used to validate code i.e. ensure that it faithfully implements scientific theory

  • Unit (e.g. a function), Functional (e.g. a library), and Regression (e.g. a bug) are three commonly used types of tests

  • Test coverage can provide a coarse- or fine-grained metric of comprehensiveness, which often provides a signal of code quality

  • Automated testing is another such signal: it lowers friction; ensures that breakage is identified sooner and isn’t released; and implies that machine-readable instructions exist for building the code and running the tests

  • Testing ultimately contributes to sustainability i.e. that software is (and remains) fit for purpose as its functionality and/or contributor-base grows, and its dependencies and/or runtime environments change


Introduction to Unit Testing using GoogleTest

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • What framework/libraries can be used for unit testing in C++?

  • What is GoogleTest?

  • How can I write and run unit tests with GoogleTest?

Objectives
  • Understand the basic components of a unit test

  • Learn how to write, compile and run a unit test based on GoogleTest

  • Explain some of the commonly used macros in a unit test based on GoogleTest

Testing frameworks help in writing and executing tests for code validation and quality assurance. They provide structures and tools for creating test cases, running tests, and generating reports. There are a number of popular frameworks that can be used for testing in C++. A few of them are mentioned below:

In this course, we will use GoogleTest, as it is popular and easy to use, though the concepts are the same as for other testing frameworks.

2. A brief introduction to GoogleTest

Simply defined, GoogleTest is a testing framework developed by Google’s testing technology team for writing C++ tests. GoogleTest offers multiple advantages over other frameworks:

  1. Comprehensive Features: GoogleTest provides a rich set of features, including a wide range of assertion macros, test fixtures, parameterised tests, test discovery, test filtering, and powerful mocking capabilities. It offers a complete testing framework that can handle various testing scenarios.
  2. Large and Active Community: GoogleTest has a large and active community of developers. This means that there is ample support available in terms of documentation, tutorials, forums, and online resources.
  3. Mature and Stable: It is a mature and stable framework that has been used extensively in industry projects and open-source software.
  4. Wide Platform Support: GoogleTest supports multiple platforms, including Windows, Linux, macOS, and various compilers. It is compatible with popular development environments and build systems, making it suitable for a wide range of C++ projects.
  5. Flexible and Extensible: GoogleTest provides flexibility in test organization and customization. It allows you to structure your tests using test cases and test suites. You can also define custom test fixtures and customise test execution and reporting. Additionally, GoogleTest can be extended with custom assertion macros and utilities to suit your specific testing needs.

3. Writing unit tests using GoogleTest

For this episode, we will consider the same example fibonacci.cpp that we used previously. There we identified 3 possible cases for the recursive_fibonacci function, and wrote some manual tests for when:

To demonstrate how to use GoogleTest, we will simply convert the tests that we wrote manually into the GoogleTest framework, in the same file. Section 5 below describes the anatomy of a test, but first let’s see what the tests look like in practice. To use GoogleTest in your code, you generally need to follow these steps.

3.1. Adding the required header files

The first step is to add the required header files in your program. For GoogleTest, you need to add the following line to your code:

#include "gtest/gtest.h"

3.2. Create your tests

The next step is to define your test. GoogleTest uses the following convention in naming various tests:

TEST(TestSuiteName, TestName)
{
    // Test logic and assertions
}

The different parts in the above cell have the following meanings:

3.3. Initialise GoogleTest in your main function

When you install GoogleTest you get two libraries, namely libgtest.a and libgtest_main.a (the extension .a means it is a static library; this applies to Linux-based systems). The first one, libgtest.a, provides all the necessary testing features such as assertions, test discovery, collection of results, etc. The second, libgtest_main.a, provides a main function so that you do not need to write your own main function for testing. For more details, refer to Difference between libgtest and libgtest-main.

For this sub-section, we assume that you are writing your own main function; the next section describes how to run your tests without one, by linking against the main function provided in libgtest_main. You will need to use the following as your main function:

int main(int argc, char **argv)
{
    // Initialise GoogleTest Framework
    testing::InitGoogleTest(&argc, argv);
    
    // Instruction to Run the test cases.
    return RUN_ALL_TESTS();
}

RUN_ALL_TESTS() is typically called from main and when it is invoked, it scans the program for all the test cases and test methods defined using the TEST macro. It then executes each test case and captures the results of each individual test method within the test case. After running all the tests, it provides a summary of the test results, including the number of tests run, passed, and failed.

3.4. Compile and run your program

In order to compile your program and link with the required libraries, you can use the following instruction as a template. Please remember to modify the paths according to your system and location of the files.

# For compiling and linking, please use the instruction below
$ g++ your_code.cpp -I path_to_your_gtest_headers -L path_to_your_gtest_libraries -lgtest -lpthread -o your_executable_name
 
# Run your code  
$ ./your_executable_name

In the above cell, -lpthread links the pthread library; “pthread” is an acronym for “POSIX Threads”. It is a library in C and C++ that provides an interface for creating and managing threads in a multi-threaded program, and it is based on the POSIX (Portable Operating System Interface) standard for thread management.

Although we are not using any multi-threaded features explicitly in this course, we still need to link the program with this library. This is because some of the functions in libgtest depend on pthread, and without it the linker will give an error.

If you do not want to write your main function, you can link against the one provided in the libgtest_main.a library by using the following instruction.

$ g++ your_code.cpp -I path_to_your_gtest_headers -L path_to_your_gtest_libraries -lgtest_main -lgtest -lpthread -o your_executable_name

4. Test Assertions in GoogleTest

Assertions in C++ are statements used to validate assumptions or conditions during program execution. They are primarily used for debugging and testing purposes to check if certain conditions are true. Assertions help detect programming errors and provide a mechanism to halt the program’s execution or display an error message when a condition is not satisfied.

GoogleTest offers many types of assertions as described in GoogleTest Assertions. A few of them are described below as they will be used in our upcoming sections/chapters.

  1. Equality Assertions: Equality assertions are used to compare values for equality. The most commonly used assertion is ASSERT_EQ(expected, actual), which verifies that the expected and actual values are equal. For example:

     ASSERT_EQ(expected_value, your_function(function_arguments));
     // Verify that the expected value is equal to the value returned by your function.
    
     // You can also change the order if you prefer.
     ASSERT_EQ(your_function(function_arguments), expected_value);
    

    Related comparison assertions include ASSERT_NE, ASSERT_LT, ASSERT_LE, ASSERT_GT, and ASSERT_GE for performing inequality and ordering comparisons.

  2. Boolean Assertions: Boolean assertions are used to verify boolean conditions. For example, ASSERT_TRUE(condition) checks that the condition is true, while ASSERT_FALSE(condition) ensures that the condition is false.

     ASSERT_TRUE(isValid);  // Verify that the isValid flag is true
     ASSERT_FALSE(hasError);  // Verify that the hasError flag is false
    
  3. Exception Assertions: Exception assertions are used to validate that specific exceptions are thrown during the execution of code. In GoogleTest, you can use the ASSERT_THROW(statement, exceptionType) assertion. For example:

     ASSERT_THROW(throwException(), std::runtime_error);  // Verify that throwException() throws a std::runtime_error
    
  4. String Assertions: String assertions are used to compare string values. GoogleTest provides various string assertions, such as ASSERT_STREQ, ASSERT_STRNE, ASSERT_STRCASEEQ, and ASSERT_STRCASENE. These assertions allow you to compare strings for equality, inequality, or case-insensitive equality.

     ASSERT_STREQ("Hello", getString());  // Verify that getString() returns the exact string "Hello"
     ASSERT_STRCASEEQ("hello", getString());  // Verify that getString() returns "hello" in a case-insensitive manner
    

The majority of the macros listed above come as a pair with an EXPECT_ variant and an ASSERT_ variant. Upon failure, EXPECT_ macros generate nonfatal failures and allow the current function to continue running, while ASSERT_ macros generate fatal failures and abort the current function.

All assertion macros support streaming a custom failure message into them with the << operator, for example:

EXPECT_TRUE(my_condition) << "My condition is not true";

5. Anatomy of a Unit Test

A unit test in general follows a three step structure:

  1. Setup: In the setup phase, we prepare the necessary preconditions for the unit test. This involves creating any required objects, initialising variables, and setting up the environment to mimic the desired test scenario. The setup step ensures that the unit being tested has the necessary dependencies and context to execute successfully.
  2. Execution: The execution step involves invoking the unit under test with the specified inputs or parameters. This is the actual execution of the code being tested. The unit is executed with the predetermined inputs, and its output or behavior is observed.
  3. Verification: In the verification step, we check whether the actual output or behaviour matches the expected result. This typically involves making assertions or comparisons between the observed output and the expected output. If the assertions pass, the test is considered successful. Otherwise, if any assertion fails, it indicates a discrepancy between the expected and actual outcomes, highlighting a potential issue in the unit being tested.

This 3-step structure is often referred to as Arrange-Act-Assert (AAA) in textbooks and online resources.

6. Writing Effective Unit Tests

For writing your Unit Tests, it is advisable to follow the guidelines given below.

  1. Clear and Descriptive Names: Choose meaningful names for your test methods that accurately describe the scenario being tested. A good test method name should clearly convey the input, expected behavior, or outcome being verified. Avoid vague or generic names that don’t provide sufficient information about the purpose of the test.

    a. calculateTotal_WithValidInputs_ShouldReturnCorrectSum
    b. validateEmail_WithInvalidFormat_ShouldReturnFalse

  2. Consistent Formatting: Consistency in naming conventions helps maintain a uniform and predictable test suite. Choose a naming style and stick to it throughout your test methods. Some common conventions include using CamelCase or underscore-separated words. Additionally, consider using a prefix like test_ or a suffix like should to distinguish test methods from regular code.

  3. Single Responsibility: Each test method should focus on testing a single aspect or behaviour of the unit under test. Avoid testing multiple scenarios within a single test method, as it can make the test less readable and harder to diagnose when failures occur. Instead, break down complex scenarios into multiple smaller tests, each targeting a specific case or condition.

  4. Test Data Organization: Separate the test data from the test methods. Consider using dedicated variables or data structures to store test data. This allows for better readability and maintainability, as changes to the test data can be easily managed without modifying the test methods themselves.

7. Unit Test for the Fibonacci Sequence

With the above guidelines in mind, we can write some unit tests for the fibonacci.cpp that we saw in Chapter 1. The complete code is given below and can be found in Chapter2.

#include <iostream>
#include "gtest/gtest.h"

/*Returns the n'th term of the Fibonacci sequence.*/
int recursive_fibonacci(int n)
{
    if (n <= 1) {
        return n;
    } else {
        return recursive_fibonacci(n - 1) + recursive_fibonacci(n - 2);
    }
}

TEST(FibonacciTest, HandlesZeroInput) {
    EXPECT_EQ(recursive_fibonacci(0), 0);
}

TEST(FibonacciTest, HandlesValueOneAsInput) {
    EXPECT_EQ(recursive_fibonacci(1), 1);
}

TEST(FibonacciTest, HandlesPositiveInput) {
    EXPECT_EQ(recursive_fibonacci(5), 5);
}

int main(int argc, char **argv)
{
    testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}

On compiling and running the above program, we get the following output.

[==========] Running 3 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 3 tests from FibonacciTest
[ RUN      ] FibonacciTest.HandlesZeroInput
[       OK ] FibonacciTest.HandlesZeroInput (0 ms)
[ RUN      ] FibonacciTest.HandlesValueOneAsInput
[       OK ] FibonacciTest.HandlesValueOneAsInput (0 ms)
[ RUN      ] FibonacciTest.HandlesPositiveInput
[       OK ] FibonacciTest.HandlesPositiveInput (0 ms)
[----------] 3 tests from FibonacciTest (0 ms total)

[----------] Global test environment tear-down
[==========] 3 tests from 1 test suite ran. (0 ms total)
[  PASSED  ] 3 tests.

Exercise: Test for negative values

Modify the above program to throw an exception for negative values and write a test for this.

Solution

Add the following lines in your code. The full solution is given in Solution.

// Change the function as shown below.
int recursive_fibonacci(int n)
{
    if(n < 0)
        throw std::invalid_argument("Input must be a non-negative number");
    else if (n <= 1) {
        return n;
    } else {
        return recursive_fibonacci(n - 1) + recursive_fibonacci(n - 2);
    }
}

// Add the required test.
TEST(FibonacciTest, ThrowsExceptionNegativeInput) {
    EXPECT_THROW(recursive_fibonacci(-3), std::invalid_argument);
}

Summary

In this chapter, we learnt about the basics of GoogleTest and how to use it to write tests in C++.

Key Points

  • Amongst the various libraries available for testing in C++, GoogleTest is one of the most popular and easiest to use.

  • GoogleTest offers various macros such as equality assertions, boolean assertions, exception assertions etc. to write your unit tests.

  • A unit test typically follows a three-step structure: Setup, Execution and Verification, also widely termed the Arrange, Act and Assert (AAA) technique.

  • GoogleTest allows you to write your tests with and without your own main function. In case you do not want to write your own main function, you can link against the one provided in the libgtest_main.a library.


Introduction to Test Fixtures using GoogleTest

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • What is a test fixture?

  • How can I create my own test fixture using GoogleTest?

  • What do Setup and Teardown mean in relation to test fixtures?

Objectives
  • Understand the basics of test fixtures.

  • Write some test fixtures using GoogleTest.

  • Analyze the advantages of test fixtures.

  • Learn why we need Setup and Teardown functions.

  • Create your own Setup and Teardown functions.

  • Learn how to run a subset of tests.

1. Brief Introduction of Test Fixtures

A test fixture, in the context of unit testing, refers to the preparation and configuration needed to run a set of test cases. It includes the setup of the test environment, the creation of necessary objects or resources, and the cleanup steps after the tests are executed. Test fixtures help ensure that the tests are performed in a controlled and consistent environment, providing reliable and reproducible results.

In GoogleTest, a test fixture is implemented using a test fixture class. This class serves as a container for shared setup and cleanup logic, as well as any shared objects or resources required by the test cases within the fixture.

1.1 When is a Test Fixture Needed?

A test fixture is typically used in the following scenarios:

  1. Shared Setup and Teardown: When multiple test cases require identical preparation (Setup) and cleanup (Teardown), a test fixture is beneficial. Instead of duplicating the setup and teardown code in each test case, we can define it once in the test fixture and reuse it across all the tests.

  2. Reducing Code Duplication: Test fixtures help in reducing code duplication. By encapsulating the common setup and teardown logic within a fixture, we avoid duplicating the same code in multiple test cases. This improves code maintainability and reduces the chances of errors due to inconsistent or incomplete setup/teardown.

  3. Isolation and Independence: Test fixtures provide a level of isolation and independence for each test case. Each test case within a fixture runs in its own instance of the fixture class, ensuring that changes made by one test case do not affect the others. This allows for parallel execution of test cases without interference.

Let us understand how to create a test fixture, setup and teardown functions with examples.

2. Problem under consideration for testing

In order to create our own test fixture, we will first explain the context of the problem that we are trying to solve. We will then add tests, making use of test fixtures, to check that our code works as intended.

Consider that you want to write a program to manage the details of an employee. The program should allow you to add basic details of an employee such as:

  1. Name
  2. Age
  3. Basic Salary
  4. Number of years of employment
  5. Basic Bonus the employee has received in this year.

The program should calculate the Net Bonus, Tax and Salary based on the following rules:

  1. Bonus Rule: An employee gets an additional £1000 bonus if she or he has worked for more than 10 years.
  2. Tax Rule: Tax is calculated on the combination of basic salary and net bonus as shown below.

    • 0% if this amount is less than £10k.
    • 10% for amounts between £10k and £20k.
    • 20% for amounts between £20k and £50k.
    • 50% for amounts greater than £50k.

Based on above, we can declare the employee class in employee.h as shown below (Declaration of Employee class).

class Employee
{
private:
    std::string name;
    float age;
    double base_salary; //salary before calculating tax and adjusting bonus.
    double number_years_employed;

    double basic_bonus; //bonus for current year.
    double net_bonus; //bonus after adjusting for experience.
    
    double tax_amount;
    double net_salary; //salary after calculating tax and adjusting bonus.

public:
    // Constructor.
    Employee(const std::string& employee_name, float employee_age, 
             double employeeSalary, double employeeNumberYearsEmployed,
             double employeeBonus);

    // Public member functions to set values.
    void setName(const std::string& employee_name); 
    void setAge(float employee_age) ;
    void setBaseSalary(double employeeSalary); 
    void setNumberYearsEmployed(double employeeNumberYearsEmployed); 
    void SetBasicBonus(double employeeBonus);

    void calcNetBonus(); //To calculate net bonus while considering experience.
    void calcTaxAmount(); // To calculate tax to be paid based on salary with bonus
    void calcNetSalary(); // To calculate net salary after adjusting tax and bonus.

    // Getter functions.
    std::string getName() const;
    float getAge() const;
    double getBasicSalary() const;
    double getNumberYearsEmployed() const;
    double getBasicBonus() const;
    double getNetBonus() const;
    double getTaxAmount() const;
    double getNetSalary() const;

    void displayInfo() const;

    // Destructor
    ~Employee();
};

For the definition part, we include only a few functions here. You can find the complete definition of this class in employee.cpp.

Employee::Employee(const std::string& employee_name, float employee_age,
                   double employeeSalary, double employeeNumberYearsEmployed,
                   double employeeBonus)
                    : age(employee_age),
                      base_salary(employeeSalary),
                      number_years_employed(employeeNumberYearsEmployed),
                      basic_bonus(employeeBonus),
                      net_bonus(0),
                      tax_amount(0),
                      net_salary(0)
{
    setName(employee_name);
    calcNetBonus();
    calcTaxAmount();
    calcNetSalary();

}

void Employee::setName(const std::string& employee_name) 
{
    if(employee_name == "")
    {
        throw std::invalid_argument("Name cannot be empty");
    }
    name = employee_name;
}

void Employee::setAge(float employee_age) 
{
    age = employee_age;
}

With this code in place, we now have the necessary pieces to test our Employee class. Let us see this in action.

3. Unit tests for our employee class without test fixtures

In order to clearly demonstrate why a test fixture is needed, we first write some tests for our employee class without using a fixture. This will help us understand why a fixture is useful and how to use one.

For this subsection, let us assume that we are checking two functionalities of our employee class (for the code, please see 1_employeetest.cpp), which are:

  1. We can set the name of employee correctly.
  2. We can set the age correctly.

The code for these two tests (which is entirely based on what we learnt in the second module of this course) is given below.

// Test if we can set the name of an employee.
TEST(EmployeeTest, CanSetName) {
    Employee employee{"John", 25, 10000, 5, 1000};
    employee.setName("John Doe");
    EXPECT_EQ(employee.getName(), "John Doe");
}

// Test if we can set the age of an employee.
TEST(EmployeeTest, CanSetAge) {
    Employee employee{"John", 25, 10000, 5, 1000};
    employee.setAge(30);
    EXPECT_EQ(employee.getAge(), 30);
}

While the above tests solve our problem, there is an issue of code duplication and object creation for each test. As we can see, in each test we have to create an instance of the Employee class using the statement Employee employee{"John", 25, 10000, 5, 1000};. This is against the DRY (Don’t Repeat Yourself) principle (https://en.wikipedia.org/wiki/Don%27t_repeat_yourself).

Moreover, all our tests depend on the same Employee class. It therefore makes sense to declare the instance once and let GoogleTest manage its creation for each test case. Let us see this in action in the next section.

4. Test fixture for our employee class

Now that we know why we need a test fixture, let us first learn the basic syntax of a test fixture in GoogleTest and then write the code for it.

In GoogleTest, a test fixture is created by writing a class derived from ::testing::Test using the public access specifier. The general syntax is shown below.

class Your_test_fixture_class_name : public::testing::Test {
    public:
        ClassUnderTest publicInstance;

    protected:
        ClassUnderTest protectedInstance;

    private:
        ClassUnderTest privateInstance;
};

Please note that you do not need to use all three access specifiers (public, protected and private) shown above. The choice depends on the following:-

  1. public: This is the most commonly used access specifier in test fixtures. Members (including the instance of the class we want to test) can be accessed from anywhere, including the test bodies defined outside the fixture class.

  2. protected: Because each TEST_F body is compiled into a class derived from the fixture, protected members are still accessible inside the tests, but not from unrelated code. This is useful if we want to limit the accessibility of the fixture's members to the tests themselves.

  3. private: Members are visible only to the fixture class itself and cannot be used in the test bodies. This is useful for encapsulating helper logic within the fixture and preventing external access.

For this course, we will use the public access specifier. Once we have the test fixture class, we write our tests with the TEST_F macro available in GoogleTest instead of the TEST macro we have used so far. The general syntax is given below.

TEST_F(Your_test_fixture_class_name, Your_test_name) {
    // Test logic goes here
}

Since we now have all the basic tools to create our own test fixtures, let us rewrite the above tests using a fixture. The code is in 2_employeetest.cpp; for reference, the tests are shown below.

// Create a test fixture.
class EmployeeTestFixture : public::testing::Test {
    public:
        Employee employee{"John", 25, 45000, 12, 5000};

};

// Test if we can set the name of an employee.
TEST_F(EmployeeTestFixture, CanSetName) {
    employee.setName("John Doe");
    EXPECT_EQ(employee.getName(), "John Doe");
}

// Test if we can set the age of an employee.
TEST_F(EmployeeTestFixture, CanSetAge) {
    employee.setAge(30);
    EXPECT_EQ(employee.getAge(), 30);
}

5. Why do we need SetUp() and TearDown() in a test fixture?

So far, our test fixture class only creates an instance of the class under test. In many cases, we want some common action for all our tests, such as adding an entry to a table, connecting to a database or fetching a response from a site. Let us try to understand this with an example, which will set the background for the SetUp() and TearDown() functions.

Let us consider that we are creating a table which stores the details of various employees. The table allows us to add new entries, remove employees and query the number of entries. The declaration of the table class is given in employee_table.h and the definitions are in employee_table.cpp. The member functions of the table class are listed below for reference.

// Member function to add employees into the table.
void addEmployee(const Employee& employee);

// Member function to remove employees from the table.
void removeEmployee(const std::string& employeeName);

// Member function to display information (names) of all employees in the table.
void displayEmployeesName() const;

// Function to check if the table is empty.
bool isEmpty() const;

// Function to get the number of entries in the table.
int getEntryCount() const;

We want to test our table class. In particular, we are interested in testing the following:

  1. Table is not empty after adding an employee.
  2. Number of entries is one after adding an employee.
  3. Number of entries in table reduces by one after removing an employee (assuming that there was at least one entry in the table).

Using our knowledge of test fixtures from the previous subsection, we can write the tests as shown below. Please see the file 3_emp_table_test.cpp for more details.

// Test fixture for EmployeeTable class.
class EmployeeTableTest : public testing::Test {
    public:
        EmployeeTable table;
};

// Test that the table is not empty after adding an employee.
TEST_F(EmployeeTableTest, TableIsNotEmptyAfterAddingEmployee) {
    Employee new_employee("John Doe", 30, 5000, 5, 1000);
    table.addEmployee(new_employee);
    EXPECT_FALSE(table.isEmpty());
}

// Test that number of entries is one after adding an employee.
TEST_F(EmployeeTableTest, NumberOfEntriesIsOneAfterAddingEmployee) {
    Employee new_employee("John Doe", 30, 5000, 5, 1000);
    table.addEmployee(new_employee);
    EXPECT_EQ(table.getEntryCount(), 1);
}

// Test that number of entries in table reduces by one after removing an employee.
TEST_F(EmployeeTableTest, NumberOfEntriesIsOneLessAfterRemovingEmployee) {
    Employee new_employee("John Doe", 30, 5000, 5, 1000);
    table.addEmployee(new_employee);
    table.removeEmployee("John Doe");
    EXPECT_EQ(table.getEntryCount(), 0);
}

As we can see above, every test first creates an instance of Employee using a statement like Employee new_employee{...} and then adds it to the table with table.addEmployee(new_employee). Our tests therefore share some common setup, and the SetUp() function comes to the rescue in exactly such scenarios.

6. The SetUp() and TearDown() functions in a test fixture

The SetUp() function in a test fixture is responsible for executing the common setup instructions for our tests. Similarly, the TearDown() function is responsible for clean-up operations such as freeing allocated memory or closing a database connection.

To create a SetUp() function, we simply define it in our fixture class, overriding the virtual function declared in GoogleTest's testing::Test class. The same applies to TearDown().

For our table class, we can create the SetUp() and TearDown() functions as shown below. For more details, please see 4_table_test_with_setup.cpp.

// Test fixture for EmployeeTable class.
class EmployeeTableTest : public testing::Test {
    public:
        EmployeeTable table;
};

TEST_F(EmployeeTableTest, TableIsEmptyWhenCreated) {
    EXPECT_TRUE(table.isEmpty());
}

TEST_F(EmployeeTableTest, TableHasSizeZeroWhenCreated) {
    EXPECT_EQ(table.getEntryCount(), 0);
}

class EmployeeTableWithOneEmployee : public testing::Test {
    public:
        EmployeeTable table;
        Employee* employee;

        void SetUp() override {
            employee = new Employee("John Doe", 30, 5000, 5, 1000);
            table.addEmployee(*employee);
        }

        void TearDown() override {
            delete employee;
            employee = nullptr;
        }
};

TEST_F(EmployeeTableWithOneEmployee, TableIsNotEmptyWhenCreatedWithOneEmployee) {
    EXPECT_FALSE(table.isEmpty());
}

TEST_F(EmployeeTableWithOneEmployee, NumberOfEntriesIsOneWhenCreatedWithOneEmployee) {
    EXPECT_EQ(table.getEntryCount(), 1);
}

TEST_F(EmployeeTableWithOneEmployee, NumberOfEntriesIsOneLessAfterRemovingEmployee) {
    table.removeEmployee("John Doe");
    EXPECT_EQ(table.getEntryCount(), 0);
}

As we can see, our tests look much cleaner with the SetUp() and TearDown() functions. We created a second test fixture for them because the first two tests do not require this setup.

Exercise: Write a test function to check the display on screen.

In some cases, we may need to check that the output or message displayed on screen is correct. For example, we may want to check that the names of employees are displayed correctly. Our EmployeeTable class has a function named displayEmployeesName which displays the names of all employees in the table. The purpose of this exercise is to write a test function that checks whether it works correctly.

We will make use of std::stringstream to capture the output of the displayEmployeesName function. If you are interested in why a stringstream is required, you can read the article Check my output is correct.

Solution

The full solution is given in Solution. We present some important parts of the solution below.

// Check the display function works correctly.
TEST_F(EmployeeTableWithOneEmployee, DisplayFunctionWorksCorrectly) {

    // STEP 1: ARRANGE
    std::stringstream s_input;

    // STEP 2: ACT
    // You would use the following line in your application (production use) to display the employees' names on the screen.
    // However, this line is not necessary for the test and has been introduced only for the demonstration purpose.
    table.displayEmployeesName(std::cout);

    // Pass a string stream object to the function under test instead of std::cout.
    // Later we will use it to compare with the expected output.
    table.displayEmployeesName(s_input);

    // Store expected output in a string.
    std::string expected_output = "-------------------------------------------------- \n"
                                  "John Doe\n"
                                  "-------------------------------------------------- \n";

    // STEP 3: ASSERT
    EXPECT_EQ(s_input.str(), expected_output);
}

7. Tests Filtering in GoogleTest

Sometimes we may have tests that take a long time to run. At other times, while developing, we may not want to run the entire test suite but only the one test we have recently added.

GoogleTest allows us to select or omit tests using the command line option --gtest_filter. The general syntax to use gtest_filter is

$ ./your_executable --gtest_filter=Pattern[-Pattern]

where Pattern is a valid string pattern: '*' matches any substring and '?' matches any single character. The -Pattern form runs all tests except those matching the pattern. Instead of a pattern, we can also use a full test name in the form test_suite_name.test_name.

Let us run our table tests in the file 4_table_test_with_setup.cpp. Let us assume that the executable name is employee_table_tests. We get the following output.

[==========] Running 5 tests from 2 test suites.
[----------] Global test environment set-up.
[----------] 2 tests from EmployeeTableTest
[ RUN      ] EmployeeTableTest.TableIsEmptyWhenCreated
[       OK ] EmployeeTableTest.TableIsEmptyWhenCreated (0 ms)
[ RUN      ] EmployeeTableTest.TableHasSizeZeroWhenCreated
[       OK ] EmployeeTableTest.TableHasSizeZeroWhenCreated (0 ms)
[----------] 2 tests from EmployeeTableTest (0 ms total)

[----------] 3 tests from EmployeeTableWithOneEmployee
[ RUN      ] EmployeeTableWithOneEmployee.TableIsNotEmptyWhenCreatedWithOneEmployee
[       OK ] EmployeeTableWithOneEmployee.TableIsNotEmptyWhenCreatedWithOneEmployee (0 ms)
[ RUN      ] EmployeeTableWithOneEmployee.NumberOfEntriesIsOneWhenCreatedWithOneEmployee
[       OK ] EmployeeTableWithOneEmployee.NumberOfEntriesIsOneWhenCreatedWithOneEmployee (0 ms)
[ RUN      ] EmployeeTableWithOneEmployee.NumberOfEntriesIsOneLessAfterRemovingEmployee
[       OK ] EmployeeTableWithOneEmployee.NumberOfEntriesIsOneLessAfterRemovingEmployee (0 ms)
[----------] 3 tests from EmployeeTableWithOneEmployee (0 ms total)

[----------] Global test environment tear-down
[==========] 5 tests from 2 test suites ran. (0 ms total)
[  PASSED  ] 5 tests.

Since we defined 5 tests, all of them run when we execute the binary. Now let us filter the tests so that only those associated with EmployeeTableWithOneEmployee run. We use the following command

$ ./employee_table_tests --gtest_filter=*OneEmployee*

The output is

Running main() from /home/lokesh/My_compiled_Libraries/test/googletest/googletest/src/gtest_main.cc
Note: Google Test filter = *OneEmployee*
[==========] Running 3 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 3 tests from EmployeeTableWithOneEmployee
[ RUN      ] EmployeeTableWithOneEmployee.TableIsNotEmptyWhenCreatedWithOneEmployee
[       OK ] EmployeeTableWithOneEmployee.TableIsNotEmptyWhenCreatedWithOneEmployee (0 ms)
[ RUN      ] EmployeeTableWithOneEmployee.NumberOfEntriesIsOneWhenCreatedWithOneEmployee
[       OK ] EmployeeTableWithOneEmployee.NumberOfEntriesIsOneWhenCreatedWithOneEmployee (0 ms)
[ RUN      ] EmployeeTableWithOneEmployee.NumberOfEntriesIsOneLessAfterRemovingEmployee
[       OK ] EmployeeTableWithOneEmployee.NumberOfEntriesIsOneLessAfterRemovingEmployee (0 ms)
[----------] 3 tests from EmployeeTableWithOneEmployee (0 ms total)

[----------] Global test environment tear-down
[==========] 3 tests from 1 test suite ran. (0 ms total)
[  PASSED  ] 3 tests.

Finally, let us assume that we want to run all tests except EmployeeTableWithOneEmployee.NumberOfEntriesIsOneLessAfterRemovingEmployee. We can use the following command

$ ./employee_table_tests --gtest_filter=-EmployeeTableWithOneEmployee.NumberOfEntriesIsOneLessAfterRemovingEmployee

This will run the other 4 tests; the leading minus sign excludes the test named in the filter.

Summary

In this chapter, we learnt the basics of test fixtures and how to use them to write tests. We also learnt the importance of the SetUp() and TearDown() functions and saw examples of how to write them. Finally, we learnt about test filters.

Key Points

  • A test fixture is a useful tool when writing unit tests because it reduces code duplication, maintains test independence and takes care of common setup and teardown operations.

  • A test fixture ensures that different tests do not interfere with each other, because GoogleTest creates a new instance of the fixture class for every test.

  • GoogleTest allows us to run a subset of tests which can be quite useful if our full test suite takes a long time to run or if we want to develop and check the functionality of a particular test.


Parameterised Tests using GoogleTest

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • What is a parameterised test?

  • How can I write my own parameterised tests using GoogleTest?

  • How to use test fixtures with parameterised tests?

Objectives
  • Understand the need for parameterised tests.

  • Learn how to create a parameterised test using GoogleTest.

  • Appreciate the advantages of parameterised tests.

  • Create a parameterised test based on test fixture to combine the advantages of both.

1. Introduction to Parameterised Tests

Parameterised tests, also known as data-driven tests, are a feature provided by testing frameworks like Google Test that allows us to write a single test case that can be executed with different sets of test data or inputs. Instead of duplicating similar test cases with slight variations, parameterised tests enable us to define a test once and run it with multiple inputs or test data.

In order to understand the importance of parameterised tests and why we need them, let us consider a small example. For this chapter, we will use the Employee class that we created in the last chapter.

Let us suppose that we want to test that the net bonus calculation works correctly for different numbers of years of experience. Remember that our Employee class adds an additional bonus of £1000 when an employee has worked for more than 10 years. As a first approach, we might be tempted to write multiple tests for the same function using test fixtures, in the same way we have been doing so far.

For example, we may write our tests simply using test fixtures as shown below. Please see the code in file 1_not_parameterised.cpp.

class EmployeeTestFixture : public::testing::Test {
    public:
        Employee employee{"John", 25, 8000, 3, 2000};

};

TEST_F(EmployeeTestFixture, NetBonusIsCorrectWhenYearsLessThan10) {
    employee.setNumberYearsEmployed(5);
    EXPECT_EQ(employee.getNetBonus(), 2000);
}

TEST_F(EmployeeTestFixture, NetBonusIsCorrectWhenYearsGreaterThan10) {
    employee.setNumberYearsEmployed(15);
    EXPECT_EQ(employee.getNetBonus(), 3000);
}

While the above solution works, it has a serious drawback. If we look carefully at the tests, we see that the test logic is repeated in both; the only difference between them is the input and output values. Managing such tests becomes problematic as the number of test conditions (or input/output values) increases. Imagine if the bonus also depended on productivity, experience, age and so on: the number of input combinations to test grows combinatorially with the number and range of arguments.

An immediate solution that comes to mind to solve this problem is to make use of a loop in C++. For each test, we may use a different input value and expect a different output. Let us see how we can use a loop to solve the same problem as described above.

The code given below is present in 2_test_using_for_loop.cpp.

TEST_F(EmployeeTestFixture, NetBonusIsCorrectForDifferentYears) {
    auto input = std::vector<int>{5, 15};
    auto expected_output = std::vector<int>{2000, 3000};
    for (int i = 0; i < input.size(); i++) {
        employee.setNumberYearsEmployed(input[i]);
        EXPECT_EQ(employee.getNetBonus(), expected_output[i]);
    }
}

Let us try to run this code and see if we get the desired output (shown below).

[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from EmployeeTestFixture
[ RUN      ] EmployeeTestFixture.NetBonusIsCorrectForDifferentYears
[       OK ] EmployeeTestFixture.NetBonusIsCorrectForDifferentYears (0 ms)
[----------] 1 test from EmployeeTestFixture (0 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (0 ms total)
[  PASSED  ] 1 test.

Although the for loop served our purpose and we were able to run our test for multiple values, there is a big problem with this approach. If we look carefully at the output, we can see that both test cases (or all of them, if there are more) were combined into a single test. This violates the general rule that a test should check only one thing, ideally with one assertion per test.

Moreover, the problem gets worse when one of the cases fails. To understand what happens on failure when using a for loop, let us intentionally change one expected output to an incorrect value. In the file 2_test_using_for_loop.cpp, make the following change.

TEST_F(EmployeeTestFixture, NetBonusIsCorrectForDifferentYears) {
    auto input = std::vector<int>{5, 15};
    auto expected_output = std::vector<int>{2000, 7000};
    for (int i = 0; i < input.size(); i++) {
        employee.setNumberYearsEmployed(input[i]);
        EXPECT_EQ(employee.getNetBonus(), expected_output[i]);
    }
}

On running the code with this change, we get the following output.

[==========] Running 1 test from 1 test suite.
[----------] Global test environment set-up.
[----------] 1 test from EmployeeTestFixture
[ RUN      ] EmployeeTestFixture.NetBonusIsCorrectForDifferentYears
2_Test_using_for_loop.cpp:21: Failure
Expected equality of these values:
  employee.getNetBonus()
    Which is: 3000
  expected_output[i]
    Which is: 7000
[  FAILED  ] EmployeeTestFixture.NetBonusIsCorrectForDifferentYears (0 ms)
[----------] 1 test from EmployeeTestFixture (0 ms total)

[----------] Global test environment tear-down
[==========] 1 test from 1 test suite ran. (0 ms total)
[  PASSED  ] 0 tests.
[  FAILED  ] 1 test, listed below:
[  FAILED  ] EmployeeTestFixture.NetBonusIsCorrectForDifferentYears

 1 FAILED TEST

From the output, we can clearly see that the whole test fails even though one of the cases was right. Moreover, the output does not make it obvious which case failed.

The solution to these issues is to make use of parameterised tests and the next section describes that.

2. Parameterised tests in GoogleTest

In GoogleTest, parameterised tests are implemented using the TEST_P macro, where "P" stands for parameterised. We define a test class and then specify multiple sets of input data using the INSTANTIATE_TEST_SUITE_P macro. Each set of input data represents a different instance of the test, and the framework runs the test case for each instance.

A parameterised test in GoogleTest generally requires the following components.

  1. A parameterised test class: Similar to creating a test fixture, we need a class derived from testing::TestWithParam<T>, where T can be any valid C++ type.
class YourTestParameterisedClass : public::testing::TestWithParam<T> {
    public:
        ClassUnderTest publicInstance;
};
  2. Data structure to hold your values: We need a data structure to store our values (both input and expected output). We can use a struct for this purpose as shown below.
struct MyStruct{
    int input;
    int output;
    
    // constructor of the values struct
    MyStruct(int in, int out) : input(in), output(out) {}
};

Once you have defined a structure to hold your values, you can create an instance of it with the actual set of input and output values as shown below.

MyStruct MyValues[] = {
    MyStruct{InputVal1, OutputVal1},  //using constructor to create an instance of MyStruct.  
    MyStruct{InputVal2, OutputVal2}
};
  3. Create your test with the TEST_P macro: Instead of the TEST_F macro that we used with test fixtures, we use the TEST_P macro, where P stands for parameterised, as shown below.
TEST_P(YourTestParameterisedClass, NameofTest) {
    // Test logic goes here.
}
  4. Instantiate your test: Finally, we instantiate our test using the INSTANTIATE_TEST_SUITE_P macro. The general syntax of this macro is given below.
INSTANTIATE_TEST_SUITE_P(SuitableNameTest,
                         YourTestParameterisedClass,
                         ValuesIn(MyValues));

In the above cell, the first argument to INSTANTIATE_TEST_SUITE_P can be any suitable name; GoogleTest adds it as a prefix to the test name when you run the tests. The second argument is the name of the parameterised class you created, which is also the first argument of the TEST_P macro. Finally, the last argument uses the ValuesIn() helper defined in GoogleTest, which injects the test values into the parameterised test one by one.

Let us see how we use the above concepts for an actual test that we have been writing in our previous subsections. For more details, please see 3_parameterised_not_using_fixture.cpp.

// Create a structure that holds the input and output values.
// This structure is used to inject values into the test.
struct TestValues{
    int input;
    int output;
    
    //constructor of values struct
    TestValues(int in, int out) : input(in), output(out) {}
};

// Create a parameterised class by deriving from testing::TestWithParam<T> where T could be any valid C++ type.
class EmployeeTestParameterised : public::testing::TestWithParam<TestValues> {
    public:
        Employee employee{"John", 25, 8000, 3, 2000};
};

// Create an array of values (of type TestValues) to be injected into the test.
TestValues values[] = {
    TestValues{5, 2000},
    TestValues{15, 3000}
};

//Test net bonus works fine for different number of years.
TEST_P(EmployeeTestParameterised, NetBonusIsCorrectForDifferentYears) {
    TestValues current_test_case_value = GetParam();
    employee.setNumberYearsEmployed(current_test_case_value.input);
    EXPECT_EQ(employee.getNetBonus(), current_test_case_value.output);
}

// Instantiate the test case with the values array.
INSTANTIATE_TEST_SUITE_P( NetBonusIsCorrectForDifferentYears, 
                         EmployeeTestParameterised,
                         testing::ValuesIn(values));

On running the above file, we see the following output.

[==========] Running 2 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 2 tests from NetBonusIsCorrectForDifferentYears/EmployeeTestParameterised
[ RUN      ] NetBonusIsCorrectForDifferentYears/EmployeeTestParameterised.NetBonusIsCorrectForDifferentYears/0
[       OK ] NetBonusIsCorrectForDifferentYears/EmployeeTestParameterised.NetBonusIsCorrectForDifferentYears/0 (0 ms)
[ RUN      ] NetBonusIsCorrectForDifferentYears/EmployeeTestParameterised.NetBonusIsCorrectForDifferentYears/1
[       OK ] NetBonusIsCorrectForDifferentYears/EmployeeTestParameterised.NetBonusIsCorrectForDifferentYears/1 (0 ms)
[----------] 2 tests from NetBonusIsCorrectForDifferentYears/EmployeeTestParameterised (0 ms total)

[----------] Global test environment tear-down
[==========] 2 tests from 1 test suite ran. (0 ms total)
[  PASSED  ] 2 tests.

In this output, there are two things worth noting:-

  1. As expected, we are now running two tests as compared to just one in the case of a for loop.
  2. The test name NetBonusIsCorrectForDifferentYears/EmployeeTestParameterised.NetBonusIsCorrectForDifferentYears/0 is a combination of the following:-
    • A Prefix NetBonusIsCorrectForDifferentYears coming from INSTANTIATE_TEST_SUITE_P.
    • Parameterised class name EmployeeTestParameterised coming from the first argument of TEST_P macro.
    • Test name NetBonusIsCorrectForDifferentYears coming from the second argument of TEST_P macro.
    • Finally, the iteration number.

With this parameterised test, we were able to solve the issues discussed above. However, in doing so, we changed the test fixture and converted it to use the TEST_P macro, so our previous tests based on the TEST_F macro will no longer work. The important question is: how can we keep all our useful fixture-based tests while still being able to add parameterised tests? The solution is to combine test fixtures with parameterised tests, and the next subsection explains that.

Exercise 1: Parameterised tests for non-member functions (i.e. functions which are not part of any class)

Consider a simple function int Sum(int a, int b) that takes two integer values a and b and returns their sum. Write a parameterised test for this function using GoogleTest. Feel free to search online for how to write parameterised tests for non-member functions.

Solution

The full solution is given in Solution. We present some important parts of the solution below.

// Define a parameterised test class
class ParameterizedTest : public testing::TestWithParam<std::pair<int, int>> {
};

// Define the test case with the parameterized test
TEST_P(ParameterizedTest, TestSum) {
    // Get the parameter values
    int a = GetParam().first;
    int b = GetParam().second;

    // Call your normal function
    int result = Sum(a, b);

    // Perform assertion
    ASSERT_EQ(a + b, result);
}

// Define the test data
INSTANTIATE_TEST_SUITE_P(Default, ParameterizedTest, testing::Values(
    std::make_pair(1, 1),
    std::make_pair(2, 3),
    std::make_pair(-5, 10)
));

Exercise 2: Multiple parameterised tests

Suppose you have the following 3 functions that you want to test using parameterised tests:-

  1. int Sum(int a, int b) as defined in the previous exercise,

  2. double Multiply(double a, double b), a function which multiplies the two numbers, and

  3. double Power(double a, int b) function which raises a number a to an integer power b.

For the sake of simplicity, assume that you can use the same parameters for your Multiply function as you have used in your Sum function. However, for the Power function, the parameters are different. Write a parameterised test for all the three functions.

Solution

Although we are testing 3 functions with parameterised tests, we do not need 3 INSTANTIATE_TEST_SUITE_P macros in our code. The INSTANTIATE_TEST_SUITE_P macro looks at the test suite name (its 2nd argument), and if it is the same it instantiates all tests in that suite. Therefore, in this exercise, we can use the same INSTANTIATE_TEST_SUITE_P for the Sum and Multiply functions, and a different one for the Power function.

We provide some portion of solution code below. Full code can be found in Solution

// Define the test case with the parameterized test for multiply function.
TEST_P(ParameterizedTest, TestMultiply) {
    // Your test logic goes here.
}

// Define a parameterised test class
class ParameterizedTest_Power : public testing::TestWithParam<std::tuple<double, int, double>> {
};

//Check if the power function works fine for different values of a and b
TEST_P(ParameterizedTest_Power, TestPowerFun){
    // Get the parameter values
    double a = std::get<0>(GetParam());
    int b = std::get<1>(GetParam());
    double answer = std::get<2>(GetParam());

    // Call your normal function
    double result = Power(a, b);

    // Perform assertion
    ASSERT_DOUBLE_EQ(answer, result);
}

// Define the test data
INSTANTIATE_TEST_SUITE_P(PowTest, ParameterizedTest_Power, testing::Values(
    std::make_tuple(1, 1, 1),
    std::make_tuple(2, 3, 8),
    std::make_tuple(2.5, 2, 6.25)
));

3. Parameterised test based on test fixture

In order to create a parameterised test from a test fixture, all we need to do is create a parameterised test class which derives from both the test fixture class and the testing::WithParamInterface<T> class (defined in GoogleTest).

// create a parameterised test class from the fixture defined above.
class YourParameterisedClass : public YourFixtureClass,
                               public WithParamInterface<T> {
};

For the purpose of demonstration, let us assume that we now want to check our tax calculation function getTaxAmount(), which has more branches than the bonus calculation. For the complete code, see the file 4_param_test_based_fixture.cpp. A small section of the code is given below for reference.

// Create a test fixture.
class EmployeeTestFixture : public::testing::Test {
    public:
        Employee employee{"John", 25, 8000, 5, 1000};

};

// Create a structure that holds the input and output values.
// This structure is used to inject values into the test.
struct TestValues{
    double inp_salary;
    double inp_bonus;
    double inp_years_employed;
    double out_tax;
    
    //constructor of values struct
    TestValues(double salary, double bonus, double years_employed, double tax) 
              : inp_salary(salary), 
                inp_bonus(bonus), 
                inp_years_employed(years_employed), 
                out_tax(tax) {}
};

// create a parameterised test class from the fixture defined above.
class EmployeeTestParameterisedFixture : public EmployeeTestFixture, 
                                         public testing::WithParamInterface<TestValues> {
};

// Create an array of values (of type TestValues) to be injected into the test.
TestValues values[] = {
    // value are in format: salary, basic_bonus, years_employed, tax
    TestValues{8000, 2000, 3, 0},
    TestValues{8000, 2000, 11, 100},
    TestValues{60000, 8000, 13, 16500}
};

// Test that the tax calculation is correct.
TEST_P(EmployeeTestParameterisedFixture, TaxCalculationIsCorrect) {
    TestValues current_test_case_value = GetParam();
    employee.setBaseSalary(current_test_case_value.inp_salary);
    employee.SetBasicBonus(current_test_case_value.inp_bonus);
    employee.setNumberYearsEmployed(current_test_case_value.inp_years_employed);
    EXPECT_EQ(employee.getTaxAmount(), current_test_case_value.out_tax);
}

// Instantiate the test case with the values array.
INSTANTIATE_TEST_SUITE_P( CheckTaxCalculation, 
                          EmployeeTestParameterisedFixture,
                          testing::ValuesIn(values));

The major change compared to our previous example is shown below; this change is responsible for generating a parameterised test from a test fixture.

class EmployeeTestParameterisedFixture : public EmployeeTestFixture, 
                                         public WithParamInterface<TestValues> {
};

In addition, we used the function GetParam(), defined in gtest.h. It retrieves the current parameter value passed in via the ValuesIn() function, so we can use it in the test logic as required. In this case it gives us, for each test case, the four values inp_salary, inp_bonus, inp_years_employed and out_tax. Thus, GetParam() provides a convenient way to retrieve multiple values and use them in our test logic.

4. Advantages of Parameterised tests

From the above discussion, we can see that the parameterised tests have the following advantages.

  1. Code Reusability: With parameterised tests, we can write a single test case that can be executed with different inputs or test data. This promotes code reusability by eliminating the need to duplicate similar test cases. Instead, we can define the test logic once and apply it to multiple scenarios, reducing code duplication and improving maintainability.

  2. Increased Test Coverage: Parameterised tests allow us to easily test a wide range of input values or test cases without writing separate test cases for each variation. This enables us to achieve better test coverage by covering various combinations, edge cases, and boundary values in a concise manner.

  3. Simplified Test Maintenance: When changes are required in the test logic, having parameterised tests simplifies the maintenance process. Instead of modifying multiple test cases individually, we only need to update the single test case, which will automatically be executed with the new test data. This saves time and effort in maintaining and updating the tests.

  4. Simplified Test Reporting: Parameterised tests provide a concise way to report test results for multiple test cases. Each instance of the parameterised test is reported individually, allowing us to identify which specific inputs or test data passed or failed. This facilitates quick identification and debugging of issues.

Summary

In this chapter, we learnt about the basics of parameterised tests and how to use them in GoogleTest. We also learnt how to combine test fixture with parameterised tests. Finally, we learnt the advantages of parameterised tests.

Key Points

  • Parameterised tests can be used to repeat a specific test with different inputs, reducing code duplication.

  • Parameterised tests are individual tests, so they are more concise and easier to maintain than using a loop to test multiple conditions.

  • Fixtures can be combined with parameterised tests for maximum flexibility.


Tests doubles and dependency injection

Overview

Teaching: 30 min
Exercises: 15 min
Questions
  • What are test doubles?

  • What types of test doubles are there?

  • How do I configure and run tests with Google Mocks?

Objectives
  • Understand the different types of test doubles and when to use them

  • Understand the value of dependency injection

  • Re-write code enabling dependency injection

  • Apply test doubles in different use cases

  • Apply mocking in different use cases

Testing untestable code

Sooner or later you will face a piece of code that is not straightforward to write a test for. It might call a function that requests data from a piece of hardware, need access to a database that is not available in the testing environment, or simply trigger a complex and time-consuming computation that is only suited to running on a supercomputer. Whatever the reason, you have a problem. Moreover, you might want to test whether some intermediate result in the calculation is valid, and not just the final output.

There is a possible solution: replace the problematic function with another one that, for the purposes of the test, behaves in a similar manner but without the problematic functionality of the original. These replacements are called test doubles.

Test doubles

Test doubles are artificial replacements for functions or objects that prevent - or hinder - testing a particular part of the code. Depending on what these replacements do, and also on the programming language, they receive different names. From Wikipedia, we have:

  • Dummy: an object that is passed around but never actually used, typically just to fill a parameter list.

  • Stub: a replacement that provides canned answers to the calls made during the test.

  • Spy: a stub that also records information about how it was called, so the test can inspect it afterwards.

  • Fake: a replacement with a real, working implementation, but one that takes some shortcut which makes it unsuitable for production (an in-memory database is a classic example).

  • Mock: an object pre-programmed with expectations about the calls it is supposed to receive, which are verified at the end of the test.

Which type of test double to use will depend on the specific code you want to test and what the double is meant to replace. Functions are often replaced with stubs or fakes, while objects of complex classes with multiple methods or attributes require more elaborate mocks.

Now, the complexity becomes how to use them!

Dependency injection

Consider the following function that normalizes an array according to some definition of norm (ignore whether this is the most performant approach or not):

void normalize_v1(int array[], int length)
{
    double norm{calculate_norm(array)};

    for (int i{0}; i < length; ++i)
    {
        array[i] /= norm;
    }
}

You need to test it, but you do not want to have to calculate the norm along the way. How would you tell the normalize function to use a test double for calculate_norm and not the real one?

Well, you cannot. calculate_norm is hardcoded in the definition of normalize, so replacing it with another function is not possible - especially since normalize is defined in a particular file within your code, while you are testing it somewhere else.

Now consider the following alternative version of the function normalize:

void normalize_v2(
    int array[],
    int length,
    std::function<double(int[])> func = calculate_norm
    )
{
    double norm{func(array)};

    for (int i{0}; i < length; ++i)
    {
        array[i] /= norm;
    }
}

Compared with the first version, this normalize function does exactly the same thing and can be invoked in exactly the same way but, in addition, you can optionally control what specific function is used to calculate the norm. In particular, you can provide a test double that replaces the default calculate_norm.

This is called dependency injection and its application goes well beyond testing: it helps make the code more modular and re-usable by making it less intrinsically linked to specific design choices or dependencies. In the example, we could use a different definition of norm - and there are quite a few!

Dependency injection is an important design pattern

Do not disregard the value of dependency injection as an approach only useful in testing. If you design your code with dependency injection in mind, it will become more flexible and powerful. Here we have presented just one way of doing dependency injection, but there are other approaches that might be more suitable to your particular case.

Having said that, enabling dependency injection in your code is essential to be able to use test doubles, including the mocks we describe next, so make sure you fully understand what it means and how to write your code the right way.

Introducing Google Mock

Google Mock, or gMock, is a framework for creating mock classes and using them in C++. A mock class implements the same interface as the real class (so it can be used as one), but lets you specify how it will be used and what it should do at runtime, setting expectations on these interactions.

It is worth emphasizing that gMock will let you mock classes and not top level functions.

The process of using gMock is, in general, always the same:

  1. You create the mocked class using the MOCK_METHOD macro to mock the methods that will be used in the test.
  2. When running the tests, you set the expectations of what should happen when each relevant mocked method is called using the EXPECT_CALL macro. The expectations will be automatically checked at the end of the test.

Mocking virtual classes

Here we give a simple example to illustrate the process, but read the gMock Mocking Cookbook for a more detailed description of the possibilities and the inputs these macros need. Let’s assume we want to mock the following virtual class because one of its subclasses is being used in the function we want to test:

class Animal {
  public:
    virtual ~Animal() {};
    virtual double walk(int steps) = 0;
    virtual void eat(double carbs) = 0;
    virtual void die() = 0;
};

The corresponding mocked class will be:

class MockAnimal : public Animal {
  public:
    MOCK_METHOD(double, walk, (int), (override));
    MOCK_METHOD(void, eat, (double), (override));
    MOCK_METHOD(void, die, (), (override));
};

Now let’s write a test for the following function, which finds out if an animal is dead or alive at the end of the day depending on how much food it has taken and how much it has walked.

bool isAliveAtEndOfDay(int steps, double carbs, Animal& animal) {
  double spent_carbs{animal.walk(steps)};
  if (spent_carbs > carbs) {
    animal.die();
    return false;
  }
  animal.eat(carbs - spent_carbs);
  return true;
}

If we were to use a real implementation of Animal, say a Horse, testing this function would be complicated because the result would depend on the specific metabolism of the animal, which might be quite complicated (and potentially time-consuming to run). So we use MockAnimal instead to check that the logic of the function is correct. A couple of tests would look like this:

using ::testing::Return;

TEST(IsAliveTest, Lives) {
  MockAnimal animal;
  int steps{400};
  double carbs{2000.0};
  double consumed{500.0};

  EXPECT_CALL(animal, walk(steps)).Times(1).WillOnce(Return(consumed));
  EXPECT_CALL(animal, eat(carbs - consumed)).Times(1);
  EXPECT_CALL(animal, die()).Times(0);
  ASSERT_TRUE(isAliveAtEndOfDay(steps, carbs, animal));
}

TEST(IsAliveTest, Dies) {
  MockAnimal animal;
  int steps{400};
  double carbs{2000.0};
  double consumed{5000.0};

  EXPECT_CALL(animal, walk(steps)).Times(1).WillOnce(Return(consumed));
  EXPECT_CALL(animal, eat(testing::_)).Times(0);  // eat() is never reached
  EXPECT_CALL(animal, die()).Times(1);
  ASSERT_FALSE(isAliveAtEndOfDay(steps, carbs, animal));
}

Mocking non-virtual classes

While the above situation is common enough, there will be cases when you just don't have a common virtual class to inherit from. In those cases you can still use mocking, but you will need to make your code flexible enough that your functions can accommodate unrelated classes as inputs. The way to do this is to use templates.

Following on from the above example, let's assume that now we don't have an Animal abstract class, but rather a concrete Horse class with the same interface.

class Horse {
  public:
    ~Horse() {};
    double walk(int steps);
    void eat(double carbs);
    void die();
};

Mocking the above will look very similar, except that we will be creating a brand new class altogether, not inheriting from any other class, and we will omit the override specifier. Contrary to the case of virtual classes, here we only need to declare the methods that will actually be used in the tests.

class MockHorse {
  public:
    MOCK_METHOD(double, walk, (int));
    MOCK_METHOD(void, eat, (double));
    MOCK_METHOD(void, die, ());
};

The function we want to test is the same, except that it now only accepts a Horse as input:

bool isAliveAtEndOfDay(int steps, double carbs, Horse& animal) {
  // as above
}

How do we test this? We will need to modify our function to use templates, and indicate whether the function should use a Horse instance or a MockHorse instance. Contrary to the case of virtual classes above, this choice is fixed at compile time rather than at runtime:

template <class GenericHorse>
bool isAliveAtEndOfDay(int steps, double carbs, GenericHorse& animal) {
  // as above
}

In production code, we will use this function as isAliveAtEndOfDay<Horse>(..., horse_instance) while in the tests we will call this as isAliveAtEndOfDay<MockHorse>(..., mock_horse_instance).

And that’s all! The construction of the tests is otherwise the same, for example:

TEST(IsAliveTest, Lives) {
  MockHorse animal;
  int steps{400};
  double carbs{2000.0};
  double consumed{500.0};

  EXPECT_CALL(animal, walk(steps)).Times(1).WillOnce(Return(consumed));
  EXPECT_CALL(animal, eat(carbs - consumed)).Times(1);
  EXPECT_CALL(animal, die()).Times(0);
  ASSERT_TRUE(isAliveAtEndOfDay<MockHorse>(steps, carbs, animal));
}

As can be seen, this involves more steps than the case of having a virtual class to start with, and it might require you to modify your code in order to use mocks. On the bright side, it might also make your code more reusable, flexible and, ultimately, powerful, as was the case when you enabled dependency injection.

Mocking is not always the solution

In the above examples, it would have been tricky to test the logic of the function in full without mocks. However, they are not always the solution. Mocks do not work with top level functions, only with classes. Depending on the complexity of the class, setting up the mock might be too complicated and not worth it for testing the function of interest. Very often, stubs, fakes and dummies will carry you a long way before you need to use mocks.

Test doubles in action

In this section we present a few exercises, with solutions, that use test doubles to enable the testing of otherwise untestable code. In all cases we assume that dependency injection is enabled, one way or another.

Keep in mind that there are often multiple ways of using test doubles for a particular problem, so you might come up with a different solution for the exercises below.

Test normalize_v2

Write a test using the Google Tests tools described in previous chapters to check that normalize_v2, as defined above, behaves as it should.

Solution

In this case, a simple stub will solve our problem. Let's define our stub function as:

double norm_stub(int array[])
{
    return 10.0;
}

And then we write the test as:

TEST(NormalizeTest, ResultCorrect) {

    const int length{3};
    // Values chosen to be exactly divisible by the stubbed norm (10.0),
    // since the array holds integers.
    int input[length]{10, 20, 30};
    int copy[length]{10, 20, 30};
    double factor{norm_stub(input)};

    normalize_v2(input, length, norm_stub);

    for (int i{0}; i < length; ++i)
    {
        EXPECT_EQ(input[i] * factor, copy[i]);
    }
}

Here we have used a specific array for the test, but we could have explored a larger space of options and edge cases using parameterised testing, as described in a previous episode. Written this way, with a stub for the norm, the test checks only what normalize_v2 is doing - i.e. it is a true unit test - free from any influence of the actual norm calculation.

An exercise with mocks

A company wants to bump the basic bonus of all employees due to the increased cost of living. They have the employee data stored in an EmployeeTable, as described in previous chapters. They have added the following method to the table to perform the bump in the bonus:

   void bumpSalaryBonus(const double newBonus){
       for (auto& employee : employees){
           employee->setIncreasedBasicBonus(newBonus);
       }
   }

Write a test that uses mocked employees with the appropriate methods that checks that all employees in the table receive a bump in bonus. Tip: You will need to modify EmployeeTable as a template.

Solution

The solution has three steps. The first is to modify the existing implementation of EmployeeTable to accept a generic employee, i.e. to turn it into a template. For this to work, the new definition will need to be included in the header file employee_table.h. The following code shows just the parts relevant to this exercise:

template <class GenericEmployee>
class EmployeeTable {
private:
    std::vector<GenericEmployee*> employees;

public:
    void addEmployee(GenericEmployee* employee){
        employees.push_back(employee);
    }

    // And all the other methods
    // ...
};

The second step is to create a mocked employee. We just need the setIncreasedBasicBonus method, so we create a class with that mocked method only:

class MockEmployee{
public:
    MOCK_METHOD(void, setIncreasedBasicBonus, (double));
};

Finally, we write the test using the MockEmployee instead of real employees.

TEST(EmployeeTableTest, SetBasicBonusForEveryone)
{
    EmployeeTable<MockEmployee> table; 
    double newBonus{2000};
    
    MockEmployee employee1;
    MockEmployee employee2;

    EXPECT_CALL(employee1, setIncreasedBasicBonus(newBonus)).Times(1);
    EXPECT_CALL(employee2, setIncreasedBasicBonus(newBonus)).Times(1);

    table.addEmployee(&employee1);
    table.addEmployee(&employee2);

    table.bumpSalaryBonus(newBonus);
}

Summary

Test doubles let you test your functions in isolation, decoupling them from other parts of your code or from external dependencies. There are several approaches you can use, like stubs, fakes or mocks, but for most of them to work you must write your code in a way that is testable, using dependency injection and templates. These will also make your code more modular and reusable.

Key Points

  • Test doubles let you write unit tests in isolation from other bits of code

  • Test doubles require dependency injection to be able to replace real parts of your code with fake ones

  • Stubs provide canned, simple values as indirect inputs to the function under test.

  • Mocks let you check indirect outputs (i.e. intermediate results) and also can provide stubs.

  • Google Mock provides the tools to implement mocks


General unit testing best practice

Overview

Teaching: 30 min
Exercises: 0 min
Questions
  • What are some best practices that should be followed when writing unit tests?

Objectives
  • Understand some of the best practices for writing unit tests.

1. Best Practices for writing unit tests

Below, we share some best practices to follow when writing unit tests. The suggestions are general in nature and apply to most languages and testing frameworks, not just C++ and GoogleTest. Some of them come from the book Modern C++ Programming with Test-Driven Development by Jeff Langr.

  1. Test Organisation and Naming Conventions:
    • Organise tests into logical groups using test suites and test case names that reflect the functionality or components being tested.
    • Use descriptive and meaningful names for test methods that clearly indicate the scenario or behaviour being tested.
    • Follow a consistent naming convention for test methods, such as prefixing them with “Test” or using a “should” or “can” style of naming.
  2. Keep Tests Focused and Isolated:
    • Ensure that each test case focuses on a single behaviour or scenario, testing one aspect of your code at a time.
    • Avoid writing tests that depend on the state or side effects of other tests. Each test case should be independent and self-contained.
    • Use test fixtures to encapsulate common setup and teardown logic, promoting code reusability and reducing code duplication.
    • In general follow the rule ONE ASSERT PER TEST as much as possible.
    • Be cautious when using global variables or static state in tests, as they can introduce unwanted dependencies and make tests more fragile.
  3. Write Clear and Readable Tests:
    • Use comments to explain the purpose and expected behaviour of each test case, especially when dealing with complex or edge cases. A comment would enhance the readability of the test.
    • Break down complex test scenarios into smaller, manageable assertions. Each assertion should test a single condition.
  4. Choose Appropriate Assertions:
    • Select the most appropriate assertion macros provided by Google Test that closely match the behaviour being tested.
    • Prefer the most specific assertion available (e.g., ASSERT_EQ(a, b) rather than ASSERT_TRUE(a == b)), as specific assertions produce much more informative messages on failure.
  5. Test Coverage and Code Review:
    • Aim for high code coverage by ensuring that the tests exercise different code paths, including error handling, edge cases, and boundary conditions.
    • Regularly review your test suite to identify gaps in test coverage and update it accordingly. Periodically review and remove redundant or obsolete tests.
    • Use code coverage analysis tools to measure the effectiveness of your tests and identify areas that need additional testing.
  6. Test Doubles and dependency injection:
    • Use dependency injection to decouple your functionality from external dependencies: your code will become more modular and reusable.
    • Dependency injection is essential for using test doubles, so make sure that you write your code in a way that facilitates testing.
    • There are several types of test doubles, like stubs, fakes, mocks or dummies, each useful in different contexts.
    • Use mocking frameworks, such as Google Mock, to create mock objects for dependencies that need a higher degree of control or isolation during testing.
  7. Continuous Integration and Test Execution:
    • Integrate your unit tests into your continuous integration (CI) process to ensure that tests are automatically executed with each code change.
    • Aim for fast and deterministic tests by minimising external dependencies, reducing I/O operations, and avoiding non-deterministic behaviours.
  8. Test Failure Investigation:
    • When a test fails, investigate and diagnose the failure by examining the failure message, log output, and any relevant debug information.
  9. Ensure Repeatability:
    • Ensure that the unit tests produce the same results when run at different times, on different machines, and so on.

Summary

In this chapter, we described some of the best practices that should be followed while writing unit tests.

Key Points

  • Unit tests should be isolated

  • Unit tests should be repeatable

  • Unit tests should be readable

  • Unit tests should have one assertion per test