unittest - Documentation

What is unittest?

unittest is Python’s built-in unit testing framework. It provides a rich set of tools for creating and running tests, enabling you to write comprehensive test suites for your code. At its core, unittest uses a class-based approach where test cases are organized into classes that inherit from unittest.TestCase. These classes contain methods representing individual test units, each designed to verify a specific aspect of your code’s functionality. The framework handles running these tests, reporting results, and providing detailed information on any failures.

Why use unittest?

Using unittest offers several significant advantages:

- It ships with the Python standard library, so no installation or third-party dependencies are required.
- Its xUnit-style structure (test classes, fixtures, runners) is familiar to developers coming from many other languages.
- It includes built-in support for fixtures, test discovery, test skipping, and expected failures.
- It is widely supported by IDEs, CI systems, and reporting tools.

unittest vs. other testing frameworks

While Python boasts several testing frameworks (pytest, nose, etc.), unittest holds a unique position:

- It is part of the standard library, so it is available wherever Python is installed.
- It is stable and backward compatible, making it a safe default for long-lived projects.
- Its xUnit heritage means the concepts transfer directly to JUnit, NUnit, and similar frameworks.

However, other frameworks like pytest offer advantages such as simpler syntax, advanced features (like fixtures and parametrization), and better plugin support. The choice depends on the project’s complexity and developer preference. For smaller projects or those needing a simple, readily available solution, unittest is an excellent choice. For larger or more complex projects, pytest might provide more powerful features.

Setting up your environment

To use unittest, you generally don’t need any special setup beyond having Python installed. unittest is part of the standard library, so it’s readily available. Simply import the unittest module in your test scripts. For example:

import unittest

# Your test classes and methods would go here.

No additional packages or configuration is typically required. You can run your tests from the command line using the unittest module directly (e.g., python -m unittest test_module.py) or through an IDE’s integrated testing tools.

Core Concepts

Test Cases

A test case is the smallest unit of testing in unittest. It represents a single test to be performed on a piece of code. In unittest, test cases are methods within a class that inherits from unittest.TestCase. These methods must begin with the prefix test (test_ by convention) to be recognized by the test runner. Each test case method should verify a specific aspect of your code’s behavior, typically using assertions to check expected outcomes.

import unittest

class TestMyCode(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(2 + 2, 4)  # Assertion: checks for equality

    def test_subtraction(self):
        self.assertEqual(5 - 3, 2)  # Another assertion

if __name__ == '__main__':
    unittest.main()

A TestCase can also include setUp() and tearDown() methods for setting up and cleaning up resources before and after each test method.

Test Suites

A test suite is a collection of test cases. It’s a way to organize and run multiple tests together. You can create a suite manually by adding individual test cases or by using the unittest.TestLoader to automatically discover tests within modules or directories. Running a suite executes all the contained test cases.

import unittest

# ... (Test classes defined as above) ...

suite = unittest.TestSuite()
loader = unittest.TestLoader()
suite.addTests(loader.loadTestsFromTestCase(TestMyCode))  # Add all tests from TestMyCode
runner = unittest.TextTestRunner()
runner.run(suite)

# Alternatively, load every test in a module with TestLoader:
import my_tests  # my_tests is the module containing your test classes
suite = loader.loadTestsFromModule(my_tests)
runner = unittest.TextTestRunner()
runner.run(suite)

Test Fixtures

Test fixtures provide a mechanism for setting up and tearing down resources needed for your tests. setUp() is called before each test case, allowing you to prepare the environment (e.g., creating temporary files, connecting to a database). tearDown() is called after each test case, allowing you to clean up resources (e.g., deleting temporary files, closing database connections). This ensures that each test runs in a consistent and isolated environment.

import unittest

class TestDatabase(unittest.TestCase):
    def setUp(self):
        # Connect to the database; connect_to_db() is a placeholder
        # for your own connection helper.
        self.connection = connect_to_db()

    def tearDown(self):
        # Close the database connection
        self.connection.close()

    def test_query(self):
        # Perform a database query using self.connection
        pass

setUpClass() and tearDownClass() methods exist for setting up and tearing down class-level resources. They must be decorated with @classmethod and are called once before and once after all test methods in the class, respectively.
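
For example, a class-level fixture that creates one temporary directory shared by every test in the class (a minimal sketch):

import os
import tempfile
import unittest

class TestWithClassFixture(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Runs once, before any test in this class: create a temporary
        # directory shared by every test method.
        cls.tmpdir = tempfile.TemporaryDirectory()

    @classmethod
    def tearDownClass(cls):
        # Runs once, after all tests in this class have finished.
        cls.tmpdir.cleanup()

    def test_directory_exists(self):
        self.assertTrue(os.path.isdir(self.tmpdir.name))

if __name__ == '__main__':
    unittest.main()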

Test Runners

A test runner is responsible for executing the test suite and reporting the results. The default test runner in unittest is unittest.TextTestRunner, which produces a simple text-based output to the console. Other runners can be created to provide more sophisticated output formats (e.g., HTML reports) or integrate with other tools.

Assertions

Assertions are methods provided by unittest.TestCase to verify the expected behavior of your code. They check conditions and raise an exception if a condition is not met. Common assertions include:

- assertEqual(a, b) and assertNotEqual(a, b)
- assertTrue(x) and assertFalse(x)
- assertIs(a, b) and assertIsNone(x)
- assertIn(a, b) and assertNotIn(a, b)
- assertRaises(exc), for verifying that code raises a specific exception

Using appropriate assertions is crucial for effectively verifying the correctness of your code within your unit tests.

Writing Test Cases

Creating Test Cases

Test cases in unittest are methods within classes that inherit from unittest.TestCase. Each method represents a single test and must begin with the prefix test (test_ by convention). The method’s body contains the code to execute the test and assertions to verify its results. A simple test case might look like this:

import unittest

class TestStringMethods(unittest.TestCase):
    def test_upper(self):
        self.assertEqual('foo'.upper(), 'FOO')

    def test_isupper(self):
        self.assertTrue('FOO'.isupper())
        self.assertFalse('Foo'.isupper())

    def test_split(self):
        s = 'hello world'
        self.assertEqual(s.split(), ['hello', 'world'])
        # check that s.split fails when the separator is not a string
        with self.assertRaises(TypeError):
            s.split(2)

if __name__ == '__main__':
    unittest.main()

This example demonstrates three test methods (test_upper, test_isupper, test_split), each testing a different aspect of Python’s built-in string methods.

Using Assertions (assertEqual, assertTrue, assertFalse, etc.)

Assertions are crucial for verifying the results of your tests. unittest.TestCase provides a variety of assertion methods. If an assertion fails, the test runner reports the failure and provides information about the discrepancy. Some commonly used assertions include:

- assertEqual(a, b): checks that a == b
- assertNotEqual(a, b): checks that a != b
- assertTrue(x): checks that bool(x) is True
- assertFalse(x): checks that bool(x) is False
- assertIs(a, b): checks that a is b
- assertIsNone(x): checks that x is None
- assertIn(a, b): checks that a in b
- assertAlmostEqual(a, b): checks that a and b are approximately equal (useful for floats)
- assertRaises(exc): checks that a block of code raises the exception exc
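
For example, a single test can exercise several of these assertions (a small illustrative sketch):

import unittest

class TestAssortedAssertions(unittest.TestCase):
    def test_assertions(self):
        items = [1, 2, 3]
        self.assertIn(2, items)                            # membership
        self.assertNotIn(5, items)                         # non-membership
        self.assertIsNone(None)                            # identity with None
        self.assertAlmostEqual(0.1 + 0.2, 0.3, places=7)   # float comparison

if __name__ == '__main__':
    unittest.main()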

setUp() and tearDown() methods

setUp() and tearDown() methods are used to set up and tear down resources before and after each test case, respectively. This ensures that each test runs in a clean and consistent environment. For example:

import os
import tempfile
import unittest

class TestTempFile(unittest.TestCase):
    def setUp(self):
        self.fd, self.name = tempfile.mkstemp()

    def tearDown(self):
        os.close(self.fd)
        os.remove(self.name)

    def test_file_creation(self):
        with open(self.name, "w") as f:
            f.write("test")
        self.assertTrue(os.path.exists(self.name))

Here, setUp() creates a temporary file, and tearDown() cleans it up after each test.

Handling Exceptions

The assertRaises() context manager is ideal for testing whether a specific exception is raised by a piece of code:

import unittest

class TestExceptionHandling(unittest.TestCase):
    def test_zero_division(self):
        with self.assertRaises(ZeroDivisionError):
            1 / 0

Skipping Tests

Sometimes it’s useful to skip a test, either conditionally or permanently. The unittest.skip(), unittest.skipIf(), and unittest.skipUnless() decorators provide this functionality:

import unittest
import sys

class TestSkip(unittest.TestCase):
    @unittest.skip("demonstrating skipping")
    def test_nothing(self):
        pass

    @unittest.skipIf(sys.version_info < (3, 7), "requires python 3.7 or higher")
    def test_format(self):
        pass

Expected Failures

If you know a test is going to fail but you don’t want to fix it immediately (e.g., because it’s related to a known bug), you can mark it as an expected failure using the unittest.expectedFailure decorator:

import unittest

class TestExpectedFailure(unittest.TestCase):
    @unittest.expectedFailure
    def test_broken_feature(self):
        self.assertEqual(1, 2)  # This assertion is expected to fail

An expected failure is reported differently than a regular failure, indicating that the failure was anticipated; if such a test unexpectedly passes, it is reported as an unexpected success. This helps distinguish between genuine failures and known issues.

Organizing Tests

Test Suites and Test Discovery

As your project grows, managing individual test cases becomes cumbersome. unittest provides TestSuite objects to group tests together for efficient execution. Instead of running each test individually, you can create a suite containing multiple test cases or even other suites, allowing you to run all associated tests at once.

Test discovery simplifies the process of locating tests. The unittest.TestLoader automatically finds and loads tests from modules and packages based on naming conventions (by default, files matching test*.py, test classes inheriting from unittest.TestCase, and methods starting with test). This eliminates the need to add each test case to a suite manually.

import unittest
import my_test_module  # Assume this module contains test cases

loader = unittest.TestLoader()

# Manual suite creation
suite = unittest.TestSuite()
suite.addTests(loader.loadTestsFromTestCase(my_test_module.TestMyClass))  # Add all tests from TestMyClass

# Automatic discovery alternatives (each call builds a complete suite by itself):
suite = loader.loadTestsFromModule(my_test_module)  # Loads all tests from the module
suite = loader.discover('test_directory')  # Discovers tests in 'test_directory' and subdirectories

runner = unittest.TextTestRunner()
runner.run(suite)

loadTestsFromModule gathers all tests from a given module, and discover recursively searches a directory for test files (by default, files matching the pattern test*.py). This greatly streamlines the testing process for larger projects.

Organizing Tests into Modules and Packages

For large projects, it is crucial to structure your tests effectively. The best practice is to organize tests into separate modules and even packages that mirror the structure of your main application code. This maintains a clear correspondence between the code and its tests, making it easy to find and run the appropriate tests.

For example, if your application has modules app/module1.py and app/module2.py, you could create corresponding test modules such as tests/test_module1.py and tests/test_module2.py. For larger projects, create further sub-packages within tests to maintain a logical organizational structure. This organized approach increases code maintainability and testability.
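
For instance, a layout along these lines keeps tests parallel to the application code (directory names are illustrative):

project/
    app/
        __init__.py
        module1.py
        module2.py
    tests/
        __init__.py
        test_module1.py
        test_module2.py

With this layout, running python -m unittest discover -s tests from the project root finds and runs every test.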

Using Test Loaders

unittest.TestLoader provides various methods for discovering and loading tests. We’ve already seen loadTestsFromModule and discover, but it also offers other methods for finer-grained control:

- loadTestsFromTestCase(testCaseClass): loads every test method from a single TestCase class.
- loadTestsFromName(name): loads tests from a dotted name such as 'module.TestClass.test_method'.
- loadTestsFromNames(names): does the same for a sequence of dotted names.
- getTestCaseNames(testCaseClass): returns the sorted list of test method names found in a class.

These methods provide flexibility in how you construct test suites, allowing you to include or exclude specific tests based on your needs. This can be especially helpful in building customized test runners or filtering tests for specific purposes; for instance, you can combine these methods with other logic (such as command-line arguments) to select subsets of your entire test suite for targeted execution, as the sketch below shows.
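
For example, loading a single test method by its dotted name (module and test names are illustrative):

import unittest

loader = unittest.TestLoader()
# Load one specific test method via its fully qualified dotted name.
suite = loader.loadTestsFromName('my_test_module.TestMyClass.test_addition')
unittest.TextTestRunner(verbosity=2).run(suite)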

Running Tests

Running Tests from the Command Line

The simplest way to run unittest tests is from the command line. The unittest module acts as a test runner when invoked directly. The most basic usage involves specifying the module containing your tests:

python -m unittest test_module.py

This command runs all tests within test_module.py. You can specify multiple modules:

python -m unittest test_module1.py test_module2.py

To run the tests in a specific class, or a single test method, use dotted names (note that this form takes the module name, not the file name):

python -m unittest test_module.TestClass
python -m unittest test_module.TestClass.test_method

For more advanced command-line options, refer to the python -m unittest --help output. Options include -v (--verbose) for more detailed output, -f (--failfast) to stop on the first failure, -b (--buffer) to capture output from passing tests, and -k to run only tests whose names match a given pattern.
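
For example, the following run test_module verbosely, stop at the first failure, and run only tests whose names contain 'addition' (the module name is illustrative):

python -m unittest -v test_module
python -m unittest -f test_module
python -m unittest -k addition test_module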

Running Tests from an IDE

Most modern IDEs (Integrated Development Environments) such as PyCharm, VS Code, and Spyder have built-in support for unittest. These IDEs typically provide graphical interfaces to discover, run, and debug tests. Features often include:

- running a single test, class, or module directly from the editor
- a tree view of results with pass/fail status
- one-click re-running of failed tests
- debugging tests with breakpoints and variable inspection

The specific process for running tests within an IDE varies depending on the IDE itself, but generally involves right-clicking on a test file or class and selecting a “Run Tests” or similar option. Consult your IDE’s documentation for detailed instructions.

Using Test Runners

unittest’s TextTestRunner is the default test runner, providing text-based output to the console. However, you can customize the test running process by creating your own test runners or using third-party runners to achieve specific behaviors. A custom runner might output results in different formats (HTML, JUnit XML), integrate with CI/CD pipelines, or add specialized reporting features.

To use a custom runner, create a class that inherits from unittest.TextTestRunner and override methods such as run(), which controls test execution and reporting.

import unittest
import test_module  # the module containing your tests

class MyTestRunner(unittest.TextTestRunner):
    def run(self, test):
        # Customize test execution and reporting here...
        return super().run(test)

if __name__ == '__main__':
    suite = unittest.TestLoader().loadTestsFromModule(test_module)
    runner = MyTestRunner()
    runner.run(suite)

Generating Test Reports

While TextTestRunner provides basic output, generating more comprehensive reports is often beneficial, especially for larger projects or continuous integration. The command-line option -b (or --buffer) captures stdout and stderr during each test and replays them only for failing tests, keeping the output readable. For more structured reports, consider third-party tools that integrate with unittest and generate reports in formats like JUnit XML (compatible with many CI/CD platforms) or HTML for better readability.

Note that unittest itself has no built-in XML report format; redirecting the runner’s output to a file (e.g., python -m unittest -v test_module > results.txt) captures only plain text. For JUnit-style XML, use a third-party runner such as the xmlrunner package (unittest-xml-reporting); for HTML reports, consider tools such as HTMLTestRunner (also a separate package). These specialized tools often provide more detailed information about test execution, including timings and per-test results, facilitating better analysis and debugging.
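
A minimal sketch using the third-party xmlrunner package (assuming pip install unittest-xml-reporting; the output directory name is arbitrary):

import unittest
import xmlrunner  # third-party: pip install unittest-xml-reporting

class TestExample(unittest.TestCase):
    def test_passes(self):
        self.assertTrue(True)

if __name__ == '__main__':
    # Writes JUnit-style XML files into ./test-reports.
    unittest.main(testRunner=xmlrunner.XMLTestRunner(output='test-reports'))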

Advanced Techniques

Parameterizing Tests

Running the same test logic with different inputs avoids code duplication and improves the efficiency of your test suite. The standard library supports this directly through the TestCase.subTest() context manager (available since Python 3.4); the third-party parameterized package provides a decorator-based alternative.

import unittest

class TestParameterized(unittest.TestCase):
    def test_addition(self):
        cases = [
            (1, 2, 3),
            (10, 20, 30),
            (-1, 1, 0),
        ]
        for a, b, expected in cases:
            # Each subTest is reported separately, so one failing
            # case does not hide the others.
            with self.subTest(a=a, b=b):
                self.assertEqual(a + b, expected)

if __name__ == '__main__':
    unittest.main()

This example checks test_addition against three sets of inputs; each tuple holds two operands and the expected sum, and failures are reported per subtest so every case runs even if one fails. If you prefer one generated test method per input set, the third-party parameterized package offers an @parameterized.expand decorator with a similar list-of-tuples interface.

Using Mock Objects

Mock objects are invaluable for isolating units of code during testing. They simulate the behavior of dependencies without requiring the actual dependencies to be available or functional. The unittest.mock module (part of the standard library since Python 3.3) provides tools for creating mock objects and controlling their behavior.

import unittest
from unittest.mock import patch

class MyClass:
    def external_function(self):
        # This function interacts with an external system
        return "data from external system"


class TestMyClass(unittest.TestCase):
    # Patch where the attribute is looked up; '__main__' works here
    # because MyClass is defined in the script being run.
    @patch('__main__.MyClass.external_function')
    def test_using_mock(self, mock_external_function):
        mock_external_function.return_value = "mocked data"
        my_object = MyClass()
        result = my_object.external_function()
        self.assertEqual(result, "mocked data")

if __name__ == '__main__':
    unittest.main()

This example replaces external_function with a mock object that returns “mocked data”, allowing MyClass to be tested without a working external system.
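
Mocks also record how they are called, which you can verify with helpers such as assert_called_once_with() (a short sketch):

from unittest.mock import Mock

fetch = Mock(return_value="mocked data")
fetch("users", limit=10)

fetch.assert_called_once_with("users", limit=10)  # passes
print(fetch.call_count)  # prints: 1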

Test-Driven Development (TDD)

TDD is a development approach where you write tests before you write the code being tested. This helps ensure that your code meets its requirements and improves the overall design. In the context of unittest, you would write your test cases, run them (initially they should fail), then implement the code necessary to make the tests pass. Iteratively adding features and corresponding tests is the core of TDD.
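
In unittest terms, the red-green cycle can be as small as this (multiply is a hypothetical function under development):

import unittest

# Step 1: the test is written first; running it before multiply exists
# fails with a NameError, the "red" phase.
class TestMultiply(unittest.TestCase):
    def test_multiply(self):
        self.assertEqual(multiply(3, 4), 12)

# Step 2: the simplest implementation that makes the test pass ("green").
def multiply(a, b):
    return a * b

if __name__ == '__main__':
    unittest.main()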

Code Coverage

Code coverage tools measure the percentage of your code that is executed during testing. This provides an indication of how thoroughly your tests cover your codebase. Several tools are available (e.g., coverage.py), which can be integrated with unittest to generate coverage reports. High code coverage does not guarantee perfect correctness, but it helps identify areas of your code that are not tested and might contain bugs.
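
For example, with coverage.py installed (pip install coverage), a typical workflow runs the suite under the coverage tool and then reports on it:

coverage run -m unittest discover
coverage report -m
coverage html

coverage report -m adds the line numbers of statements that never ran, and coverage html writes a browsable report to the htmlcov/ directory.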

Integrating with Continuous Integration

Continuous Integration (CI) systems (such as Jenkins, Travis CI, GitHub Actions, GitLab CI) automatically build and test your code whenever changes are pushed to a repository. unittest integrates seamlessly with CI systems. You can configure your CI system to run your unittest tests as part of the build process. The results of these tests are then used to determine whether the build is successful or not. Commonly, CI systems will use the JUnit XML report format mentioned earlier to easily collect and interpret the test outcomes.
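
At its simplest, the test step in a CI job is just the discovery command; its non-zero exit code on any failure fails the build:

python -m unittest discover -v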

Best Practices

Writing Readable Tests

Readability is paramount for maintainable and collaborative test suites. Well-written tests are easy to understand, modify, and debug. Consider these points:

- Give each test a descriptive name that states the behavior being verified.
- Test one behavior per test method; a pile of unrelated assertions obscures the cause of a failure.
- Structure tests as arrange-act-assert: set up inputs, invoke the code under test, then check the outcome.
- Prefer specific assertions (assertEqual, assertIn) over bare assertTrue, since they produce clearer failure messages.

Naming Conventions

Consistent naming conventions improve the organization and readability of your tests. Follow these guidelines:

- Name test files test_<module>.py so that test discovery finds them with the default test*.py pattern.
- Name test classes Test<Thing>, inheriting from unittest.TestCase.
- Name test methods test_<behavior>, e.g., test_upper_converts_lowercase.
- Mirror the structure of the code under test in your tests package.

Test Maintainability

Maintainable tests adapt to code changes without requiring extensive rework. Here’s how to ensure maintainability:

- Test public behavior rather than internal implementation details, so refactoring does not break tests.
- Keep tests independent; never rely on the order in which tests run or on state left behind by another test.
- Use setUp()/tearDown() and helper functions to remove duplicated setup code.
- Keep test data small and local to the test that uses it.

Avoiding Common Pitfalls

Finally, watch out for these frequent mistakes:

- Forgetting the test prefix on a method name, which silently excludes it from the run.
- Sharing mutable state between tests, producing order-dependent failures.
- Over-mocking, which can leave tests passing while the real integration is broken.
- Asserting nothing: a test that calls code but checks no outcome provides false confidence.

Appendix

Glossary of Terms

- Test case: the smallest unit of testing; a method on a unittest.TestCase subclass that verifies one behavior.
- Test suite: a collection of test cases (and/or other suites) run together.
- Test fixture: the setup and teardown machinery (setUp(), tearDown(), setUpClass(), tearDownClass()) that prepares and cleans up the environment for tests.
- Test runner: the component that executes a suite and reports results (e.g., TextTestRunner).
- Assertion: a TestCase method (assertEqual, assertTrue, etc.) that checks an expected outcome and fails the test if it does not hold.

Further Reading and Resources

- Official unittest documentation: https://docs.python.org/3/library/unittest.html
- unittest.mock documentation: https://docs.python.org/3/library/unittest.mock.html
- coverage.py documentation: https://coverage.readthedocs.io/

Remember to always consult the official Python documentation for the most accurate and up-to-date information. Many online resources and tutorials provide valuable supplementary information, but the official documentation serves as the definitive source.