Table-driven testing is a testing paradigm where multiple test cases are defined in a structured format, typically as a collection of inputs and expected outputs. Instead of writing separate test functions for each test case, you define a single test function that iterates through the collection (or "table") of test cases.
This approach allows you to add new test cases by simply extending your test table rather than writing new test functions. The paradigm gets its name from how the test cases are organized - like rows in a table where each row represents a complete test case with inputs and expected outputs.
// Simplified example in Go: a table-driven test for an Abs
// function that returns the absolute value of an integer.
func TestAbs(t *testing.T) {
    tests := []struct {
        name     string
        input    int
        expected int
    }{
        {"positive number", 5, 5},
        {"negative number", -5, 5},
        {"zero", 0, 0},
    }
    for _, tc := range tests {
        t.Run(tc.name, func(t *testing.T) {
            result := Abs(tc.input)
            if result != tc.expected {
                t.Errorf("Abs(%d) = %d; expected %d", tc.input, result, tc.expected)
            }
        })
    }
}
Why Table-Driven Testing Became Popular
Table-driven testing became popular for several compelling reasons:
- Reduced code duplication: Instead of writing similar test functions with slight variations, you write a single function that processes multiple test cases.
- Improved maintainability: When you need to change how tests are evaluated, you only need to update one function rather than multiple similar functions.
- Better test coverage visibility: The table format makes it easy to see the range of inputs being tested, making it clearer which edge cases are covered.
- Easier to add test cases: Adding new test cases is as simple as adding a new entry to the table, which encourages more comprehensive testing.
- Self-documenting: The table structure itself documents what inputs are being tested and what outputs are expected.
- Great fit for unit tests: It works particularly well for pure functions where different inputs should produce predictable outputs.
Examples in Go
Go's testing framework is particularly well-suited for table-driven testing. Here's a more comprehensive example:
package calculator

import "testing"

func TestAdd(t *testing.T) {
    // Define table of test cases
    testCases := []struct {
        name     string
        a        int
        b        int
        expected int
    }{
        {"both positive", 2, 3, 5},
        {"positive and negative", 2, -3, -1},
        {"both negative", -2, -3, -5},
        {"zero and positive", 0, 3, 3},
        {"large numbers", 10000, 20000, 30000},
    }
    // Iterate through all test cases
    for _, tc := range testCases {
        // Use t.Run to create a named subtest
        t.Run(tc.name, func(t *testing.T) {
            result := Add(tc.a, tc.b)
            if result != tc.expected {
                t.Errorf("Add(%d, %d) = %d; expected %d",
                    tc.a, tc.b, result, tc.expected)
            }
        })
    }
}
func TestCalculate(t *testing.T) {
    testCases := []struct {
        name      string
        a         int
        b         int
        op        string
        expected  int
        expectErr bool
    }{
        {"addition", 5, 3, "+", 8, false},
        {"subtraction", 5, 3, "-", 2, false},
        {"multiplication", 5, 3, "*", 15, false},
        {"division", 6, 3, "/", 2, false},
        {"division by zero", 6, 0, "/", 0, true},
        {"invalid operation", 5, 3, "$", 0, true},
    }
    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            result, err := Calculate(tc.a, tc.b, tc.op)
            // Check error expectations
            if tc.expectErr && err == nil {
                t.Errorf("Calculate(%d, %d, %s) expected error but got none",
                    tc.a, tc.b, tc.op)
                return
            }
            if !tc.expectErr && err != nil {
                t.Errorf("Calculate(%d, %d, %s) unexpected error: %v",
                    tc.a, tc.b, tc.op, err)
                return
            }
            // If we don't expect an error, check the result
            if !tc.expectErr && result != tc.expected {
                t.Errorf("Calculate(%d, %d, %s) = %d; expected %d",
                    tc.a, tc.b, tc.op, result, tc.expected)
            }
        })
    }
}
Go's testing framework provides t.Run(), which creates a subtest for each test case, allowing individual cases to pass or fail independently and producing clear output about which specific cases failed.
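Subtests also combine naturally with parallel execution. Below is a minimal sketch (assuming the same Add function as above) that marks each subtest with t.Parallel(); the tc := tc copy pins the loop variable for each closure, which is required for correctness in Go versions before 1.22:

func TestAddParallel(t *testing.T) {
    testCases := []struct {
        name     string
        a, b     int
        expected int
    }{
        {"both positive", 2, 3, 5},
        {"both negative", -2, -3, -5},
    }
    for _, tc := range testCases {
        tc := tc // pin the loop variable (needed before Go 1.22)
        t.Run(tc.name, func(t *testing.T) {
            t.Parallel() // run this case concurrently with its siblings
            if result := Add(tc.a, tc.b); result != tc.expected {
                t.Errorf("Add(%d, %d) = %d; expected %d", tc.a, tc.b, result, tc.expected)
            }
        })
    }
}

Named subtests can also be selected individually from the command line, e.g. go test -run 'TestAdd/both_positive' (Go replaces spaces in subtest names with underscores).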
Implementing Table-Driven Testing in Python
While table-driven testing is closely associated with Go, the concept predates it and can be applied in any language. Python supports table-driven testing well, typically through the unittest or pytest frameworks:
Using unittest
import unittest

def add(a, b):
    return a + b

class TestAddition(unittest.TestCase):
    def test_add(self):
        # Define test cases as a list of tuples
        test_cases = [
            # (a, b, expected)
            (2, 3, 5),
            (0, 0, 0),
            (-1, 1, 0),
            (-1, -1, -2),
            (100, 200, 300),
        ]
        # Iterate through test cases
        for a, b, expected in test_cases:
            with self.subTest(a=a, b=b):
                result = add(a, b)
                self.assertEqual(
                    result, expected,
                    f"add({a}, {b}) returned {result} instead of {expected}")

if __name__ == '__main__':
    unittest.main()
The subTest() context manager in unittest serves a similar purpose to Go's t.Run(): it creates a separate subtest for each case, and a failure in one case is reported without stopping the remaining cases from running.
Using pytest
import pytest

def calculate(a, b, op):
    if op == '+':
        return a + b
    elif op == '-':
        return a - b
    elif op == '*':
        return a * b
    elif op == '/':
        if b == 0:
            raise ValueError("Division by zero")
        return a // b
    else:
        raise ValueError(f"Unknown operation: {op}")

# Define test cases
test_cases = [
    # a, b, op, expected
    (5, 3, '+', 8),
    (5, 3, '-', 2),
    (5, 3, '*', 15),
    (6, 3, '/', 2),
]

# Test function that pytest will discover
@pytest.mark.parametrize("a,b,op,expected", test_cases)
def test_calculate(a, b, op, expected):
    result = calculate(a, b, op)
    assert result == expected, \
        f"calculate({a}, {b}, '{op}') returned {result} instead of {expected}"

# Test cases for exceptions
error_test_cases = [
    # a, b, op, exception
    (6, 0, '/', ValueError),
    (5, 3, '$', ValueError),
]

@pytest.mark.parametrize("a,b,op,exception", error_test_cases)
def test_calculate_exceptions(a, b, op, exception):
    with pytest.raises(exception):
        calculate(a, b, op)
Pytest's parametrize decorator provides an elegant way to implement table-driven tests: it automatically generates a separate test for each set of parameters, each reported under its own generated ID in the test output.
Examples in Other Languages
Java (using JUnit 5)
import org.junit.jupiter.params.ParameterizedTest;
import org.junit.jupiter.params.provider.Arguments;
import org.junit.jupiter.params.provider.MethodSource;

import java.util.stream.Stream;

import static org.junit.jupiter.api.Assertions.assertEquals;

class CalculatorTest {

    @ParameterizedTest
    @MethodSource("additionTestCases")
    void testAddition(int a, int b, int expected) {
        Calculator calculator = new Calculator();
        assertEquals(expected, calculator.add(a, b),
                "Addition result incorrect");
    }

    // Method providing the test cases
    static Stream<Arguments> additionTestCases() {
        return Stream.of(
                Arguments.of(1, 1, 2),
                Arguments.of(0, 0, 0),
                Arguments.of(-1, 1, 0),
                Arguments.of(-1, -1, -2),
                Arguments.of(Integer.MAX_VALUE, 1, Integer.MIN_VALUE) // Overflow case: int addition wraps around
        );
    }
}
When Not to Use Table-Driven Testing
Despite its advantages, table-driven testing isn't suitable for all testing scenarios:
- Complex setup requirements: When each test requires complex, unique setup and teardown procedures.
- Testing side effects: When you're testing functions that produce side effects like file I/O or database modifications that are difficult to represent in a table.
- Sequence-dependent tests: When tests must run in a specific order because they depend on state changes from previous tests.
- Complex assertions: When verifying results requires complex logic that can't be easily expressed in a table format (though a per-case check function, sketched after this list, can mitigate this).
- UI or integration testing: These typically require more complex interactions and verifications that don't fit well into a simple input/output table.
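For the complex-assertions case in particular, one common variation is to store a validation function in each table entry instead of a plain expected value. Here is a minimal Go sketch of that idea, using strings.Split as a stand-in function under test; the check field is a convention of this example, not part of the testing package:

package split

import (
    "strings"
    "testing"
)

func TestSplit(t *testing.T) {
    testCases := []struct {
        name  string
        input string
        check func(t *testing.T, got []string) // per-case assertion logic
    }{
        {
            name:  "splits on commas",
            input: "a,b",
            check: func(t *testing.T, got []string) {
                if len(got) != 2 || got[0] != "a" || got[1] != "b" {
                    t.Errorf("got %v; expected [a b]", got)
                }
            },
        },
        {
            name:  "no separator yields one element",
            input: "abc",
            check: func(t *testing.T, got []string) {
                if len(got) != 1 {
                    t.Errorf("got %d elements; expected 1", len(got))
                }
            },
        },
    }
    for _, tc := range testCases {
        t.Run(tc.name, func(t *testing.T) {
            tc.check(t, strings.Split(tc.input, ","))
        })
    }
}

This keeps the table structure while letting each case carry arbitrarily complex verification logic, at the cost of some of the at-a-glance readability that makes plain input/output tables attractive.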
Conclusion
Table-driven testing offers a powerful, maintainable approach to testing functions with multiple input/output combinations. Its structured format reduces code duplication, improves test clarity, and makes it easier to add new test cases.
While it works exceptionally well with pure functions and unit tests, it may not be suitable for more complex testing scenarios involving side effects or sequence-dependent operations. Go's testing framework provides particularly elegant support for table-driven testing, but as we've seen, the paradigm can be implemented effectively in many programming languages using appropriate testing frameworks.