Principle: Write tests that give confidence
Tags: test confidence, testing trophy, integration tests, behavior testing, coverage
Problem
Test suite has high coverage numbers but doesn't catch real bugs. Tests are fragile, slow, or test the wrong things.
Solution
Focus on confidence per test, not coverage percentage:
The testing trophy (Kent C. Dodds):

        /\          End-to-end (few)
       /  \           - Happy-path user flows
      /----\          - Critical business flows
     / Intg \       Integration (many)
    /--------\        - API endpoints with a real DB
   /   Unit   \       - Component rendering with events
  /____________\    Unit (some)
                      - Pure logic, utilities
                    Static analysis (all)
                      - TypeScript, ESLint

High-confidence tests:
import pytest

# Tests behavior, not implementation
def test_user_can_place_order():
    user = create_user()
    product = create_product(price=10.00)

    order = place_order(user, product, quantity=2)

    assert order.total == 20.00
    assert order.status == 'confirmed'
    assert len(user.orders) == 1
    # Tests the whole flow, not internal methods

# Tests edge cases that matter
def test_order_with_zero_quantity_rejected():
    user = create_user()
    product = create_product(price=10.00)
    with pytest.raises(ValidationError):
        place_order(user, product, quantity=0)

Low-confidence tests (avoid):
# Tests implementation details
def test_internal_cache_updated():
    service._cache.clear()
    service.get_user(1)
    assert 1 in service._cache  # Who cares? Test the behavior

# Tests trivial code
def test_getter():
    user = User(name='Alice')
    assert user.name == 'Alice'  # Tests nothing useful

Ask: If this test fails, does it mean a user-facing bug exists?
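One way to build the high-confidence integration layer from the trophy is to exercise the flow against a real database engine rather than mocks. A minimal sketch using Python's built-in in-memory SQLite; the schema and this `place_order` signature are hypothetical illustrations, not part of the examples above:

```python
import sqlite3

# Hypothetical single-table schema, just enough for the sketch.
def create_schema(conn):
    conn.execute(
        "CREATE TABLE orders (user_id INTEGER, total REAL, status TEXT)"
    )

def place_order(conn, user_id, price, quantity):
    # Writes through the real SQL layer, so the test exercises the same
    # path production code would take.
    conn.execute(
        "INSERT INTO orders VALUES (?, ?, ?)",
        (user_id, price * quantity, "confirmed"),
    )

def test_order_persisted_with_real_db():
    conn = sqlite3.connect(":memory:")  # real DB engine, no network, fast
    create_schema(conn)
    place_order(conn, user_id=1, price=10.00, quantity=2)
    row = conn.execute(
        "SELECT total, status FROM orders WHERE user_id = 1"
    ).fetchone()
    assert row == (20.0, "confirmed")  # asserts observable behavior
```

If this test fails, something a user would notice (an order not persisted, a wrong total) is actually broken, which is the bar the question above sets.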
Why
100% coverage from low-confidence tests gives a false sense of security. 60% coverage from high-confidence tests catches more real bugs.
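The false-security point can be made concrete with a small hypothetical: a buggy discount function whose coverage-only test stays green. `apply_discount` is invented for illustration:

```python
def apply_discount(price, percent):
    """Intended behavior: reduce price by `percent` percent."""
    return price * (1 - percent)  # bug: should be (1 - percent / 100)

def test_discount_executes():
    # Low-confidence: every line runs (100% coverage for this function),
    # but nothing is asserted, so the test passes despite the bug.
    apply_discount(100.00, 10)

# A high-confidence test asserts the user-visible result instead:
#     assert apply_discount(100.00, 10) == 90.0
# That assertion fails (the buggy code returns -900.0), exposing the bug
# the coverage-only test missed.
```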
Context
Designing test strategies for applications