CVTas Testing Guide

Overview

CVTas has a comprehensive testing infrastructure to ensure code quality, prevent regressions, and maintain stability across development cycles.

Table of Contents

  • Test Structure
  • Running Tests
  • Writing Tests
  • Test Coverage
  • Continuous Integration
  • Pre-commit Hooks
  • Best Practices
  • Troubleshooting
  • Quick Reference
  • Additional Resources

Test Structure

Tests are organized into three categories:

tests/
├── unit/               # Unit tests (fast, isolated)
│   ├── test_dependency_resolver.py
│   ├── test_food_calculator.py
│   └── ...
├── integration/        # Integration tests (API, database)
│   ├── test_scenario_api.py
│   └── ...
├── e2e/               # End-to-end tests (full user workflows)
│   ├── test_planner_ui.py
│   └── ...
├── fixtures/          # Test data and fixtures
│   ├── scenarios.json
│   ├── expected_results.json
│   └── edge_cases.json
└── conftest.py        # Shared pytest fixtures

Test Types

Unit Tests (tests/unit/)

  • Test individual functions and classes in isolation
  • Fast execution (< 1s per test)
  • No database or external dependencies
  • Use mocks for external dependencies

Integration Tests (tests/integration/)

  • Test API endpoints and database interactions
  • Test multiple components working together
  • Use the Django test database
  • Test the full request/response cycle

E2E Tests (tests/e2e/)

  • Test complete user workflows
  • Use Playwright for browser automation
  • Test JavaScript interactions
  • Slowest but most comprehensive


Running Tests

Prerequisites

# Install development dependencies
pip install -r requirements-dev.txt

# Install Playwright browsers (for E2E tests)
playwright install

Run All Tests

# Run all tests
pytest

# Run with verbose output
pytest -v

# Run with coverage
pytest --cov=backend --cov-report=term --cov-report=html

Run Specific Test Suites

# Unit tests only (fast)
pytest tests/unit/

# Integration tests
pytest tests/integration/

# E2E tests
pytest tests/e2e/

# Specific test file
pytest tests/unit/test_dependency_resolver.py

# Specific test class
pytest tests/unit/test_dependency_resolver.py::TestFormulaEvaluation

# Specific test function
pytest tests/unit/test_dependency_resolver.py::TestFormulaEvaluation::test_simple_arithmetic

Run with Markers

# Run only tests marked as @pytest.mark.slow
pytest -m slow

# Skip slow tests
pytest -m "not slow"

# Run E2E tests
pytest -m e2e
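
Markers must also be registered in the project's pytest configuration (for example pytest.ini or pyproject.toml), otherwise pytest emits "unknown marker" warnings. A quick sketch of how a test opts into a marker; the test below is a hypothetical example, not project code:

import pytest


@pytest.mark.slow
def test_full_year_simulation():
    """Long-running test; deselected with: pytest -m "not slow"."""
    ...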

E2E Test Options

# Run E2E tests in headed mode (show browser)
pytest tests/e2e/ --headed

# Run E2E tests with specific browser
pytest tests/e2e/ --browser chromium  # Default
pytest tests/e2e/ --browser firefox
pytest tests/e2e/ --browser webkit    # Safari engine

# Run E2E tests with screenshots on failure
pytest tests/e2e/ --screenshot only-on-failure

# Run E2E tests with video recording
pytest tests/e2e/ --video on

Parallel Execution

# Run tests in parallel (faster)
pytest -n auto  # Auto-detect CPU cores
pytest -n 4     # Use 4 workers
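
The -n option is provided by the pytest-xdist plugin; if it is not already pulled in by requirements-dev.txt, install it first:

# pytest-xdist provides the -n option
pip install pytest-xdist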

Watch Mode

# Install pytest-watch
pip install pytest-watch

# Run tests on file changes
ptw

Writing Tests

Unit Test Example

# tests/unit/test_example.py
import pytest
from backend.scenarios.dependency_resolver import DependencyResolver


@pytest.fixture
def resolver():
    """Create a dependency resolver instance."""
    return DependencyResolver()


class TestDependencyResolver:
    """Test dependency resolution logic."""

    def test_simple_calculation(self, resolver):
        """Test basic arithmetic."""
        context = {'a': 10, 'b': 5}
        result = resolver.safe_eval('a + b', context)
        assert result == 15

    def test_conditional_logic(self, resolver):
        """Test IF/THEN/ELSE logic."""
        context = {'construction_possible': True}
        expr = 'IF construction_possible THEN 100 ELSE 0'
        result = resolver.evaluate_logic_simple(expr, context)
        assert result == 100

Integration Test Example

# tests/integration/test_api.py
import pytest
import json
from django.urls import reverse


@pytest.mark.django_db
class TestScenarioAPI:
    """Test scenario calculation API."""

    def test_calculate_scenario(self, authenticated_client, baseline_scenario):
        """Test scenario calculation endpoint."""
        url = reverse('scenarios:calculate')
        data = {'parameters': baseline_scenario['parameters']}

        response = authenticated_client.post(
            url,
            json.dumps(data),
            content_type='application/json'
        )

        assert response.status_code == 200
        result = response.json()
        assert 'status' in result
        assert 'metrics' in result

E2E Test Example

# tests/e2e/test_planner.py
import pytest
from playwright.sync_api import Page, expect


@pytest.mark.e2e
def test_scenario_workflow(authenticated_page: Page, base_url):
    """Test complete scenario planning workflow."""
    authenticated_page.goto(f"{base_url}/scenarios/planner/")

    # Select scenario
    nuclear_option = authenticated_page.locator("text=/Nuclear/i").first
    nuclear_option.click()

    # Calculate
    calculate_button = authenticated_page.locator("button:has-text('Calculate')").first
    calculate_button.click()

    # Verify results
    authenticated_page.wait_for_timeout(2000)
    assert "survival" in authenticated_page.content().lower()

Using Fixtures

Fixtures are defined in tests/conftest.py and automatically available:

def test_with_fixtures(
    nuclear_early_scenario,      # Scenario data
    authenticated_client,         # Authenticated API client
    test_user,                   # Test user instance
    edge_cases_fixture,          # Edge case scenarios
):
    """Test using multiple fixtures."""
    pass
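
A rough sketch of what the data-oriented fixtures in tests/conftest.py might look like; the fixture bodies, JSON keys, and user fields below are assumptions, not the project's actual definitions:

# tests/conftest.py (illustrative sketch only)
import json
from pathlib import Path

import pytest
from django.contrib.auth import get_user_model

FIXTURES_DIR = Path(__file__).parent / "fixtures"


@pytest.fixture
def test_user(db):
    """A throwaway user for tests that need authentication."""
    return get_user_model().objects.create_user(
        username="test-user", password="not-a-real-password"
    )


@pytest.fixture
def nuclear_early_scenario():
    """Scenario data loaded from tests/fixtures/scenarios.json (key name assumed)."""
    data = json.loads((FIXTURES_DIR / "scenarios.json").read_text())
    return data["nuclear_early"]


@pytest.fixture
def edge_cases_fixture():
    """Edge-case scenarios from tests/fixtures/edge_cases.json."""
    return json.loads((FIXTURES_DIR / "edge_cases.json").read_text())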

Test Coverage

View Coverage Report

# Run tests with coverage
pytest --cov=backend --cov-report=html

# Open HTML report
open htmlcov/index.html  # macOS
xdg-open htmlcov/index.html  # Linux
start htmlcov/index.html  # Windows

Coverage Configuration

Coverage is configured in .coveragerc:

  • Target: 80%+ coverage
  • Excludes: Migrations, tests, venv, settings files
  • Reports: Terminal, HTML, XML (for CI)
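
A minimal .coveragerc consistent with those settings might look like the following; the repository's actual omit patterns may differ:

# Illustrative sketch; check the repository's actual .coveragerc
[run]
source = backend
omit =
    */migrations/*
    */tests/*
    */venv/*
    *settings*

[report]
fail_under = 80
show_missing = True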

Check Coverage Locally

# Run with coverage
pytest --cov=backend --cov-report=term

# Check if meets 80% threshold
coverage report --fail-under=80

Continuous Integration

GitHub Actions

Tests run automatically on:

  • Push to main, master, or develop
  • Pull requests to these branches

Workflow: .github/workflows/test.yml

Test Matrix:

  • Python 3.10
  • Python 3.11

Steps:

  1. Install dependencies
  2. Run linting (flake8)
  3. Check formatting (black)
  4. Type checking (mypy)
  5. Django checks
  6. Run tests with coverage
  7. Upload coverage to Codecov
  8. Security checks (safety, bandit)
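
A trimmed sketch of what such a matrix workflow typically looks like; the repository's actual test.yml will differ in its steps and action versions:

# Sketch only, not the repository's actual .github/workflows/test.yml
name: Tests

on:
  push:
    branches: [main, master, develop]
  pull_request:
    branches: [main, master, develop]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: ${{ matrix.python-version }}
      - run: pip install -r requirements-dev.txt
      - run: pytest --cov=backend --cov-report=xml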

View CI Results

  • Go to GitHub repository → Actions tab
  • Click on latest workflow run
  • View test results and coverage report

Coverage Badge

Add to README.md:

[![codecov](https://codecov.io/gh/PipFoweraker/CVTas/branch/master/graph/badge.svg)](https://codecov.io/gh/PipFoweraker/CVTas)

Pre-commit Hooks

Installation

# Install pre-commit hooks
pre-commit install

# Run manually on all files
pre-commit run --all-files
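
If the pre-commit tool itself is not yet available (it may already be included in requirements-dev.txt), install it first:

# Only needed if pre-commit did not come with requirements-dev.txt
pip install pre-commit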

What Gets Checked

Pre-commit hooks run before every commit:

  1. Code Formatting
     • Black (auto-format)
     • isort (import sorting)

  2. Linting
     • flake8 (style violations)
     • pylint (additional checks)

  3. Type Checking
     • mypy (type hints)

  4. Security
     • bandit (security issues)
     • detect-secrets (credential detection)

  5. Django
     • Django system checks

  6. General
     • Trailing whitespace
     • End-of-file fixer
     • YAML/JSON validation
     • Large file detection
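
These hooks are declared in .pre-commit-config.yaml. A shortened, illustrative sketch of such a config; the pinned versions are placeholders, not the project's actual pins:

# Shortened sketch, not the project's actual .pre-commit-config.yaml
repos:
  - repo: https://github.com/psf/black
    rev: 24.3.0
    hooks:
      - id: black
  - repo: https://github.com/pycqa/isort
    rev: 5.13.2
    hooks:
      - id: isort
  - repo: https://github.com/pycqa/flake8
    rev: 7.0.0
    hooks:
      - id: flake8
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.5.0
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
      - id: check-yaml
      - id: check-added-large-files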

Skip Hooks (Emergency Only)

# Skip pre-commit hooks (NOT RECOMMENDED)
git commit --no-verify -m "Emergency fix"

Best Practices

Test Writing

DO:

  • Write tests before or alongside code (TDD)
  • Test edge cases and error conditions
  • Use descriptive test names: test_scenario_fails_with_zero_budget
  • One assertion per test (when possible)
  • Use fixtures for reusable test data
  • Mock external dependencies (APIs, file I/O); see the sketch after these lists

DON'T:

  • Test Django/library internals
  • Write tests that depend on other tests
  • Use sleep(); use proper waits instead
  • Commit code without tests for new features
  • Skip failing tests (fix them!)
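
One way to mock an external dependency in a unit test is with the standard library's unittest.mock; the function and client API below are hypothetical, shown only to illustrate the pattern:

# Hypothetical example; illustrates mocking, not actual CVTas code
from unittest.mock import MagicMock


def annual_rainfall_mm(weather_client):
    """Toy function under test: wraps a call to an external weather API."""
    payload = weather_client.get_json("/rainfall")
    return payload["annual_mm"]


def test_annual_rainfall_uses_mocked_client():
    """Replace the external client with a mock so the test stays fast and offline."""
    fake_client = MagicMock()
    fake_client.get_json.return_value = {"annual_mm": 620}

    assert annual_rainfall_mm(fake_client) == 620
    fake_client.get_json.assert_called_once_with("/rainfall")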

Test Organization

class TestFeature:
    """Test Feature functionality."""

    def test_normal_case(self):
        """Test typical usage."""
        pass

    def test_edge_case_zero_value(self):
        """Test with zero input."""
        pass

    def test_error_handling_invalid_input(self):
        """Test error handling."""
        pass

Fixture Usage

# Good - Reusable fixture
@pytest.fixture
def calculator():
    return FoodProductionCalculator(headcount=100)

# Good - Fixture depends on other fixture
@pytest.fixture
def authenticated_client(api_client, test_user):
    api_client.force_authenticate(user=test_user)
    return api_client

Parametrize for Multiple Cases

@pytest.mark.parametrize("headcount,expected_calories", [
    (50, 50 * 3001 * 365),
    (100, 100 * 3001 * 365),
    (200, 200 * 3001 * 365),
])
def test_calorie_scaling(headcount, expected_calories):
    """Test calorie needs scale with headcount."""
    calc = FoodProductionCalculator(headcount=headcount)
    assert calc.calories_needed_per_year == expected_calories

Test Documentation

def test_nuclear_scenario_triggers_failures(self):
    """
    Test that nuclear scenario with short timeline triggers FAILURE status.

    This scenario has:
    - 12 months until event
    - Only 14 days warning
    - 30% yield reduction
    - Growing season affected

    Expected: FAILURE status with multiple critical constraints
    """
    pass

Troubleshooting

Common Issues

Tests fail locally but pass in CI:

  • Check that the Python version matches CI (3.10 or 3.11)
  • Ensure all dependencies are installed: pip install -r requirements-dev.txt
  • Clear the pytest cache: pytest --cache-clear

Import errors:

  • Ensure you're in the project root
  • Check that PYTHONPATH includes the project root
  • Install the project in editable mode: pip install -e .

Database errors:

  • Use the @pytest.mark.django_db decorator
  • Reset the test database: python manage.py migrate --run-syncdb

E2E tests time out:

  • Increase the timeout: authenticated_page.wait_for_timeout(5000)
  • Check that the Django dev server is running
  • Use the --headed flag to see what's happening

Coverage drops unexpectedly:

  • Run coverage html and check what's not covered
  • Ensure the test actually executes (didn't skip or fail early)
  • Check that the .coveragerc excludes aren't too broad


Quick Reference

# Daily development workflow
pre-commit run --all-files  # Check code quality
pytest tests/unit/ -v       # Run fast unit tests
pytest --cov=backend        # Full test suite with coverage

# Before committing
pre-commit run --all-files  # Auto-runs on commit anyway
pytest tests/unit/ -v       # Quick smoke test

# Before PR
pytest --cov=backend --cov-report=html  # Full test suite
coverage report --fail-under=80         # Check coverage
pytest tests/integration/ -v            # Integration tests
pytest tests/e2e/ -v                   # E2E tests (optional)

Additional Resources


Last Updated: 2025-10-19
Version: v0.2.0