Quality Tools

provide.testkit.quality

Code quality analysis utilities for the provide testkit.

This module provides pytest fixtures and utilities for integrating code quality tools into testing workflows. All quality tools are optional and only activated when explicitly requested.

Key Features:

- Coverage tracking and reporting
- Security scanning with Bandit
- Complexity analysis with Radon
- Performance profiling with py-spy
- Documentation coverage with Interrogate

Usage

Basic quality fixture

def test_with_coverage(quality_coverage):
    result = quality_coverage.track_coverage()

Quality decorator

@quality_check(coverage=90, security=True)
def test_with_gates():
    pass

CLI usage

provide-testkit quality analyze src/

Classes

BaseQualityFixture

BaseQualityFixture(
    config: dict[str, Any] | None = None,
    artifact_dir: Path | None = None,
)

Bases: ABC

Base class for pytest quality fixtures.

Provides common functionality for quality analysis fixtures, including configuration management, artifact handling, and result tracking.

Initialize the fixture.

Parameters:

    config (dict[str, Any] | None): Tool-specific configuration. Default: None.
    artifact_dir (Path | None): Directory to store artifacts. Default: None.
Source code in provide/testkit/quality/base.py
def __init__(self, config: dict[str, Any] | None = None, artifact_dir: Path | None = None) -> None:
    """Initialize the fixture.

    Args:
        config: Tool-specific configuration
        artifact_dir: Directory to store artifacts
    """
    self.config = config or {}
    self.artifact_dir = artifact_dir or Path(".quality")
    self.results: list[QualityResult] = []
    self._setup_complete = False
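
For orientation, here is a minimal sketch of a concrete fixture built on this base class. The LintFixture name, the "lint" tool label, and the canned result are invented for illustration; only the base-class methods it calls are part of the documented API.

from pathlib import Path

from provide.testkit.quality.base import BaseQualityFixture, QualityResult


class LintFixture(BaseQualityFixture):
    """Illustrative fixture; records a canned result instead of running a real tool."""

    def setup(self) -> None:
        # A real fixture would initialize its underlying tool here.
        self.create_artifact_dir("lint")

    def teardown(self) -> None:
        # A real fixture would release tool resources here.
        pass

    def check(self, path: Path) -> QualityResult:
        self.ensure_setup()
        result = QualityResult(tool="lint", passed=True, score=100.0)
        self.add_result(result)
        return result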
Functions
setup abstractmethod
setup() -> None

Set up the quality tool.

Source code in provide/testkit/quality/base.py
@abstractmethod
def setup(self) -> None:
    """Set up the quality tool."""
    pass
teardown abstractmethod
teardown() -> None

Clean up after the quality check.

Source code in provide/testkit/quality/base.py
@abstractmethod
def teardown(self) -> None:
    """Clean up after the quality check."""
    pass
add_result
add_result(result: QualityResult) -> None

Add a result to the tracked results.

Source code in provide/testkit/quality/base.py
def add_result(self, result: QualityResult) -> None:
    """Add a result to the tracked results."""
    self.results.append(result)
get_results
get_results() -> list[QualityResult]

Get all tracked results.

Source code in provide/testkit/quality/base.py
def get_results(self) -> list[QualityResult]:
    """Get all tracked results."""
    return self.results.copy()
get_results_by_tool
get_results_by_tool() -> dict[str, QualityResult]

Get results indexed by tool name.

Source code in provide/testkit/quality/base.py
def get_results_by_tool(self) -> dict[str, QualityResult]:
    """Get results indexed by tool name."""
    return {result.tool: result for result in self.results}
ensure_setup
ensure_setup() -> None

Ensure setup has been called.

Source code in provide/testkit/quality/base.py
def ensure_setup(self) -> None:
    """Ensure setup has been called."""
    if not self._setup_complete:
        self.setup()
        self._setup_complete = True
create_artifact_dir
create_artifact_dir(subdir: str | None = None) -> Path

Create and return artifact directory.

Parameters:

    subdir (str | None): Optional subdirectory name. Default: None.

Returns:

    Path to the artifact directory.

Source code in provide/testkit/quality/base.py
def create_artifact_dir(self, subdir: str | None = None) -> Path:
    """Create and return artifact directory.

    Args:
        subdir: Optional subdirectory name

    Returns:
        Path to the artifact directory
    """
    artifact_path = self.artifact_dir / subdir if subdir else self.artifact_dir

    artifact_path.mkdir(parents=True, exist_ok=True)
    return artifact_path
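
Continuing the illustrative LintFixture sketch above (names remain hypothetical), directory creation is idempotent and nests under artifact_dir:

fixture = LintFixture(artifact_dir=Path("build/.quality"))
lint_dir = fixture.create_artifact_dir("lint")  # build/.quality/lint, created if missing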

QualityResult dataclass

QualityResult(
    tool: str,
    passed: bool,
    score: float | None = None,
    details: dict[str, Any] = dict(),
    artifacts: list[Path] = list(),
    execution_time: float | None = None,
)

Result from a quality analysis tool.

Attributes:

    tool (str): Name of the tool that generated this result.
    passed (bool): Whether the quality check passed.
    score (float | None): Numeric score (0-100) if applicable.
    details (dict[str, Any]): Tool-specific details and metrics.
    artifacts (list[Path]): List of artifact files created.
    execution_time (float | None): Time taken to run the analysis in seconds.

Attributes
summary property
summary: str

Human-readable summary of the result.
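
A quick sketch of constructing a result by hand; the exact wording of summary is not specified here, only that it is human-readable:

result = QualityResult(tool="coverage", passed=True, score=92.5, execution_time=1.3)
print(result.summary)  # human-readable pass/fail line for the "coverage" tool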

QualityTool

Bases: Protocol

Protocol for quality analysis tools.

Functions
analyze
analyze(path: Path, **kwargs: Any) -> QualityResult

Run analysis on the given path.

Parameters:

    path (Path): Path to analyze (file or directory). Required.
    **kwargs (Any): Tool-specific options.

Returns:

    QualityResult containing analysis results.

Source code in provide/testkit/quality/base.py
def analyze(self, path: Path, **kwargs: Any) -> QualityResult:
    """Run analysis on the given path.

    Args:
        path: Path to analyze (file or directory)
        **kwargs: Tool-specific options

    Returns:
        QualityResult containing analysis results
    """
    ...
report
report(
    result: QualityResult, format: str = "terminal"
) -> str

Generate a report from analysis result.

Parameters:

    result (QualityResult): Result to generate report for. Required.
    format (str): Output format (terminal, json, html, markdown). Default: "terminal".

Returns:

    Formatted report string.

Source code in provide/testkit/quality/base.py
def report(self, result: QualityResult, format: str = "terminal") -> str:
    """Generate a report from analysis result.

    Args:
        result: Result to generate report for
        format: Output format (terminal, json, html, markdown)

    Returns:
        Formatted report string
    """
    ...
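
Because QualityTool is a Protocol, any class with matching analyze and report methods satisfies it structurally; no inheritance is required. A hypothetical example (the line-count check and its 500-line threshold are invented for illustration):

import json
from pathlib import Path
from typing import Any

from provide.testkit.quality.base import QualityResult


class LineCountTool:
    """Toy tool: passes if the target stays under 500 lines of Python."""

    def analyze(self, path: Path, **kwargs: Any) -> QualityResult:
        files = [path] if path.is_file() else list(path.rglob("*.py"))
        lines = sum(len(f.read_text().splitlines()) for f in files)
        return QualityResult(tool="line-count", passed=lines < 500, details={"lines": lines})

    def report(self, result: QualityResult, format: str = "terminal") -> str:
        if format == "json":
            return json.dumps({"tool": result.tool, "passed": result.passed, **result.details})
        return result.summary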

ReportGenerator

ReportGenerator(config: dict[str, Any] | None = None)

Generates reports from quality analysis results.

Supports multiple output formats including terminal, JSON, HTML, and Markdown.

Initialize report generator.

Parameters:

    config (dict[str, Any] | None): Configuration for report generation. Default: None.
Source code in provide/testkit/quality/report.py
def __init__(self, config: dict[str, Any] | None = None) -> None:
    """Initialize report generator.

    Args:
        config: Configuration for report generation
    """
    self.config = config or {}
Functions
generate
generate(
    results: dict[str, QualityResult],
    format: str = "terminal",
) -> str

Generate a report from quality results.

Parameters:

    results (dict[str, QualityResult]): Quality results to report on. Required.
    format (str): Output format (terminal, json, html, markdown). Default: "terminal".

Returns:

    Formatted report string.

Source code in provide/testkit/quality/report.py
def generate(self, results: dict[str, QualityResult], format: str = "terminal") -> str:
    """Generate a report from quality results.

    Args:
        results: Quality results to report on
        format: Output format (terminal, json, html, markdown)

    Returns:
        Formatted report string
    """
    if format == "terminal":
        return self._generate_terminal_report(results)
    elif format == "json":
        return self._generate_json_report(results)
    elif format == "html":
        return self._generate_html_report(results)
    elif format == "markdown":
        return self._generate_markdown_report(results)
    else:
        raise ValueError(f"Unsupported report format: {format}")
save_report
save_report(
    results: dict[str, QualityResult],
    output_path: Path,
    format: str | None = None,
) -> None

Save report to file.

Parameters:

    results (dict[str, QualityResult]): Quality results to report on. Required.
    output_path (Path): Path to save report to. Required.
    format (str | None): Output format (auto-detected from extension if None). Default: None.
Source code in provide/testkit/quality/report.py
def save_report(
    self, results: dict[str, QualityResult], output_path: Path, format: str | None = None
) -> None:
    """Save report to file.

    Args:
        results: Quality results to report on
        output_path: Path to save report to
        format: Output format (auto-detected from extension if None)
    """
    if format is None:
        # Auto-detect format from file extension
        suffix = output_path.suffix.lower()
        if suffix == ".json":
            format = "json"
        elif suffix == ".html":
            format = "html"
        elif suffix == ".md":
            format = "markdown"
        else:
            format = "terminal"

    report_content = self.generate(results, format)
    ensure_dir(output_path.parent)
    atomic_write_text(output_path, report_content)
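
A usage sketch, assuming results is a dict[str, QualityResult] such as one returned by QualityRunner.run_all below; the output paths are arbitrary:

generator = ReportGenerator()
generator.save_report(results, Path("reports/quality.md"))    # "markdown", from .md
generator.save_report(results, Path("reports/quality.json"))  # "json", from .json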

QualityRunner

QualityRunner(
    artifact_root: Path | None = None,
    tools: list[str] | None = None,
    config: dict[str, Any] | None = None,
)

Orchestrates multiple quality analysis tools.

Manages the execution of quality tools, artifact collection, and result aggregation with configurable quality gates.

Initialize the quality runner.

Parameters:

    artifact_root (Path | None): Root directory for storing artifacts (defaults to .quality-artifacts). Default: None.
    tools (list[str] | None): List of tool names to run (None for default set). Default: None.
    config (dict[str, Any] | None): Configuration for tools and runner. Default: None.
Source code in provide/testkit/quality/runner.py
def __init__(
    self,
    artifact_root: Path | None = None,
    tools: list[str] | None = None,
    config: dict[str, Any] | None = None,
) -> None:
    """Initialize the quality runner.

    Args:
        artifact_root: Root directory for storing artifacts (defaults to .quality-artifacts)
        tools: List of tool names to run (None for default set)
        config: Configuration for tools and runner
    """
    self.artifact_root = Path(artifact_root) if artifact_root else Path(".quality-artifacts")
    self.config = config or {}
    self.tools = tools or self._get_default_tools()
    self.tool_instances: dict[str, QualityTool] = {}
    self._initialize_tools()
Functions
run_all
run_all(
    target: Path, **kwargs: Any
) -> dict[str, QualityResult]

Run all configured quality tools on the target.

Parameters:

    target (Path): Path to analyze. Required.
    **kwargs (Any): Additional arguments passed to tools.

Returns:

    Dictionary mapping tool names to their results.

Source code in provide/testkit/quality/runner.py
def run_all(self, target: Path, **kwargs: Any) -> dict[str, QualityResult]:
    """Run all configured quality tools on the target.

    Args:
        target: Path to analyze
        **kwargs: Additional arguments passed to tools

    Returns:
        Dictionary mapping tool names to their results
    """
    results = {}
    target = Path(target)

    for tool_name, tool in self.tool_instances.items():
        artifact_dir = self.artifact_root / tool_name
        ensure_dir(artifact_dir)

        try:
            start_time = time.time()
            result = tool.analyze(target, artifact_dir=artifact_dir, **kwargs)
            result.execution_time = time.time() - start_time

            # Save artifacts
            self._save_tool_artifacts(result, artifact_dir)
            results[tool_name] = result

        except Exception as e:
            # Create failed result for tool
            results[tool_name] = QualityResult(
                tool=tool_name, passed=False, details={"error": str(e), "error_type": type(e).__name__}
            )

    return results
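
A usage sketch; note that a tool raising an exception does not abort the run, it simply yields a failed QualityResult with the error recorded in details:

from pathlib import Path

from provide.testkit.quality.runner import QualityRunner

runner = QualityRunner(artifact_root=Path("build/.quality-artifacts"))
results = runner.run_all(Path("src/"))
for result in results.values():
    print(result.summary)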
run_with_gates
run_with_gates(
    target: Path, gates: dict[str, Any], **kwargs: Any
) -> tuple[bool, dict[str, QualityResult]]

Run quality tools and check against quality gates.

Parameters:

    target (Path): Path to analyze. Required.
    gates (dict[str, Any]): Quality gate requirements. Required.
    **kwargs (Any): Additional arguments passed to tools.

Returns:

    Tuple of (all_gates_passed, results).

Source code in provide/testkit/quality/runner.py
def run_with_gates(
    self, target: Path, gates: dict[str, Any], **kwargs: Any
) -> tuple[bool, dict[str, QualityResult]]:
    """Run quality tools and check against quality gates.

    Args:
        target: Path to analyze
        gates: Quality gate requirements
        **kwargs: Additional arguments passed to tools

    Returns:
        Tuple of (all_gates_passed, results)
    """
    results = self.run_all(target, **kwargs)
    passed = self._check_gates(results, gates)
    return passed, results
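
Continuing the runner from the previous sketch: the gate schema is enforced by the private _check_gates and is not documented here, so the keys below simply mirror the quality_check decorator example at the top of this page and are assumptions.

passed, results = runner.run_with_gates(
    Path("src/"),
    gates={"coverage": 90, "security": True},  # assumed schema, mirroring @quality_check
)
if not passed:
    print(runner.generate_summary_report(results))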
get_available_tools
get_available_tools() -> list[str]

Get list of available tool names.

Source code in provide/testkit/quality/runner.py
def get_available_tools(self) -> list[str]:
    """Get list of available tool names."""
    return list(self.tool_instances.keys())
generate_summary_report
generate_summary_report(
    results: dict[str, QualityResult],
) -> str

Generate a summary report of all results.

Parameters:

    results (dict[str, QualityResult]): Results to summarize. Required.

Returns:

    Summary report string.

Source code in provide/testkit/quality/runner.py
def generate_summary_report(self, results: dict[str, QualityResult]) -> str:
    """Generate a summary report of all results.

    Args:
        results: Results to summarize

    Returns:
        Summary report string
    """
    lines = ["Quality Analysis Summary", "=" * 30, ""]

    total_tools = len(results)
    passed_tools = sum(1 for r in results.values() if r.passed)

    lines.append(f"Tools Run: {total_tools}")
    lines.append(f"Passed: {passed_tools}")
    lines.append(f"Failed: {total_tools - passed_tools}")
    lines.append("")

    for _tool_name, result in results.items():
        lines.append(result.summary)

    return "\n".join(lines)
run_tools
run_tools(
    target: Path,
    tools: list[str] | None = None,
    artifact_dir: Path | None = None,
    tool_configs: dict[str, Any] | None = None,
) -> dict[str, QualityResult]

Run specific quality tools on the target.

Parameters:

    target (Path): Path to analyze. Required.
    tools (list[str] | None): List of tool names to run (None for all available). Default: None.
    artifact_dir (Path | None): Directory for artifacts (overrides default). Default: None.
    tool_configs (dict[str, Any] | None): Configuration for tools. Default: None.

Returns:

    Dictionary mapping tool names to their results.

Source code in provide/testkit/quality/runner.py
def run_tools(
    self,
    target: Path,
    tools: list[str] | None = None,
    artifact_dir: Path | None = None,
    tool_configs: dict[str, Any] | None = None,
) -> dict[str, QualityResult]:
    """Run specific quality tools on the target.

    Args:
        target: Path to analyze
        tools: List of tool names to run (None for all available)
        artifact_dir: Directory for artifacts (overrides default)
        tool_configs: Configuration for tools

    Returns:
        Dictionary mapping tool names to their results
    """
    if artifact_dir:
        original_artifact_root = self.artifact_root
        self.artifact_root = artifact_dir

    if tool_configs:
        original_config = self.config
        self.config = tool_configs
        # Re-initialize tools with new config
        self._initialize_tools()

    # Filter tools if specified
    if tools:
        filtered_instances = {name: tool for name, tool in self.tool_instances.items() if name in tools}
        original_instances = self.tool_instances
        self.tool_instances = filtered_instances

    try:
        results = self.run_all(target)
        return results
    finally:
        # Restore original state
        if artifact_dir:
            self.artifact_root = original_artifact_root
        if tool_configs:
            self.config = original_config
            self._initialize_tools()
        if tools:
            self.tool_instances = original_instances
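
A closing sketch of run_tools with a temporary artifact override; the "coverage" tool name is an assumption, so check get_available_tools() for the actual set:

runner = QualityRunner()
results = runner.run_tools(
    Path("src/"),
    tools=["coverage"],             # assumed name; a subset of get_available_tools()
    artifact_dir=Path("build/qa"),  # temporary override, restored afterwards
)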