
Cache

🤖 AI-Generated Content

This documentation was generated with AI assistance and is still being audited; some of it may be inaccurate.

provide.foundation.serialization.cache

Functions

get_cache_enabled

get_cache_enabled() -> bool

Whether caching is enabled.

Source code in provide/foundation/serialization/cache.py
def get_cache_enabled() -> bool:
    """Whether caching is enabled."""
    config = _get_cache_config()
    result: bool = config.cache_enabled
    return result

get_cache_key

get_cache_key(content: str, format: str) -> tuple[str, int]

Generate cache key from content and format.

Uses a (format, hash) tuple as the key to avoid intermediate string allocations. Python's built-in hash() is a single C-level operation that returns an int: no encode(), no hexdigest(), no slicing.

Note: hash() is not stable across Python processes (PYTHONHASHSEED), but that's fine for an in-process LRU cache.

Parameters:

content (str): String content to hash. Required.
format (str): Format identifier (json, yaml, toml, etc.). Required.

Returns:

tuple[str, int]: Cache key tuple.

Source code in provide/foundation/serialization/cache.py
def get_cache_key(content: str, format: str) -> tuple[str, int]:
    """Generate cache key from content and format.

    Uses a (format, hash) tuple as the key to avoid intermediate string
    allocations. Python's built-in hash() is a single C-level operation
    that returns an int: no encode(), no hexdigest(), no slicing.

    Note: hash() is not stable across Python processes (PYTHONHASHSEED),
    but that's fine for an in-process LRU cache.

    Args:
        content: String content to hash
        format: Format identifier (json, yaml, toml, etc.)

    Returns:
        Cache key tuple

    """
    return (format, hash(content))
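To illustrate the keying scheme, here is a minimal self-contained sketch that mirrors get_cache_key, with a plain dict standing in for the library's LRUCache (cached_loads and _cache are hypothetical names used only for this example):

```python
import json

# Same keying scheme as get_cache_key above: a (format, hash) tuple,
# built without any intermediate string allocation.
def get_cache_key(content: str, format: str) -> tuple[str, int]:
    return (format, hash(content))

# Plain dict standing in for the library's LRUCache.
_cache: dict[tuple[str, int], object] = {}

def cached_loads(content: str, format: str) -> object:
    key = get_cache_key(content, format)
    if key not in _cache:
        _cache[key] = json.loads(content)
    return _cache[key]

doc = '{"a": 1}'
first = cached_loads(doc, "json")
second = cached_loads(doc, "json")
assert first is second  # second call is served from the cache
```

Because hash() is salted per process (PYTHONHASHSEED), these keys are only meaningful within the process that created them, which is exactly the scope of an in-process LRU cache.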

get_cache_size

get_cache_size() -> int

Cache size limit.

Source code in provide/foundation/serialization/cache.py
def get_cache_size() -> int:
    """Cache size limit."""
    config = _get_cache_config()
    result: int = config.cache_size
    return result

get_serialization_cache

get_serialization_cache() -> Any

Get or create serialization cache with thread-safe lazy initialization.

Lock overhead (~20-50ns) is negligible compared to actual cache operations (~100-1000ns lookup, ~1-100μs for serialization).

Source code in provide/foundation/serialization/cache.py
def get_serialization_cache() -> Any:  # LRUCache
    """Get or create serialization cache with thread-safe lazy initialization.

    Lock overhead (~20-50ns) is negligible compared to actual cache operations
    (~100-1000ns lookup, ~1-100ΞΌs for serialization).
    """
    global _serialization_cache

    with _cache_lock:
        if _serialization_cache is None:
            from provide.foundation.utils.caching import LRUCache, register_cache

            config = _get_cache_config()
            _serialization_cache = LRUCache(maxsize=config.cache_size)
            register_cache("serialization", _serialization_cache)
        return _serialization_cache

reset_serialization_cache_config

reset_serialization_cache_config() -> None

Reset cached config for testing purposes.

Thread-safe reset that acquires the lock.

Source code in provide/foundation/serialization/cache.py
def reset_serialization_cache_config() -> None:
    """Reset cached config for testing purposes.

    Thread-safe reset that acquires the lock.
    """
    global _cached_config, _serialization_cache
    with _cache_lock:
        _cached_config = None
        _serialization_cache = None