Inference cache
🤖 AI-Generated Content
This documentation was generated with AI assistance and is still being audited; some of this information may be inaccurate.
pyvider.cty.conversion.inference_cache
Classes
Functions
get_container_schema_cache

Get the current container schema cache from the context.

Returns:

| Type | Description |
|---|---|
| `dict[tuple[Any, ...], CtyType[Any]] \| None` | Cache dictionary if in active context, None otherwise |
get_structural_key_cache

Get the current structural key cache from the context.

Returns:

| Type | Description |
|---|---|
| `dict[int, tuple[Any, ...]] \| None` | Cache dictionary if in active context, None otherwise |
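An analogous sketch for the structural key cache; the comments describe only what the annotated return type guarantees, and the import path is assumed from the module shown above:

```python
from pyvider.cty.conversion.inference_cache import (
    get_structural_key_cache,
    inference_cache_context,
)

with inference_cache_context():
    key_cache = get_structural_key_cache()
    if key_cache is not None:
        # Per the annotated return type, keys are ints and values are
        # structural key tuples; what the ints identify is not documented here.
        for key, structural_key in key_cache.items():
            print(key, structural_key)
```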
inference_cache_context

Provide isolated inference caches for type inference operations.

Creates scoped caches that are automatically cleaned up when exiting the context. Nested contexts reuse the parent cache. Respects the configuration setting for enabling/disabling caches.

Yields:

| Type | Description |
|---|---|
| `Generator[None]` | None (use `get_*_cache()` functions within context) |

Examples:

>>> with inference_cache_context():
...     # Caches are active here
...     result = infer_cty_type_from_raw(data)
... # Caches automatically cleared
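The nested-context behaviour described above can be illustrated with a short sketch (not from the library's documentation); reading "reuse the parent cache" as "the same dict object is visible in both scopes" is an assumption:

```python
from pyvider.cty.conversion.inference_cache import (
    get_container_schema_cache,
    inference_cache_context,
)

with inference_cache_context():
    outer = get_container_schema_cache()
    with inference_cache_context():
        inner = get_container_schema_cache()
        # Nested contexts reuse the parent's cache rather than creating
        # a fresh one (assumed to mean the identical object is returned).
        assert inner is outer
```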
with_inference_cache

Decorator providing isolated inference cache for function execution.

Ensures thread/async safety by providing each invocation with its own cache context via ContextVar-based scoping.

Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `func` | `F` | Function to decorate | required |

Returns:

| Type | Description |
|---|---|
| `F` | Decorated function with cache context |
Examples:
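A minimal usage sketch (not the library's own example): `infer_all` is a hypothetical helper, and the import path of `infer_cty_type_from_raw`, which appears in the example above, is an assumption and may differ in your installation.

```python
from typing import Any

from pyvider.cty.conversion.inference_cache import with_inference_cache

# Import path assumed; infer_cty_type_from_raw is taken from the example above.
from pyvider.cty.conversion import infer_cty_type_from_raw


@with_inference_cache
def infer_all(values: list[Any]) -> list[Any]:
    # Each invocation runs inside its own ContextVar-scoped cache, so
    # concurrent threads or async tasks never share mutable cache state.
    return [infer_cty_type_from_raw(value) for value in values]


inferred = infer_all([{"name": "alpha"}, {"name": "beta"}])
```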