Testmode
provide.foundation.testmode
¶
Utilities for detecting test environments and resetting Foundation's internal state, so components can adjust their behavior under test and test runs stay isolated from one another.
Functions¶
configure_structlog_for_test_safety
¶
Configure structlog to use stdout for multiprocessing safety.
When running tests with parallel execution (pytest-xdist, mutmut with --max-children, etc.), file handles don't survive process forking. This causes "I/O operation on closed file" errors when structlog's PrintLogger tries to write to file handles from forked processes.
This function configures structlog to use sys.stdout which is safe for multiprocessing and properly handled by pytest.
Should be called automatically when is_in_test_mode() returns True.
Source code in provide/foundation/testmode/detection.py
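The failure mode this guards against can be sketched without structlog at all. Below, `LateBoundStdoutLogger` is an illustrative stand-in (not Foundation's real code) for the key idea: resolve `sys.stdout` at call time rather than caching a file handle at configuration time, so forked workers and pytest's capture machinery never leave the logger holding a stale handle.

```python
import io
import sys


class LateBoundStdoutLogger:
    """Illustrative stand-in for a multiprocessing-safe print logger."""

    def msg(self, message: str) -> None:
        # Look up sys.stdout on every call; a handle captured once at
        # configuration time would go stale after a fork or a pytest
        # capture swap.
        print(message, file=sys.stdout)


# Demonstration: even if sys.stdout is swapped after the logger is
# created (as pytest's capture does), messages follow the new stream.
logger = LateBoundStdoutLogger()
buffer = io.StringIO()
sys.stdout, original = buffer, sys.stdout
logger.msg("safe after stream swap")
sys.stdout = original
```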
get_test_unsafe_features
¶
Get the registry of all test-unsafe features.
This is primarily used by validation tests to ensure all test-unsafe features are properly decorated.
Returns:

| Type | Description |
|---|---|
| `dict[str, dict[str, Any]]` | Dictionary mapping function IDs to their metadata |
Example:

```python
>>> features = get_test_unsafe_features()
>>> assert "process.title.set_process_title" in features
```
Source code in provide/foundation/testmode/decorators.py
is_in_click_testing
¶
Check if we're running inside Click's testing framework.
This detects Click's CliRunner testing context to prevent stream manipulation that could interfere with Click's output capture.
Returns:

| Type | Description |
|---|---|
| `bool` | True if running in Click testing context, False otherwise |
Source code in provide/foundation/testmode/detection.py
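One cheap signal for this kind of check can be sketched as follows. This is a hedged approximation, not Foundation's actual detection logic: Click's `CliRunner` lives in `click.testing`, so that module appearing in `sys.modules` is a strong hint that its output capture may be active.

```python
import sys


def in_click_testing() -> bool:
    # Hedged sketch: if click.testing has been imported anywhere in the
    # process, assume CliRunner's stream capture could be in play and
    # avoid manipulating stdout/stderr.
    return "click.testing" in sys.modules
```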
is_in_test_mode
¶
Detect if we're running in a test environment.
This method checks for common test environment indicators to determine if Foundation components should adjust their behavior for test compatibility.
Performance: Results are cached after first detection since test mode doesn't change during process lifetime. Use _clear_test_mode_cache() in tests for proper isolation.
Returns:

| Type | Description |
|---|---|
| `bool` | True if running in test mode, False otherwise |
Source code in provide/foundation/testmode/detection.py
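The detection-plus-caching pattern described above can be sketched in a few lines. The indicators below are hypothetical examples of "common test environment indicators"; the real `is_in_test_mode()` may check different signals. `functools.lru_cache` plays the role of the process-lifetime cache, and `cache_clear()` is the analogue of `_clear_test_mode_cache()`.

```python
import os
import sys
from functools import lru_cache


@lru_cache(maxsize=1)
def in_test_mode() -> bool:
    # Cached after the first call: test mode does not change during the
    # lifetime of the process.
    if os.environ.get("PYTEST_CURRENT_TEST"):
        return True
    # Presence of a test runner's module is another common indicator.
    return "pytest" in sys.modules or "unittest" in sys.modules
```

In a test fixture you would call `in_test_mode.cache_clear()` between tests that manipulate environment variables or `sys.modules`, mirroring the isolation advice above.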
is_test_unsafe
¶
Check if a function is registered as test-unsafe.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `func` | `Callable[..., Any]` | The function to check | required |

Returns:

| Type | Description |
|---|---|
| `bool` | True if the function is decorated with `@skip_in_test_mode` |
Example:

```python
>>> @skip_in_test_mode()
... def my_function():
...     pass
>>> is_test_unsafe(my_function)
True
```
Source code in provide/foundation/testmode/decorators.py
reset_circuit_breaker_state
¶
Reset all circuit breaker instances to ensure test isolation.
This function resets all circuit breaker instances that were created by the @circuit_breaker decorator and direct instantiation to ensure their state doesn't leak between tests.
Source code in provide/foundation/testmode/internal.py
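The "reset every instance" mechanic can be illustrated with a toy breaker; this is not Foundation's implementation, only a sketch of the pattern. Tracking live instances in a `WeakSet` lets a single test hook wipe state everywhere without keeping garbage-collected breakers alive.

```python
import weakref


class SketchCircuitBreaker:
    """Toy breaker that tracks live instances so tests can reset them all."""

    _instances: "weakref.WeakSet[SketchCircuitBreaker]" = weakref.WeakSet()

    def __init__(self) -> None:
        self.failure_count = 0
        self.opened = False
        SketchCircuitBreaker._instances.add(self)

    def record_failure(self, threshold: int = 3) -> None:
        self.failure_count += 1
        if self.failure_count >= threshold:
            self.opened = True

    @classmethod
    def reset_all(cls) -> None:
        # The conceptual job of reset_circuit_breaker_state: clear state
        # on every instance so it cannot leak into the next test.
        for breaker in cls._instances:
            breaker.failure_count = 0
            breaker.opened = False
```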
reset_foundation_for_testing
¶
Complete Foundation reset for testing with transport re-registration.
This is the full reset function that testing frameworks should call. It performs the complete state reset and handles test-specific concerns like transport re-registration and test stream preservation.
Source code in provide/foundation/testmode/orchestration.py
reset_foundation_state
¶
Reset Foundation's complete internal state using proper orchestration.
This is the master reset function that knows the proper order and handles Foundation-specific concerns. It resets:

- structlog configuration to defaults
- Foundation Hub state (which manages all Foundation components)
- Stream state back to defaults
- Lazy setup state tracking (if available)
- OpenTelemetry provider state (if available)
- Foundation environment variables to defaults
This function encapsulates Foundation-internal knowledge about proper reset ordering and component dependencies.
Source code in provide/foundation/testmode/orchestration.py
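The ordered-reset orchestration can be sketched with a simple callback registry. Names here are illustrative, not Foundation's internals: each subsystem registers a reset callback, and the orchestrator runs them in a fixed dependency order (structlog first, per the list above).

```python
from collections.abc import Callable

# Registration order encodes dependency order: earlier entries are more
# fundamental and must be reset first.
_RESETTERS: list[tuple[str, Callable[[], None]]] = []


def register_resetter(name: str, fn: Callable[[], None]) -> None:
    _RESETTERS.append((name, fn))


def reset_all_state() -> list[str]:
    """Run every registered reset in dependency order; return what ran."""
    ran: list[str] = []
    for name, fn in _RESETTERS:
        fn()
        ran.append(name)
    return ran
```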
reset_global_coordinator
¶
Reset the global initialization coordinator state for testing.
This function resets the singleton InitializationCoordinator state to ensure proper test isolation between test runs.
WARNING: This should only be called from test code or test fixtures. Production code should not reset the global coordinator state.
Source code in provide/foundation/testmode/internal.py
reset_hub_state
¶
Reset Hub state to defaults.
This clears the Hub registry and resets all Hub components to their initial state.
Source code in provide/foundation/testmode/internal.py
reset_logger_state
¶
Reset Foundation logger state to defaults.
This resets the lazy setup state and logger configuration flags without importing the full logger module to avoid circular dependencies.
Source code in provide/foundation/testmode/internal.py
reset_streams_state
¶
Reset stream state to defaults.
This resets file streams and other stream-related state managed by the streams module.
Source code in provide/foundation/testmode/internal.py
reset_structlog_state
¶
Reset structlog configuration to defaults.
This is the most fundamental reset - it clears all structlog configuration and returns it to an unconfigured state.
Source code in provide/foundation/testmode/internal.py
reset_test_mode_cache
¶
Reset test mode detection cache.
This clears the cached test mode detection result, allowing fresh detection on the next call. This is important for test isolation when tests manipulate environment variables or sys.modules.
Source code in provide/foundation/testmode/internal.py
reset_version_cache
¶
Reset version cache to defaults.
This clears the cached version to ensure clean state between tests, allowing each test to verify different version resolution scenarios.
Source code in provide/foundation/testmode/internal.py
should_use_shared_registries
¶
```python
should_use_shared_registries(
    use_shared_registries: bool,
    component_registry: object | None,
    command_registry: object | None,
) -> bool
```
Determine if Hub should use shared registries based on explicit parameters.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `use_shared_registries` | `bool` | Explicit user preference | required |
| `component_registry` | `object \| None` | Custom component registry if provided | required |
| `command_registry` | `object \| None` | Custom command registry if provided | required |

Returns:

| Type | Description |
|---|---|
| `bool` | True if shared registries should be used |
Source code in provide/foundation/testmode/detection.py
skip_in_test_mode
¶
```python
skip_in_test_mode(
    return_value: Any = None,
    log_level: str = "debug",
    reason: str | None = None,
) -> Callable[[F], F]
```
Decorator to skip function execution in test mode.
Marks a function as test-unsafe and automatically skips execution when running in test mode. The function is registered in a global registry for validation purposes.
This decorator is reusable for any scenario where you want to conditionally skip function execution based on runtime detection.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `return_value` | `Any` | Value to return when skipped | `None` |
| `log_level` | `str` | Log level for skip message | `'debug'` |
| `reason` | `str \| None` | Optional custom reason for skipping (for logging) | `None` |

Returns:

| Type | Description |
|---|---|
| `Callable[[F], F]` | Decorated function that checks test mode before execution |
Example:

```python
@skip_in_test_mode(return_value=True)
def set_system_state(value: str) -> bool:
    # This won't run in tests
    os.system(f"something {value}")
    return True


@skip_in_test_mode(return_value=None, reason="systemd not available in tests")
def notify_systemd(status: str) -> None:
    systemd.notify(status)
```