Metrics

provide.foundation.profiling.metrics

Profiling metrics for Foundation's logging infrastructure.
Classes

`ProfileMetrics`

Thread-safe metrics collection for profiling Foundation performance. Tracks message processing performance, emoji overhead, and throughput metrics for Foundation's logging infrastructure.
Example:

```python
metrics = ProfileMetrics()
metrics.record_message(duration_ns=1500000, has_emoji=True, field_count=5)
print(f"Avg latency: {metrics.avg_latency_ms:.2f}ms")
print(f"Throughput: {metrics.messages_per_second:.0f} msg/sec")
```
Initialize metrics with zero values and current timestamp.
Attributes

`avg_fields_per_message` (property)
Calculate average number of fields per message.

`avg_latency_ms` (property)
Calculate average processing latency in milliseconds.

`emoji_overhead_percent` (property)
Calculate percentage of messages with emoji processing.

`messages_per_second` (property)
Calculate messages per second since start time.
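This page documents the property names but not their arithmetic. The sketch below is one plausible implementation, assuming hypothetical internal counters (`message_count`, `total_duration_ns`, `emoji_message_count`, `total_field_count`, `start_time`) rather than the class's actual fields:

```python
import time

class ProfileMetricsSketch:
    """Illustrative stand-in; field names are hypothetical, not Foundation's."""

    def __init__(self) -> None:
        self.message_count = 0
        self.total_duration_ns = 0
        self.emoji_message_count = 0
        self.total_field_count = 0
        self.start_time = time.time()

    @property
    def avg_latency_ms(self) -> float:
        # Mean per-message duration, converted from nanoseconds to milliseconds.
        if self.message_count == 0:
            return 0.0
        return self.total_duration_ns / self.message_count / 1_000_000

    @property
    def messages_per_second(self) -> float:
        # Throughput over the wall-clock window since start (or last reset).
        elapsed = time.time() - self.start_time
        return self.message_count / elapsed if elapsed > 0 else 0.0

    @property
    def emoji_overhead_percent(self) -> float:
        # Share of recorded messages that involved emoji processing.
        if self.message_count == 0:
            return 0.0
        return 100.0 * self.emoji_message_count / self.message_count

    @property
    def avg_fields_per_message(self) -> float:
        if self.message_count == 0:
            return 0.0
        return self.total_field_count / self.message_count
```

Guarding the divisions against `message_count == 0` keeps every property safe to read before the first message is recorded.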
Functions

`record_message`

Record a processed message with timing and metadata.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| `duration_ns` | `int` | Processing duration in nanoseconds | required |
| `has_emoji` | `bool` | Whether the message involved emoji processing | required |
| `field_count` | `int` | Number of fields in the log event | required |
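A short usage sketch showing how a `duration_ns` value can be captured with the standard library's `time.perf_counter_ns()`; the work being timed and the `has_emoji`/`field_count` values are placeholders, not part of the documented API:

```python
import time

from provide.foundation.profiling.metrics import ProfileMetrics

metrics = ProfileMetrics()

start = time.perf_counter_ns()
# ... process one log event here ...
elapsed_ns = time.perf_counter_ns() - start

# has_emoji and field_count describe the event that was just processed;
# the values below are illustrative.
metrics.record_message(duration_ns=elapsed_ns, has_emoji=False, field_count=3)
```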
`reset`

Reset all metrics to initial values with new start time.
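Because `reset` also restarts the clock, it lends itself to windowed sampling: read a snapshot of the rates, then clear the counters so the next interval starts fresh. A small sketch using only the documented API (the window length and loop count are arbitrary choices):

```python
import time

from provide.foundation.profiling.metrics import ProfileMetrics

metrics = ProfileMetrics()

for _ in range(3):  # e.g. three one-minute sampling windows
    time.sleep(60)
    print(f"{metrics.messages_per_second:.0f} msg/sec this window")
    metrics.reset()  # zero all counters and restart the clock
```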
`to_dict`

Serialize metrics to dictionary for JSON output.
Returns:

| Type | Description |
|---|---|
| `dict[str, Any]` | Dictionary containing all current metrics |
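Since the return value is a plain `dict[str, Any]`, a snapshot can be passed directly to `json.dumps`, assuming the values are JSON-serializable; the exact keys are not documented here, so this example just prints whatever the dictionary contains:

```python
import json

from provide.foundation.profiling.metrics import ProfileMetrics

metrics = ProfileMetrics()
metrics.record_message(duration_ns=1500000, has_emoji=True, field_count=5)

print(json.dumps(metrics.to_dict(), indent=2))  # emit current metrics as JSON
```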