Lightweight Python toolkit for measuring execution time, CPU time, peak memory usage, and call statistics of functions and code blocks. It helps developers understand the performance characteristics of their code through a simple decorator or context manager.
- Measure execution time
- Measure CPU time
- Measure peak memory usage
- Call count summary
- Supports sync and async functions
- Decorator API
- Context manager API
- Optional arguments and result logging
- Logger integration
- Pure Python implementation
- No external runtime dependencies
- Works on Python 3.10+
```bash
pip install funcmetricsx
```

⸻
```python
from funcmetricsx import metrics

@metrics()
def compute():
    return sum(i * i for i in range(10000))

compute()
```

Output:

```text
[funcmetricsx] | compute | time=0.0012s | cpu=0.0011s | peak_memory=10.25 KB
[funcmetricsx] | compute summary | calls=1 | avg_time=0.0012s | last_time=0.0012s | avg_cpu=0.0011s | last_cpu=0.0011s
```
⸻
```python
import requests
from funcmetricsx import metrics

@metrics()
def fetch_users():
    r = requests.get("https://jsonplaceholder.typicode.com/users")
    return r.json()

fetch_users()
```

Output:

```text
[funcmetricsx] | fetch_users | time=0.2341s | cpu=0.0120s | peak_memory=48.00 KB
```
Useful for understanding network latency vs CPU work.
⸻
```python
import sqlite3
from funcmetricsx import metrics

conn = sqlite3.connect(":memory:")

@metrics(show_result=True)
def query_db():
    cur = conn.cursor()
    cur.execute("SELECT sqlite_version()")
    return cur.fetchone()

query_db()
```

Output:

```text
[funcmetricsx] | query_db | time=0.0003s | cpu=0.0003s | result=('3.44.0',)
```
Helpful when profiling database access speed.
⸻
```python
import time
from funcmetricsx import metrics

@metrics()
def process_data():
    time.sleep(0.2)
    return "done"

process_data()
```

Output:

```text
[funcmetricsx] | process_data | time=0.2001s | cpu=0.0001s
```
Shows that the elapsed time is mostly spent waiting, not doing CPU work.
⸻
```python
from funcmetricsx import metrics

@metrics()
def cpu_heavy():
    total = 0
    for i in range(10_000_000):
        total += i
    return total

cpu_heavy()
```

Output:

```text
[funcmetricsx] | cpu_heavy | time=0.45s | cpu=0.44s
```
Here CPU time ≈ wall time, meaning the function is CPU-bound.
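The same wall-time vs CPU-time distinction can be demonstrated with the standard library alone. This sketch is independent of funcmetricsx and uses `time.perf_counter` for wall time and `time.process_time` for CPU time:

```python
import time

def timed(fn):
    """Return (wall_seconds, cpu_seconds) for a zero-argument callable."""
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    fn()
    return time.perf_counter() - wall_start, time.process_time() - cpu_start

# I/O-bound: wall time is large, CPU time stays near zero.
io_wall, io_cpu = timed(lambda: time.sleep(0.1))

# CPU-bound: wall time and CPU time are roughly equal.
cpu_wall, cpu_cpu = timed(lambda: sum(i * i for i in range(1_000_000)))

print(f"sleep: wall={io_wall:.3f}s cpu={io_cpu:.3f}s")
print(f"sum  : wall={cpu_wall:.3f}s cpu={cpu_cpu:.3f}s")
```

A large gap between the two numbers points at waiting (I/O, sleep, locks); near-equal numbers point at computation.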
⸻
```python
from funcmetricsx import metrics

@metrics()
def add(a, b):
    return a + b

add(1, 2)
add(3, 4)
add(5, 6)
```

Output:

```text
[funcmetricsx] | add | time=0.0001s | cpu=0.0001s
[funcmetricsx] | add summary | calls=1 | avg_time=0.0001s
[funcmetricsx] | add | time=0.0001s | cpu=0.0001s
[funcmetricsx] | add summary | calls=2 | avg_time=0.0001s
[funcmetricsx] | add | time=0.0001s | cpu=0.0001s
[funcmetricsx] | add summary | calls=3 | avg_time=0.0001s
```
Great for profiling high-frequency functions.
⸻
```python
import asyncio
from funcmetricsx import metrics

@metrics()
async def fetch_data():
    await asyncio.sleep(0.1)
    return "done"

asyncio.run(fetch_data())
```

Output:

```text
[funcmetricsx] | fetch_data | time=0.1003s | cpu=0.0002s
```
Works automatically with async def.
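One plausible way a decorator can transparently support both sync and async functions is to branch on `inspect.iscoroutinefunction`. The following is an illustrative stdlib-only sketch, not funcmetricsx's actual implementation:

```python
import asyncio
import functools
import inspect
import time

def timed(fn):
    """Wrap a sync or async callable, printing wall time either way."""
    if inspect.iscoroutinefunction(fn):
        @functools.wraps(fn)
        async def async_wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = await fn(*args, **kwargs)  # await inside the wrapper
            print(f"{fn.__name__}: {time.perf_counter() - start:.4f}s")
            return result
        return async_wrapper

    @functools.wraps(fn)
    def sync_wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        print(f"{fn.__name__}: {time.perf_counter() - start:.4f}s")
        return result
    return sync_wrapper

@timed
async def fetch():
    await asyncio.sleep(0.05)
    return "done"

print(asyncio.run(fetch()))
```

Returning an `async def` wrapper for coroutine functions keeps the decorated function awaitable, so callers don't have to change anything.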
⸻
Sometimes you want to measure a specific block of code instead of a function.
```python
import time
from funcmetricsx import measure_block

with measure_block("load-data"):
    time.sleep(0.2)
```

Output:

```text
[funcmetricsx] | load-data | time=0.2002s | cpu=0.0001s
```
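Under the hood, a block timer like this can be built from `contextlib.contextmanager`, `time`, and `tracemalloc`. The following stdlib-only sketch illustrates the idea; it is not funcmetricsx's actual implementation:

```python
import time
import tracemalloc
from contextlib import contextmanager

@contextmanager
def block_timer(label):
    """Measure wall time, CPU time, and peak memory of a with-block."""
    tracemalloc.start()
    wall_start = time.perf_counter()
    cpu_start = time.process_time()
    try:
        yield
    finally:
        wall = time.perf_counter() - wall_start
        cpu = time.process_time() - cpu_start
        _, peak = tracemalloc.get_traced_memory()
        tracemalloc.stop()
        print(f"[block_timer] | {label} | time={wall:.4f}s "
              f"| cpu={cpu:.4f}s | peak_memory={peak / 1024:.2f} KB")

with block_timer("load-data"):
    data = [i * i for i in range(100_000)]
```

Putting the reporting in `finally` ensures the metrics are printed even if the block raises.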
⸻
```python
from funcmetricsx import metrics

@metrics(show_args=True)
def multiply(a, b):
    return a * b

multiply(3, 4)
```

Output:

```text
[funcmetricsx] | multiply | time=0.0001s | args=(3, 4)
```
⸻
```python
from funcmetricsx import metrics

@metrics(show_result=True)
def square(x):
    return x * x

square(5)
```

Output:

```text
[funcmetricsx] | square | time=0.0001s | result=25
```
Send metrics to a Python logger instead of printing.
```python
import logging
from funcmetricsx import metrics

logging.basicConfig(
    filename="metrics.log",
    level=logging.INFO,
    format="%(asctime)s - %(levelname)s - %(message)s",
)
logger = logging.getLogger("metrics")

@metrics(logger=logger)
def task():
    return sum(range(10000))

task()
```

The metrics are then written to `metrics.log` instead of stdout.
```python
@metrics(
    show_time=True,
    show_cpu_time=True,
    show_memory=True,
    show_args=False,
    show_result=False,
    track_calls=True,
)
```
| Option | Description |
|---|---|
| `show_time` | Display execution time |
| `show_cpu_time` | Display CPU time |
| `show_memory` | Display peak memory usage |
| `show_args` | Show function arguments |
| `show_result` | Show the returned value |
| `track_calls` | Maintain call statistics |
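To make the options concrete, here is a simplified, stdlib-only sketch of how a decorator can map each flag to a measurement. It mirrors the option names above for illustration only and is not funcmetricsx's source:

```python
import functools
import time
import tracemalloc

def mini_metrics(show_time=True, show_memory=True, show_args=False,
                 show_result=False, track_calls=True):
    """Toy decorator mapping each option to one piece of the report line."""
    def decorator(fn):
        calls = {"count": 0}

        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            tracemalloc.start()
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            elapsed = time.perf_counter() - start
            _, peak = tracemalloc.get_traced_memory()
            tracemalloc.stop()

            parts = [fn.__name__]
            if show_time:
                parts.append(f"time={elapsed:.4f}s")
            if show_memory:
                parts.append(f"peak_memory={peak / 1024:.2f} KB")
            if show_args:
                parts.append(f"args={args}")
            if show_result:
                parts.append(f"result={result}")
            if track_calls:
                calls["count"] += 1
                parts.append(f"calls={calls['count']}")
            print(" | ".join(parts))
            return result
        return wrapper
    return decorator

@mini_metrics(show_args=True, show_result=True)
def multiply(a, b):
    return a * b

multiply(3, 4)
```

Each flag simply toggles one segment of the printed report, which is why the options can be combined freely.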
Run the test suite with:

```bash
pytest
```