# api-cache

Rate-limited HTTP client with disk caching.
## Installation

```bash
pip install api-cache[cache]   # with disk caching (recommended)
pip install api-cache          # rate limiting only, no disk cache
```

## Quick start

```python
from api_cache import CachedApiClient, RateLimitConfig, CacheConfig

client = CachedApiClient(
    base_url="https://jsonplaceholder.typicode.com",
    rate_limit=RateLimitConfig(max_requests=30, window_seconds=60),
    cache=CacheConfig(cache_dir=".cache", default_ttl=300),
)

post = client.get("/posts/1")      # hits network, caches result
post = client.get("/posts/1")      # returned from cache instantly
print(client.requests_remaining)   # 29
print(client.cache_stats)          # {"enabled": True, "size": 1, "volume": 834}
```

## Features

- Sliding-window rate limit -- not fixed-window, so you never get burst-then-blocked behavior
- MD5 deterministic cache keys -- same URL + params always hits the same cache entry
- Optional `diskcache` -- works without `diskcache` installed (graceful fallback to no caching)
- `requests_remaining` property -- check your budget before making a call
- Per-request TTL override -- `client.get("/data", ttl=60)` for short-lived entries
## Use cases

- **Data pipeline API calls** -- your pipeline calls 5 external APIs. Cache responses to avoid redundant calls on reruns; rate limiting prevents getting banned.
- **Development and testing** -- during development you call the same API endpoints repeatedly. The cache saves time and API quota.
- **Multi-source aggregation** -- aggregate data from 10 sources, each with different rate limits. One client class handles caching and throttling for all of them.
## API

| Method / Property | Purpose |
|---|---|
| `CachedApiClient(base_url, headers, rate_limit, cache)` | Create a client |
| `.get(endpoint, params, ttl, skip_cache)` | GET request with throttle + cache |
| `.requests_remaining` | Requests left in the current rate-limit window |
| `.cache_stats` | `{"enabled", "size", "volume"}` |
| `.clear_cache()` | Empty all cached responses |
## Configuration

```python
RateLimitConfig(
    max_requests=60,      # per window
    window_seconds=3600,  # 1-hour sliding window
    min_interval=0.5,     # minimum seconds between calls
)
```
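The sliding-window behavior (as opposed to a fixed window that resets all at once) can be sketched with a deque of request timestamps. `SlidingWindowLimiter` here is an illustrative stand-in under that assumption, not the library's actual class:

```python
import time
from collections import deque


class SlidingWindowLimiter:
    """Allow at most max_requests within any rolling window_seconds span."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        self._timestamps = deque()  # monotonic times of recent requests

    def _prune(self, now: float):
        # Drop timestamps that have slid out of the window.
        while self._timestamps and now - self._timestamps[0] >= self.window_seconds:
            self._timestamps.popleft()

    @property
    def requests_remaining(self) -> int:
        self._prune(time.monotonic())
        return self.max_requests - len(self._timestamps)

    def acquire(self) -> bool:
        """Record a request if the budget allows it; return False otherwise."""
        now = time.monotonic()
        self._prune(now)
        if len(self._timestamps) >= self.max_requests:
            return False
        self._timestamps.append(now)
        return True


limiter = SlidingWindowLimiter(max_requests=3, window_seconds=60)
assert all(limiter.acquire() for _ in range(3))
assert limiter.acquire() is False  # budget exhausted
assert limiter.requests_remaining == 0
```

Because old timestamps expire individually rather than in one batch, capacity is restored gradually, which is exactly the "no burst-then-blocked" property claimed in the feature list.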
```python
CacheConfig(
    cache_dir=".cache",
    default_ttl=3600,  # seconds
    enabled=True,
)
```

## Limitations

- No async -- synchronous `requests` library only
- No POST caching -- only GET requests are cached
- Single-process rate limit -- counter is in-memory, not shared across processes
- No cache invalidation by key -- clear all or nothing
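The "works without `diskcache` installed" behavior from the feature list is the standard optional-dependency pattern; a sketch of how such a fallback is typically wired (the `HAS_DISKCACHE` flag and `make_cache` helper are illustrative names, not the library's):

```python
try:
    import diskcache
    HAS_DISKCACHE = True
except ImportError:
    diskcache = None
    HAS_DISKCACHE = False


def make_cache(cache_dir: str = ".cache"):
    """Return a disk-backed cache when diskcache is available, else None.

    A None return signals the client to fall back to plain,
    uncached requests instead of raising at import time.
    """
    if HAS_DISKCACHE:
        return diskcache.Cache(cache_dir)
    return None
```

Deferring the `ImportError` to a boolean checked at call time is what lets the base `pip install api-cache` work with rate limiting only.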
## License

MIT