A simple and robust caching library for Python functions, supporting both synchronous and asynchronous code.
- Cache function results based on function ID and arguments
- Supports both synchronous and asynchronous functions
- Thread-safe locking to prevent duplicate calculations
- Configurable Time-To-Live (TTL) for cached items
- "Never Die" mode for functions that should keep cache refreshed automatically
- Skip cache functionality to force fresh function execution while updating cache
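As the first feature notes, cached results are keyed by the function's identity plus its arguments. The library's actual key scheme is not shown here, but the idea can be sketched roughly like this (the `make_cache_key` helper is purely illustrative):

```python
import hashlib
import pickle

def make_cache_key(func, args, kwargs):
    """Illustrative only: derive a stable key from the function's identity and arguments."""
    payload = pickle.dumps(
        (func.__module__, func.__qualname__, args, sorted(kwargs.items()))
    )
    return hashlib.sha256(payload).hexdigest()

def demo(a, b=2):
    return a + b

# Identical calls map to the same key; different arguments map to different keys.
key1 = make_cache_key(demo, (1,), {"b": 2})
key2 = make_cache_key(demo, (1,), {"b": 2})
assert key1 == key2
```

Keying on the function's qualified name rather than its memory address keeps keys stable across processes, which is one common design choice for such libraries.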
```bash
# Clone the repository
git clone https://github.com/PulsarDefi/caching.git
cd caching

# Install the package
poetry install
```

```python
from caching import cache

# Cache function results for 5 minutes (default)
@cache()
def expensive_calculation(a, b):
    # Some expensive operation
    return a + b

# Async cache with custom TTL (1 hour)
@cache(ttl=3600)
async def another_calculation(url):
    # Some expensive IO call
    return requests.get(url).json()
```

The never_die feature ensures that cached values never expire by automatically refreshing them in the background:
```python
# Cache with never_die (automatic refresh)
@cache(ttl=300, never_die=True)
def critical_operation(user_id):
    # Expensive operation that should always be available from cache
    return fetch_data_from_database(user_id)
```

How Never Die Works:
- When a function with `never_die=True` is first called, the result is cached
- A background thread monitors all `never_die` functions
- Before the cache expires (at 90% of the TTL), the function is automatically called again
- The cache is updated with the new result
- If the refresh operation fails, the existing cached value is preserved
- Clients always get fast response times by reading from cache
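The refresh behavior described above could be sketched as follows. This is a simplified illustration, not the library's actual implementation; the `NeverDieEntry` class and `refresh_due`/`refresh_loop` helpers are assumptions made for the example, while the 90%-of-TTL threshold and keep-last-good-value behavior follow the list above:

```python
import threading
import time

class NeverDieEntry:
    """Illustrative cache entry for one never_die function call."""
    def __init__(self, func, args, ttl):
        self.func, self.args, self.ttl = func, args, ttl
        self.value = func(*args)              # first call populates the cache
        self.refreshed_at = time.monotonic()

def refresh_due(entries):
    """One sweep: re-run any function whose cache is past 90% of its TTL."""
    now = time.monotonic()
    for entry in entries:
        if now - entry.refreshed_at >= 0.9 * entry.ttl:
            try:
                entry.value = entry.func(*entry.args)
                entry.refreshed_at = time.monotonic()
            except Exception:
                pass  # refresh failed: keep serving the last successful result

def refresh_loop(entries, interval=1.0):
    """Background thread body monitoring all never_die entries."""
    while True:
        refresh_due(entries)
        time.sleep(interval)

# A daemon thread would run the loop without blocking program exit:
# threading.Thread(target=refresh_loop, args=(entries,), daemon=True).start()
```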
Benefits:
- Cache is always "warm" and ready to serve
- No user request ever has to wait for the expensive operation
- If backend services go down temporarily, the last successful result is still available
- Perfect for critical operations where latency must be minimized
The skip_cache feature allows you to bypass reading from cache while still updating it with fresh results:
```python
@cache(ttl=300)
def get_user_data(user_id):
    # Expensive operation to fetch user data
    return fetch_from_database(user_id)

# Normal call - uses cache if available
user = get_user_data(123)

# Force fresh execution while updating cache
fresh_user = get_user_data(123, skip_cache=True)

# Next normal call will get the updated cached value
updated_user = get_user_data(123)
```

How Skip Cache Works:
- When `skip_cache=True` is passed, the function bypasses reading from the cache
- The function executes normally and returns fresh results
- The fresh result is stored in the cache, updating any existing cached value
- Subsequent calls without `skip_cache=True` will use the updated cached value
- The TTL timer resets from when the cache was last updated
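The mechanics above can be sketched with a minimal TTL-cache decorator. This is not the library's actual code; the in-memory `store` dict and the key construction are assumptions, but the `skip_cache` semantics mirror the list above (bypass reads, always write, TTL resets on write):

```python
import functools
import time

def cache(ttl=300):
    """Illustrative TTL cache decorator with skip_cache support."""
    def decorator(func):
        store = {}  # key -> (value, stored_at)

        @functools.wraps(func)
        def wrapper(*args, skip_cache=False, **kwargs):
            key = (args, tuple(sorted(kwargs.items())))
            entry = store.get(key)
            # Read from the cache only when not skipped and still fresh.
            if not skip_cache and entry and time.monotonic() - entry[1] < ttl:
                return entry[0]
            value = func(*args, **kwargs)
            store[key] = (value, time.monotonic())  # write always; TTL resets here
            return value
        return wrapper
    return decorator

@cache(ttl=300)
def get_user_data(user_id):
    return {"id": user_id}
```

Note that even a `skip_cache=True` call writes its result back, so subsequent normal calls see the refreshed value.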
Benefits:
- Force refresh of potentially stale data while keeping cache warm
- Ensure fresh data for critical operations while keeping the cache available for other calls
Run the test scripts:

```bash
python -m pytest
```

License: MIT