Merged
8 changes: 6 additions & 2 deletions .github/workflows/tests.yml
@@ -36,16 +36,20 @@ jobs:
- name: Check code formatting with ruff
run: uv run ruff format --check src/ tests/

- name: Run tests
- name: Run unit tests
run: uv run pytest tests/test_correctness.py -v --tb=short

- name: Run integration tests (Redis with testcontainers)
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.12'
run: uv run pytest tests/test_integration_redis.py -v --tb=short

- name: Run benchmarks
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.12'
run: uv run python tests/benchmark.py

- name: Generate coverage report
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.12'
run: uv run pytest tests/test_correctness.py --cov=src/advanced_caching --cov-report=xml --cov-report=term
run: uv run pytest tests/test_correctness.py tests/test_integration_redis.py --cov=src/advanced_caching --cov-report=xml --cov-report=term

- name: Upload coverage to Codecov
if: matrix.os == 'ubuntu-latest' && matrix.python-version == '3.12'
2 changes: 2 additions & 0 deletions .gitignore
@@ -134,4 +134,6 @@ dmypy.json
.uv-venv/
.venv/
venv/
benchmarks.log
scalene_profile.json

26 changes: 23 additions & 3 deletions CHANGELOG.md
@@ -15,6 +15,25 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- Redis cluster support
- DynamoDB backend example

## [0.1.4] - 2025-12-12

### Changed
- Performance improvements in hot paths:
- Reduced repeated cache initialization/lookups inside decorators.
- Reduced repeated `time.time()` calls by reusing a single timestamp per operation.
- `CacheEntry` is now a slotted dataclass to reduce per-entry memory/attribute overhead.
- SWR background refresh now uses a shared thread pool (avoids spawning a new thread per refresh).

### Added
- Benchmarking & profiling tooling updates:
- Benchmarks can be configured via environment variables (e.g. `BENCH_WORK_MS`, `BENCH_RUNS`).
- Helper to compare JSON benchmark runs in `benchmarks.log`.
- Tight-loop profiler workload for decorator overhead.

### Documentation
- README updated to reflect current APIs, uv usage, and storage/Redis examples.
- Added step-by-step benchmarking/profiling guide in `docs/benchmarking-and-profiling.md`.

## [0.1.3] - 2025-12-10

### Changed
@@ -74,9 +93,10 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- `storage.py` coverage improved to ~74%.
- Ensured all tests pass under the documented `pyproject.toml` configuration.

[Unreleased]: https://github.com/namshiv2/advanced_caching/compare/v0.1.3...HEAD
[0.1.3]: https://github.com/namshiv2/advanced_caching/compare/v0.1.2...v0.1.3
[0.1.2]: https://github.com/namshiv2/advanced_caching/compare/v0.1.1...v0.1.2
[Unreleased]: https://github.com/agkloop/advanced_caching/compare/v0.1.4...HEAD
[0.1.4]: https://github.com/agkloop/advanced_caching/compare/v0.1.3...v0.1.4
[0.1.3]: https://github.com/agkloop/advanced_caching/compare/v0.1.2...v0.1.3
[0.1.2]: https://github.com/agkloop/advanced_caching/compare/v0.1.1...v0.1.2
[0.1.1]: https://github.com/namshiv2/advanced_caching/releases/tag/v0.1.1

## [0.1.1] - 2025-12-10
188 changes: 136 additions & 52 deletions README.md
@@ -11,6 +11,7 @@
- [Installation](#installation) – Get started in 30 seconds
- [Quick Examples](#quick-start) – Copy-paste ready code
- [API Reference](#api-reference) – Full decorator & backend docs
- [Storage & Redis](#storage--redis) – Redis/Hybrid/custom storage examples
- [Custom Storage](#custom-storage) – Implement your own backend
- [Benchmarks](#benchmarks) – See the performance gains
- [Use Cases](#use-cases) – Real-world examples
@@ -37,6 +38,8 @@ pip install advanced-caching
uv pip install advanced-caching
# with Redis support
pip install "advanced-caching[redis]"
# with Redis support (uv)
uv pip install "advanced-caching[redis]"
```

## Quick Start
@@ -90,6 +93,10 @@ user = await get_user_async(42)
## Benchmarks
Full benchmarks available in `tests/benchmark.py`.

Step-by-step benchmarking + profiling guide: `docs/benchmarking-and-profiling.md`.

Storage & Redis usage is documented below.

## API Reference

### Key templates & custom keys
Expand Down Expand Up @@ -159,7 +166,7 @@ Simple time-based cache with configurable TTL.
TTLCache.cached(
key: str | Callable[..., str],
ttl: int,
cache: CacheStorage | None = None,
cache: CacheStorage | Callable[[], CacheStorage] | None = None,
) -> Callable
```

@@ -172,7 +179,7 @@ TTLCache.cached(

Positional key:
```python
@TTLCache.cached("user:{},", ttl=300)
@TTLCache.cached("user:{}", ttl=300)
def get_user(user_id: int):
return db.fetch(user_id)

@@ -206,7 +213,7 @@ SWRCache.cached(
key: str | Callable[..., str],
ttl: int,
stale_ttl: int = 0,
cache: CacheStorage | None = None,
cache: CacheStorage | Callable[[], CacheStorage] | None = None,
enable_lock: bool = True,
) -> Callable
```
@@ -262,7 +269,7 @@ BGCache.register_loader(
ttl: int | None = None,
run_immediately: bool = True,
on_error: Callable[[Exception], None] | None = None,
cache: CacheStorage | None = None,
cache: CacheStorage | Callable[[], CacheStorage] | None = None,
) -> Callable
```

@@ -325,6 +332,97 @@ BGCache.shutdown(wait=True)

### Storage Backends

## Storage & Redis

### Install (uv)

```bash
uv pip install advanced-caching
uv pip install "advanced-caching[redis]" # for RedisCache / HybridCache
```

### How storage is chosen

- If you don’t pass `cache=...`, each decorated function lazily creates its own `InMemCache` instance.
- You can pass either a cache instance (`cache=my_cache`) or a cache factory (`cache=lambda: my_cache`).
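The instance-vs-factory behavior can be sketched with a simplified stand-in (not the library's internals; `resolve_cache` and `MiniCache` are illustrative names):

```python
from typing import Callable, Optional, Union


class MiniCache:
    """Minimal stand-in for an InMemCache-style backend."""

    def __init__(self) -> None:
        self._data: dict = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value, ttl: int = 0) -> None:
        self._data[key] = value


CacheArg = Optional[Union[MiniCache, Callable[[], MiniCache]]]


def resolve_cache(cache: CacheArg) -> MiniCache:
    if cache is None:
        return MiniCache()  # lazy per-function default
    if callable(cache):
        return cache()      # factory: called to build the storage
    return cache            # shared instance used as-is


shared = MiniCache()
assert resolve_cache(shared) is shared
assert isinstance(resolve_cache(lambda: MiniCache()), MiniCache)
```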

### Share one storage instance

```python
from advanced_caching import InMemCache, TTLCache

shared = InMemCache()

@TTLCache.cached("user:{}", ttl=60, cache=shared)
def get_user(user_id: int) -> dict:
return {"id": user_id}

@TTLCache.cached("org:{}", ttl=60, cache=shared)
def get_org(org_id: int) -> dict:
return {"id": org_id}
```

### Use RedisCache (distributed)

`RedisCache` stores values in Redis using `pickle`.

```python
import redis
from advanced_caching import RedisCache, TTLCache

client = redis.Redis(host="localhost", port=6379)
cache = RedisCache(client, prefix="app:")

@TTLCache.cached("user:{}", ttl=300, cache=cache)
def get_user(user_id: int) -> dict:
return {"id": user_id}
```

### Use SWRCache with RedisCache (recommended)

`SWRCache` uses `get_entry`/`set_entry` so it can store freshness metadata.

```python
import redis
from advanced_caching import RedisCache, SWRCache

client = redis.Redis(host="localhost", port=6379)
cache = RedisCache(client, prefix="products:")

@SWRCache.cached("product:{}", ttl=60, stale_ttl=30, cache=cache)
def get_product(product_id: int) -> dict:
return {"id": product_id}
```

### Use HybridCache (L1 memory + L2 Redis)

`HybridCache` is a two-level cache:
- **L1**: fast in-memory (`InMemCache`)
- **L2**: Redis-backed (`RedisCache`)

Reads go to L1 first; on L1 miss it tries L2; on L2 hit it warms L1.

```python
import redis
from advanced_caching import HybridCache, InMemCache, RedisCache, TTLCache

client = redis.Redis(host="localhost", port=6379)

hybrid = HybridCache(
l1_cache=InMemCache(),
l2_cache=RedisCache(client, prefix="app:"),
l1_ttl=60,
)

@TTLCache.cached("user:{}", ttl=300, cache=hybrid)
def get_user(user_id: int) -> dict:
return {"id": user_id}
```

Notes:
- `ttl` on the decorator controls how long values are considered valid.
- `l1_ttl` controls how long HybridCache keeps values in memory after an L2 hit.
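The read path described above can be sketched with plain dicts standing in for the two levels (illustrative only; `HybridRead` is not the library's class):

```python
import time


class HybridRead:
    """Toy two-level read path: L1 dict with TTL, L2 dict as the slow tier."""

    def __init__(self, l1_ttl: int) -> None:
        self.l1: dict = {}
        self._l1_expiry: dict = {}
        self.l2: dict = {}  # stands in for Redis
        self.l1_ttl = l1_ttl

    def get(self, key):
        now = time.time()
        if key in self.l1 and self._l1_expiry[key] > now:
            return self.l1[key]       # L1 hit: no network round trip
        value = self.l2.get(key)
        if value is not None:         # L2 hit: warm L1 for l1_ttl seconds
            self.l1[key] = value
            self._l1_expiry[key] = now + self.l1_ttl
        return value


h = HybridRead(l1_ttl=60)
h.l2["user:42"] = {"id": 42}
assert h.get("user:42") == {"id": 42}  # served from L2, warms L1
assert "user:42" in h.l1               # subsequent reads hit memory
```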

#### InMemCache()
Thread-safe in-memory cache with TTL.

@@ -386,18 +484,16 @@ if entry and entry.is_fresh():

Implement the `CacheStorage` protocol for custom backends (DynamoDB, file-based, encrypted storage, etc.).

### File-Based Cache Example
### File-based example

```python
import json
import time
from pathlib import Path
from advanced_caching import CacheStorage, TTLCache, validate_cache_storage
from advanced_caching import CacheEntry, CacheStorage, TTLCache, validate_cache_storage


class FileCache(CacheStorage):
"""File-based cache storage."""

def __init__(self, directory: str = "/tmp/cache"):
self.directory = Path(directory)
self.directory.mkdir(parents=True, exist_ok=True)
Expand All @@ -406,26 +502,45 @@ class FileCache(CacheStorage):
safe_key = key.replace("/", "_").replace(":", "_")
return self.directory / f"{safe_key}.json"

def get(self, key: str):
def get_entry(self, key: str) -> CacheEntry | None:
path = self._get_path(key)
if not path.exists():
return None
try:
with open(path) as f:
data = json.load(f)
if data["fresh_until"] < time.time():
path.unlink()
return None
return data["value"]
except (json.JSONDecodeError, KeyError, OSError):
return CacheEntry(
value=data["value"],
fresh_until=float(data["fresh_until"]),
created_at=float(data["created_at"]),
)
except Exception:
return None

def set_entry(self, key: str, entry: CacheEntry, ttl: int | None = None) -> None:
now = time.time()
if ttl is not None:
fresh_until = now + ttl if ttl > 0 else float("inf")
entry = CacheEntry(value=entry.value, fresh_until=fresh_until, created_at=now)
with open(self._get_path(key), "w") as f:
json.dump(
{"value": entry.value, "fresh_until": entry.fresh_until, "created_at": entry.created_at},
f,
)

def get(self, key: str):
entry = self.get_entry(key)
if entry is None:
return None
if not entry.is_fresh():
self.delete(key)
return None
return entry.value

def set(self, key: str, value, ttl: int = 0) -> None:
now = time.time()
fresh_until = now + ttl if ttl > 0 else float("inf")
data = {"value": value, "fresh_until": fresh_until, "created_at": now}
with open(self._get_path(key), "w") as f:
json.dump(data, f)
self.set_entry(key, CacheEntry(value=value, fresh_until=fresh_until, created_at=now))

def delete(self, key: str) -> None:
self._get_path(key).unlink(missing_ok=True)
Expand All @@ -440,15 +555,12 @@ class FileCache(CacheStorage):
return True


# Use it
cache = FileCache("/tmp/app_cache")
assert validate_cache_storage(cache)

@TTLCache.cached("user:{}", ttl=300, cache=cache)
def get_user(user_id: int):
return {"id": user_id, "name": f"User {user_id}"}

user = get_user(42) # Stores in /tmp/app_cache/user_42.json
return {"id": user_id}
```

### Best Practices
Expand All @@ -463,12 +575,12 @@ user = get_user(42) # Stores in /tmp/app_cache/user_42.json

### Run Tests
```bash
pytest tests/test_correctness.py -v
uv run pytest tests/test_correctness.py -v
```

### Run Benchmarks
```bash
python tests/benchmark.py
uv run python tests/benchmark.py
```


@@ -564,39 +676,11 @@ Contributions welcome! Please:
1. Fork the repository
2. Create a feature branch (`git checkout -b feature/my-feature`)
3. Add tests for new functionality
4. Ensure all tests pass (`pytest`)
4. Ensure all tests pass (`uv run pytest`)
5. Submit a pull request

---

## License

MIT License – See [LICENSE](LICENSE) for details.

---

## Changelog

### 0.1.0 (Initial Release)
- ✅ TTL Cache decorator
- ✅ SWR Cache decorator
- ✅ Background Cache with APScheduler
- ✅ InMemCache, RedisCache, HybridCache storage backends
- ✅ Full async/sync support
- ✅ Custom storage protocol
- ✅ Comprehensive test suite
- ✅ Benchmark suite

---

## Roadmap

- [ ] Distributed tracing/observability
- [ ] Metrics export (Prometheus)
- [ ] Cache warming strategies
- [ ] Serialization plugins (msgpack, protobuf)
- [ ] Redis cluster support
- [ ] DynamoDB backend example

---
