fix: preserve Python types in L1-only mode, allow cache_clear() on async #117
Conversation
No actionable comments were generated in the recent review. 🎉
🚧 Files skipped from review as they are similar to previous changes (2)
📝 Walkthrough

L1-only mode now stores raw Python objects via ObjectCache; sync and async wrappers read/write this cache and track keys; invalidate_cache/ainvalidate_cache remove ObjectCache entries; cache_clear() works synchronously for async wrappers in L1-only mode. Tests updated to reflect these behaviors.

Changes: L1-only mode with object type preservation
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (warning)
Actionable comments posted: 2
🧹 Nitpick comments (1)
tests/unit/test_cache_clear_async.py (1)
98-128: ⚡ Quick win: This test never exercises the async+backend invalidation path.
The class/docstring says this is the recommended path for async functions with a backend, but
@cache(backend=None) only covers the L1-only branch. Please either rename it to match the scenario under test or back it with a backend double so the async backend invalidation path is actually covered.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `tests/unit/test_cache_clear_async.py` around lines 98-128, the test TestAsyncInvalidateCacheStillWorks currently uses `@cache(backend=None)`, so it only exercises the L1-only path. Either rename the class/docstring to indicate L1-only, or change the decorator to supply a backend test double (e.g., a simple async-capable backend mock) so async_func and its ainvalidate_cache method actually exercise the async+backend invalidation path; instantiate that backend, pass it into cache(...), and keep calling async_func.ainvalidate_cache(5) to validate backend invalidation behavior.
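The backend double the review suggests can be sketched with the standard library's `unittest.mock.AsyncMock`. Note this is an assumption-laden sketch: the `get`/`set`/`delete` method names and the `"key:5"` key format are illustrative, not cachekit's actual backend protocol.

```python
import asyncio
from unittest.mock import AsyncMock

# Hypothetical async-capable backend double; method names are assumed,
# not taken from cachekit's real backend interface.
backend = AsyncMock()
backend.get.return_value = None  # force a cache miss so the write path runs

async def scenario():
    # Stand-in for what the decorated async function would drive:
    if await backend.get("key:5") is None:
        await backend.set("key:5", 25)
    # Stand-in for what ainvalidate_cache(5) would drive:
    await backend.delete("key:5")

asyncio.run(scenario())

# The test can now assert the backend invalidation path was exercised:
backend.delete.assert_awaited_once_with("key:5")
```

With a double like this passed into `cache(...)`, the test would cover the async+backend branch instead of silently falling back to L1-only.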
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@src/cachekit/decorators/wrapper.py`:
- Around lines 587-593: The wrapper currently coerces ttl to 300 on cache writes, violating create_cache_wrapper()'s contract that ttl=None means no expiration. Update the write sites (where the wrapper calls `_object_cache.put(cache_key, result, ttl=ttl or 300)`, and the similar block around lines 912-917) to pass ttl through unchanged, or to translate None into the explicit sentinel expected by ObjectCache.put, instead of using `ttl or 300`, so that ttl=None remains unbounded expiry when calling ObjectCache.put with cache_key and result.
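The bug this comment describes is a classic `or`-as-default pitfall: `ttl or 300` rewrites both `None` and `0` to 300. A minimal sketch of the fix, using a hypothetical `FakeCache` and `put_with_ttl` (not cachekit's actual names), with the 1-year sentinel the follow-up commit adopted:

```python
NO_EXPIRY = 31536000  # 1-year sentinel (seconds), per the follow-up commit

class FakeCache:
    """Hypothetical stand-in that records put() calls for illustration."""
    def __init__(self):
        self.calls = []

    def put(self, key, value, ttl):
        self.calls.append((key, ttl))

def put_with_ttl(cache, key, value, ttl):
    # Buggy variant was `ttl or 300`: it silently rewrites ttl=None AND ttl=0.
    # Fix: translate only None, and do it explicitly.
    cache.put(key, value, ttl=NO_EXPIRY if ttl is None else ttl)

cache = FakeCache()
put_with_ttl(cache, "k", 1, ttl=None)  # no expiry -> explicit sentinel
put_with_ttl(cache, "k", 1, ttl=60)    # finite ttl passes through unchanged
```

The explicit `is None` check keeps the contract that only a missing ttl means "no expiration".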
- Around lines 579-585: In the L1-only fast-return path inside the wrapper (when _l1_only_mode is true and _object_cache.get(cache_key) yields a hit), call features.clear_correlation_id() before resetting function stats and returning the cached value, so the correlation context is not preserved across requests; i.e., insert a call to features.clear_correlation_id() just prior to reset_current_function_stats(token) / return cached_value in the branch that currently invokes _stats.record_l1_hit().
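The leak this comment targets is easy to reproduce with a context variable: a fast-return path that skips the cleanup step leaves the previous request's ID visible to the next caller. A sketch of the shape of the fix, where `correlation_id.set(None)` stands in for cachekit's `features.clear_correlation_id()` (the `handler` and `_cache` names are illustrative):

```python
import contextvars

# Stand-in for the per-request correlation context cachekit tracks.
correlation_id = contextvars.ContextVar("correlation_id", default=None)

_cache = {}

def handler(request_id, key, value):
    correlation_id.set(request_id)  # set at request entry
    if key in _cache:
        # L1 fast-return path: without this clear, request_id would
        # leak into whatever runs next in this context.
        correlation_id.set(None)
        return _cache[key]
    _cache[key] = value
    correlation_id.set(None)        # normal path clears too
    return value

handler("req-1", "answer", 42)   # miss: populates the cache
handler("req-2", "answer", 99)   # hit: fast path must still clear
```

After any call returns, `correlation_id.get()` is back to `None`, so no request's ID bleeds into the next.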
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: a865f557-8877-4d5b-bc91-de7d3ef68c8c
📒 Files selected for processing (2)
src/cachekit/decorators/wrapper.py
tests/unit/test_cache_clear_async.py
Codecov Report❌ Patch coverage is
… async L1-only mode

(`backend=None`) now stores raw Python objects via ObjectCache instead of serializing through MessagePack. This fixes:

- #73: tuples, sets, frozensets no longer degrade to lists on cache hit
- #76: cache_clear() works synchronously on async functions in L1-only mode (no backend I/O needed, so no reason to force async invalidation)

The type inconsistency was particularly insidious: the first call (miss) returned the original type, while the second call (hit) returned the deserialized type. Now both return the same object.

Closes #73
Closes #76
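The degradation described here can be demonstrated with any serializer that lacks a tuple type, and why raw object storage avoids it. JSON is used below as a stand-in for MessagePack, since both round-trip tuples to lists (sets are omitted because JSON cannot encode them at all):

```python
import json

# JSON as a stand-in for MessagePack: both lose the tuple type on round-trip.
original = (1, 2, 3)
roundtripped = json.loads(json.dumps(original))  # -> [1, 2, 3], a list

# Raw object storage (the fix): the cache holds the object itself,
# so the type survives cache hits.
l1 = {}
l1["key"] = original
```

After the round-trip, `type(roundtripped)` is `list` while `type(l1["key"])` is still `tuple`: exactly the miss-vs-hit inconsistency the fix removes.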
Force-pushed from 7b8ab0e to ee843fa
- Fix ttl=None coercion: use a 1-year sentinel (31536000s) instead of hardcoded 300s, preserving the contract that ttl=None means no expiry
- Clear correlation ID on L1 cache hit (both sync and async paths) to prevent context leaking across requests
- Rename TestAsyncInvalidateCacheStillWorks to reflect that it only tests the L1-only path
Summary
@cache(backend=None) now stores raw Python objects (via ObjectCache) instead of serializing through MessagePack. Tuples, sets, and frozensets are preserved on cache hit. cache_clear() works synchronously on async functions in L1-only mode (no backend I/O needed).

Root Cause
The L1-only code path serialized results into MessagePack bytes on cache miss but returned the original object. On cache hit, it deserialized, returning a different type (tuples → lists, sets → lists). This made type(fn()) != type(fn()) across calls.

Design
@cache(backend=None) now uses ObjectCache (same as @cache.local()) for storage. This means objects are stored by reference (like lru_cache). Users who need serialization consistency with L2 (for upgrade path testing) can use @cache(backend=None, serializer='standard'), but the default behavior should match user expectations.

Test plan
- cache_clear() works on async L1-only functions
- cache_clear() still raises TypeError for async+backend (intentional)

Closes #73
Closes #76
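The two behaviors the PR claims, raw object storage preserving types and a synchronous cache_clear() on async functions, can be sketched together in a few lines. The `l1_cache` decorator below is an illustrative stand-in, not cachekit's implementation; it only handles hashable positional args.

```python
import asyncio
import functools

def l1_cache(fn):
    """Minimal L1-only sketch: raw objects in a plain dict (hypothetical)."""
    store = {}

    @functools.wraps(fn)
    async def wrapper(*args):
        if args in store:
            return store[args]      # the object itself comes back: type preserved
        result = await fn(*args)
        store[args] = result
        return result

    def cache_clear():
        # Clearing an in-process dict involves no backend I/O,
        # so there is no reason to force an async call here.
        store.clear()

    wrapper.cache_clear = cache_clear
    return wrapper

@l1_cache
async def get_point(x):
    return (x, x * 2)               # a tuple stays a tuple on cache hits

first = asyncio.run(get_point(5))   # miss: computes and stores (5, 10)
second = asyncio.run(get_point(5))  # hit: returns the stored object itself
get_point.cache_clear()             # synchronous, even though get_point is async
```

Because the hit returns the stored object by reference, `second is first` holds, which is the by-reference semantics (like lru_cache) the Design section describes.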