perf: extend _returns_model caching to pos_handler, async_pos_handler, fast_model_handler #161
Conversation
…, fast_model_handler

Closes #157. Sub-task of #42. Stacked on #156 (extends the `_returns_model` helper that PR adds).

PR #156 introduced `_returns_model(handler)` and consulted it at dispatch in the three fast-handler sites. The same `hasattr(result, "model_dump")` pattern existed in three more dispatch sites that #156's scope deliberately left out. This PR closes the loop.

Sites patched:

* `create_pos_handler.pos_handler` (sync, positional)
* `create_async_pos_handler.pos_handler` (async, positional)
* `create_fast_model_handler.fast_model_handler` (model-input dispatch)

Each gets:

* `_returns_md = _returns_model(original_handler)` once at creation.
* The per-response check switches from `if hasattr(...): result = result.model_dump()` to `if _returns_md or (_returns_md is None and hasattr(result, "model_dump")): result = result.model_dump()`.

Identity-preserving for all three branches (True / False / None) — same shape as #156. The `_returns_md is None` fallback keeps the per-response `hasattr` behavior for unannotated handlers.

`create_fast_model_handler` is the most interesting site: it already takes a `model_class` for input validation, but the output shape is unknown until you check the user handler's return annotation. The same helper applies — we introspect `original_handler` (the user's handler), not `model_class`.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
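For readers skimming the thread, a minimal sketch of the pattern, assuming a simplified helper and factory. This is illustrative only, not the actual code in `python/turboapi/request_handler.py`; of the names below, only `_returns_model`, `_returns_md`, `original_handler`, `create_pos_handler`/`pos_handler`, and `model_dump` come from the PRs, and the helper body is a guess at the general shape.

```
# Illustrative sketch only: simplified stand-ins for #156's helper and one of
# the factories this PR patches, not the real implementations.
import typing


def _returns_model(handler):
    """Three-valued: True if annotated as returning a model-like type,
    False if annotated as something else, None if unannotated."""
    ann = typing.get_type_hints(handler).get("return")
    if ann is None:
        return None  # unannotated: callers keep the per-response hasattr fallback
    return isinstance(ann, type) and hasattr(ann, "model_dump")


def create_pos_handler(original_handler):
    # Bound once when the handler is created, not on every request.
    _returns_md = _returns_model(original_handler)

    def pos_handler(*args):
        result = original_handler(*args)
        # Old per-response probe:  if hasattr(result, "model_dump"): ...
        # New: trust the cached annotation; fall back to hasattr only when the
        # handler was unannotated (_returns_md is None).
        if _returns_md or (_returns_md is None and hasattr(result, "model_dump")):
            result = result.model_dump()
        return result

    return pos_handler
```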
Force-pushed from 07d8df9 to 4d88206.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 07d8df9ce8
    if _returns_md or (_returns_md is None and hasattr(result, "model_dump")):
        result = result.model_dump()
Preserve tuple responses before cached model dumping
When a handler is annotated as returning a model but returns a status tuple such as `({'detail': 'created'}, 201)` or an error tuple on some branch, `_returns_md` is `True`, so this now calls `model_dump()` on the tuple and the wrapper returns a 500 instead of reaching the existing tuple-response branch. The same ordering was added in `create_async_pos_handler` and `create_fast_model_handler`; tuple handling needs to remain before the cached model-dump shortcut, as in the no-args fast path.
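A minimal sketch of the ordering the review is asking for. This is hypothetical, not the actual dispatch code; the tuple branch here is a stand-in for whatever tuple-response handling already exists in the factories.

```
# Illustrative only: tuple responses must be recognized before the cached
# model-dump shortcut runs, otherwise model_dump() is called on the tuple.
def make_pos_handler(original_handler, _returns_md):
    def pos_handler(*args):
        result = original_handler(*args)

        # A handler annotated `-> Model` may still return a (body, status)
        # tuple such as ({'detail': 'created'}, 201) on some branch.
        if isinstance(result, tuple):
            return result  # hand off to the existing tuple-response branch

        # Only then apply the cached annotation check / hasattr fallback.
        if _returns_md or (_returns_md is None and hasattr(result, "model_dump")):
            result = result.model_dump()
        return result

    return pos_handler
```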
Closes #157. Sub-task of #42. Follow-up to #156 (now merged into `release/1.0.30`). Replaces the original PR #160, which auto-closed when its stacked base branch was deleted on #156's merge.
Summary
PR #156 introduced `_returns_model(handler)` and consulted it at dispatch in three fast-handler sites. The same per-response `hasattr(result, "model_dump")` pattern existed in three more sites that #156's scope deliberately left out. This PR closes the loop.

* `create_pos_handler.pos_handler`
* `create_async_pos_handler.pos_handler`
* `create_fast_model_handler.fast_model_handler`

Each gets:

* `_returns_md = _returns_model(original_handler)` bound once at the top of the factory.
* The per-response check switches from `if hasattr(result, "model_dump"): ...` to `if _returns_md or (_returns_md is None and hasattr(result, "model_dump")): ...`.

`create_fast_model_handler` is the most interesting site: it already takes a `model_class` for input validation, but the output shape is still unknown until you check the user handler's return annotation. The same helper applies — we introspect `original_handler` (the user's handler), not `model_class`.
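As a rough illustration of that asymmetry, a simplified, hypothetical sketch of the model-input site. Only `create_fast_model_handler`, `fast_model_handler`, `original_handler`, `model_class`, `_returns_model`, and `_returns_md` are names from the PR; the body structure is a guess, not the real factory.

```
# Illustrative sketch only, not the real factory. _returns_model is #156's
# helper (True / False / None from the handler's return annotation).
def create_fast_model_handler(original_handler, model_class):
    # Output side: introspect the USER handler's return annotation, once.
    # model_class says nothing about what the handler returns.
    _returns_md = _returns_model(original_handler)

    def fast_model_handler(raw_body):
        # Input side: model_class validates/parses the request body.
        model = model_class(**raw_body)
        result = original_handler(model)

        # Cached annotation check, with hasattr fallback for unannotated handlers.
        if _returns_md or (_returns_md is None and hasattr(result, "model_dump")):
            result = result.model_dump()
        return result

    return fast_model_handler
```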
Identity-preserving for all three branches (True / False / None). The `_returns_md is None` fallback keeps the per-response `hasattr` behavior for unannotated handlers.

Files / subsystems touched

* `python/turboapi/request_handler.py` — 3 closure-time bindings + 3 dispatch-site swaps (one per factory function).
* `tests/test_returns_model_cache_extended.py` — new file. 10 targeted tests, 3-4 per handler, covering each annotation shape (`-> Model`, `-> dict`, unannotated).
* `benchmarks/bench_model_dump_cache_extended.py` — new isolated microbench targeting `create_pos_handler` (the cheapest of the three; the other two share the same dispatch code paths and benefit identically).

No public API change. No dependency change. No `uv.lock` change.

Red-to-green
Microbench (load-bearing artifact)
`.venv/bin/python benchmarks/bench_model_dump_cache_extended.py`, n=200k × 5 runs, free-threaded 3.14t, M-series macOS:

```
case pre median post median delta pre p99 post p99 delta
leaf_dict 668.6 ns 654.3 ns -2.1% 875 ns 875 ns 0%
model_dhi 2711.2 ns 2645.5 ns -2.4% 3125 ns 2917 ns -7%
unannot 686.8 ns 669.4 ns -2.5% 959 ns 958 ns noise
```
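Not from the repo, but for context, a minimal harness of the same general shape (per-call timing, median and p99 over all samples). Everything below is hypothetical; the real `benchmarks/bench_model_dump_cache_extended.py` may be structured differently.

```
# Hypothetical microbench harness: n timed calls per run, several runs,
# median and p99 per-call latency in nanoseconds across all samples.
import statistics
import time


def bench(fn, n=200_000, runs=5):
    samples = []
    for _ in range(runs):
        for _ in range(n):
            t0 = time.perf_counter_ns()
            fn()
            samples.append(time.perf_counter_ns() - t0)
    samples.sort()
    return statistics.median(samples), samples[int(len(samples) * 0.99)]
```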
Modest in absolute terms (the per-call savings is one `hasattr`, ≈ 30 ns) but consistent across all three shapes. The model-return path's p99 drops 7%, matching the pattern from #156's bench.

Pre-PR baseline collected by `git stash`-ing the patch file (with #156's `_returns_model` helper still in place) and re-running the same bench against the unpatched dispatch sites.

Targeted tests
```
PYTEST_DISABLE_PLUGIN_AUTOLOAD=1 .venv/bin/python -m pytest -q \
tests/test_returns_model_cache.py \
tests/test_returns_model_cache_extended.py \
tests/test_async_handlers.py \
tests/test_post_body_parsing.py
```
Result: 32 passed in 24.48s (10 new for #157 + 10 from #156's test file + 12 existing). Pre-existing `PytestReturnNotNoneWarning`s unchanged by this PR.
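For flavor, a hypothetical sketch of the kind of case the new file covers. This is not the actual contents of `tests/test_returns_model_cache_extended.py`; it assumes a Pydantic-style model purely to get a class with `model_dump()`, and uses a simplified stand-in for the `_returns_model` helper.

```
# Hypothetical test sketch: the three annotation shapes the PR description
# lists (-> Model, -> dict, unannotated). pydantic is used for illustration
# only; _returns_model_sketch is a simplified stand-in for #156's helper.
import typing

from pydantic import BaseModel


class Item(BaseModel):
    name: str


def _returns_model_sketch(handler):
    ann = typing.get_type_hints(handler).get("return")
    if ann is None:
        return None
    return isinstance(ann, type) and hasattr(ann, "model_dump")


def test_annotation_shapes():
    def as_model(name: str) -> Item:
        return Item(name=name)

    def as_dict(name: str) -> dict:
        return {"name": name}

    def unannotated(name):
        return Item(name=name)

    assert _returns_model_sketch(as_model) is True
    assert _returns_model_sketch(as_dict) is False
    assert _returns_model_sketch(unannotated) is None
```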
Risk
Low. Same shape as #156. The new test file directly exercises all three patched handler factories (`pos_handler`, `async_pos_handler`, `fast_model_handler`). All confirm: annotated-as-model still calls `model_dump`, annotated-as-dict skips it correctly, unannotated still falls back to `hasattr`.

🤖 Generated with Claude Code