
integrating into my project #4

Closed
sunildkumar wants to merge 1 commit into PrimeIntellect-ai:main from groundlight:add_verifiers_support

Conversation

@sunildkumar

No description provided.

@willccbb
Member

does this need to be a PR to the main repo? what's the problem being fixed?

@sunildkumar
Author

AHHH sorry. I didn't mean to PR this against the main copy of the code - meant to be my fork...

@sunildkumar sunildkumar deleted the add_verifiers_support branch February 19, 2025 21:16
@sunildkumar sunildkumar restored the add_verifiers_support branch February 19, 2025 21:16
hallerite added a commit that referenced this pull request May 6, 2026
Adds two new optional fields to ClientConfig:
  - preserve_all_thinking
  - preserve_thinking_between_tool_calls

RendererClient._get_renderer_or_pool reads them off the ClientConfig and
forwards to create_renderer_pool, which binds them as defaults on every
pooled renderer instance (renderers PR #4). Both flags are also part of
the pool cache key, so a renderer built with a flag off can't satisfy a
request that asked for it on.

Off by default — zero behaviour change for existing callers. Bumps the
renderers source pin to 736f47f (PR #4: feat/preserve-thinking-defaults).

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
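As a hedged sketch of the flag-in-cache-key idea described above (only `ClientConfig` and the two flag names come from the commit message; `get_renderer_pool` and the `_pool_cache` dict are illustrative stand-ins, not the repo's code):

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class ClientConfig:
    """Minimal stand-in for the real ClientConfig."""
    model: str
    preserve_all_thinking: bool = False                 # new optional field
    preserve_thinking_between_tool_calls: bool = False  # new optional field


_pool_cache: dict = {}


def get_renderer_pool(cfg: ClientConfig):
    # Both new flags participate in the cache key, so two configs that
    # differ only in a thinking flag resolve to distinct pools: a pool
    # built with a flag off can never satisfy a request that asked for
    # it on.
    key = (
        cfg.model,
        cfg.preserve_all_thinking,
        cfg.preserve_thinking_between_tool_calls,
    )
    if key not in _pool_cache:
        _pool_cache[key] = object()  # stand-in for create_renderer_pool(...)
    return _pool_cache[key]
```

Because the flags default to off, existing callers that never set them keep hitting the same cache entry as before.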
hallerite added a commit that referenced this pull request May 12, 2026
…irection

renderer_client (bugbot #3, high): the previous isinstance(renderer,
MultimodalRenderer) check was performed against the outer renderer
parameter, which in production is a RendererPool. RendererPool was not a
Renderer subclass, so the multimodal branch never fired and the PR's mm
carry-forward was silently broken under pooled use. Renderers PR now has
RendererPool implement the Renderer protocol structurally, and this side
dispatches via the cached is_multimodal(r) helper which works on either
a bare renderer or a pool.
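A minimal sketch of that structural dispatch, assuming a `supports_multimodal` attribute as the shared protocol surface (the attribute name and the `TextRenderer` class are illustrative, and the caching the commit mentions is omitted for brevity):

```python
class MultimodalRenderer:
    supports_multimodal = True


class TextRenderer:
    supports_multimodal = False


class RendererPool:
    """Not a Renderer subclass; satisfies the protocol structurally."""

    def __init__(self, inner):
        self._inner = inner

    @property
    def supports_multimodal(self) -> bool:
        return self._inner.supports_multimodal


def is_multimodal(r) -> bool:
    # Attribute lookup instead of isinstance: the multimodal branch now
    # fires for a pool wrapping a multimodal renderer, not just for a
    # bare MultimodalRenderer instance.
    return bool(getattr(r, "supports_multimodal", False))
```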

save_utils (bugbot #2, medium): is_json_serializable previously
whitelisted torch tensors and renderer dataclasses, but make_serializable
has no handler for them — it would stringify to "tensor(...)" garbage if
anything actually hit JSON. The whitelist worked only because the
orchestrator excludes "trajectory" at the JSONL boundary. Restore the
honest JSON-only contract and bypass the gate explicitly in
state_to_output for col == "trajectory" (where msgpack handles tensors
via its custom encoder).
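The "honest JSON-only contract" can be sketched as a predicate that answers only whether `json.dumps` would succeed, with no special-case whitelist for types the serializer can't actually handle (an illustration, not the repo's implementation):

```python
import json


def is_json_serializable(value) -> bool:
    # No whitelist for tensors or dataclasses: if json.dumps can't
    # encode it, report False and let the caller route it elsewhere
    # (e.g. the msgpack path used for the "trajectory" column).
    try:
        json.dumps(value)
        return True
    except (TypeError, ValueError):
        return False
```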

save_utils (bugbot #4, low): _strip_intermediate_mm_data was stripping
step["tokens"]["multi_modal_data"] but not the duplicate at
step["response"].message.tokens.multi_modal_data. The Pydantic Response
serialization preserves it through msgpack via model_dump(), so the
O(N²) bloat the function targets was only halved. Now strips both.

Also drop the pool-vs-bare-renderer branching ladder via _maybe_offload
(asyncio.to_thread iff pool); pool's checkout is now an implementation
detail of the pool itself.

Co-Authored-By: Claude Opus 4.7 (1M context) <noreply@anthropic.com>
