
feat: Support conversation history directly in AI Provider model runners#166

Merged
jsonbailey merged 3 commits into main from jb/message-history-in-providers on May 5, 2026
Conversation

Contributor

@jsonbailey jsonbailey commented May 5, 2026

Summary

  • OpenAI: Adds a _history: List[LDMessage] field to OpenAIModelRunner. Each successful run() appends the user message and assistant response so subsequent calls include prior turns. OpenAI Chat Completions has no built-in state management, so a manual list is the standard pattern.
  • LangChain: Adds InMemoryChatMessageHistory to LangChainModelRunner. Config messages are converted to native BaseMessage types once per call and joined with the history before invoking the model. add_user_message() / add_ai_message() keep history as native LangChain types throughout, avoiding repeated conversion.
  • History is only updated on successful runs — failed or empty responses leave history unchanged.
  • Tests added for multi-turn accumulation, no-accumulation-on-failure, and config-message ordering.
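The manual-history pattern described above can be sketched as follows. `Message`, `HistoryRunner`, and the `complete` callable are hypothetical stand-ins (not the actual `LDMessage`/`OpenAIModelRunner` API); the point is the append-only-on-success bookkeeping, not the real client surface:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Message:
    # Stand-in for LDMessage: just a role and content.
    role: str
    content: str

class HistoryRunner:
    """Sketch of a stateful runner over a stateless chat API."""

    def __init__(self, complete: Callable[[List[Message]], Optional[str]],
                 config_messages: Optional[List[Message]] = None):
        self._complete = complete
        # Config messages (system prompt, instructions) are collapsed
        # into history once at construction, not re-injected per run().
        self._history: List[Message] = list(config_messages or [])

    def run(self, user_input: str) -> str:
        user_msg = Message("user", user_input)
        # Every call sends all prior turns plus the new user message.
        reply = self._complete(self._history + [user_msg])
        if not reply:
            # Failed or empty response: history is left unchanged so
            # the next call retries from a clean state.
            raise RuntimeError("model returned no content")
        self._history.append(user_msg)
        self._history.append(Message("assistant", reply))
        return reply
```

Swapping the `complete` callable for a real Chat Completions call is the only provider-specific piece; the history discipline stays the same.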

Test plan

  • OpenAI history accumulates across two successful calls
  • OpenAI history not updated on exception
  • LangChain history accumulates across two successful calls using InMemoryChatMessageHistory
  • LangChain history not updated on exception
  • Config messages still prepended before history on every call
  • CI green

🤖 Generated with Claude Code


> [!NOTE]
> **Medium Risk**
> Adds stateful, cross-call message history to both model runners, which can change prompt composition and token usage for any multi-turn caller. Risk is moderate due to the behavior change and potential memory growth, but updates are gated to successful, non-empty responses and covered by new tests.
>
> **Overview**
> Adds multi-turn conversation state to model runners. `OpenAIModelRunner` now keeps an internal `_history` (seeded with any `config_messages`) and appends successful user+assistant turns so subsequent `run()` calls include prior context.
>
> The LangChain runner now uses LangChain-native history. `LangChainModelRunner` replaces the one-off `config_messages` prepend with an `InMemoryChatMessageHistory`, invokes the LLM with the accumulated `BaseMessage` history plus the new `HumanMessage`, and appends to history only on successful, non-empty responses.
>
> Tests were expanded to verify history accumulation, no accumulation on failure, and correct ordering of config/system messages ahead of prior turns.
>
> <sup>Reviewed by Cursor Bugbot for commit 023c248. Bugbot is set up for automated code reviews on this repo.</sup>
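The LangChain flow above can be sketched with minimal stand-ins. The real `InMemoryChatMessageHistory` lives in `langchain_core.chat_history`; the message classes and runner below are simplified hypothetical stand-ins, not the actual LangChain or SDK types:

```python
from typing import Callable, List, Optional

class BaseMessage:
    def __init__(self, content: str):
        self.content = content

class HumanMessage(BaseMessage): ...
class AIMessage(BaseMessage): ...

class InMemoryHistory:
    # Mirrors the small slice of InMemoryChatMessageHistory used here:
    # a .messages list plus add_user_message()/add_ai_message().
    def __init__(self):
        self.messages: List[BaseMessage] = []

    def add_user_message(self, text: str):
        self.messages.append(HumanMessage(text))

    def add_ai_message(self, text: str):
        self.messages.append(AIMessage(text))

class LangChainStyleRunner:
    def __init__(self, invoke: Callable[[List[BaseMessage]], Optional[str]],
                 config_messages: Optional[List[BaseMessage]] = None):
        self._invoke = invoke
        self._history = InMemoryHistory()
        # Config messages are converted to native message types once,
        # seeding the history so they lead every thread.
        self._history.messages.extend(config_messages or [])

    def run(self, user_input: str) -> str:
        # Invoke with accumulated history plus the new HumanMessage.
        reply = self._invoke(self._history.messages + [HumanMessage(user_input)])
        if not reply:
            # Only successful, non-empty responses update history.
            raise RuntimeError("empty response")
        self._history.add_user_message(user_input)
        self._history.add_ai_message(reply)
        return reply
```

Keeping history as native message objects means no repeated conversion between the SDK's message shape and LangChain's on every turn.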

@jsonbailey jsonbailey marked this pull request as ready for review May 5, 2026 20:44
@jsonbailey jsonbailey requested a review from a team as a code owner May 5, 2026 20:44
Base automatically changed from jb/fix-judge-string-input to main May 5, 2026 20:45
jsonbailey and others added 2 commits May 5, 2026 15:46
OpenAI runner maintains a List[LDMessage] history (Chat Completions has
no built-in state). LangChain runner uses InMemoryChatMessageHistory to
store native BaseMessage objects; config messages are converted once per
call and joined with the history before sending to the model.

History accumulates only on successful runs. Failed or empty responses
leave history unchanged so the next call retries from clean state.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
…per call

Config messages (system prompt, instructions) are added once when the runner
is constructed, not re-injected on every run() call. OpenAI collapses
_config_messages into _history at init; LangChain seeds InMemoryChatMessageHistory
with the converted messages so they appear naturally at the start of the thread.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@jsonbailey jsonbailey force-pushed the jb/message-history-in-providers branch from d867dce to ad0e0b9 Compare May 5, 2026 20:47
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
@jsonbailey jsonbailey changed the title feat: add conversation history to OpenAI and LangChain model runners feat: Support conversation history directly in AI Provider model runners May 5, 2026
@jsonbailey jsonbailey merged commit 4bb3e78 into main May 5, 2026
46 checks passed
@jsonbailey jsonbailey deleted the jb/message-history-in-providers branch May 5, 2026 22:41
@github-actions github-actions Bot mentioned this pull request May 5, 2026
jsonbailey added a commit that referenced this pull request May 6, 2026
🤖 I have created a release *beep* *boop*
---


<details><summary>launchdarkly-server-sdk-ai: 0.19.0</summary>

## [0.19.0](launchdarkly-server-sdk-ai-0.18.0...launchdarkly-server-sdk-ai-0.19.0) (2026-05-05)


### ⚠ BREAKING CHANGES

* StructuredResponse replaced by RunnerResult with new "parsed" property
* AgentResult replaced by RunnerResult and Managed Result
* Removed ModelRunner and AgentRunner protocols
* Removed invoke_method, invoke_structured_model from AIProvider base
class.
* ModelResponse was replaced by RunnerResult
* Add ManagedResult, RunnerResult, and Runner protocol; rename invoke()
to run()
([#148](#148))
* Swap track_metrics_of parameter order to match spec
([#144](#144))

### Features

* Add evaluations support to ManagedAgent.run()
([#153](#153))
([442f46a](442f46a))
* Add judge evaluation support to agent graphs
([#142](#142))
([3d5a6a9](3d5a6a9))
* Add ManagedGraphResult, GraphMetricSummary, and AgentGraphRunnerResult
types
([#151](#151))
([301e24c](301e24c))
* Add ManagedResult, RunnerResult, and Runner protocol; rename invoke()
to run()
([#148](#148))
([88d4ddc](88d4ddc))
* Add root-level tools map with customParameters to AI Config types
([#141](#141))
([f17c535](f17c535))
* bake sampling_rate into Judge at construction; simplify Evaluator to
List[Judge]
([#159](#159))
([86c79e6](86c79e6))
* Update LangChain runners to implement Runner protocol returning
RunnerResult
([#150](#150))
([62a8e25](62a8e25))


### Bug Fixes

* Add runtime DeprecationWarnings to deprecated methods
([#145](#145))
([2189b81](2189b81))
* AgentResult replaced by RunnerResult and Managed Result
([fbb0b4b](fbb0b4b))
* build judge input as string; strip legacy judge config messages
([#165](#165))
([e6942a6](e6942a6))
* Fall back to model.parameters.tools when root tools absent
([#146](#146))
([2c30d75](2c30d75))
* Graph tracking refactor — ManagedAgentGraph drives tracking for new
runner shape
([#154](#154))
([20a5020](20a5020))
* ModelResponse was replaced by RunnerResult
([fbb0b4b](fbb0b4b))
* parse model.parameters.tools as list
([#160](#160))
([fb53e99](fb53e99))
* reference correct PyPI package names in provider load error messages
([#164](#164))
([48761c9](48761c9))
* Removed invoke_method, invoke_structured_model from AIProvider base
class.
([fbb0b4b](fbb0b4b))
* Removed ModelRunner and AgentRunner protocols
([fbb0b4b](fbb0b4b))
* Replace done_callback with coroutine chain for judge tracking
([#147](#147))
([1e1f36b](1e1f36b))
* StructuredResponse replaced by RunnerResult with new "parsed" property
([fbb0b4b](fbb0b4b))
* Swap track_metrics_of parameter order to match spec
([#144](#144))
([53db736](53db736))
</details>

<details><summary>launchdarkly-server-sdk-ai-langchain: 0.6.0</summary>

## [0.6.0](launchdarkly-server-sdk-ai-langchain-0.5.0...launchdarkly-server-sdk-ai-langchain-0.6.0) (2026-05-05)


### Features

* Add judge evaluation support to agent graphs
([#142](#142))
([3d5a6a9](3d5a6a9))
* Migrate LangGraph runner to AgentGraphRunnerResult; clean up legacy
shape detection
([#156](#156))
([efa8e00](efa8e00))
* Support conversation history directly in AI Provider model runners
([#166](#166))
([4bb3e78](4bb3e78))
* Update LangChain runners to implement Runner protocol returning
RunnerResult
([#150](#150))
([62a8e25](62a8e25))


### Bug Fixes

* build judge input as string; strip legacy judge config messages
([#165](#165))
([e6942a6](e6942a6))
</details>

<details><summary>launchdarkly-server-sdk-ai-openai: 0.5.0</summary>

## [0.5.0](launchdarkly-server-sdk-ai-openai-0.4.0...launchdarkly-server-sdk-ai-openai-0.5.0) (2026-05-05)


### Features

* Add judge evaluation support to agent graphs
([#142](#142))
([3d5a6a9](3d5a6a9))
* Support conversation history directly in AI Provider model runners
([#166](#166))
([4bb3e78](4bb3e78))
* Update OpenAI graph runner to return AgentGraphRunnerResult with
GraphMetrics
([#155](#155))
([388b7af](388b7af))
* Update OpenAI runners to implement Runner protocol returning
RunnerResult
([#149](#149))
([382e662](382e662))


### Bug Fixes

* build judge input as string; strip legacy judge config messages
([#165](#165))
([e6942a6](e6942a6))
</details>

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).

<!-- CURSOR_SUMMARY -->
---

> [!NOTE]
> **Medium Risk**
> Primarily a release/version bump, but it publishes **breaking API
changes** (move to unified `Runner.run()`/`RunnerResult` and removal of
`invoke_*` methods), which can break downstream integrations.
> 
> **Overview**
> Cuts a new release across the core SDK and provider packages:
`launchdarkly-server-sdk-ai` to `0.19.0`, LangChain provider to `0.6.0`,
and OpenAI provider to `0.5.0`, updating the release manifest and
package metadata accordingly.
> 
> Changelogs document the shipped breaking API surface changes (notably
removing `invoke_model()`/`invoke_structured_model()` in favor of
`run(...)` and standardizing returns on `RunnerResult`) plus
accompanying feature/fix entries; the core package version
constants/docs (`__version__`, `PROVENANCE.md`) are updated to match.
> 
> <sup>Reviewed by [Cursor Bugbot](https://cursor.com/bugbot) for commit
a20d7a5. Bugbot is set up for automated
code reviews on this repo. Configure
[here](https://www.cursor.com/dashboard/bugbot).</sup>
<!-- /CURSOR_SUMMARY -->

---------

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Co-authored-by: jsonbailey <jbailey@launchdarkly.com>
Co-authored-by: Claude Sonnet 4.6 <noreply@anthropic.com>
