
Releases: yologdev/yoagent

v0.7.5

27 Mar 15:41
6784921


Bug Fixes

  • Filter empty text blocks to prevent API errors (#30, closes #29)
    • Filter out empty Content::Text blocks in all 7 providers before sending to APIs
    • Guard cache_control placement to skip empty text blocks in Anthropic provider
    • Cache breakpoint placement now scans backwards past empty messages instead of silently dropping them
    • Guard openai_compat single-text fast path against empty text
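The filtering described above can be sketched as follows; the `Content` enum here is a simplified stand-in for the crate's actual content type, not its real definition:

```rust
// Simplified stand-in for the crate's content type (illustrative only).
#[derive(Debug)]
enum Content {
    Text(String),
    Image(Vec<u8>),
}

/// Drop empty (or whitespace-only) text blocks before a request is
/// serialized, keeping all other block kinds untouched.
fn filter_empty_text(blocks: Vec<Content>) -> Vec<Content> {
    blocks
        .into_iter()
        .filter(|b| !matches!(b, Content::Text(t) if t.trim().is_empty()))
        .collect()
}

fn main() {
    let blocks = vec![
        Content::Text("hello".into()),
        Content::Text(String::new()),
        Content::Text("   ".into()),
        Content::Image(vec![1, 2, 3]),
    ];
    let kept = filter_empty_text(blocks);
    assert_eq!(kept.len(), 2); // "hello" and the image survive
    println!("{kept:?}");
}
```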

v0.7.4

25 Mar 23:03
c0a1282


Bug Fixes

  • Extract HTTP response body from EventSource errors (#27, closes #26)
    • Added classify_eventsource_error() to read response body from InvalidStatusCode errors and classify them properly (context overflow, rate limit, auth, API error)
    • Added classify_sse_error_event() for consistent SSE-embedded error classification
    • Only Transport errors are retryable; protocol/parse errors (StreamEnded, InvalidContentType) fail fast
    • Providers now return Err(ProviderError), enabling retry logic for rate limits and context overflow
    • Removed spurious StreamEvent::Error channel sends that caused duplicate error events on retry
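The classification step can be sketched roughly as below. This is an illustrative stand-in, not the crate's classify_eventsource_error(): the variant names, status-code mapping, and the "context" substring heuristic are all assumptions for the sketch.

```rust
// Illustrative stand-in for the crate's internal error classification.
#[derive(Debug, PartialEq)]
enum ProviderError {
    RateLimit,
    Auth,
    ContextOverflow,
    Api(u16),
}

/// Map an HTTP status (plus the extracted response body) to a typed
/// error so retry logic upstream can decide what to do.
fn classify_status(status: u16, body: &str) -> ProviderError {
    match status {
        401 | 403 => ProviderError::Auth,
        429 => ProviderError::RateLimit,
        // Hypothetical heuristic: some providers report context overflow
        // as a 400 whose body mentions the context window.
        400 if body.contains("context") => ProviderError::ContextOverflow,
        s => ProviderError::Api(s),
    }
}

fn main() {
    assert_eq!(classify_status(429, ""), ProviderError::RateLimit);
    assert_eq!(classify_status(401, ""), ProviderError::Auth);
    assert_eq!(
        classify_status(400, "context length exceeded"),
        ProviderError::ContextOverflow
    );
    assert_eq!(classify_status(500, ""), ProviderError::Api(500));
    println!("classification ok");
}
```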

v0.7.3

25 Mar 21:31
fc4030c


New Features

  • MiniMax provider: Add ModelConfig::minimax() and OpenAiCompat::minimax() for MiniMax AI (MiniMax-Text-01, MiniMax-M1, etc.) with 1M context window support (#23)
  • Auto-derive ContextConfig: When context_config is not explicitly set, compaction budget is automatically derived from ModelConfig.context_window (80% for context, 20% reserved for output) (#25)
    • ContextConfig::from_context_window() available for manual use
    • without_context_management() correctly takes precedence over auto-derivation
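The 80/20 derivation described above amounts to simple arithmetic on the context window; a minimal sketch (function name and return shape are illustrative, not the crate's API):

```rust
/// Illustrative derivation of a compaction budget from a model's
/// context window: 80% for conversation context, 20% reserved for output.
fn derive_budget(context_window: usize) -> (usize, usize) {
    let context_budget = context_window * 8 / 10;
    let output_reserve = context_window - context_budget;
    (context_budget, output_reserve)
}

fn main() {
    // e.g. a 1,000,000-token window (MiniMax-class models)
    let (ctx, out) = derive_budget(1_000_000);
    assert_eq!(ctx, 800_000);
    assert_eq!(out, 200_000);
    println!("context budget: {ctx}, output reserve: {out}");
}
```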

Documentation

  • Updated provider docs with MiniMax and Z.ai support
  • Added all ModelConfig convenience constructors to provider overview
  • Documented context auto-derivation behavior and priority chain

v0.7.2

21 Mar 10:52
221665b


Bug Fixes

  • Streaming: Forward StreamEvents in real-time instead of buffering until provider stream completes (#20, #21)
    • Events now stream token-by-token to callers via a concurrent forwarder task
    • Retryable errors properly abort the forwarder to prevent duplicate lifecycle events
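The forwarder pattern can be illustrated with std threads and channels (the crate itself uses a tokio task, but the shape is the same): events are relayed to the caller as they arrive instead of being buffered until the provider stream ends.

```rust
use std::sync::mpsc::{channel, Receiver, Sender};
use std::thread;

/// Relay events from a provider stream to the caller as they arrive.
/// Returns the forwarder's handle so an error path can abort/await it.
fn spawn_forwarder<T: Send + 'static>(
    provider_rx: Receiver<T>,
    caller_tx: Sender<T>,
) -> thread::JoinHandle<()> {
    thread::spawn(move || {
        for event in provider_rx {
            if caller_tx.send(event).is_err() {
                break; // caller dropped its receiver; stop forwarding
            }
        }
    })
}

fn main() {
    let (provider_tx, provider_rx) = channel();
    let (caller_tx, caller_rx) = channel();
    let forwarder = spawn_forwarder(provider_rx, caller_tx);

    // Simulated provider stream emitting tokens one by one.
    for tok in ["hel", "lo", "!"] {
        provider_tx.send(tok).unwrap();
    }
    drop(provider_tx); // stream complete

    forwarder.join().unwrap();
    let out: String = caller_rx.iter().collect();
    assert_eq!(out, "hello!");
    println!("{out}");
}
```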

v0.7.1

17 Mar 23:26
44195d9


New Features

  • First-class Z.ai (Zhipu AI) provider support via ModelConfig::zai()
  • ModelConfig factories for xAI, Groq, DeepSeek, and Mistral
  • --provider flag in CLI example for easy provider switching (zai, openai, xai, groq, deepseek, mistral, google)

v0.7.0

16 Mar 14:51
188c7b1


Breaking Changes

  • AgentLoopConfig no longer has a lifetime parameter; provider field is now Arc<dyn StreamProvider>
  • Agent::reset() is now async

New Features

  • prompt(), prompt_messages(), and continue_loop() now spawn the agent loop concurrently, returning the event receiver immediately for true real-time streaming
  • New Agent::finish() method to await a pending loop and restore state
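The "spawn the loop, hand back the receiver immediately" shape can be sketched with std threads (the crate uses tokio; this `prompt` and its return type are illustrative stand-ins, not the real signatures):

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread;

/// Illustrative shape of the new prompt(): spawn the loop on its own
/// task and return the event receiver right away, so the caller can
/// consume events while the loop is still running.
fn prompt(input: &str) -> (Receiver<String>, thread::JoinHandle<()>) {
    let (tx, rx) = channel();
    let input = input.to_string();
    let handle = thread::spawn(move || {
        // Stand-in for the agent loop emitting events as it works.
        for tok in input.split_whitespace() {
            let _ = tx.send(tok.to_string());
        }
    });
    (rx, handle) // the handle plays the role of Agent::finish()
}

fn main() {
    let (rx, handle) = prompt("streaming in real time");
    for event in &rx {
        println!("event: {event}"); // consumed as they arrive
    }
    handle.join().unwrap(); // analogous to awaiting finish()
}
```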

Bug Fixes

  • _with_sender variants now call finish() first to prevent state loss from prior runs
  • Updated all documentation to reflect API changes

v0.6.1

13 Mar 02:07
ae8f911


What's New

  • Real-time event streaming — New prompt_with_sender(), prompt_messages_with_sender(), and continue_loop_with_sender() methods accept a caller-provided mpsc::UnboundedSender<AgentEvent> for consuming events in real-time on a separate task. Zero breaking changes — existing API works as before.

Usage

let (tx, mut rx) = tokio::sync::mpsc::unbounded_channel();

tokio::spawn(async move {
    while let Some(event) = rx.recv().await { /* real-time! */ }
});

agent.prompt_with_sender("hello", tx).await;
// state restored automatically, no finish() needed

Closes #13

v0.6.0

09 Mar 02:42
70e49b5


What's New

  • OpenAPI Tool Adapter — Auto-generate AgentTool implementations from OpenAPI 3.0 specs. Point an agent at any API spec (GitHub, Stripe, etc.) and it instantly gets callable tools for every operation. Gated behind the openapi Cargo feature.

Usage

let agent = Agent::new(AnthropicProvider)
    .with_openapi_file("petstore.yaml", OpenApiConfig::default(), OperationFilter::All)
    .await?
    .with_system_prompt("You are an API assistant.")
    .with_model("claude-sonnet-4-20250514")
    .with_api_key(api_key);

See the OpenAPI guide for full documentation.

v0.5.3

05 Mar 22:19


What's New

  • Local provider support: Wire ModelConfig through the agent loop, enabling local OpenAI-compatible servers (LM Studio, Ollama, llama.cpp, vLLM, LocalAI) via the high-level Agent API
  • ModelConfig::local() constructor: Convenience method for local servers — no API key required
  • Agent::with_model_config(): New builder method to set model configuration (base URL, headers, compat flags)
  • --api-url CLI flag: Run the CLI example against any local server:
    cargo run --example cli -- --api-url http://localhost:1234/v1 --model my-model
    

Migration

If you construct AgentLoopConfig directly (low-level API), add model_config: None to the struct literal. The high-level Agent API requires no changes.

v0.5.2

27 Feb 21:54


CLI Example Improvements

  • Fix UTF-8 panic: truncate() used byte slicing which panicked on multi-byte characters (emoji, CJK). Now uses char_indices() for safe truncation.
  • Display errors: API failures, rate limits, and network errors were silently swallowed. Now shown in red via StopReason::Error handling.
  • Smarter /clear: Uses clear_messages() instead of rebuilding the entire Agent, preserving runtime configuration.
  • Ctrl+C handling: Graceful exit with goodbye message instead of raw ^C output.
  • Accurate line count: Updated doc comment and README to reflect actual file length (~250 lines).
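The UTF-8 fix above can be sketched as follows; the function name comes from the notes, but this body is an assumption (the example truncates by character count, which may differ from the example's actual unit):

```rust
/// Truncate a string to at most `max` characters without slicing
/// through a multi-byte sequence. Byte slicing (&s[..max]) panics on
/// emoji/CJK input; char_indices() finds a safe byte boundary instead.
fn truncate(s: &str, max: usize) -> &str {
    match s.char_indices().nth(max) {
        Some((idx, _)) => &s[..idx],
        None => s, // already short enough
    }
}

fn main() {
    assert_eq!(truncate("héllo", 2), "hé");          // 'é' is 2 bytes
    assert_eq!(truncate("日本語テキスト", 3), "日本語"); // 3 bytes per char
    assert_eq!(truncate("short", 10), "short");
    println!("safe truncation ok");
}
```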