Conversation

@lightionight
Currently this only adds deepseek-chat (DeepSeek V3) support; once the official API is updated, deepseek-reasoner (DeepSeek R1) support will be added.

@evalstate
Owner

Hi Lightionight - this is definitely going to get merged - thank you so much for the contribution.

May I quickly check:

  1. Are you using https://www.deepseek.com/en as the API Provider? I want to add DeepSeek to the automated test suite so want to make sure I am testing against the same provider you are using.

I'm currently making some changes in this area to improve type safety and test coverage, so I plan to merge this into a branch first, test, and then publish.

Best, evalstate.

@lightionight
Author

Hello evalstate!

I'm glad to hear that. Yes, I am using that provider.

Currently, I have only added the DeepSeek-Chat model, as the Reasoner (R1) model API
is still being adjusted by the official team.

Best, lightionight

@evalstate
Owner

Hi @lightionight -- quick update on this.

I've made quite a few updates in this area, brought in a couple of your files, and completed some testing. This is currently on "main" while I finish the 0.2.0 release notes.

Notes:

Let me know if you have the chance to take a look. We'll add the reasoning model once finalised?

@funnythingfunnylove

Hi @evalstate, nice work!
This nearly completes the DeepSeek API support. Yes, I will continue to watch for official API changes and will let you know when they happen. Thanks for your work!
Best, lightionight

@evalstate evalstate self-requested a review April 9, 2025 20:45
@evalstate
Owner

Will reopen or open a new PR after reasoning support is tested.

@evalstate evalstate closed this Apr 9, 2025
@evalstate evalstate mentioned this pull request Apr 18, 2025
iqdoctor pushed a commit to strato-space/fast-agent that referenced this pull request Nov 9, 2025
Problem: Only seeing logs from instance #4 when multiple instances of the
same child agent run in parallel.

Root cause: Multiple parallel instances share the same child agent object.
When instance 1 finishes, it restores display config (show_chat=True), which
immediately affects instances 2, 3, 4 that are still running. The last
instance (#4) ends up with restored config and shows all its chat logs.

Race condition flow:
1. Instance 1 starts → sets show_chat=False on shared object
2. Instances 2,3,4 start → see show_chat=False
3. Instance 1 finishes → restores show_chat=True
4. Instances 2,3,4 still running → now have show_chat=True (see logs!)

Solution: Reference counting
- Track active instance count per child agent ID
- Only modify display config when first instance starts
- Only restore display config when last instance completes
- Store original config per child_id for safe restoration

Data structures:
- _display_suppression_count[child_id] → count of active instances
- _original_display_configs[child_id] → stored original config

Now all instances respect show_chat=False until ALL complete.
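The reference-counting scheme described in this commit message could be sketched in Python as follows. This is an illustrative sketch only: the `ChildAgent`, `DisplaySuppressor`, `display_config`, and `show_chat` names mirror the commit message, but the actual fast-agent internals may differ.

```python
from collections import defaultdict

class ChildAgent:
    """Minimal stand-in for a shared child agent (illustrative only)."""
    def __init__(self):
        self.display_config = {"show_chat": True}

class DisplaySuppressor:
    """Reference-counted display suppression for a shared child agent."""
    def __init__(self):
        # _display_suppression_count[child_id] -> count of active instances
        self._display_suppression_count = defaultdict(int)
        # _original_display_configs[child_id] -> stored original config
        self._original_display_configs = {}

    def acquire(self, child_id, agent):
        # Only the FIRST instance saves the original config and suppresses chat.
        if self._display_suppression_count[child_id] == 0:
            self._original_display_configs[child_id] = dict(agent.display_config)
            agent.display_config["show_chat"] = False
        self._display_suppression_count[child_id] += 1

    def release(self, child_id, agent):
        # Only the LAST instance restores; earlier finishers leave it suppressed.
        self._display_suppression_count[child_id] -= 1
        if self._display_suppression_count[child_id] == 0:
            agent.display_config = self._original_display_configs.pop(child_id)
```

With this shape, an instance finishing early only decrements the counter; the shared `show_chat=False` survives until the count reaches zero, which is exactly the behaviour the commit message describes.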
iqdoctor pushed a commit to strato-space/fast-agent that referenced this pull request Nov 9, 2025
Problem: Only instance #4 was showing chat logs. The issue was that call_tool
was trying to suppress display config inside each parallel task, creating a
race condition where configs would get overwritten.

Solution:
1. Move display suppression to run_tools BEFORE parallel execution starts
2. Iterate through all child agents that will be called and suppress once
3. Store original configs in _original_display_configs dictionary
4. Remove all suppression logic from call_tool - it just executes now
5. After results displayed, restore all configs that were suppressed

This ensures:
- All instances use the same suppressed config (no race conditions)
- Config is suppressed ONCE before parallel tasks start
- All parallel instances respect show_chat=False
- Config restored after all results are displayed

The key insight: Don't try to suppress config inside parallel tasks - do it
before they start so they all inherit the same suppressed state.
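The suppress-once-before-parallel-execution approach from this commit message could be sketched like this. Again a hypothetical sketch: `run_tools`, `ChildAgent`, and `_original_display_configs` follow the commit message's naming, but the real fast-agent code will differ in detail.

```python
import asyncio

class ChildAgent:
    """Minimal stand-in for a child agent (illustrative only)."""
    def __init__(self, child_id):
        self.child_id = child_id
        self.display_config = {"show_chat": True}

async def run_tools(child_agents, tasks):
    """Suppress display config ONCE before parallel execution, restore after."""
    _original_display_configs = {}
    # Steps 1-3: suppress each child agent up front, saving its original config.
    for agent in child_agents:
        _original_display_configs[agent.child_id] = dict(agent.display_config)
        agent.display_config["show_chat"] = False
    try:
        # Step 4: the tasks (call_tool equivalents) just execute; no config
        # juggling happens inside the parallel tasks themselves.
        results = await asyncio.gather(*(task() for task in tasks))
    finally:
        # Step 5: restore every suppressed config after results are collected.
        for agent in child_agents:
            agent.display_config = _original_display_configs[agent.child_id]
    return results
```

Because suppression happens before `asyncio.gather` starts any task, every parallel instance inherits the same suppressed state, eliminating the race the first commit's reference counting worked around.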