
[Bug]: [autogenstudio] agent llm send max_tokens: null #2050

Open
nicho2 opened this issue Mar 18, 2024 · 10 comments

@nicho2
Collaborator

nicho2 commented Mar 18, 2024

Describe the bug

When the max_tokens parameter is None, the agent sends a request to /v1/chat/completions with "max_tokens": null.
In this case the LLM does not understand it and stops after the second token.

Steps to reproduce

autogenstudio 0.0.54 + LM Studio + Mistral

Model Used

Mistral-7B-Instruct-v0.2

Expected Behavior

Do not send the max_tokens parameter if it is None.
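
For illustration, a minimal sketch of the expected behavior (hypothetical code, not AutoGen's actual client): strip any None-valued parameters before the request body is built.

```python
# Hypothetical sketch: drop None-valued parameters before POSTing to
# /v1/chat/completions, so '"max_tokens": null' never reaches the server.
payload = {
    "model": "local_lmstudio",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "List the products on the network."},
    ],
    "max_tokens": None,  # left unset in the UI
    "stream": False,
    "temperature": 0.1,
}

# Keep only the parameters that are actually set.
payload = {k: v for k, v in payload.items() if v is not None}
assert "max_tokens" not in payload
```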

Screenshots and logs

[2024-03-18 09:55:28.532] [INFO] [LM STUDIO SERVER] Processing queued request...
[2024-03-18 09:55:28.533] [INFO] Received POST request to /v1/chat/completions with body: { "messages": [ { "content": "Tu es un expert dans la communication avec ............ ####\n\n ", "role": "system" }, { "content": "je cherche la liste des produits du réseau", "role": "user" } ],
"model": "local_lmstudio",
"max_tokens": null,
"stream": false,
"temperature": 0.1 }
[2024-03-18 09:55:28.533] [INFO] [LM STUDIO SERVER] Context Overflow Policy is: Rolling Window
[2024-03-18 09:55:28.533] [INFO] [LM STUDIO SERVER] Last message: { role: 'user', content: 'je cherche la liste des produits du réseau' } (total messages = 2)
[2024-03-18 09:55:30.909] [INFO] [LM STUDIO SERVER] Accumulating tokens ... (stream = false)
[2024-03-18 09:55:30.909] [INFO] Accumulated 1 tokens: To
[2024-03-18 09:55:30.989] [INFO] Accumulated 2 tokens: To find
[2024-03-18 09:55:31.059] [INFO] [LM STUDIO SERVER] Generated prediction: { "id": "chatcmpl-rbmlwf91blp1nfv6114if6", "object": "chat.completion", "created": 1710752128, "model": "/home/system/.cache/lm-studio/models/TheBloke/Mistral-7B-Instruct-v0.2-GGUF/mistral-7b-instruct-v0.2.Q8_0.gguf", "choices": [ { "index": 0, "message": { "role": "assistant", "content": " To find" }, "finish_reason": "stop" } ], "usage": { "prompt_tokens": 15271, "completion_tokens": 2, "total_tokens": 15273 } }

Additional Information

AutoGen Studio CLI version: 0.0.54
autogenstudio==0.0.54
pyautogen==0.2.19

@nicho2 nicho2 added the bug Something isn't working label Mar 18, 2024
@DavidBaurCodes

I have the exact same problem. I'm not able to define max_tokens with these two GUIs; at least I don't know where:
[2024-03-19 16:31:15.988] [INFO] [LM STUDIO SERVER] Processing queued request...
[2024-03-19 16:31:15.988] [INFO] Received POST request to /v1/chat/completions with body: {
"messages": [
{
"content": "You are a helpful assistant.",
"role": "system"
},
{
"content": "Tell me a joke",
"role": "assistant"
},
{
"content": "A man",
"role": "user"
},
{
"content": "A man",
"role": "assistant"
},
{
"content": "A man",
"role": "user"

.....

{
  "content": "A man",
  "role": "assistant"
},
{
  "content": "A man",
  "role": "user"
}

],
"model": "local",
"max_tokens": null,
"stream": false,
"temperature": 0.1
}

Also AutoGen Studio v0.0.54 and LM Studio 0.2.17.

@avonx

avonx commented Mar 20, 2024

Let me share what I'm doing to work around the problem (not a permanent solution).

Firstly, there is no GUI form for setting max_tokens for now.
#1608

So you need to edit samples/apps/autogen-studio/autogenstudio/utils/dbdefaults.json to define max_tokens.

In the existing implementation, the llm_config of LLMConfig defines the parameter as:

max_tokens: Optional[int] = None

So what you can do is edit it there (assuming you change the setting; see the sketch after this comment):

  • delete the config_list option
  • add a max_tokens option

I attach my dbdefaults.json. Since I'm using Claude 3 with LiteLLM, you have to change the model definition and llm_config according to your needs.

I also recommend deleting the database.sqlite you created before whenever you change dbdefaults.json. Or you can create a new workspace with this option:
autogenstudio ui --reload --appdir /path/to/your/workspace
dbdefaults.json
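
For illustration, here is a rough script that applies the same edit programmatically. The key names ("agents", "config", "llm_config") are guesses based on my own file, so check them against your copy before running it:

```python
import json

# Hypothetical helper: give every agent's llm_config a concrete
# max_tokens instead of null. Key names are assumptions; adjust them
# to match the actual structure of your dbdefaults.json.
path = "samples/apps/autogen-studio/autogenstudio/utils/dbdefaults.json"

with open(path) as f:
    defaults = json.load(f)

for agent in defaults.get("agents", []):
    llm_config = agent.get("config", {}).get("llm_config") or {}
    llm_config["max_tokens"] = 2048  # any positive limit your model accepts
    agent.setdefault("config", {})["llm_config"] = llm_config

with open(path, "w") as f:
    json.dump(defaults, f, indent=2)
```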

@PrinzMegahertz

PrinzMegahertz commented Mar 29, 2024

For me, this happens with AutoGen Studio 0.56 and all models (I tried several Mistral, Mixtral, and Llama 2 models).

@PrinzMegahertz

> @avonx: Let me share what I'm doing to work around the problem (not a permanent solution) … [quoted in full above]

Actually, to add to this: I think it is enough to change datamodel.py. After you do that, the change will apply to all new agents you create. If you have already configured agents to use your local LLM, nothing will make them work, as max_tokens = null has already been assigned to them. So just delete them and create new agents.

Kudos to avonx for providing this solution; I was really about to bite into my keyboard because of this bug.

@WaelKarkoub WaelKarkoub added the studio Related to AutoGen Studio. label Mar 31, 2024
@lpingree

lpingree commented Apr 2, 2024

I seem to have this issue too; none of my local models work with a default install of LM Studio. The UI stops after showing only the first word of output from the LLM. Is this slated to be fixed?

@christiandarkin

I have the same problem; editing datamodel.py line 95 doesn't seem to fix it.

@pressx2select

@avonx I tried the proposed solution of editing the dbdefaults.json file to add the max_tokens line, and also tried using the file you provided (thank you for that), but it did not resolve it for me.

@MMoneer

MMoneer commented Apr 3, 2024

> @pressx2select: I tried the proposed solution of editing the dbdefaults.json file … but it did not resolve it for me.

Hi, the steps below work for me (see the sketch after this list):

1. Edit line 95 in datamodel.py: max_tokens: Optional[int] = 3000
2. Use @avonx's dbdefaults.json file.
3. Delete the files folder (if it contains no important data) and database.sqlite.
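As an illustration of step 1, a stripped-down sketch of the changed field; the real class defines many more fields and may use a different base class, so treat this only as an outline:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMConfig:
    # Only the relevant field is shown; the real model has more.
    # A concrete default means newly created agents send a real limit
    # instead of '"max_tokens": null'.
    max_tokens: Optional[int] = 3000  # was: Optional[int] = None
```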

@odrobnik

odrobnik commented Apr 4, 2024

I just got stumped by the same issue. Setting max_tokens helps me too. Shouldn't -1 also work as the normal default? Should the UI allow specifying such things?

@waszak

waszak commented May 4, 2024

My workaround was just to update the value directly in the database. (Of course, you then need to re-pick the agent in the workflow, or update it.) I also started a new chat, and that solved the issue.

(screenshots of the database edit attached)
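
If you prefer to script it rather than use a database GUI, a safe first step is just to inspect the file; the table and column names holding the agent configs are not documented here, so list them instead of guessing:

```python
import sqlite3

# Hypothetical inspection script: find which table stores the agent /
# workflow configs, then look for '"max_tokens": null' in the stored JSON.
conn = sqlite3.connect("database.sqlite")
for (name,) in conn.execute("SELECT name FROM sqlite_master WHERE type='table'"):
    print(name)  # pick the table that holds the serialized llm_config
conn.close()
```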
