[Bug]: Although the model is working, I get the warning below: "[autogen.oai.client: 06-16 16:42:50] {294} WARNING - Model ollama_chat/llama3 is not found. The cost will be 0. In your config_list, add field {"price": [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing."
#2951
Closed
inoue0426 opened this issue
Jun 16, 2024
· 5 comments
Describe the bug
Hi,
It looks like my code is working correctly, but I get the WARNING. Is this a bug, or am I doing something wrong?
Environment
This is the log for Ollama and the LiteLLM proxy:
➜ ollama pull llama3
pulling manifest
pulling 6a0746a1ec1a... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 12 KB
pulling 8ab4849b038c... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 254 B
pulling 577073ffcc6c... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 110 B
pulling 3f8eb4da87fa... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
(multi)
~/code via 🅒 multi
➜ litellm --model ollama_chat/llama3
INFO: Started server process [21017]
INFO: Waiting for application startup.
#------------------------------------------------------------#
#                                                            #
#      'It would help me if you could add...'                #
#      https://github.com/BerriAI/litellm/issues/new         #
#                                                            #
#------------------------------------------------------------#
Thank you for using LiteLLM! - Krrish & Ishaan
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
INFO: 127.0.0.1:51033 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51075 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51088 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51096 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51120 - "POST /chat/completions HTTP/1.1" 200 OK
Below is the AutoGen log:
playground_multi_agent on main [?] via 🅒 multi
➜ python test.py --dir california_housing_data.csv
user_proxy (to chatbot):
Read california_housing_data.csv and predict.
--------------------------------------------------------------------------------
[autogen.oai.client: 06-16 16:42:50] {294} WARNING - Model ollama_chat/llama3 is not found. The cost will be 0. In your config_list, add field {"price": [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
chatbot (to user_proxy):
***** Suggested tool call (call_2ebc7b90-d539-46af-ac62-be24a784fb65): return_ml_result *****
Arguments:
{"task": "Regression", "n_labels": 1, "data_path": "california_housing_data.csv"}
*********************************************************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION return_ml_result...
[runtime logging] log_function_use: autogen logger is None
user_proxy (to chatbot):
user_proxy (to chatbot):
***** Response from calling tool (call_2ebc7b90-d539-46af-ac62-be24a784fb65) *****
1.3536363080132126
**********************************************************************************
--------------------------------------------------------------------------------
[autogen.oai.client: 06-16 16:43:16] {294} WARNING - Model ollama_chat/llama3 is not found. The cost will be 0. In your config_list, add field {"price": [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
chatbot (to user_proxy):
***** Suggested tool call (call_4fd4faa5-3859-4ede-b198-73d0e84c3f03): return_ml_result *****
Arguments:
{"task": "Regression", "n_labels": 1, "data_path": "california_housing_data.csv"}
*********************************************************************************************
--------------------------------------------------------------------------------
[autogen.oai.client: 06-16 16:43:21] {294} WARNING - Model ollama_chat/llama3 is not found. The cost will be 0. In your config_list, add field {"price": [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
(multi)
playground_multi_agent on main [?] via 🅒 multi took 39s
➜
Code
import argparse
from typing import Literal

from autogen import AssistantAgent, UserProxyAgent
from typing_extensions import Annotated

local_llm_config = {
    "config_list": [
        {
            "model": "NotRequired",  # Loaded with the LiteLLM command
            "api_key": "NotRequired",  # Not needed
            "base_url": "http://0.0.0.0:4000",  # Your LiteLLM URL
        }
    ],
    "cache_seed": None,  # Turns off caching, useful for testing different models
}

chatbot = AssistantAgent(
    name="chatbot",
    system_message="""For ML prediction tasks, only use the functions you have been provided with. Output 'TERMINATE' when an answer has been provided. Do not include the function name or result in the JSON. Example of the return JSON is: { "parameter_1_name": 'classification', "parameter_2_name": 2, "parameter_3_name": 'data.csv' }. Another example of the return JSON is: { "parameter_1_name": "Regression", "parameter_2_name": 1, "parameter_3_name": 'data.csv' }.""",
    llm_config=local_llm_config,
)

user_proxy = UserProxyAgent(
    name="user_proxy",
    is_termination_msg=lambda x: x.get("content", "")
    and "TERMINATE" in x.get("content", ""),
    human_input_mode="NEVER",
    max_consecutive_auto_reply=1,
    code_execution_config={
        "last_n_messages": 1,
        "work_dir": "tmp",
        "use_docker": False,
    },
)

import pandas as pd
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.linear_model import LinearRegression, LogisticRegression
from sklearn.metrics import accuracy_score, mean_squared_error
from sklearn.svm import SVC, SVR


def run_ml_model(task: str, n_labels: int, data: str) -> float:
    # Train on the full CSV and score on the training data itself.
    data = pd.read_csv(data)
    target = data.iloc[:, -1]
    data = data.iloc[:, :-1]
    if task == "classification":
        if n_labels > 1:
            model = RandomForestClassifier()
        else:
            model = LogisticRegression()
    else:
        # Note: the comparison is case-sensitive, so the "Regression" task
        # string seen in the log above falls through to SVR.
        if task == "regression":
            model = LinearRegression()
        else:
            model = SVR()
    model.fit(data, target)
    pred = model.predict(data)
    if task == "classification":
        return accuracy_score(target, pred)
    else:
        return mean_squared_error(target, pred)


@user_proxy.register_for_execution()
@chatbot.register_for_llm(description="Predict by ML model.")
def return_ml_result(
    task: Annotated[str, "task"],
    n_labels: Annotated[int, "number of class"],
    data_path: Annotated[str, "path to csv file"],
) -> str:
    res = run_ml_model(task, n_labels, data_path)
    return str(res)


parser = argparse.ArgumentParser()
parser.add_argument("--dir", type=str, help="Path to the data file")
args = parser.parse_args()
data_path = args.dir

res = user_proxy.initiate_chat(
    chatbot,
    message=f"Read {data_path} and predict.",
    summary_method="reflection_with_llm",
)
Data
iris_data.csv
Steps to reproduce
./ollama-darwin serve
ollama pull llama3
python test.py --dir iris_data.csv
Model Used
llama3:latest
Expected Behavior
The WARNING line ("Model ollama_chat/llama3 is not found. The cost will be 0. ...") shouldn't be shown.
Screenshots and logs
Autogen
(multi)
playground_multi_agent on main [?] via 🅒 multi
➜ python test.py --dir iris_data.csv
user_proxy (to chatbot):
Read iris_data.csv and predict.
--------------------------------------------------------------------------------
[autogen.oai.client: 06-16 16:52:25] {294} WARNING - Model ollama_chat/llama3 is not found. The cost will be 0. In your config_list, add field {"price" : [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
chatbot (to user_proxy):
***** Suggested tool call (call_a0df1876-b65f-4297-8d04-2ac5b6ea2d8d): return_ml_result *****
Arguments:
{"task": "classification", "n_labels": 3, "data_path": "iris_data.csv"}
*********************************************************************************************
--------------------------------------------------------------------------------
>>>>>>>> EXECUTING FUNCTION return_ml_result...
[runtime logging] log_function_use: autogen logger is None
user_proxy (to chatbot):
user_proxy (to chatbot):
***** Response from calling tool (call_a0df1876-b65f-4297-8d04-2ac5b6ea2d8d) *****
1.0
**********************************************************************************
--------------------------------------------------------------------------------
[autogen.oai.client: 06-16 16:52:28] {294} WARNING - Model ollama_chat/llama3 is not found. The cost will be 0. In your config_list, add field {"price" : [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
chatbot (to user_proxy):
***** Suggested tool call (call_bf85f653-29b9-4dca-a6b3-9836971b7711): return_ml_result *****
Arguments:
{"task": "classification", "n_labels": 3, "data_path": "iris_data.csv"}
*********************************************************************************************
--------------------------------------------------------------------------------
[autogen.oai.client: 06-16 16:52:32] {294} WARNING - Model ollama_chat/llama3 is not found. The cost will be 0. In your config_list, add field {"price" : [prompt_price_per_1k, completion_token_price_per_1k]} for customized pricing.
(multi)
playground_multi_agent on main [?] via 🅒 multi took 21s
➜
LiteLLM
(multi)
~/code via 🅒 multi
➜ ollama pull llama3
pulling manifest
pulling 6a0746a1ec1a... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 4.7 GB
pulling 4fa551d4f938... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 12 KB
pulling 8ab4849b038c... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 254 B
pulling 577073ffcc6c... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 110 B
pulling 3f8eb4da87fa... 100% ▕███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████▏ 485 B
verifying sha256 digest
writing manifest
removing any unused layers
success
(multi)
~/code via 🅒 multi
➜ litellm --model ollama_chat/llama3
INFO: Started server process [21017]
INFO: Waiting for application startup.
#------------------------------------------------------------#
#                                                            #
#      'It would help me if you could add...'                #
#      https://github.com/BerriAI/litellm/issues/new         #
#                                                            #
#------------------------------------------------------------#
Thank you for using LiteLLM! - Krrish & Ishaan
Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:4000 (Press CTRL+C to quit)
INFO: 127.0.0.1:51033 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51075 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51088 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51096 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51120 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51120 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51120 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51171 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51174 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51177 - "POST /chat/completions HTTP/1.1" 200 OK
INFO: 127.0.0.1:51180 - "POST /chat/completions HTTP/1.1" 200 OK
Additional Information
macOS 13.2.1
M1 chip
Ollama 0.1.44
llama3:latest
litellm 1.40.15
pyautogen 0.2.29
Python 3.11.9
Hi @inoue0426, thanks for the question. AutoGen has price information for some commonly used models, such as the GPT models, and uses it to estimate the cost for you. If the model you provide is not among them, a warning is issued. However, if you provide the price information, AutoGen will use that to calculate the cost and will not show the warning.
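For example, following the format the warning itself suggests, the config entry could carry a price field like this (a sketch; the two numbers are placeholder per-1K-token prices, and zeros are a reasonable choice for a locally served model):
local_llm_config = {
    "config_list": [
        {
            "model": "NotRequired",
            "api_key": "NotRequired",
            "base_url": "http://0.0.0.0:4000",
            # [prompt_price_per_1k, completion_token_price_per_1k]
            # Placeholder values: a local Ollama model costs nothing.
            "price": [0.0, 0.0],
        }
    ],
    "cache_seed": None,
}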
I'm closing this issue, as I don't believe it is a bug. You can re-open it if you still have questions. Thank you!
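If you would rather leave the config untouched, another option (an assumption on my part: the "[autogen.oai.client: ...]" prefix suggests the message comes from a standard Python logger with that name) is to raise that logger's level so WARNING messages are suppressed:
import logging

# Assumes the log prefix names the logger; ERROR hides this WARNING
# (and any other warnings emitted by that module).
logging.getLogger("autogen.oai.client").setLevel(logging.ERROR)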