Challenges of Large Language Models #48
Different large language models can be tested by setting environment variables:

Anthropic:

```python
import os, getpass

os.environ["LANGSIM_PROVIDER"] = "anthropic"
os.environ["LANGSIM_API_KEY"] = os.environ['ANTHROPIC_API_KEY']
os.environ["LANGSIM_MODEL"] = "claude-3-5-sonnet-20240620"
```

OpenAI:

```python
import os, getpass

os.environ["LANGSIM_API_KEY"] = os.environ['OPENAI_API_KEY']
os.environ["LANGSIM_MODEL"] = "gpt-4o"
```

KISSKI:

```python
import os, getpass

os.environ["LANGSIM_API_KEY"] = os.environ['KISSKI_API']
os.environ["LANGSIM_API_URL"] = "https://chat-ai.academiccloud.de/v1"
os.environ["LANGSIM_MODEL"] = "meta-llama-3-8b-instruct"
```
Hi there! I was looking at the llm.py code and I'm not sure the above-mentioned way of changing the model can actually work.
It is only hard-coded for the case when it is
I see, right, I can specify a model in the executor. But how can it read it from the env variable then?
ok, I think the answer to my question is in magics.py ;-)
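For reference, a minimal sketch of how such environment variables could be picked up when the model is constructed. The helper name and defaults below are hypothetical; the actual lookup lives in langsim's magics.py / llm.py and may differ:

```python
import os

def get_model_settings():
    """Illustrative only: collect the LANGSIM_* environment variables with fallbacks."""
    return {
        "provider": os.environ.get("LANGSIM_PROVIDER", "openai"),
        "api_key": os.environ.get("LANGSIM_API_KEY"),
        "api_url": os.environ.get("LANGSIM_API_URL"),  # only needed for custom endpoints such as KISSKI
        "model": os.environ.get("LANGSIM_MODEL", "gpt-4o"),
    }

settings = get_model_settings()
print(settings["provider"], settings["model"])
```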
The benchmarking of LLMs is now also discussed in the developer section of the website.
Open Source
Unfortunately most llama-based and other free models fail to work with the tools defined by `langchain`. It works for single functions, but they already struggle with the current complexity of `langsim`. Even ChatGPT fails on the `main` branch with a `JSONDecodeError`. The behaviour seems to be somewhat reproducible, so I wanted to quickly summarise it here.
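To make the failure mode concrete, here is a minimal sketch of langchain-style tool calling, which is the mechanism the free models tend to break on. The tool `get_equilibrium_lattice` is a hypothetical stand-in, not langsim's actual implementation, and the OpenAI model is just an example of a tool-capable backend:

```python
from langchain_core.tools import tool
from langchain_openai import ChatOpenAI  # any chat model with tool-calling support

@tool
def get_equilibrium_lattice(chemical_symbol: str) -> str:
    """Return a placeholder equilibrium structure for an element (dummy tool)."""
    return f"fcc structure for {chemical_symbol} (dummy value)"

llm = ChatOpenAI(model="gpt-4o")  # swap in any provider's chat model
llm_with_tools = llm.bind_tools([get_equilibrium_lattice])

# Tool-capable models return structured tool_calls; many free/llama-based
# models instead emit malformed JSON at this point, which then surfaces
# as a JSONDecodeError when the response is parsed.
response = llm_with_tools.invoke("What is the equilibrium lattice of aluminium?")
print(response.tool_calls)
```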