[Feature Request] Add support for Anthropic LLM #283
Comments
@lightaime Hi, I opened a PR for the Anthropic LLM backend. However, some tests fail because they cannot read the secret variable OPENAI_API_KEY. I checked the action log and it is empty.
I think this is because the PR does not have access to secrets. See link. Could you help add a "dump api_key" job in actions to fix it? We don't actually need a valid API key.
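One lightweight workaround, sketched here under the assumption that the failing tests only need the variable to be *set* rather than valid: default the key to a placeholder in the test setup (e.g. in a `conftest.py`), so fork PRs, which cannot read repository secrets, still pass. The placeholder value is illustrative, not a real key.

```python
import os

# Fork PRs cannot read repository secrets, so OPENAI_API_KEY is unset there.
# If the tests only require the variable to exist, a placeholder is enough.
# setdefault() leaves a real key untouched when one is provided by CI.
os.environ.setdefault("OPENAI_API_KEY", "sk-dummy-key-for-ci")
```

This keeps the real secret in use on trusted branches while letting fork-based CI runs proceed.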
Hey @lightaime, if you're integrating via litellm, here's an easy way to test whether the Anthropic integration is working:
Hi @lightaime @ocss884, I believe we can help with this issue. I'm the maintainer of LiteLLM https://github.com/BerriAI/litellm - we allow you to use any LLM as a drop-in replacement.

You can use LiteLLM in the following ways:

With your own API key: this calls the provider API directly.

```python
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "your-key"
os.environ["COHERE_API_KEY"] = "your-key"

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
```

Using the LiteLLM proxy with a LiteLLM key: this is great if you don't have access to Claude but want to use the open-source LiteLLM proxy to access Claude.

```python
from litellm import completion
import os

## set ENV variables
os.environ["OPENAI_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your openai key
os.environ["COHERE_API_KEY"] = "sk-litellm-5b46387675a944d2"  # [OPTIONAL] replace with your cohere key

messages = [{"content": "Hello, how are you?", "role": "user"}]

# openai call
response = completion(model="gpt-3.5-turbo", messages=messages)

# cohere call
response = completion(model="command-nightly", messages=messages)
```
Please explain how you want to add Anthropic/Claude. Will it be another model backend? Please also explain the difference between Anthropic and Claude, and please do so right in the description of the ticket.
Hi @Obs01ete, the Claude series are LLMs from a company called Anthropic, and they support a 100,000-token context window. They could be another great model backend choice for role-playing agents. I have added more details to this issue.
Could you help check the OPENAI_API_KEY setup? Due to the dangers inherent in automatic processing of PRs, GitHub's standard behavior is to run workflows triggered by pull requests from forks without access to repository secrets.
Required prerequisites
Motivation
The Claude series from Anthropic is one of the most popular LLM families and a serious competitor to the GPT models. Claude supports a 100,000-token context window while still being free for personal API usage.
Solution
Add Claude-2 and Claude-instant-1 as backend models
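As a rough sketch of what such a backend would involve: Anthropic's classic completion endpoint takes a single prompt string in a `\n\nHuman: ... \n\nAssistant:` format, so the backend mainly needs to convert OpenAI-style chat messages into that format. The helper name `to_claude_prompt` below is hypothetical, and the commented-out call illustrates the `anthropic` Python package's completions API; treat both as an assumption-laden sketch, not the final design.

```python
# Anthropic's classic completion API expects a single prompt string in
# a "\n\nHuman: ... \n\nAssistant:" turn format.
HUMAN_PROMPT = "\n\nHuman:"
AI_PROMPT = "\n\nAssistant:"

def to_claude_prompt(messages):
    """Convert OpenAI-style chat messages to Anthropic's prompt format.

    User turns become "Human:" segments, everything else "Assistant:";
    a trailing "Assistant:" marker asks Claude to produce the next turn.
    """
    parts = []
    for m in messages:
        marker = HUMAN_PROMPT if m["role"] == "user" else AI_PROMPT
        parts.append(f"{marker} {m['content']}")
    return "".join(parts) + AI_PROMPT

# Hypothetical usage against the real API (requires ANTHROPIC_API_KEY):
# import anthropic, os
# client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])
# resp = client.completions.create(
#     model="claude-2",  # or "claude-instant-1"
#     max_tokens_to_sample=256,
#     prompt=to_claude_prompt([{"role": "user", "content": "Hello"}]),
# )
```

Since the conversion is a pure function, it can be unit-tested without an API key, which also sidesteps the CI secrets problem discussed above.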
Alternatives
No response
Additional context
No response