
Request: Add openrouter.ai endpoint support #285

Open

Sander-Chen opened this issue May 25, 2024 · 4 comments

Comments

@Sander-Chen
Hi,

I have tried adding openrouter.ai to ChainForge as a custom provider, but it always responds with "Error encountered while calling custom provider function: 400 Client Error: Bad Request for url: https://openrouter.ai/api/v1/chat/completions".

I'm not a developer; I have tried my best to debug this issue with GPT-4 Turbo for multiple hours, but it is still not working :(

Could you please consider adding this endpoint as a native provider, like together.ai, when you have time?
https://openrouter.ai/docs#quick-start

This endpoint is really popular in the market.

I would really appreciate it if you could consider this request!

@ianarawjo
Owner

Hi, for this and all other requests to add a provider, I cannot personally devote time to it. The best solution is to submit a PR for it. Someone did this recently to add together.ai support to ChainForge.

The error you are getting suggests that something in your call is incorrect. The custom provider mechanism shouldn't be the problem here. I would check your Python code and make sure the call works outside of ChainForge first.
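As a rough illustration, a minimal standalone check might look something like this (the API key placeholder and model name are just examples):

```python
# Minimal standalone check of the OpenRouter call, outside ChainForge.
# Replace the Authorization value with your real OpenRouter API key;
# the model name here is just an example.
import requests

resp = requests.post(
    "https://openrouter.ai/api/v1/chat/completions",
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer OPENROUTER_API_KEY",
    },
    json={
        "model": "cohere/command-r",
        "messages": [{"role": "user", "content": "Hello"}],
    },
)
print(resp.status_code)
print(resp.text)  # on a 400, the body usually explains what was rejected
```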

@Sander-Chen
Author

Sander-Chen commented May 25, 2024

Thank you for your quick feedback! Since I'm not a developer (I can read some simple code, but I'm not a professional), I don't think I'm able to submit a PR for it...
But I can provide more background information for you or someone else who is able to write the code in the future:

The main reason I want to use openrouter.ai is actually for two models:
Google Gemini flash/pro 1.5
Cohere Command R Plus

The weird thing is that if I specify OpenAI models like `gpt-3.5-turbo` or `gpt-4-turbo` in the openrouter.ai custom provider settings, it works: ChainForge successfully gets responses from openrouter.ai.

However, if I specify a model name like `google/gemini-1.5-pro` or `cohere/command-r-plus` in the openrouter.ai custom provider settings (.py), it doesn't work and responds with a 400 error.

Below is the Python code I wrote with GPT-4's assistance:

```python
# -*- coding: utf-8 -*-
from chainforge.providers import provider
import requests

# JSON schemas to pass to react-jsonschema-form: one for this provider's
# settings, and one to describe the settings UI.
THIRD_PARTY_GPT_SETTINGS_SCHEMA = {
    "settings": {
        "temperature": {
            "type": "number",
            "title": "temperature",
            "description": "Controls the 'creativity' or randomness of the response.",
            "default": 0.7,
            "minimum": 0,
            "maximum": 1.0,
            "multipleOf": 0.01,
        },
        "max_tokens": {
            "type": "integer",
            "title": "max_tokens",
            "description": "Maximum number of tokens to generate in the response.",
            "default": 4096,
            "minimum": 1,
            "maximum": 4096,
        },
    },
    "ui": {
        "temperature": {
            "ui:help": "Defaults to 0.75.",
            "ui:widget": "range"
        },
        "max_tokens": {
            "ui:help": "Defaults to 100.",
            "ui:widget": "range"
        },
    }
}

# Custom model provider for the OpenRouter chat completions endpoint.
@provider(name="Openrouter",
          emoji="\U0001F680",
          models=["openai/gpt-3.5-turbo-16k", "cohere/command-r"],
          rate_limit="sequential",
          settings_schema=THIRD_PARTY_GPT_SETTINGS_SCHEMA)
def third_party_gpt_v2_completion(prompt: str, model: str, temperature: float = 0.75, max_tokens: int = 100, repetition_penalty: float = 1.0, **kwargs) -> str:
    url = "https://openrouter.ai/api/v1/chat/completions"
    headers = {
        "Content-Type": "application/json",
        "Authorization": "Bearer API-KEY"  # replace with your OpenRouter API key
    }
    data = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": temperature,
        "max_tokens": max_tokens,
        "repetition_penalty": repetition_penalty
    }
    response = requests.post(url, headers=headers, json=data)
    response.raise_for_status()
    result = response.json()
    return result["choices"][0]["message"]["content"]
```
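One hypothetical debugging tweak to the snippet above: `raise_for_status()` discards the server's explanation of the 400, but OpenRouter's error responses usually carry a JSON body describing what was rejected. The last few lines of the function could be adapted to print it before raising:

```python
# Hypothetical debugging variant of the request code above: print the
# error body before raising, since a 400 from OpenRouter typically
# includes a JSON error message explaining what was rejected.
response = requests.post(url, headers=headers, json=data)
if not response.ok:
    print("OpenRouter error response:", response.text)
response.raise_for_status()
```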

@ianarawjo
Owner

Hmm, this sounds like it has to do with the "/" in the model path. It's certainly a workaround, but you could try changing all slashes to | or some other character, then converting them back to slashes in your Python code.

It's probably something on CF's end with how custom providers handle the slash notation, e.g. cutting off the prefix before the slash.
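A rough sketch of that workaround, assuming | is used as the stand-in character (the provider name and function here are hypothetical, mirroring the snippet above):

```python
# Hypothetical sketch of the pipe workaround: register model names with "|"
# in place of "/", then restore the slash before calling OpenRouter.
from chainforge.providers import provider
import requests

@provider(name="Openrouter (pipe workaround)",
          emoji="\U0001F680",
          models=["openai|gpt-3.5-turbo-16k", "cohere|command-r"],
          rate_limit="sequential")
def openrouter_pipe_completion(prompt: str, model: str, **kwargs) -> str:
    model = model.replace("|", "/")  # convert back to OpenRouter's slash notation
    response = requests.post(
        "https://openrouter.ai/api/v1/chat/completions",
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer API-KEY",  # replace with your key
        },
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]
```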

@Sander-Chen
Author

Let me clarify:
openrouter.ai also integrates the GPT models, and as I mentioned, if I request the model ID `openai/gpt-3.5-turbo-16k` through openrouter.ai inside ChainForge, it works, but when I request the model ID `cohere/command-r`, it fails.

So my guess is that it might not be the "/" issue?
Anyhow, since I don't have the ability to write the code, I will let it go and wait for someone with the skills to push it forward someday.
I still really appreciate your time helping me debug this issue, and I hope ChainForge keeps getting better in the future :)
