Closed
Labels
enhancement (New feature or request)
Description
Hello,
I always get InternalServerError: Error code: 500 - {'error': 'litellm.AuthenticationError: AuthenticationError: OpenAIException - The api_key client option must be set either by passing api_key to the client or by setting the OPENAI_API_KEY environment variable'}
when running:
import os
os.environ["OPENAI_API_KEY"] = "sk-...."

from openai import OpenAI

OPENAI_KEY = os.environ.get("OPENAI_API_KEY")
OPENAI_BASE_URL = "http://localhost:8000/v1"

client = OpenAI(api_key=OPENAI_KEY, base_url=OPENAI_BASE_URL)
response = client.chat.completions.create(
    model="moa-gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Write a Python program to build an RL model to recite text from any position that the user provides, using only numpy."
        }
    ],
    temperature=0.2,
)
print(response)
but when I run
import os
os.environ["OPENAI_API_KEY"] = "sk-...."

from openai import OpenAI

OPENAI_KEY = os.environ.get("OPENAI_API_KEY")

client = OpenAI(api_key=OPENAI_KEY)
response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": "Write a Python program to build an RL model to recite text from any position that the user provides, using only numpy."
        }
    ],
    temperature=0.2,
)
print(response)
directly against the OpenAI API, it works just fine. So the API key itself is not the problem; it seems to fail only when going through the litellm wrapper.
What can I do? I really want to try rStar for my research.
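One detail that may help with diagnosis: the 500 is raised inside the proxy, which means litellm reads OPENAI_API_KEY from the *proxy server's* environment, not the client's. Setting os.environ in the client script only affects that script and any child processes it spawns; it never reaches a server that was started earlier in a different shell. A minimal sketch of that process-isolation behavior (the key value here is just a placeholder):

```python
import os
import subprocess
import sys

# Mutating os.environ changes this process's environment, and that
# environment is inherited by *child* processes launched afterwards --
# but not by an unrelated server process that is already running.
os.environ["OPENAI_API_KEY"] = "sk-demo"

# A child started from here sees the variable:
child = subprocess.run(
    [sys.executable, "-c",
     "import os; print(os.environ.get('OPENAI_API_KEY'))"],
    capture_output=True, text=True,
)
print(child.stdout.strip())
```

So a likely fix is to export OPENAI_API_KEY in the same shell (or environment) in which the proxy at http://localhost:8000 is launched, before starting it, rather than only in the client script.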