
AzureOpenAI evals runs forever #301

Closed
ajac-zero opened this issue Nov 18, 2023 · 2 comments
Labels: bug Something isn't working

@ajac-zero commented Nov 18, 2023

Describe the bug
I followed the "Using AzureOpenAI" tutorial. However, I had to remove os.environ["OPENAI_API_BASE"], as it was producing the following error:

base_url and azure_endpoint are mutually exclusive (type=value_error)

Once it was removed, the rest of the code ran without error, but the evaluation step keeps running forever, with no progress being shown.
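
(For reference, a minimal sketch of the Azure client configured without OPENAI_API_BASE; the endpoint, key, and API version below are placeholders rather than values from this report. With openai>=1.x the endpoint belongs in azure_endpoint, which is exactly the field that conflicts with base_url.)

# Sketch only, assuming openai>=1.x: set the Azure-specific fields and
# drop OPENAI_API_BASE entirely, since base_url and azure_endpoint are
# mutually exclusive.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # replaces OPENAI_API_BASE
    api_key="<your-azure-openai-key>",  # placeholder key
    api_version="2023-05-15",           # placeholder API version
)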

Ragas version: 0.0.2
Python version: 3.11.5

Code to Reproduce

# fiqa_eval (the dataset) and metrics are defined earlier in the
# "Using AzureOpenAI" tutorial; their setup is omitted here.
from ragas import evaluate

result = evaluate(
    fiqa_eval["baseline"],
    metrics=metrics,
)

result

Error trace

evaluating with [faithfulness]
0%| | 0/2 [00:00<?, ?it/s]

Expected behavior
Evaluation results from example dataset

Additional context
Aside from removing the OPENAI_API_BASE environment variable, no changes were made to the code in the AzureOpenAI documentation.

@jjmachan (Member)

hey @AJAC2000, thanks for raising this.

Could you confirm a few things:

  1. Update to the latest version of Ragas, v0.0.21, and check again.
  2. Confirm that this happens every time, and is not just the openai client waiting for the service to respond and then timing out.
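
(For step 1, assuming pip is the package manager in use, the upgrade is a one-liner:)

pip install --upgrade ragas  # v0.0.21 was the latest release at the time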

jjmachan added the bug (Something isn't working) label Nov 30, 2023
jjmachan self-assigned this Nov 30, 2023
@HySoaKa (Contributor) commented Dec 13, 2023

Hi @AJAC2000, try importing the evaluate function before patching the RagasLLM instance.
Or, if you are using a notebook, try adding:

import nest_asyncio
nest_asyncio.apply()

before calling the evaluate function.
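
(Putting the notebook workaround together, a minimal sketch; fiqa_eval and metrics are the same names used in the reproduction code above. nest_asyncio lets the asyncio event loop that ragas drives internally run inside the loop Jupyter already has running, which can otherwise leave the progress bar stuck at 0%.)

import nest_asyncio

nest_asyncio.apply()  # patch the running notebook event loop first

from ragas import evaluate

result = evaluate(
    fiqa_eval["baseline"],  # dataset split from the tutorial
    metrics=metrics,        # metrics configured with the Azure LLM
)
result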
