ragas evaluate with llama_index and langchain doesn't seem to work with Azure OpenAI #114

@Data-drone

Description

Azure OpenAI requires the extra parameter deployment (or deployment_id).

The langchain wrappers have mostly been updated to accommodate this, but it doesn't seem to work with ragas.

I ended up getting faithfulness working by updating the generate method from:

    elif isinstance(llm, BaseChatModel):
        ps = [p.format_messages() for p in prompts]
        result = llm.generate(ps, callbacks=callbacks)

to

    elif isinstance(llm, BaseChatModel):
        ps = [p.format_messages() for p in prompts]
        result = llm.generate(ps, callbacks=callbacks, deployment_id='<my_id>', api_version='<my_version>')
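Rather than patching ragas internals, it may be enough to put the Azure settings on the model objects themselves, so every generate/embed call carries them automatically. A configuration sketch, assuming the langchain Azure wrappers of this era; the deployment names, endpoint, and key are placeholders, and parameter names may differ across langchain versions:

```python
# Hypothetical sketch -- all values below are placeholders, not real settings.
from langchain.chat_models import AzureChatOpenAI
from langchain.embeddings import OpenAIEmbeddings

llm = AzureChatOpenAI(
    deployment_name="<my_id>",           # Azure deployment name, not a model name
    openai_api_version="<my_version>",
    openai_api_base="https://<resource>.openai.azure.com/",
    openai_api_key="<key>",
)

embeddings = OpenAIEmbeddings(
    deployment="<my_embedding_id>",      # embeddings need their own Azure deployment
    openai_api_version="<my_version>",
    openai_api_base="https://<resource>.openai.azure.com/",
    openai_api_key="<key>",
)
```

If ragas can be handed these pre-configured objects instead of constructing defaults, neither generate nor embed_query should need per-call deployment arguments.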

but with answer_relevancy I hit the same issue when it reaches:

    def calculate_similarity(
        self: t.Self, question: str, generated_questions: list[str]
    ):
        question_vec = np.asarray(self.embedding.embed_query(question)).reshape(1, -1)  # <- fails here (line 94)
        gen_question_vec = np.asarray(
            self.embedding.embed_documents(generated_questions)
        )
Any ideas?
