
No test cases found error #739

Open
GantaVenkataKousik opened this issue May 5, 2024 · 4 comments

Comments


GantaVenkataKousik commented May 5, 2024

See this code:


```python
import json
import asyncio
import os

import openai  # OpenAI library
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric, SummarizationMetric, HallucinationMetric
from deepeval.test_case import LLMTestCase

# Initialize OpenAI API key
os.environ["OPENAI_API_KEY"] = "MY_OPEN_AI_KEY"

# Function to retrieve summary asynchronously (replace with actual implementation)
async def get_summary(input_text):
    # Call the OpenAI completion endpoint to generate a response
    response = openai.Completion.create(
        engine="text-davinci-003",  # Specify the engine to use
        prompt=input_text,
        max_tokens=150  # Maximum number of tokens for the response
    )
    return response.choices[0].text.strip()

# Function to read questions from a JSON file
def read_questions_from_json(file_path):
    with open(file_path, 'r') as file:
        data = json.load(file)
    questions = [item['question'] for item in data['questions']]
    return questions

# Function to perform analysis
async def perform_analysis(test_cases):
    for test_case in test_cases:
        assert_test(test_case, metrics=[test_case.metric])

# Function to create test cases concurrently
async def create_test_cases(medical_questions, answers, context):
    test_cases = []
    tasks = []

    for i, prompt in enumerate(medical_questions):
        task = asyncio.create_task(
            get_summary_and_create_test_case(prompt, answers[i], context, test_cases)
        )
        tasks.append(task)

    await asyncio.gather(*tasks)
    return test_cases

# Function to retrieve summary asynchronously and create a test case
async def get_summary_and_create_test_case(prompt, expected_output, context, test_cases):
    actual_output = await get_summary(prompt)
    test_case = LLMTestCase(
        input=prompt,
        actual_output=actual_output,
        expected_output=expected_output,
        context=context,
        metric=HallucinationMetric(threshold=0.7)
    )
    test_cases.append(test_case)

# Main function
async def main():
    # Path to the JSON file containing medical questions
    file_path = "medical_questions.json"

    # Read questions from the JSON file
    medical_questions = read_questions_from_json(file_path)

    # Placeholder for expected answers
    answers = ["Mock expected summary"] * len(medical_questions)

    # Placeholder for context if needed
    context = None

    # Create test cases asynchronously
    test_cases = await create_test_cases(medical_questions, answers, context)

    # Perform analysis
    await perform_analysis(test_cases)

if __name__ == "__main__":
    asyncio.run(main())
```

OUTPUT:

```
plugins: deepeval-0.21.36, anyio-4.3.0, repeat-0.9.3, xdist-3.6.1
collected 0 items
Running teardown with pytest sessionfinish...

=========================================================================== 2 warnings in 0.01s ===========================================================================
No test cases found, please try again.
```
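(Editor's note: independent of the deepeval question below, the entry point of an asyncio script has to drive the coroutine with an event loop; a bare call to an `async def` only creates a coroutine object and never executes its body. A minimal stdlib sketch of the pattern, with a placeholder workload:)

```python
import asyncio

async def main():
    # Placeholder async workload; in the real script this would read the
    # questions, call the model, and build the test cases.
    await asyncio.sleep(0)
    return "done"

if __name__ == "__main__":
    # asyncio.run creates an event loop, runs the coroutine to completion,
    # and closes the loop when it finishes.
    print(asyncio.run(main()))
```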

@penguine-ip
Contributor

Hey @GantaVenkataKousik, there are some formatting errors in your code; can you double-check your backticks? You're getting this error because you're using assert_test outside of a test file (?) or test function (?) (I can't tell exactly because of the formatting). Please read the docs to learn how to use the assert_test function properly: https://docs.confident-ai.com/docs/getting-started#create-your-first-metric

@GantaVenkataKousik
Author

> Hey @GantaVenkataKousik, there are some formatting errors in your code; can you double-check your backticks? You're getting this error because you're using assert_test outside of a test file (?) or test function (?) (I can't tell exactly because of the formatting). Please read the docs to learn how to use the assert_test function properly: https://docs.confident-ai.com/docs/getting-started#create-your-first-metric

I have reviewed the documentation, but I'm still unable to identify the issue. These are the screenshots of the code:
[Screenshot 2024-05-06 192539]
[Screenshot 2024-05-06 192553]

@penguine-ip
Contributor

Hey @GantaVenkataKousik, I missed your message; can you come to our Discord for faster response times? https://discord.com/invite/a3K9c8GRGt

assert_test is not meant to be used in a for loop, nor is it meant to be used outside of a deepeval test run. Come to Discord to talk more.

@GantaVenkataKousik
Author

> Hey @GantaVenkataKousik, I missed your message; can you come to our Discord for faster response times? https://discord.com/invite/a3K9c8GRGt
>
> assert_test is not meant to be used in a for loop, nor is it meant to be used outside of a deepeval test run. Come to Discord to talk more.

Sure, I will join the Discord.
