added validation using tonic validate #141

Open · wants to merge 2 commits into main
Conversation

varchanaiyer (Author):

Screen.Recording.2024-05-20.at.10.39.56.PM.mp4

Demo video attached. For now I am printing the output, but I could also push it to LangSmith or to a DB.

    # reconstructed excerpt: the scorer is assumed to be built from the PR's metrics list
    scorer = ValidateScorer([
        AugmentationAccuracyMetric(),
        RetrievalPrecisionMetric()
    ])
    run = scorer.score_responses([llm_response])
    print(run.overall_scores)
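
If we go the db route, a minimal sketch (sqlite3 used purely as an illustration; the table and file names are hypothetical) could look like:

    import json
    import sqlite3

    # hypothetical local table for validation scores; schema is illustrative only
    conn = sqlite3.connect('validation_scores.db')
    conn.execute('CREATE TABLE IF NOT EXISTS scores (query TEXT, scores_json TEXT)')
    conn.execute('INSERT INTO scores VALUES (?, ?)', (query, json.dumps(run.overall_scores)))
    conn.commit()
    conn.close()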
Collaborator:

    from stampy_chat import logging
    logging.info(run.overall_scores)

Collaborator:

You can also send messages to Discord with the logger.

Author:

That would be really cool! I will add that in!

    result = chain.invoke({"query": query, 'history': history}, {'callbacks': []})

    # Validate results
    contexts = [c['text'] for c in get_top_k_blocks(query, 5)]
Collaborator:

You're querying the vectorstore twice for each request, which will make things a lot slower. How hard would it be to reuse the previously fetched examples?

Author:

Yes, you are right. I had a lot of challenges in this part, and it took the majority of my time on this ticket.

Unfortunately, I could not figure out how to get a hook into the LangChain Semantic Similarity Selector. Without a hook, there is no way to get the previously fetched examples. I also looked at the source code of RAGAS (another RAG validation framework) and saw that they too could not figure out how to put a hook into LangChain, and they stopped supporting it last year.

I tested the latency and saw that querying the vectorstore again does not add much latency, and unlike querying an LLM, there is no additional cost to it. This is why I decided to query the vector store a second time. I think we can put this in the backlog and monitor whether LangChain updates their API.
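
A minimal sketch of that kind of latency check (the timing harness is illustrative; get_top_k_blocks is the project's existing retrieval helper):

    import time

    # time the extra vectorstore round-trip used only for validation
    start = time.perf_counter()
    contexts = [c['text'] for c in get_top_k_blocks(query, 5)]
    print(f"extra retrieval took {time.perf_counter() - start:.3f}s for {len(contexts)} blocks")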

    # reconstructed excerpt: scorer construction shown for context
    scorer = ValidateScorer([
        AugmentationAccuracyMetric(),
        RetrievalPrecisionMetric()
    ])
    run = scorer.score_responses([llm_response])
Collaborator:

Is this actually returned to the user? If not, then I'd suggest doing it after notifying the frontend that everything is done, i.e. moving the if callback(...) clause before these lines.

Author:

Makes sense; I will make this change. I kept it this way because I was worried that a "fast" user might ask the next question before the evaluation is done running, which could overwhelm the system and spawn a lot of processes.

I don't think this will be useful for our users. We may want to track it internally first and then decide on the best way to display it to users. For instance, I don't think users will know what "context-precision" is, or care about it.
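
A rough sketch of the reordering being suggested (callback and the surrounding names stand in for the PR's actual code and are illustrative):

    # notify the frontend first so the user isn't blocked on validation
    if callback:
        callback(result)

    # then run the tonic_validate scoring outside the user's critical path
    run = scorer.score_responses([llm_response])
    logging.info(run.overall_scores)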

    result = chain.invoke({"query": query, 'history': history}, {'callbacks': []})

    # Validate results
    contexts = [c['text'] for c in get_top_k_blocks(query, 5)]
Collaborator:

I'm guessing this shouldn't be executed for each query. How about adding a flag to the settings object?

Author:

I think this is required for each query, because tonic_validate checks whether the LLM's answer is consistent with the context that was retrieved.

Collaborator:

Yes, but the question is whether it should always do that. It makes the query slower, so for now I'd just use it for testing rather than running it on every request.

Author:

I get what you mean. I will add a flag to the settings object to prevent it from running all the time.
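
A minimal sketch of such a flag (the settings attribute and environment variable names are hypothetical; only the guard pattern is the point):

    import os

    # settings.py: off by default, enabled for test runs (hypothetical flag)
    ENABLE_VALIDATION = os.environ.get('ENABLE_VALIDATION', 'false').lower() == 'true'

    # chat pipeline: only run the tonic_validate scoring when the flag is on
    if settings.ENABLE_VALIDATION:
        contexts = [c['text'] for c in get_top_k_blocks(query, 5)]
        run = scorer.score_responses([llm_response])
        logging.info(run.overall_scores)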

@mruwnik (Collaborator) commented May 20, 2024:

You have something wrong with the pipenv dependencies, which you'll have to fix. Did you add dependencies with pipenv install <whatever>?

@varchanaiyer (Author):

> You have something wrong with the pipenv dependencies, which you'll have to fix. Did you add dependencies with pipenv install <whatever>?

Yes, that is how I added dependencies. Is that the wrong way of doing it? I am not familiar with pipenv (I have been using vanilla virtualenv all this time), so maybe I missed a step?

@varchanaiyer (Author):

#136

@ccstan99 mentioned this pull request Jun 10, 2024