How to use this for RAG? #2

Open
aldrinjenson opened this issue Mar 7, 2024 · 1 comment

Comments

@aldrinjenson

Hi, great job on this work!

I was wondering how we could use this approach in a RAG system.
For example, can we modify the self_responses to make use of RAG and thereby improve the factuality of the system?

Alternatively, could you suggest any other approaches we could try to verify that an LLM answer (already generated with RAG) is not a hallucination?

Thanks!

@jxzhangjhu
Contributor

That's a great question. Do you want to detect or evaluate hallucinations in the RAG system?
One easy way is to modify the self_responses to incorporate your RAG pipeline, so that the self_responses are generated by your RAG system rather than by the LLM alone.
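Here is a minimal sketch of that idea, assuming a generic Python RAG setup; the `retrieve` and `generate` functions are hypothetical placeholders for your own retriever and LLM call, not part of this repository's API.

```python
# Sketch: generate the self_responses through the RAG pipeline itself.
# `retrieve` and `generate` are hypothetical placeholders -- swap in your
# own vector-store lookup and LLM call.

def retrieve(query: str) -> list[str]:
    # Placeholder: return the top-k passages from your retriever.
    return ["<retrieved passage 1>", "<retrieved passage 2>"]

def generate(query: str, contexts: list[str], temperature: float = 0.7) -> str:
    # Placeholder: call your LLM with the query plus the retrieved contexts,
    # sampling with a non-zero temperature so the responses can vary.
    return f"<sampled RAG answer to: {query}>"

def build_self_responses(query: str, n_samples: int = 5) -> list[str]:
    # Sample several RAG-grounded answers; use these in place of the plain
    # LLM samples that would normally serve as self_responses.
    return [generate(query, retrieve(query)) for _ in range(n_samples)]

self_responses = build_self_responses("Who discovered penicillin?")
# Pass self_responses to the consistency / hallucination check as usual.
```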

Or you may check the consistency of the retrieved contexts before generation. If they are consistent, the final response will probably be consistent as well, but you may still have to double-check.
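A rough sketch of such a pre-generation check is below; the choice of sentence-transformers and the 0.5 threshold are my assumptions for illustration, not something the method prescribes.

```python
# Sketch: flag queries whose retrieved contexts disagree with each other.
from sentence_transformers import SentenceTransformer, util

def contexts_are_consistent(contexts: list[str], threshold: float = 0.5) -> bool:
    # Embed each retrieved passage and compare all pairs; a low average
    # pairwise cosine similarity suggests the evidence is conflicting.
    if len(contexts) < 2:
        return True
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(contexts, convert_to_tensor=True)
    sims = util.cos_sim(embeddings, embeddings)
    n = len(contexts)
    pair_scores = [sims[i][j].item() for i in range(n) for j in range(i + 1, n)]
    return sum(pair_scores) / len(pair_scores) >= threshold

contexts = ["Penicillin was discovered by Alexander Fleming in 1928.",
            "Alexander Fleming discovered penicillin at St Mary's Hospital."]
if contexts_are_consistent(contexts):
    print("Contexts agree; proceed to generation.")
else:
    print("Retrieved contexts disagree; double-check the final answer.")
```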

Our method mainly aims to detect whether the LLM hallucinates or not (with or without retrieved contexts). At the current stage, it cannot guarantee that the generated responses are free of hallucinations.
