
Question: How to get the context used when calling RA.answer_question? #10

Closed

younes-io opened this issue Mar 10, 2024 · 3 comments

@younes-io

How to get the context used when calling RA.answer_question?

(This should help when testing, debugging, evaluating, etc.)

younes-io closed this as not planned on Mar 10, 2024
@3CE8D2BAC65BDD6AA9

Look for def answer_question in the GPT3TurboQAModel class in QAModels.py; from there, you can print the context out.

@parthsarthi03
Owner

You can use

context, __ = RA.retrieve(question)
print(context)
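For anyone wanting to wire this into tests or an evaluation harness, here is a minimal, self-contained sketch of the retrieve-then-answer pattern the owner describes. `retrieve` and `answer_question` are the names from this thread, but the `SimpleRA` class below is a stand-in with toy keyword matching so the flow runs without RAPTOR installed; it is not the library's implementation.

```python
# Hypothetical stand-in for a RetrievalAugmentation-style object, so the
# retrieve-then-answer pattern from this thread can be run and tested
# without the RAPTOR library itself.

class SimpleRA:
    """Toy retriever/answerer mirroring the RA.retrieve / RA.answer_question split."""

    def __init__(self, chunks):
        self.chunks = chunks

    def retrieve(self, question):
        # Toy relevance: keep chunks sharing at least one word with the question.
        words = set(question.lower().split())
        hits = [c for c in self.chunks if words & set(c.lower().split())]
        context = "\n\n".join(hits)
        return context, hits  # context string plus the underlying chunks

    def answer_question(self, question):
        # Answering is retrieval plus generation; here generation is stubbed.
        context, _ = self.retrieve(question)
        return f"Answer based on {len(context)} chars of context."


RA = SimpleRA([
    "RAPTOR builds a tree of summaries.",
    "Leaf nodes hold the original document chunks.",
])

# Inspect the context separately, before (or instead of) asking for an answer:
context, _ = RA.retrieve("Where are the original chunks?")
print(context)
print(RA.answer_question("Where are the original chunks?"))
```

Because retrieval is exposed separately, the same `context` string can be logged or asserted on in tests without going through the LLM call.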

@daniyal214

But why does this not retrieve the list of chunks from the original document? It seems to return the summarized version. Does the QA class answer the question from the summarized text instead of the actual content?

What I want is for the QA model to answer the question from a context made up of chunks of the actual document, not the summarized one. Is this possible?

I've been wanting to return source URLs in the response, but they come back broken. When I dug in, I found that the actual document is not going to the QA model.

Thanks.
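One self-contained way around the broken-URL problem, sketched under the assumption that you keep your own chunk-to-URL map at ingestion time (none of this is RAPTOR API, and the URLs are placeholders): recover URLs by looking retrieved chunks up in that map, instead of asking the LLM to echo URLs out of summarized context.

```python
# Hypothetical workaround: maintain a side table mapping each original
# chunk to its source URL when you ingest documents, then recover URLs
# from whatever chunks retrieval returns. Generated summaries have no
# entry in the map, so they are skipped rather than producing broken URLs.

chunk_to_url = {
    "Leaf nodes hold the original document chunks.": "https://example.com/docs/tree",
    "RAPTOR builds a tree of summaries.": "https://example.com/docs/overview",
}

def urls_for(retrieved_chunks):
    """Return source URLs for retrieved chunks, skipping summaries
    (generated text that has no entry in the ingestion-time map)."""
    return [chunk_to_url[c] for c in retrieved_chunks if c in chunk_to_url]

retrieved = [
    "Leaf nodes hold the original document chunks.",
    "A generated summary with no source URL.",
]
print(urls_for(retrieved))  # only the original chunk's URL survives
```

This keeps URL handling deterministic and out of the prompt entirely, which also avoids the model mangling long URLs during generation.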
