test: implement factual knowledge integration test #43
Conversation
remove_prompt_from_generated_text: bool = True
...
class HuggingFaceCausalLLMModelRunner(ModelRunner):
Should we move this to the model_runners folder in src?
Currently, the HF model runner doesn't conform to the ModelRunner interface, as it was presumably just meant for demo purposes. I think I will have to update the ModelRunner interface in a separate PR before we can treat HuggingFaceCausalLLMModelRunner as a "real" ModelRunner. Right now, ModelRunner has endpoint-invocation ideas baked into the interface (for example, content_type and accept_type), so I think it would be a good idea to make it more general.
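For context, a minimal sketch of such a runner wrapping a Hugging Face causal LM directly might look like the following; the class and parameter names echo the diff above, but the constructor arguments and the predict signature are assumptions for illustration, not the interface proposed in this PR:

```python
from typing import Optional, Tuple

from transformers import AutoModelForCausalLM, AutoTokenizer


class HuggingFaceCausalLLMModelRunner:
    """Illustrative runner wrapping a Hugging Face causal LM such as GPT-2.

    Deliberately not a ModelRunner subclass in this sketch, since the current
    interface assumes endpoint invocation (content_type / accept_type).
    """

    def __init__(self, model_name: str = "gpt2",
                 remove_prompt_from_generated_text: bool = True):
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(model_name)
        self.remove_prompt_from_generated_text = remove_prompt_from_generated_text

    def predict(self, prompt: str) -> Tuple[Optional[str], Optional[float]]:
        inputs = self.tokenizer(prompt, return_tensors="pt")
        output_ids = self.model.generate(**inputs, max_new_tokens=20)
        text = self.tokenizer.decode(output_ids[0], skip_special_tokens=True)
        if self.remove_prompt_from_generated_text:
            # GPT-2's decode reproduces the prompt at the start of the output.
            text = text[len(prompt):]
        # This sketch does not compute a log-probability, so return None for it.
        return text, None
```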
The branch was force-pushed from 4438260 to f7e6074.
Issue #, if available:
Description of changes:
This PR implements an integration test for the factual knowledge evaluation algorithm, using a custom model runner that wraps the GPT-2 model supplied by Hugging Face.
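As a rough sketch of the flow being tested (the module path, the evaluate_sample call, and the sample prompt/target are assumptions for illustration, not the exact test code in this PR):

```python
# Sketch of the integration test flow; assumes the runner class sketched in
# the review thread above and a factual knowledge eval algorithm exposing
# evaluate_sample(target_output, model_output).
from fmeval.eval_algorithms.factual_knowledge import (  # assumed module path
    FactualKnowledge,
    FactualKnowledgeConfig,
)


def test_factual_knowledge_with_gpt2_runner():
    runner = HuggingFaceCausalLLMModelRunner(model_name="gpt2")
    prompt = "London is the capital of"
    model_output, _ = runner.predict(prompt)

    eval_algo = FactualKnowledge(
        FactualKnowledgeConfig(target_output_delimiter="<OR>")
    )
    scores = eval_algo.evaluate_sample(
        target_output="England<OR>the United Kingdom",
        model_output=model_output,
    )
    # Factual knowledge scores are expected to fall in [0, 1].
    assert 0.0 <= scores[0].value <= 1.0
```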
By submitting this pull request, I confirm that you can use, modify, copy, and redistribute this contribution, under the terms of your choice.