Add fixture for an LLM #674
Conversation
Commits:
- Add in fixture
- Comment on those not working with Phi2
- Switch to fixture
- Switch to fixtures
- Can use pytest parameterisation...
- Switching to command line based fixture
- More switching
- Convert others
- Explanatory note
Force-pushed the …mer-model-fixture-01 branch from 5446660 to 85fe42f

Review comment on the diff:
> When we have the GPU runner machines available, we can start expanding the …
```python
if model_type == "PhiForCausalLM":
    pytest.xfail("See https://github.com/guidance-ai/guidance/issues/681")
```
Nice use of `xfail` for conditional failures that come from upstream sources.
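For readers skimming the diff: `pytest.xfail(...)` called inside a test body marks the test as an expected failure at runtime, which suits cases where the condition is only known after a fixture has resolved (as opposed to the `@pytest.mark.xfail` decorator, which is evaluated at collection time). A minimal, self-contained sketch of the pattern; the class and test names here are hypothetical, not from this PR:

```python
import pytest

# Hypothetical stand-in class; only its name matters for the check below.
class PhiForCausalLM:
    pass

@pytest.fixture
def selected_model():
    # The real fixture loads a live model; a dummy object is enough here.
    return PhiForCausalLM()

def test_generation(selected_model):
    model_type = type(selected_model).__name__
    if model_type == "PhiForCausalLM":
        # Imperative xfail: stops the test at this point and records an
        # expected failure, pointing at the upstream issue.
        pytest.xfail("See https://github.com/guidance-ai/guidance/issues/681")
    assert selected_model is not None
```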
LGTM once tests pass.
Also LGTM. Thanks @riedgar-ms!
Add a `selected_model` fixture which can provide any test with a 'live' LLM (as opposed to `models.Mock()`). This fixture is controlled by adding `--selected_model <name>` to the invocation of `pytest`. If no model is specified on the command line, then a CPU-based GPT2 model will be provided. The valid values for `<name>` are the keys of the `AVAILABLE_MODELS` dictionary.

A few of the tests are XFAILed for Phi-2 only, pending this issue:
https://huggingface.co/microsoft/phi-2/discussions/116
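A rough sketch of how such a command-line-selected fixture can be wired together in `conftest.py`. The option name and the `AVAILABLE_MODELS` dictionary come from the description above, but the dictionary entries, fixture scope, and use of `guidance.models.Transformers` are illustrative assumptions, not the PR's actual code:

```python
# conftest.py -- a minimal sketch of the mechanism described above,
# NOT the PR's actual implementation. Entries are placeholders.
import pytest
import guidance

AVAILABLE_MODELS = {
    "gpt2cpu": lambda: guidance.models.Transformers("gpt2"),
    "phi2cpu": lambda: guidance.models.Transformers("microsoft/phi-2"),
}

def pytest_addoption(parser):
    parser.addoption(
        "--selected_model",
        action="store",
        default="gpt2cpu",  # CPU-based GPT2 when no model is specified
        choices=sorted(AVAILABLE_MODELS.keys()),
        help="Name of the live model to supply to tests",
    )

@pytest.fixture(scope="session")
def selected_model(pytestconfig):
    # Look up the requested loader once per session and hand the model
    # to any test that asks for this fixture.
    name = pytestconfig.getoption("--selected_model")
    return AVAILABLE_MODELS[name]()
```

With something like this in place, `pytest --selected_model phi2cpu` would hand every test requesting `selected_model` a live Phi-2 model, while a bare `pytest` run falls back to the CPU GPT2 default.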