Get tests passing against Pydantic 1 #169

Closed · simonw opened this issue Aug 20, 2023 · 13 comments
Labels: bug


simonw commented Aug 20, 2023

Refs:

I'm seeing test failures locally after running pip install pydantic==1.10.2

simonw added the bug label Aug 20, 2023

simonw commented Aug 20, 2023

FAILED tests/test_cli_openai_models.py::test_openai_options_min_max - AssertionError: assert 'Error: tempe... equal to 0\n' == 'Error: tempe... equal to 0\n'
FAILED tests/test_keys.py::test_uses_correct_key - AttributeError: 'Options' object has no attribute 'model_dump'
FAILED tests/test_llm.py::test_llm_default_prompt[False-logs_args1-True-True] - AttributeError: 'Options' object has no attribute 'model_dump'
FAILED tests/test_llm.py::test_llm_default_prompt[False-logs_args1-True-False] - AttributeError: 'Options' object has no attribute 'model_dump'
FAILED tests/test_llm.py::test_llm_default_prompt[False-logs_args3-True-True] - AttributeError: 'Options' object has no attribute 'model_dump'
FAILED tests/test_llm.py::test_llm_default_prompt[False-logs_args3-True-False] - AttributeError: 'Options' object has no attribute 'model_dump'
FAILED tests/test_llm.py::test_llm_default_prompt[True-logs_args5-True-True] - AttributeError: 'Options' object has no attribute 'model_dump'
FAILED tests/test_llm.py::test_llm_default_prompt[True-logs_args5-True-False] - AttributeError: 'Options' object has no attribute 'model_dump'
FAILED tests/test_llm.py::test_openai_localai_configuration - assert 1 == 0
FAILED tests/test_templates.py::test_templates_list[args0] - assert 1 == 0
FAILED tests/test_templates.py::test_templates_list[args1] - assert 1 == 0
FAILED tests/test_templates.py::test_template_basic['Summarize this: $input'-extra_args0-gpt-3.5-turbo-Summarize this: Input text-None] - AttributeError: 'Options' object has no attribute 'model_dump'
FAILED tests/test_templates.py::test_template_basic[prompt: 'Summarize this: $input'\nmodel: gpt-4-extra_args1-gpt-4-Summarize this: Input text-None] - AttributeError: type object 'Template' has no attribute 'model_validate'
FAILED tests/test_templates.py::test_template_basic[prompt: 'Summarize this: $input'-extra_args2-gpt-4-Summarize this: Input text-None] - AttributeError: type object 'Template' has no attribute 'model_validate'
FAILED tests/test_templates.py::test_template_basic[prompt: 'Say $hello'-extra_args4-None-None-Error: Missing variables: hello] - AttributeError: type object 'Template' has no attribute 'model_validate'
FAILED tests/test_templates.py::test_template_basic[prompt: 'Say $hello'-extra_args5-gpt-3.5-turbo-Say Blah-None] - AttributeError: type object 'Template' has no attribute 'model_validate'
==================================================================== 16 failed, 52 passed in 8.02s ====================================================================
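Most of these failures are Pydantic 2 method names that do not exist in Pydantic 1: model_dump() is the new name for .dict(), and model_validate() is the new name for .parse_obj(). A minimal compatibility sketch (the helper names here are mine, not from the codebase):

def model_dump_compat(obj):
    # Pydantic 2 renamed BaseModel.dict() to model_dump()
    if hasattr(obj, "model_dump"):
        return obj.model_dump()
    return obj.dict()

def model_validate_compat(cls, data):
    # Pydantic 2 renamed BaseModel.parse_obj() to model_validate()
    if hasattr(cls, "model_validate"):
        return cls.model_validate(data)
    return cls.parse_obj(data)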

simonw added a commit that referenced this issue Aug 20, 2023

simonw commented Aug 20, 2023

I ran this in the llm-gpt4all environment:

pip install pydantic==1.10.2

The tests passed there too.

simonw added a commit that referenced this issue Aug 20, 2023

simonw commented Aug 20, 2023

This feature is broken too:

% llm models --options
Traceback (most recent call last):
  File "/Users/simon/.local/share/virtualenvs/llm-p4p8CDpq/bin/llm", line 33, in <module>
    sys.exit(load_entry_point('llm', 'console_scripts', 'llm')())
  File "/Users/simon/.local/share/virtualenvs/llm-p4p8CDpq/lib/python3.10/site-packages/click/core.py", line 1157, in __call__
    return self.main(*args, **kwargs)
  File "/Users/simon/.local/share/virtualenvs/llm-p4p8CDpq/lib/python3.10/site-packages/click/core.py", line 1078, in main
    rv = self.invoke(ctx)
  File "/Users/simon/.local/share/virtualenvs/llm-p4p8CDpq/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/simon/.local/share/virtualenvs/llm-p4p8CDpq/lib/python3.10/site-packages/click/core.py", line 1688, in invoke
    return _process_result(sub_ctx.command.invoke(sub_ctx))
  File "/Users/simon/.local/share/virtualenvs/llm-p4p8CDpq/lib/python3.10/site-packages/click/core.py", line 1434, in invoke
    return ctx.invoke(self.callback, **ctx.params)
  File "/Users/simon/.local/share/virtualenvs/llm-p4p8CDpq/lib/python3.10/site-packages/click/core.py", line 783, in invoke
    return __callback(*args, **kwargs)
  File "/Users/simon/Dropbox/Development/llm/llm/cli.py", line 590, in models_list
    if options and model_with_aliases.model.Options.model_fields:
AttributeError: type object 'Options' has no attribute 'model_fields'


simonw commented Aug 20, 2023

I'm going to drop model_fields from that implementation and switch to using Options.schema(), which is available in both Pydantic 1 and Pydantic 2.

(Pdb) model_with_aliases.model.Options.schema()
{'additionalProperties': False,
 'properties': {
   'temperature': {'anyOf': [{'maximum': 2, 'minimum': 0, 'type': 'number'}, {'type': 'null'}],
                   'default': None,
                   'description': 'What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.',
                   'title': 'Temperature'},
   'max_tokens': {'anyOf': [{'type': 'integer'}, {'type': 'null'}],
                  'default': None,
                  'description': 'Maximum number of tokens to generate.',
                  'title': 'Max Tokens'},
   'top_p': {'anyOf': [{'maximum': 1, 'minimum': 0, 'type': 'number'}, {'type': 'null'}],
             'default': None,
             'description': 'An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Recommended to use top_p or temperature but not both.',
             'title': 'Top P'},
   'frequency_penalty': {'anyOf': [{'maximum': 2, 'minimum': -2, 'type': 'number'}, {'type': 'null'}],
                         'default': None,
                         'description': "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
                         'title': 'Frequency Penalty'},
   'presence_penalty': {'anyOf': [{'maximum': 2, 'minimum': -2, 'type': 'number'}, {'type': 'null'}],
                        'default': None,
                        'description': "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
                        'title': 'Presence Penalty'},
   'stop': {'anyOf': [{'type': 'string'}, {'type': 'null'}],
            'default': None,
            'description': 'A string where the API will stop generating further tokens.',
            'title': 'Stop'},
   'logit_bias': {'anyOf': [{'type': 'object'}, {'type': 'string'}, {'type': 'null'}],
                  'default': None,
                  'description': 'Modify the likelihood of specified tokens appearing in the completion. Pass a JSON string like \'{"1712":-100, "892":-100, "1489":-100}\'',
                  'title': 'Logit Bias'}},
 'title': 'Options',
 'type': 'object'}

llm/llm/cli.py

Lines 590 to 606 in 7ca9231

if options and model_with_aliases.model.Options.model_fields:
    for name, field in model_with_aliases.model.Options.model_fields.items():
        type_info = str(field.annotation).replace("typing.", "")
        if type_info.startswith("Optional["):
            type_info = type_info[9:-1]
        if type_info.startswith("Union[") and type_info.endswith(", NoneType]"):
            type_info = type_info[6:-11]
        bits = ["\n  ", name, ": ", type_info]
        if field.description and (
            model_with_aliases.model.__class__
            not in models_that_have_shown_options
        ):
            wrapped = textwrap.wrap(field.description, 70)
            bits.append("\n    ")
            bits.extend("\n    ".join(wrapped))
        output += "".join(bits)
        models_that_have_shown_options.add(model_with_aliases.model.__class__)
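For contrast, a sketch of what the schema()-based version could look like, reusing the names from the snippet above (the TYPE_NAMES mapping back to Python-style names is my assumption, chosen to match the existing output, and not necessarily what the repo shipped):

import textwrap

TYPE_NAMES = {"number": "float", "integer": "int", "string": "str",
              "object": "dict", "array": "list", "boolean": "bool"}

schema = model_with_aliases.model.Options.schema()  # available in Pydantic 1 and 2
for name, prop in schema.get("properties", {}).items():
    # Pydantic 2 nests optional types in an anyOf list; Pydantic 1 puts the
    # type directly on the property, so fall back to the property itself
    subschemas = prop.get("anyOf", [prop])
    type_info = ", ".join(
        TYPE_NAMES.get(s["type"], s["type"])
        for s in subschemas
        if s.get("type") and s["type"] != "null"
    )
    bits = ["\n  ", name, ": ", type_info]
    if prop.get("description"):
        wrapped = textwrap.wrap(prop["description"], 70)
        bits.append("\n    " + "\n    ".join(wrapped))
    output += "".join(bits)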


simonw commented Aug 20, 2023

I think I can get a better presentation out of the schema information anyway.


simonw commented Aug 20, 2023

Current output for llm models --options looks like this:

OpenAI Chat: gpt-3.5-turbo (aliases: 3.5, chatgpt, turbo)
  temperature: float
    What sampling temperature to use, between 0 and 2. Higher values like
    0.8 will make the output more random, while lower values like 0.2 will
    make it more focused and deterministic.
  max_tokens: int
    Maximum number of tokens to generate.
  top_p: float
    An alternative to sampling with temperature, called nucleus sampling,
    where the model considers the results of the tokens with top_p
    probability mass. So 0.1 means only the tokens comprising the top 10%
    probability mass are considered. Recommended to use top_p or
    temperature but not both.
  frequency_penalty: float
    Number between -2.0 and 2.0. Positive values penalize new tokens based
    on their existing frequency in the text so far, decreasing the model's
    likelihood to repeat the same line verbatim.
  presence_penalty: float
    Number between -2.0 and 2.0. Positive values penalize new tokens based
    on whether they appear in the text so far, increasing the model's
    likelihood to talk about new topics.
  stop: str
    A string where the API will stop generating further tokens.
  logit_bias: dict, str
    Modify the likelihood of specified tokens appearing in the completion.
    Pass a JSON string like '{"1712":-100, "892":-100, "1489":-100}'


simonw commented Aug 20, 2023

It's just field_title: type on one line, followed by the wrapped description.


simonw commented Aug 20, 2023

One big irritation: pytest passes cleanly with Pydantic 1, but throws a ton of warnings under Pydantic 2:

========================================================================== warnings summary ===========================================================================
../../../.local/share/virtualenvs/llm-p4p8CDpq/lib/python3.10/site-packages/pydantic/_internal/_config.py:219
  /Users/simon/.local/share/virtualenvs/llm-p4p8CDpq/lib/python3.10/site-packages/pydantic/_internal/_config.py:219: PydanticDeprecatedSince20: Support for class-based `config` is deprecated, use ConfigDict instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.2/migration/
    warnings.warn(DEPRECATION_MESSAGE, DeprecationWarning)

tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
  /Users/simon/Dropbox/Development/llm/llm/cli.py:598: PydanticDeprecatedSince20: The `schema` method is deprecated; use `model_json_schema` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.2/migration/
    if options and model_with_aliases.model.Options.schema()["properties"]:

tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
  /Users/simon/.local/share/virtualenvs/llm-p4p8CDpq/lib/python3.10/site-packages/pydantic/main.py:1159: PydanticDeprecatedSince20: The `schema` method is deprecated; use `model_json_schema` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.2/migration/
    warnings.warn('The `schema` method is deprecated; use `model_json_schema` instead.', DeprecationWarning)

tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
tests/test_llm.py::test_llm_models_options
  /Users/simon/Dropbox/Development/llm/llm/cli.py:599: PydanticDeprecatedSince20: The `schema` method is deprecated; use `model_json_schema` instead. Deprecated in Pydantic V2.0 to be removed in V3.0. See Pydantic V2 Migration Guide at https://errors.pydantic.dev/2.2/migration/
    for name, field in model_with_aliases.model.Options.schema()[

-- Docs: https://docs.pytest.org/en/stable/how-to/capture-warnings.html
==== 69 passed, 17 warnings in 1.09s ====
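One way to quiet these warnings while keeping Pydantic 1 support is to call whichever schema method exists. A sketch, not necessarily the fix applied in the repo:

def options_schema(options_cls):
    # Pydantic 2 deprecates .schema() in favour of .model_json_schema(),
    # so prefer the new name when it exists to avoid the deprecation warning
    if hasattr(options_cls, "model_json_schema"):
        return options_cls.model_json_schema()
    return options_cls.schema()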


simonw commented Aug 20, 2023

Two mypy errors, only on the Pydantic 1.10.2 matrix:

llm/templates.py:12: error: Incompatible types (expression has type "str", TypedDict item "extra" has type "Extra")  [typeddict-item]
llm/default_plugins/openai_models.py:9: error: Module "pydantic" has no attribute "field_validator"  [attr-defined]

Those are from this code:

llm/llm/templates.py

Lines 6 to 12 in 36f8ffc

class Template(BaseModel):
    name: str
    prompt: Optional[str] = None
    system: Optional[str] = None
    model: Optional[str] = None
    defaults: Optional[Dict[str, Any]] = None
    model_config = ConfigDict(extra="forbid")

And the import shim that trips the second error, from llm/default_plugins/openai_models.py:

try:
    from pydantic import field_validator, Field
except ImportError:
    from pydantic.fields import Field
    from pydantic.class_validators import validator as field_validator  # type: ignore [no-redef]


simonw commented Aug 20, 2023

It looks like model_config = ConfigDict(extra="forbid") is from a feature I never finished implementing.

The idea was to allow templates to define their own model configuration options - things like temperature - but I never actually implemented it.
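For reference, the "reject unknown fields" configuration is spelled differently across the two major versions. A sketch with a hypothetical Example model, assuming nothing about how the repo resolved it:

import pydantic

if pydantic.VERSION.startswith("2"):
    from pydantic import BaseModel, ConfigDict

    class Example(BaseModel):
        name: str
        model_config = ConfigDict(extra="forbid")  # Pydantic 2 spelling

else:
    from pydantic import BaseModel

    class Example(BaseModel):  # type: ignore[no-redef]
        name: str

        class Config:  # Pydantic 1 spelling
            extra = "forbid"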

simonw added a commit that referenced this issue Aug 20, 2023

simonw commented Aug 20, 2023

Actually no, I'm wrong about that. It's a Pydantic 1 thing:

9ac120f


I think I may need to do some more work on Template - are the tests correctly exercising that code?


simonw commented Aug 20, 2023

The tests pass and manual testing of templates works on Pydantic 1 and Pydantic 2.

simonw closed this as completed Aug 20, 2023
simonw added this to the 0.8 milestone Aug 20, 2023

simonw commented Aug 21, 2023

Oops, tests are failing on Pydantic 1 because I ignored a Pydantic 2 warning in a way that broke pytest.

ERROR: while parsing the following warning configuration:

  ignore::pydantic.main.PydanticDeprecatedSince20

This error occurred:

Traceback (most recent call last):
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1761, in parse_warning_filter
    category: Type[Warning] = _resolve_warning_category(category_)
  File "/opt/hostedtoolcache/Python/3.8.17/x64/lib/python3.8/site-packages/_pytest/config/__init__.py", line 1800, in _resolve_warning_category
    cat = getattr(m, klass)
AttributeError: module 'pydantic.main' has no attribute 'PydanticDeprecatedSince20'
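pytest resolves the class named in a filterwarnings line at configure time, and pydantic.main.PydanticDeprecatedSince20 only exists in Pydantic 2. A version-safe alternative is to register the filter programmatically, for example in conftest.py (the placement is my assumption):

import warnings

try:
    from pydantic import PydanticDeprecatedSince20
except ImportError:
    PydanticDeprecatedSince20 = None  # Pydantic 1 has no such warning class

if PydanticDeprecatedSince20 is not None:
    warnings.filterwarnings("ignore", category=PydanticDeprecatedSince20)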
