
this change fixes an issue we encountered with pydantic validator… #899

Merged: 3 commits into fastapi:master on Feb 4, 2020

Conversation

deuce2367
Contributor

… decorators

- create_model() is now called using a __base__ parameter vs. a __config__ parameter
- isinstance comparisons now work as expected
- unit test created for #889 still passes as before
- apply black formatting

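For illustration (this sketch is not from the PR), the isinstance point can be mimicked with plain Python classes: `type()` stands in for pydantic's `create_model()`, and the difference between the old `__config__` call and the new `__base__` call is whether the clone is a subclass of the original model.

```python
# Hypothetical sketch: type() stands in for pydantic's create_model().
# A clone built without a base (old __config__ approach) has no class
# relationship to the original, while a clone built with the original as
# its base (new __base__ approach) is a subclass, so isinstance works.

class OriginalModel:  # stands in for a user's pydantic model
    pass

# Old approach: a brand-new class, unrelated to OriginalModel
clone_via_config = type("OriginalModel", (object,), {})

# New approach: a subclass of OriginalModel
clone_via_base = type("OriginalModel", (OriginalModel,), {})

print(issubclass(clone_via_config, OriginalModel))  # False
print(issubclass(clone_via_base, OriginalModel))    # True
print(isinstance(clone_via_base(), OriginalModel))  # True
```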
@codecov

codecov bot commented Jan 21, 2020

Codecov Report

Merging #899 into master will not change coverage.
The diff coverage is 100%.


@@          Coverage Diff          @@
##           master   #899   +/-   ##
=====================================
  Coverage     100%   100%           
=====================================
  Files         293    293           
  Lines        7692   7701    +9     
=====================================
+ Hits         7692   7701    +9
Impacted Files Coverage Δ
tests/test_filter_pydantic_sub_model.py 100% <100%> (ø) ⬆️
fastapi/utils.py 100% <100%> (ø) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update 55afb70...c802d3b.

- use_type = create_model(
-     original_type.__name__, __config__=original_type.__config__
- )
+ use_type = create_model(original_type.__name__, __base__=original_type)

@Code0x58 Code0x58 Jan 30, 2020


This change looks correct to me, based on the current v1.4 definition, and old v0.32 definition.

We encountered the same issue, so great to see a fix!

fastapi/utils.py Outdated
- use_type = create_model(
-     original_type.__name__, __config__=original_type.__config__
- )
+ use_type = create_model(original_type.__name__, __base__=original_type)
  for f in original_type.__fields__.values():
      use_type.__fields__[f.name] = create_cloned_field(f)
  use_type.__validators__ = original_type.__validators__


I think this line is redundant now

Contributor Author


It does seem like there may be a simpler way to refactor this bit. However, I found that if I remove the create_model() call on line 96, two of the unit tests fail, so I don't know of a better solution for now:

======================================================== FAILURES =========================================================
__________________________________________________ test_filter_sub_model __________________________________________________

    def test_filter_sub_model():
        response = client.get("/model")
        assert response.status_code == 200
>       assert response.json() == {
            "name": "model-a-name",
            "description": "model-a-desc",
            "model_b": {"username": "test-user"},
        }
E       AssertionError: assert {'description...model-a-name'} == {'description...model-a-name'}
E         Omitting 2 identical items, use -vv to show
E         Differing items:
E         {'model_b': {'password': 'test-password', 'username': 'test-user'}} != {'model_b': {'username': 'test-user'}}
E         Use -v to get the full diff

tests/test_filter_pydantic_sub_model.py:92: AssertionError
_______________________________________________________ test_token ________________________________________________________

    def test_token():
        access_token = get_access_token(scope="me")
        response = client.get(
            "/users/me", headers={"Authorization": f"Bearer {access_token}"}
        )
        assert response.status_code == 200
>       assert response.json() == {
            "username": "johndoe",
            "full_name": "John Doe",
            "email": "johndoe@example.com",
            "disabled": False,
        }
E       AssertionError: assert {'disabled': ...Gga31lW', ...} == {'disabled': ...e': 'johndoe'}
E         Omitting 4 identical items, use -vv to show
E         Left contains 1 more item:
E         {'hashed_password': '$2b$12$EixZaYVK1fsbw1ZfbX3OXePaWxn96p36WQoeG6Lruj3vjPGga31lW'}
E         Use -v to get the full diff

tests/test_tutorial/test_security/test_tutorial005.py:231: AssertionError
============================================== 2 failed, 744 passed in 7.04s ==============================================
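For context, the failing tests above exercise response-model filtering: only fields declared on the response model should survive serialization, which is why `password` and `hashed_password` must not leak into the responses. A minimal sketch (not FastAPI's actual implementation) of that filtering idea:

```python
# Hypothetical sketch of response-model filtering: only fields declared
# on the response model are kept, so 'password' is dropped even though
# it is present in the raw data.
declared_fields = {"username"}  # fields declared on the response model
raw = {"username": "test-user", "password": "test-password"}

filtered = {key: value for key, value in raw.items() if key in declared_fields}
print(filtered)  # {'username': 'test-user'}
```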



I found that removing line 99 still lets the tests pass:

--- a/fastapi/utils.py
+++ b/fastapi/utils.py
@@ -96,7 +96,6 @@ def create_cloned_field(field: ModelField) -> ModelField:
         use_type = create_model(original_type.__name__, __base__=original_type)
         for f in original_type.__fields__.values():
             use_type.__fields__[f.name] = create_cloned_field(f)
-        use_type.__validators__ = original_type.__validators__
     if PYDANTIC_1:
         new_field = ModelField(
             name=field.name,

Contributor Author


Nice, I found the same. I've updated the branch and unit tests.

@@ -8,10 +8,20 @@
 class ModelB(BaseModel):
     username: str

+    @validator("username")
+    def check_username(cls, username, values):


Does adding this test a new scenario? If the tests here don't trigger a failure case, I don't think adding these two validators adds anything, whereas testing for failure would make sure that the validators are inherited (worth checking in other places to make sure that isn't covered elsewhere, as this module seems to have a specific scenario in mind).

@tiangolo tiangolo merged commit 70bdade into fastapi:master Feb 4, 2020
@tiangolo
Member

tiangolo commented Feb 4, 2020

Thanks for your contribution @deuce2367 ! 🎉 🍰

And thanks for the review and analysis @Code0x58 ! 🤓 🔍

I updated it a bit to test that the validator is preserved and merged it 🎉 🚀
