Fix type-safety of attribute access in BaseModel
#8651
Conversation
please review

CodSpeed Performance Report: merging #8651 will not alter performance.
Personally, I like type-checking tests, but that would be a big change on this repo that should be discussed separately. Maybe we can add this to the mypy or pyright tests?

It's a bit hard to tell from the diff, but I think all you did was indent these functions into a block?
Yes, no other changes, just the indentation of the two methods. Unfortunately, GitHub's diff algorithm is a bit suboptimal and makes the diff look worse than it is (it associates some lines incorrectly).
I will look into that to see where it would fit.
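The indentation change under discussion can be illustrated with a self-contained sketch. This is an assumption about what the diff does, not pydantic's actual code: dynamic attribute hooks defined only under `if not TYPE_CHECKING:` exist at runtime but are invisible to the type checker, so access to unknown attributes is reported as an error statically while still failing at runtime.

```python
from typing import TYPE_CHECKING


class Model:
    """Toy stand-in for BaseModel (hypothetical, not pydantic's code)."""

    _fields = {"foo"}

    def __init__(self, foo):
        object.__setattr__(self, "foo", foo)

    if not TYPE_CHECKING:
        # Defined at runtime only: the type checker never sees these
        # hooks, so `model.bar` is flagged as [attr-defined] by mypy
        # while still raising AttributeError when actually executed.
        def __getattr__(self, item):
            raise AttributeError(item)

        def __setattr__(self, name, value):
            if name not in self._fields:
                raise AttributeError(name)
            object.__setattr__(self, name, value)

        def __delattr__(self, name):
            if name not in self._fields:
                raise AttributeError(name)
            object.__delattr__(self, name)
```

Because the `if` branch is skipped when `TYPE_CHECKING` is true, the checker falls back to the default attribute rules and rejects `model.bar`, which is exactly the behaviour the PR restores.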
I had a closer look now, but it feels like this is not quite the right place for it, no? At first glance these seem more like "meta tests" that are responsible for executing the type checkers, managing versions, and handling results. It is not quite the right place to actually type-check certain aspects of the API interface. Also, the modules in that area seem to serve a different purpose.
@samuelcolvin how would you feel about enabling type-checking for tests and using includes/excludes to either (1) exclude all current files or (2) exclude all files and whitelist only some (I'm not sure this is possible)?
This should be possible in general. At least I've successfully used something like this in mypy.ini's (pseudo-code, didn't look up the exact syntax; note that section names are module paths, so no `.py` suffix):

```ini
# blacklist tests
[mypy-tests.*]
ignore_errors = True

# whitelist specific modules/packages
[mypy-tests.test_some_module]
ignore_errors = False

[mypy-tests.core.*]
ignore_errors = False
```

I'm not quite sure though if mypy's filtering requires proper "package" semantics to work (which may require placing …).
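For what it's worth, mypy also supports equivalent per-module overrides in `pyproject.toml` (a sketch of the documented syntax; patterns are module paths, not file names):

```toml
# pyproject.toml equivalent of the mypy.ini sections above
[[tool.mypy.overrides]]
module = "tests.*"
ignore_errors = true

[[tool.mypy.overrides]]
module = ["tests.test_some_module", "tests.core.*"]
ignore_errors = false
```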
I really don't want to add type checking to tests. We have mypy tests; let's use them, and if they're not sufficient, improve them.
I'm no fan of "everything should be type hinted, everything should be type checked" absolutism; it's as wrong-headed as the "I don't need type hints" crowd. What we should do, however, is significantly simplify our "mypy tests" and convert them to "static-typing tests"; they should be made up of: …

All those files should be imported and not raise an error (via a parameterised test), thereby confirming their runtime behaviour is correct.
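The parameterised import test described above could be sketched roughly like this (the module names are stdlib placeholders, not the repo's real layout; in a pytest suite the loop would become a `@pytest.mark.parametrize` test):

```python
import importlib

# Placeholder module list; in the repo this would enumerate the
# "static-typing test" files instead of stdlib modules.
TYPING_TEST_MODULES = ["json", "textwrap"]


def check_imports(module_names):
    """Import each module; any exception raised during import means the
    file's runtime behaviour does not match what its annotations promise."""
    for name in module_names:
        importlib.import_module(name)
    return True
```

Importing the files is enough because each static-typing test file executes its assertions at module level, so a clean import confirms the runtime side.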
Okay, then let's add relnotes and merge this without adding type-checking tests.
Of course, I am not familiar enough with the existing code and test infrastructure to really judge what is best suited, but here are a few thoughts on this.
The point is that runtime behavior and type-checking semantics are not necessarily two separate tasks. On the contrary: it can be easier to look at them jointly. For instance, if you look at a typical test that requires a `# type: ignore`:

```python
class UltraSimpleModel(BaseModel):
    a: float


m = UltraSimpleModel(a=10.2)
with pytest.raises(ValueError):
    m.c = 20  # type: ignore[attr-defined]
```

it can be quite satisfying to see that the runtime expectation and the type-checker expectation line up.
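The mechanism behind this is mypy's `warn_unused_ignores` option: with it enabled, a `# type: ignore` on a line that actually type-checks cleanly is itself reported as an error, so each ignore doubles as an assertion that the line is rejected by the type checker:

```ini
# mypy.ini
[mypy]
warn_unused_ignores = True
```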
For what it's worth, we have experimented with both approaches and have had good experiences. Seeing the alignment of runtime behavior with the type-checking semantics can be quite valuable. For instance, we have discovered mismatches through cases like:

```python
import pytest
from pydantic import BaseModel


def test_attribute_semantics():
    class Model(BaseModel):
        foo: int

    model = Model(foo=42)
    with pytest.raises(AttributeError):
        model.non_existing_field  # type: ignore[attr-defined]
    with pytest.raises(AttributeError):
        model.non_existing_field = ...  # type: ignore[attr-defined]
    with pytest.raises(AttributeError):
        del model.non_existing_field  # type: ignore[attr-defined]
```

On the …
Me neither, and that's not what I wanted to suggest. I rather had a lightweight mypy profile in mind. For instance, I'd always advocate for …
This could cause some duplication because often the same code makes sense as a runtime check and as a type system check, like the cases above.
I may be misunderstanding the idea, but testing for "does not type check" typically exactly implies "does raise an error at runtime", no? Doesn't that lead to the need for wrappers similar to …
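The kind of wrapper hinted at might look like this hypothetical helper, which expresses the runtime half of the check while the caller's `# type: ignore[...]` comment expresses the static half of the very same expectation:

```python
def expect_raises(exc_type, fn):
    """Hypothetical wrapper: run `fn` and assert it raises `exc_type`,
    mirroring what pytest.raises does at runtime; the caller adds a
    `# type: ignore[...]` on the offending expression so that mypy's
    warn_unused_ignores verifies the static side of the expectation."""
    try:
        fn()
    except exc_type:
        return True
    raise AssertionError(f"{exc_type.__name__} was not raised")


# Usage sketch: the attribute access is both a runtime check and,
# with an ignore comment at the call site, a type-system check.
expect_raises(AttributeError, lambda: getattr(object(), "nope"))
```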
@adriangb I couldn't figure out what exactly I have to do. If it refers to the GitHub labels, I don't seem to have the rights to add one. Or should I write something in a changelog somewhere?
Change Summary

This PR fixes the type-safety of attribute access of `BaseModel`. I noticed that in addition to `__setattr__`, `__delattr__` should get the same treatment. Otherwise the same issue would arise with `del model.non_existing_field` (albeit deleting fields is probably a bit strange to begin with).

Related issue number

fix #8643
Checklist

Regarding tests: Are you familiar with the notion of type tests via `# type: ignore`? I wanted to add such a type test to make sure there cannot be a regression again. The idea is to leverage mypy's `warn_unused_ignores` feature, which you have already enabled as far as I can see. The nice thing about the feature is that `# type: ignore`s are allowed if and only if a particular line does not type check. This enables library authors to verify their interface in terms of typing guarantees. What I wanted to add is something like this to `test_main.py`:

Checking this snippet with mypy would fail on the `main` branch, because the last two lines actually don't produce type errors. After the change in `main.py` they behave correctly, and the type ignores become mandatory as one would expect. Therefore the snippet could guarantee that `__getattr__`/`__setattr__`/`__delattr__` don't accidentally leak towards the type checker again.

Unfortunately it looks like the tests are not type checked, i.e., the test does something only when I run `mypy tests/test_main.py` explicitly, but running just `make` seems to ignore all type errors in the tests. Could you advise on how to proceed with the test?

Wouldn't it make sense to type check the tests as well? Currently they seem to have quite a lot of type errors, so this would be a bit of work, but it could add some value. In general it is always nice to see that a test that does something with `pytest.raises` actually requires a `# type: ignore`, because it indicates that the runtime behavior and the type-checker-time behavior are in line! So by forcing oneself to place these `# type: ignore`s on the "expected errors", it gets pretty clear how well the typings match the actual behavior. What do you think?

Selected Reviewer: @adriangb