AcceptedExtra not working as expected with dicts #47

Closed · TheOtherDude opened this issue May 3, 2019 · 3 comments

@TheOtherDude

I expected with AcceptedExtra(): to ignore keys in the data that are missing from the requirement dict, but instead it raises a Deviation against None.

Here is an example:

actual = {'a': 1, 'b': 2}
expected = {'b': 2}
with AcceptedExtra():
    validate(actual, requirement=expected)

The output is:

E           ValidationError: does not satisfy mapping requirements (1 difference): {
                'a': Deviation(+1, None),
            }

Thanks for the cool package, by the way!

shawnbrown (Owner) commented May 4, 2019

This is working as intended (under the current API) but your expectation is entirely reasonable and I think datatest should be changed to use an Extra difference in this case. For legacy reasons, it currently uses a Deviation when numbers are checked against no value.

I can resolve this in the upcoming 0.9.6 release when I clean up the acceptance API.

Below, I'll walk through your example to explain what's going on and why it was intentional—those legacy reasons. Then I'll describe the fix I have in mind.

Here's a version of your code that doesn't include an acceptance:

from datatest import validate

actual = {'a': 1, 'b': 2}
expected = {'b': 2}
validate(actual, expected)

As you noted, this fails with:

ValidationError: does not satisfy mapping requirements (1 difference): {
    'a': Deviation(+1, None),
}

Because the difference type is a Deviation, using accepted.extra() doesn't catch it. The difference can, however, be accepted with the following:

from datatest import validate, accepted

actual = {'a': 1, 'b': 2}
expected = {'b': 2}
with accepted.deviation(1):
    validate(actual, expected)

But look at this modified version of your example (using strings instead of integers):

from datatest import validate

actual = {'a': 'x', 'b': 'y'}
expected = {'b': 'y'}
validate(actual, expected)

Now this fails with that Extra difference you were expecting:

ValidationError: does not satisfy mapping requirements (1 difference): {
    'a': Extra('x'),
}

Here is what's going on: The type of difference used is determined by the type of object being checked. And with mappings, there can be situations (as in your example) where values don't have anything to be compared against—a sort of null condition.

Datatest tries to generate sensible differences for these value-vs-no-value situations. When I designed the behavior, I was trying to make it interoperate with the acceptances as smoothly as possible. But because the acceptances weren't as flexible as they are now, I had to make some tough choices. I decided on the following:

  • obj vs. no value -> Extra(obj)
  • no value vs. obj -> Missing(obj)
  • number vs. no value -> Deviation(+number, None)
  • no value vs. number -> Deviation(-number, number)

I really hated that last one, but it was the least awful thing to do at the time. This scheme helped the workflow when adding acceptances, but there's no reason to keep doing it and it's not the behavior people expect to see.
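
For completeness, here is a minimal sketch of the opposite direction (a key present in the requirement but absent from the data), assuming the 0.9.5 behavior described in the list above:

from datatest import validate

actual = {'b': 2}
expected = {'a': 1, 'b': 2}

# Under the legacy scheme, the missing numeric value for key 'a' is
# reported as Deviation(-1, 1) (the "no value vs. number" case); a
# string value in the same position would be reported as Missing(...).
validate(actual, expected)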

The fix I have in mind

I think I can unify the no value comparison behavior as follows:

  • obj or number vs. no value -> Extra(<obj or number>)
  • no value vs. obj or number -> Missing(<obj or number>)

And then replace the current accepted.deviation() with a more capable accepted.tolerance() that can handle numeric differences regardless of type (not just Deviation objects).

This would make it so your original example would fail with:

ValidationError: does not satisfy mapping requirements (1 difference): {
    'a': Extra(1),
}

Which could then be accepted with...

from datatest import validate, accepted, Extra

actual = {'a': 1, 'b': 2}
expected = {'b': 2}
with accepted(Extra):  # <- new format in upcoming API
    validate(actual, expected)

...or with...

from datatest import validate, accepted

actual = {'a': 1, 'b': 2}
expected = {'b': 2}
with accepted.tolerance(1):  # Accepts +/- 1
    validate(actual, expected)

I'll follow up when I start to address this.

shawnbrown added a commit that referenced this issue May 7, 2019
This is a first step towards addressing issue #47.
shawnbrown added a commit that referenced this issue May 8, 2019
@TheOtherDude (Author)

That makes sense. Thanks!

shawnbrown added a commit that referenced this issue May 9, 2019
…gs().

These changes should allow for a broader range of quantitative types
beyond datetimes and strictly-numeric types.

Related to issue #47.
shawnbrown added a commit that referenced this issue May 12, 2019
After testing this in production, it will be added to the docs
and the existing AllowedDeviation methods will be deprecated.

Related to issue #47.
shawnbrown added a commit that referenced this issue May 18, 2019
Now returns Missing and Extra differences when checking numbers
against a NOVALUE sentinel. See issue #47 for details.
@shawnbrown (Owner)

This has been resolved in the development version for a few weeks but I just published a new stable release (0.9.6) so the fix is available from PyPI now, too.
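
For reference, here is the original example under the new behavior (a quick sketch, assuming the 0.9.6 API described above):

from datatest import validate, accepted, Extra

actual = {'a': 1, 'b': 2}
expected = {'b': 2}

# The extra key 'a' now produces Extra(1) instead of Deviation(+1, None),
# so it can be accepted by difference type.
with accepted(Extra):
    validate(actual, expected)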

Thanks @TheOtherDude for filing the issue.
