Bugfix for significant_digits (signed zero); Numpy-friendlyness; Issue #49; Failing tests for #50; #51

Merged
merged 7 commits into from
Nov 30, 2016

Conversation

Bernhard10
Contributor

This pull request makes several small changes. Feel free to merge only selected commits if desired.
Most commits contain changes to the main source code and tests.

@Bernhard10 Bernhard10 changed the title This pullrequests makes several small changes. Bugfix for significant_digits (signed zero); Numpy-friendlyness; Issue #49; Failing tests for #50; Nov 9, 2016
@victorhahncastell victorhahncastell self-assigned this Nov 9, 2016
@victorhahncastell
Contributor

Hi @Bernhard10,

thanks for your PR! I'm gonna go ahead and assign this to myself as it concerns stuff I originally introduced. I'm not sure when exactly I'll get the chance to look into this, but I certainly will :)

Best,
Victor

@seperman seperman added the bug label Nov 14, 2016
@seperman
Owner

Hey @Bernhard10
Thanks for reporting the issue and the PR! I made some aesthetic changes to the code and variable names to make it easier to understand. Your PR has some merge conflicts now. Could you please resolve the conflicts so we can review it? The main changes you will notice are that `ref` has become `tree` and there are more docstrings.

Now signed zeros compare equal, i.e. 0.00 compares equal to -0.00 and
1*10**-12 compares equal to -1*10**-12, if significant_digits is less than 12
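A minimal illustration of the signed-zero behavior described above (not deepdiff's actual implementation):

```python
# Rounding a tiny negative number yields -0.0, which equals 0.0
# numerically but prints with a leading minus sign, so a string-based
# comparison of the rounded values would report a spurious difference.
a, b = 1 * 10 ** -12, -1 * 10 ** -12
digits = 5
ra, rb = round(a, digits), round(b, digits)
print(ra == rb)                          # True: 0.0 == -0.0
print(("%.5f" % ra) == ("%.5f" % rb))    # False: "0.00000" vs "-0.00000"
```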
…calling `if self.down.t1:`

This is necessary because numpy breaks convention and raises an error when converting an array to a bool.
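A stand-in sketch of the problem; `ArrayLike` here mimics numpy's behavior so the example runs without numpy installed:

```python
class ArrayLike:
    """Stand-in for a numpy array: bool() on it raises, as numpy does
    for arrays with more than one element."""
    def __bool__(self):
        raise ValueError("The truth value of an array is ambiguous.")

t1 = ArrayLike()

try:
    if t1:            # implicit bool() conversion, as in `if self.down.t1:`
        pass
except ValueError:
    print("implicit truth test raised")

if t1 is not None:    # explicit identity check avoids bool() entirely
    print("explicit check is safe")
```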
…sets.

Sets use hashing for the diff, so `__diff_numbers` is currently not invoked,
which is why the tests currently fail.
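A small example of why hashing bypasses number-by-number comparison (illustrative, not the failing tests themselves):

```python
# Sets match members by hash and exact equality, so two floats that
# agree to 5 significant digits are still distinct set members and
# never reach a tolerant numeric comparison.
s1 = {1.0000001}
s2 = {1.0000002}
print(s1 - s2)                                      # {1.0000001}: seen as different
print(round(1.0000001, 5) == round(1.0000002, 5))   # True: equal to 5 digits
```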
@Bernhard10
Contributor Author

Hey @seperman
I rebased my pull request onto the newest version of dev.

@seperman
Owner

Hey @Bernhard10 Thanks for rebasing your PR. It seems like 2 of your tests are failing. Can you please check them out?

@Bernhard10
Contributor Author

Hi @seperman
Yeah, they fail on purpose. They are just there to document issue #50.

I guess actually fixing this issue is not so trivial. I could try to work on it, but it would probably take me a few weeks.

So for now, we can either remove the two tests or mark them as `@unittest.skip` or `@unittest.expectedFailure`
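For reference, marking a known-failing test looks like this; the test name and body here are placeholders, not the actual issue #50 tests:

```python
import unittest

class TestIssue50(unittest.TestCase):
    @unittest.expectedFailure
    def test_significant_digits_with_sets(self):
        # Placeholder body standing in for the real failing test:
        # the suite treats this failure as expected, so the run stays green.
        self.assertEqual({1.0000001}, {1.0000002})

if __name__ == "__main__":
    unittest.main()
```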

@seperman
Owner

@Bernhard10 Cool. Yes please mark them with `@unittest.expectedFailure` so we can merge this PR. Then when you have a chance please look at the failed cases. I'm focusing on some other bugs at the moment. Maybe @victorhahncastell will be available later to look into the failed cases too! Thanks!

@coveralls

coveralls commented Nov 24, 2016

Coverage Status

Coverage remained the same at 100.0% when pulling 5418868 on Bernhard10:dev into 8a33302 on seperman:dev.


PyPy uses its own fork of numpy, but its numpy support is not yet complete.
@coveralls

Coverage Status

Coverage remained the same at 100.0% when pulling 4c85626 on Bernhard10:dev into 8a33302 on seperman:dev.


@Bernhard10
Contributor Author

Some tests fail on PyPy, but this seems to be a bug in Travis CI, which tries to install the wrong version of numpy (PyPy has its own fork).
See travis-ci/travis-ci#3629

Skipping these tests on PyPy does not seem to work either.

These tests are necessary because a user might want to compare objects containing numpy arrays with deepdiff. Unfortunately, the universal `if iterable:` check raises an error if `iterable` is a numpy array.

@coveralls

coveralls commented Nov 24, 2016

Coverage Status

Coverage remained the same at 100.0% when pulling f5fec5e on Bernhard10:dev into 8a33302 on seperman:dev.

@seperman
Owner

@Bernhard10 Thanks for digging into the issue with Travis CI. We have some tests that detect whether they are run under PyPy and then behave differently. So I think maybe we can work around the Travis CI bug that way for now, until they fix it.
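One common way to detect the interpreter at runtime and skip numpy-dependent tests on PyPy; the test class and name here are illustrative, not from the deepdiff suite:

```python
import platform
import unittest

# platform.python_implementation() returns "CPython", "PyPy", etc.
ON_PYPY = platform.python_implementation() == "PyPy"

class TestNumpyDependent(unittest.TestCase):
    @unittest.skipIf(ON_PYPY, "numpy support on PyPy is incomplete")
    def test_runs_only_where_numpy_works(self):
        self.assertTrue(True)

if __name__ == "__main__":
    unittest.main()
```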

@seperman seperman merged commit 50e4b76 into seperman:dev Nov 30, 2016
@Bernhard10 Bernhard10 deleted the dev branch November 30, 2016 07:25