
TST: Fix locally failing tests. #1549

Merged
merged 6 commits into statsmodels:master on Apr 4, 2014

Conversation

@jseabold (Member) commented Apr 3, 2014

This partially addresses #1521. I still don't see the grouputils failures. Is that pandas 0.11.0? If so, we can close that because we have 0.12.0 as the minimum now.

@jseabold (Member, Author) commented Apr 3, 2014

Should the GMM tests marked with np.testing.decorators.knownfailureif give an error? They do for me locally. It looks like the decorator runs, but instead of marking the test as a known failure it marks it as an error for me.

@josef-pkt (Member) commented Apr 3, 2014

When I run nosetests xxx on the command line, I always get the known failures reported as errors, IIRC. But they don't show up as errors in the Python shell.

I haven't checked in a while, but it was because of the way numpy.testing and nose interacted.

@jseabold (Member, Author) commented Apr 3, 2014

Yeah, that's what I see too. Not ideal for local testing.

I removed the generator in test_multi. All of the tests still run, but it doesn't list the whole shebang if we run the tests in verbose mode.

@jseabold (Member, Author) commented Apr 3, 2014

I'm punting on that. We should probably do conditional skips. I could've sworn we had known failures before that didn't raise an error from the command line.

@josef-pkt (Member) commented Apr 3, 2014

I think we only use skipif to avoid knownfailure.

@jseabold (Member, Author) commented Apr 4, 2014

Changed knownfailure to skipif.
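For readers following along, here is a minimal sketch of the semantic difference this thread settles on. The names below are hypothetical, not the real numpy.testing or nose API: a knownfailureif-style decorator still executes the failing test body and relies on the runner to interpret the resulting exception (which is where the numpy.testing/nose interaction can surface it as an error), while a skipif-style decorator bails out before the body ever runs.

```python
# Illustrative sketch only -- NOT the actual numpy.testing / nose API.
# knownfailureif runs the test and converts an expected failure into a
# KnownFailure; skipif skips the test body entirely when the condition holds.
import functools

class KnownFailure(Exception):
    """Signals that a test failed in an expected, documented way."""

class SkipTest(Exception):
    """Signals that a test was skipped before its body ran."""

def knownfailureif(condition, msg="known failure"):
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if not condition:
                return func(*args, **kwargs)
            try:
                func(*args, **kwargs)
            except Exception:
                # The expected failure occurred; re-raise it as "known",
                # not as an ordinary error.
                raise KnownFailure(msg)
            raise AssertionError("test passed unexpectedly")
        return wrapper
    return decorate

def skipif(condition, msg="conditional skip"):
    def decorate(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            if condition:
                # The test body never runs, so it cannot fail.
                raise SkipTest(msg)
            return func(*args, **kwargs)
        return wrapper
    return decorate
```

Whether a KnownFailure surfaces as a "known failure" or as an error depends entirely on whether the test runner recognizes that exception type, which is why switching to conditional skips sidesteps the runner-dependent behavior.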

jseabold added a commit that referenced this pull request Apr 4, 2014

Merge pull request #1549 from jseabold/test-fails
TST: Fix locally failing tests.

@jseabold jseabold merged commit 726bd90 into statsmodels:master Apr 4, 2014

1 check passed: continuous-integration/travis-ci (the Travis CI build passed)

@jseabold jseabold deleted the jseabold:test-fails branch Apr 4, 2014

@josef-pkt josef-pkt added the PR label Apr 14, 2014

PierreBdR pushed a commit to PierreBdR/statsmodels that referenced this pull request Sep 2, 2014
