
Attempt at making travis output shorter. #1179

Closed
wants to merge 1 commit

Conversation

@pelson (Member) commented Aug 31, 2012

I think this makes the travis-ci output better (it's easier to get down to the test failures). Obviously it hides standard output, but not standard error, which is also helpful.
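
The diff itself isn't shown in this extract, but the change being described (hiding stdout while keeping stderr) is typically a one-line redirection in the build script, along these lines:

```bash
# Illustrative only -- not the PR's actual diff.
# Discard stdout but leave stderr attached to the log, so errors stay visible:
python setup.py install > /dev/null
```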

@mdboom (Member) commented Aug 31, 2012

I'm not a fan of this -- if the Numpy build fails, I want to know about it, and why. If it succeeds, I'm fine with throwing the output away. Maybe something that pipes the output to a temporary file and then cats it if the command fails?
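
A minimal sketch of that idea (the wrapper name and structure here are illustrative, not part of the PR):

```bash
# Run a command with its output captured to a temp file, and replay the
# full log only if the command fails.
quiet_run () {
    local log status
    log=$(mktemp)
    if "$@" > "$log" 2>&1; then
        status=0
    else
        status=$?
        cat "$log"   # something went wrong: show everything
    fi
    rm -f "$log"
    return $status
}

# e.g. in .travis.yml:  quiet_run pip install numpy
```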

@travisbot

This pull request fails (merged 8d24560 into cf7618c).

@pelson (Member, Author) commented Aug 31, 2012

"if the Numpy build fails I want to know about it, and why"

Really? There is nothing matplotlib could do that would cause it to fail. It would be purely a numpy or travis issue, which, admittedly, somebody would need to chase down at some point.

My feeling is that this is a helpful change, but if that is not the general consensus then let's close this PR down.

@travisbot

This pull request fails (merged 8d24560 into cf7618c).

@mdboom (Member) commented Aug 31, 2012

In my experience, whenever something is removed from the build log, the next failing build is exactly when you need that information.

The Numpy output is useful even though it's not our project. If it starts to emit compiler warnings because Travis updated their compiler, and that results in the int64 type being off (this actually happened), I want to see the Numpy compilation output so I know why our matplotlib tests are failing. If we were installing binary Numpy packages from a major distribution, I'd probably feel differently, but as long as we're building from source, I think the compiler output is useful.

For a similar reason, I don't think we should throw stdout away in the matplotlib build -- setup.py prints a lot of information about the environment that is useful for tracking down and reproducing bugs.

I am, however, all for tools that reformat the output to draw attention to the more important parts; I just don't like the idea of throwing anything away.

@pwuertz (Contributor) commented Aug 31, 2012

I think this will resolve itself once travis supports build artifacts. At the moment, stdout is the only information you can get from a worker.

Once travis offers saving output and/or files, one could do the following:

Set up a new repository that uses the travis environment to build dependencies like numpy, build a tarball, and upload it to github. Any problems regarding the dependencies can be resolved there.

Then, for testing matplotlib master, one simply downloads and unpacks the tarballs. No lengthy and unnecessary compilation steps required. Stdout only needs to show the important parts; everything else could be piped to logfiles that can also be saved using the future artifact feature.

Unfortunately, there is no way of securely uploading data from workers at the moment, so this has to wait until the announced feature arrives.
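
A rough sketch of what such an install step could look like once prebuilt tarballs exist somewhere (the URL and filename below are hypothetical):

```bash
# Hypothetical: fetch a prebuilt numpy instead of compiling it on the worker.
curl -L -o deps.tar.gz "https://example.com/mpl-travis-deps/numpy-py27.tar.gz"
tar -xzf deps.tar.gz -C "$VIRTUAL_ENV"   # unpack into the active virtualenv
```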

@NelleV (Member) commented Aug 31, 2012

I also like having the whole build output. It gives good information when there's a test failure. Hopefully, soon, we won't have to look at the output at all :)

Speaking of the travis output, there's currently a test failing because it cannot import test_figure. I had the same error on my computer and fixed it by resetting my development environment from scratch: indeed, I noticed that test_figure had not been installed properly (I do not know why). The travis instance seems to install it properly, as it compiles it to bytecode, but, if we can, it would be worth checking that the file is indeed there.

Running nosetests -s lib worked fine on my computer even when test_figure was not installed properly, as it runs the tests from the local source tree (on the other hand, it doesn't know about the "KnownFailure" plugin, and you have to build in place before running this command).
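
One quick way to check that the module really made it into the install (a sanity check suggested by the above, not something settled in this thread):

```bash
# Fails loudly if matplotlib.tests.test_figure is missing from the install:
python -c "from matplotlib.tests import test_figure; print(test_figure.__file__)"
```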
