
Where are all our testing machines? #1153

Closed · jseabold opened this issue on Oct 25, 2013 · 8 comments

@jseabold (Member) commented on Oct 25, 2013

We have Travis, pythonxy, nipy, and Ubuntu testing? Anything else? What are the URLs? If we collect them here, I'll make a PR adding a landing page for these to the docs.

@josef-pkt (Member) commented on Oct 25, 2013

That's it, just these three:

https://travis-ci.org/statsmodels/statsmodels/builds
http://nipy.bic.berkeley.edu/waterfall?category=statsmodels
https://code.launchpad.net/~pythonxy/+recipe/statsmodels-daily-current (*)

plus coverage
https://coveralls.io/r/statsmodels/statsmodels

We don't have a home yet for the vbench; the PR is waiting and ready.

(*) This one is currently red, which means a build failure, not a test failure. Test failures still leave it green.

All the other version-compatibility testing, from Python 2.6 to 3.3 and across various versions of our dependencies, happens on my computer when I run my tox tests.
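
For readers unfamiliar with tox, here is a minimal sketch of that kind of matrix; the environment list, dependencies, and nose invocation are illustrative assumptions, not the project's actual configuration:

```ini
# Illustrative sketch only: a tox matrix covering several Python versions.
# The environments, dependencies, and test command are assumptions.
[tox]
envlist = py26, py27, py32, py33

[testenv]
deps =
    numpy
    scipy
    pandas
    nose
commands =
    nosetests statsmodels
```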

@jseabold (Member, Author) commented on Oct 25, 2013

How do we get the build log of the failure?

@josef-pkt (Member) commented on Oct 25, 2013

I don't know.

The top line of the Ubuntu version has a build log, but I don't see a problem in there, nor do I know whether this is the right log.
Example: https://launchpadlibrarian.net/154937999/buildlog.txt.gz

Maybe an internal error, because it stopped after 20 seconds.

@jseabold (Member, Author) commented on Oct 25, 2013

I can set up the vbench as part of the doc builds. We just need to update this at release, right?

@josef-pkt (Member) commented on Oct 25, 2013

> We just need to update this at release, right?

Not sure what you mean.
We would like to run this at regular intervals, so we can get some feedback when we hurt or improve our performance.

@jseabold (Member, Author) commented on Oct 25, 2013

I just mean we don't need nightly vbench builds, so it's something I could do by hand when I upload release docs.

@jseabold (Member, Author) commented on Oct 25, 2013

I.e., you can run them by hand and check perf, but we only need to advertise the results at release.

@TomAugspurger (Contributor) commented on Oct 27, 2013

Yep, it wouldn't need to be run nightly since it goes through the git history. And then, before a potentially performance-sensitive commit, the submitter would run the benchmarks on the branch.

I may need to work on the test_perf part some more. It may have some issues on non-Linux systems. I can't even run the pandas vbench on my Mac right now.
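
For reference, a minimal sketch of what a vbench benchmark definition looks like, in the style of pandas' vb_suite; the statsmodels model, data sizes, and benchmark name below are illustrative assumptions, not entries from the pending PR:

```python
# Illustrative sketch only: a vbench benchmark definition in the style of
# pandas' vb_suite. The model, data sizes, and name are made up.
from vbench.api import Benchmark

setup = """
import numpy as np
import statsmodels.api as sm

np.random.seed(12345)
X = sm.add_constant(np.random.randn(10000, 5))
y = X.dot(np.ones(6)) + 0.1 * np.random.randn(10000)
"""

# vbench times the statement below against each revision in the git history
# and stores the results, so performance regressions show up between commits.
ols_fit = Benchmark("sm.OLS(y, X).fit()", setup, name="ols_fit_10000x5")
```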

jseabold closed this on Apr 2, 2014

jseabold added a commit to jseabold/statsmodels that referenced this issue Apr 3, 2014

PierreBdR pushed a commit to PierreBdR/statsmodels that referenced this issue Sep 2, 2014
