Test failures: newton_cg and newton_cg_backtracking #2
I have successfully built and installed Optizelle, but

make test

reports two errors: newton_cg and newton_cg_backtracking. I have activated the Python, C++, and OpenMP bindings. I have only tested Release mode, with the O2 and O3 flags, and with both march=native and the default. I am on a Fedora 20 box with GCC 4.8.3, jsoncpp 0.6.0 release 0.11.rc2 from the repositories, and CMake 2.8.12.2.

Comments
Well, funny enough, that's normal and expected. Unless something new is going on, these tests are probably failing because the number of iterations changed by 1 or 2. If you wouldn't mind verifying, run ctest -V -I 170,171,1 and ctest -V -I 172,173,1 and let me know what it says. As a longer discussion, most of these tests check two primary things: one, that we get the right answer, and two, that the number of iterations remains the same. Generally, if I make an algorithmic change and really mess things up, the number of iterations changes dramatically, and I like to be alerted to that. Unfortunately, due to numerical differences between platforms, one or two problems have their iteration count change when running on a new machine. Really, this isn't an issue with the problem, but a sign that my testing infrastructure needs to be improved to account for it. I've been meaning to add an issue about this, so I'm glad you reported it, since that means I'll be forced to fix it now.
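For reference, ctest's -I option takes a Start,End,Stride triple and reruns just that slice of the test list, while -V prints each test's full output. So the two commands above amount to:

    # Rerun tests 170 through 171, then 172 through 173, verbosely:
    ctest -V -I 170,171,1
    ctest -V -I 172,173,1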
Maybe I can take a look, but I lack the expertise to know how much of a difference is significant. The results here seem fair enough, though.
Yup. Just what I thought. Nothing is broken except for my testing mechanisms. I'll try to fix this with something clever.
Just tried on another platform, a modern i7 with the same OS, and I get an extra failure in test 45, for

Just as an extra data point.
I've been trying to figure out a good way to fix this and I'm at a bit of a loss. Really, I have the iteration check in the unit tests in order to figure out when I've screwed up the algorithms with a change. However, this does appear to be slightly platform dependent. In any case, I'm hesitant to put in an iteration range rather than a fixed iteration count because I really need to see when I've changed any of the algorithms. Really, I'd like to have two test modes: neurotic, which checks the exact iteration count, and normal, which uses a range. Maybe there's a way to trick ctest into doing this.
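One way to get both modes without tricking ctest is CTest's label support: register each check twice, label the exact-iteration variant, and exclude that label by default. A minimal sketch, where the test names and the --iter-tolerance flag are hypothetical stand-ins for however the unit tests actually take their pass/fail criteria:

    # CMakeLists.txt sketch (hypothetical test names and flags):
    # the range-based check runs everywhere; the exact check only on demand.
    add_test(NAME newton_cg_range COMMAND unit_newton_cg --iter-tolerance 2)
    add_test(NAME newton_cg_exact COMMAND unit_newton_cg --iter-tolerance 0)
    set_tests_properties(newton_cg_exact PROPERTIES LABELS "neurotic")

    # Normal mode skips the labeled tests:    ctest -LE neurotic
    # Neurotic mode runs only those tests:    ctest -L neurotic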
Python's nose has an option to mark certain tests as slow, so they are not run by default but do run in 'full' mode. If ctest has something similar, you could have two tests: one with a range, and another "neurotic" one with a fixed count, marked as slow.
I found a possible option: http://www.mantidproject.org/Writing_Performance_Tests. Set the neurotic ones up as performance tests.
Good call. I'll have a look at this. |
Some update: on develop, on a Fedora 23 laptop with an i5, GCC 5.1.1, and compilation flags
This is a manifestation of the same problem, which will likely require the same fix. Basically, the
and set