
Automatic testing on Travis with reference images #156

Closed · rossant opened this issue Feb 28, 2014 · 16 comments

@rossant (Member) commented Feb 28, 2014:

Requires a basic image registration algorithm. I think some work may have been done during the code camp?

Related to #89.

@larsoner (Member) commented Jul 8, 2014:

@lcampagn your idea of a git-based system would work for developers, but not all users would have git installed in the first place (and then vispy.test() would fail). We could instead have a system where individual files are downloaded as needed by tests; if we ever need to replace one, we just move to a different filename. It's not ideal, but it would work for most use cases and be easy to add, modify, and understand.
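As a rough illustration of that download-as-needed idea, here is a minimal sketch; the base URL, cache directory, and function name are assumptions for illustration, not part of vispy:

```python
# Hypothetical sketch of a download-on-demand fetcher for reference images.
# The base URL, cache directory, and function name are assumptions here.
import os
from urllib.request import urlretrieve

BASE_URL = "https://raw.githubusercontent.com/vispy/test-data/master/"
CACHE_DIR = os.path.expanduser("~/.vispy/test-data")

def fetch_test_image(filename):
    """Return a local path to *filename*, downloading it on first use."""
    local_path = os.path.join(CACHE_DIR, filename)
    if not os.path.exists(local_path):
        os.makedirs(CACHE_DIR, exist_ok=True)
        urlretrieve(BASE_URL + filename, local_path)  # plain HTTPS; no git needed
    return local_path
```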

@campagnola (Member) commented:

I have added a "test-data" repository for storing test images. I'll suggest a very basic API:

```python
from vispy.util.testing import test_image_match

# draw something
test_result = canvas.screenshot()
test_image_match(test_result, "some unique key")
```

In a standard test (started with make test), this would fetch the most recent test images from the test-data repo (using git), look up the exact test using the unique key string, and raise an exception if the images do not match.

There would also be a separate testing "audit" mode (maybe run via make test-audit), where calling test_image_match would open a GUI showing the current and saved test results and allow the user to select one or the other as the "correct" result. (The new test data would then be sent back to the repo by PR and reviewed by others.) This would give us a semi-automated way to curate a large set of image-based tests.
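A minimal sketch of how test_image_match could behave under this proposal; the storage layout, the .npy file format, and the audit helper are assumptions for illustration, not vispy's actual implementation:

```python
# Sketch of the proposed test_image_match; the storage layout, file format,
# and audit helper are illustrative assumptions, not vispy's implementation.
import os
import numpy as np

DATA_DIR = "test-data/images"  # assumed local checkout of the test-data repo

def test_image_match(result, key, audit=False):
    path = os.path.join(DATA_DIR, key + ".npy")
    if not os.path.exists(path):
        raise RuntimeError("no reference image stored for key %r" % key)
    reference = np.load(path)
    if result.shape == reference.shape and np.allclose(result, reference):
        return  # images match; the test passes
    if audit and _user_accepts_new_result(result, reference):
        np.save(path, result)  # accepted; send back to the repo as a PR
        return
    raise AssertionError("screenshot does not match reference %r" % key)

def _user_accepts_new_result(result, reference):
    """Stand-in for the audit GUI: show both images, ask on the console."""
    import matplotlib.pyplot as plt
    fig, (ax1, ax2) = plt.subplots(1, 2)
    ax1.imshow(reference); ax1.set_title("saved reference")
    ax2.imshow(result); ax2.set_title("current result")
    plt.show()
    return input("Keep the current result as correct? [y/N] ").lower() == "y"
```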

@campagnola (Member) commented:

> not all users would have git installed in the first place

I am not really concerned if average users are not able to run the image tests. The purpose of this is to ensure that we generate consistent results across code changes. We have plenty of other tests that are skipped if the requisite packages are not importable.

@campagnola (Member) commented:

@suraj, would you like to start a vispy.util.testing module with a simple test_image_match function (just the test, not the audit)? For now you can use git manually to handle the image data, and later on we can decide on the best way to automate it.

@mssurajkaiga (Member) commented:

That would be good.

@campagnola (Member) commented:

Maybe best to make a PR based on the master branch, and then merge the same code into your visuals PR.

@larsoner (Member) commented Jul 8, 2014:

This should all just go in vispy.testing as opposed to vispy.util.testing.

@campagnola (Member) commented:

Oops!
(shouldn't testing be in util anyway?)

@larsoner (Member) commented Jul 8, 2014:

I was motivated by a few things to separate it out a couple of months back:

1. It's more consistent with numpy (and, I assumed, other packages) to have testing-related code live in the testing submodule.
2. I think of util as a place to store code that is unrelated to other submodules yet isn't "big enough" or coherent enough to warrant its own namespace. For example, if we eventually supported reading/writing a bunch of 2D/3D file formats, vispy.io would make more sense to me than vispy.util.io. When I was working on the test-splitting code, it formed into a reasonably coherent and large chunk of code, so it was weird to keep it in util.
3. Splitting it out fit better with the "flat is better than nested" idea.
4. It helps ensure that code bits stay as orthogonal as possible (especially with Almar's test that importing a given submodule doesn't import any others), or at least that we program with that idea in mind.

@almarklein (Member) commented:

> Requires basic image registration algorithm.

I think not. You won't get there with rigid registration, and elastic registration (apart from being hard) will result in wobbling. The whole point is that the two images to compare are already almost aligned.

Instead, I propose we compare the images in a way that tolerates small variations in the location of edges. We could do this in the Fourier domain, or perhaps using morphology.
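As one possible reading of the morphology idea, here is a sketch using scipy; the tolerance scheme and the function name are assumptions, not anything vispy ships:

```python
# Sketch of an edge-tolerant comparison: a pixel passes if it falls within
# the min/max of the reference over a small neighborhood, so edges may
# shift by up to `tol` pixels without failing. Grayscale 2D arrays assumed;
# this is one possible interpretation, not vispy's actual algorithm.
import numpy as np
from scipy import ndimage

def images_match(result, reference, tol=2):
    size = 2 * tol + 1
    lo = ndimage.grey_erosion(reference, size=(size, size))   # local min
    hi = ndimage.grey_dilation(reference, size=(size, size))  # local max
    return bool(np.all((result >= lo) & (result <= hi)))
```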

@almarklein (Member) commented:

Having vispy.testing seems reasonable to me, since it is not part of the main code.

@campagnola (Member) commented:

@suraj, I pushed a first commit to the test-data repo, so it should be forkable now.

@mssurajkaiga (Member) commented:

Thanks. It's forkable now.

@larsoner (Member) commented Jul 9, 2014:

@mssurajkaiga FYI I have a little bit of code you might find useful. Testing for git existence (simple enough):

https://github.com/LABSN/expyfun/blob/master/expyfun/_git.py#L12

But the actually useful thing is the convenience wrapper it uses for subprocess.Popen, which we might want to incorporate into vispy for making git calls:

https://github.com/LABSN/expyfun/blob/master/expyfun/_utils.py#L178
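For context, the pattern those helpers implement looks roughly like this; a simplified sketch with illustrative function names, not the expyfun code itself:

```python
# Simplified sketch of the pattern in the linked expyfun helpers: detect
# whether git is available, and run git through a thin subprocess wrapper.
# Function names here are illustrative; this is not the expyfun code.
import subprocess

def has_git():
    """Return True if a `git` executable is on the PATH."""
    try:
        subprocess.check_output(["git", "--version"])
        return True
    except (OSError, subprocess.CalledProcessError):
        return False

def run_git(*args, cwd=None):
    """Run a git command, returning decoded stdout and raising on failure."""
    cmd = ["git"] + list(args)
    proc = subprocess.Popen(cmd, cwd=cwd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    out, err = proc.communicate()
    if proc.returncode != 0:
        raise RuntimeError("%s failed: %s" % (" ".join(cmd), err.decode()))
    return out.decode()
```

Tests could then skip themselves when has_git() returns False, matching the convention mentioned above of skipping tests whose requisite packages are missing.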

@mssurajkaiga (Member) commented:

@Eric89GXL Thanks.

@larsoner (Member) commented:

We have this now.
