Convert tests to fbp-spec, for run/edit from Flowhub #15
Current thinking is to drive tests using the FBP protocol, in a fashion suitable for any FBP runtime. This will require the test fixture to be expressed as a graph, and the test input/output data to be expressed as a set of IPs. It is desirable that Flowhub can visualize actual versus expected values, to aid debugging. This could be done by dedicated/specialized edge inspectors which can handle two values and "diffing".
Some prototyping has been done in MicroFlo: https://twitter.com/jononor/status/587565844768624640
https://github.com/flowbased/fbp-spec is the project that should enable this.
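fbp-spec expresses tests declaratively in YAML, driven over the FBP protocol. A minimal sketch of what a testsuite for this project could look like (the component, port names, and data values here are hypothetical, not taken from the actual codebase):

```yaml
# Sketch of an fbp-spec testsuite; topic, ports and values are assumptions
topic: imgflo/ProcessImage
name: Image processing reference tests
cases:
- name: scaling an image
  assertion: output should match the reference image
  inputs:
    input: demo.jpg
  expect:
    output:
      equals: demo-scaled.png
```

Because both inputs and expectations are plain data, a tool like Flowhub could render and edit such cases without any language-specific test harness.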
I think it is realistic to move to fbp-spec for our graphtests now. Most important is that we keep the easy workflow of being able to add a test by just giving a name and URL/params, and dropping in an expected output image. For this we will need to wrap the current testing utilities into NoFlo components. Suggested components:
Right now we do a couple of intermediate validations, like checking whether we have expected data for something on disk. Currently in Mocha these are formulated as test cases, but they are more like independent assertions. For this, use exported ports from the middle of the fixture graph?
At this point the fixture graph may be getting so complicated that some way of introspecting it would be highly desirable. Ideally this would be visual, in Flowhub.
While the separate components/graph from two comments up would be ideal, an intermediate step could be to have one fat component for the whole thing, then split it up later as needed.
Another potential benefit is to generate documentation directly from the testcases, ref noflo/noflo-ui#51.
Many of our testcases are data-driven end-to-end reference tests, see spec/graphtests.yml.
Each case specifies:
Actual output is verified against reference output using gegl-imgcmp, with a certain tolerance. The tool also provides a diff when there are differences, which can be useful for debugging failures and for ruling out false positives. We should also be able to test error conditions and performance in a similar fashion, as these can also be set up and observed from the outside (black-box).
Adding new cases and modifying existing ones should be possible entirely from within Flowhub.
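For illustration, a single entry in spec/graphtests.yml following the name + URL/params + expected-image workflow described above might look something like this (field names and values are assumptions, not the actual schema):

```yaml
# Hypothetical case entry; the real field names in spec/graphtests.yml may differ
- name: gradientmap
  url: /graph/gradientmap?input=demo.jpg
  expected: gradientmap.png   # reference image, compared with gegl-imgcmp
  tolerance: 0.05             # allowed deviation before the case fails (assumed)
```

Keeping each case this small is what makes the "drop in an expected output image" workflow viable, and it maps naturally onto fbp-spec's data-driven case format.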