
fix <is_valid_xml> output test assert check #3530

Merged (2 commits into galaxyproject:release_17.01 on Feb 2, 2017)

Conversation
@jvolkening (Contributor) commented Feb 1, 2017

I tried to use the <is_valid_xml> assertion in a tool test and got a Python error:

History item  different than expected
Expected valid XML, but could not parse output. 'module' object has no attribute 'fromstring'

This commit appears to fix the issue.
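The error message ('module' object has no attribute 'fromstring') is the kind of failure you get when `fromstring` is called on the `xml.etree` package rather than on its `ElementTree` submodule. A minimal sketch of a corrected validity check follows; this is a hypothetical reconstruction for illustration, not Galaxy's actual assertion code:

```python
# Hypothetical reconstruction of the likely bug: "import xml.etree" followed
# by "xml.etree.fromstring(...)" fails, because fromstring() lives in the
# ElementTree submodule, not in the xml.etree package itself.
import xml.etree.ElementTree as ET

def is_valid_xml(text):
    """Return True if `text` parses as well-formed XML."""
    try:
        ET.fromstring(text)
        return True
    except ET.ParseError:
        return False

assert is_valid_xml("<root><child/></root>")
assert not is_valid_xml("<root><child></root>")  # mismatched closing tag
```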

@galaxybot galaxybot added the triage label Feb 1, 2017

@bgruening (Member) left a comment

LGTM. Wondering if this ever worked.

@bgruening (Member) commented Feb 2, 2017

@jvolkening if you like you can include such a tool in your PR: https://github.com/galaxyproject/galaxy/blob/dev/test/functional/tools/checksum.xml

This will then make sure this functionality never breaks again.

(The test tool should be listed here as well: `<tool file="checksum.xml" />`)

@bgruening (Member) commented Feb 2, 2017

Love it! Great work @jvolkening!

@jvolkening (Contributor, Author) commented Feb 2, 2017

@bgruening Incidentally, I don't quite understand why the test tool you linked to (checksum.xml) doesn't throw an error. Isn't it wrong for the output file in <tests> to have a different name than that in <outputs>?

@nsoranzo (Member) commented Feb 2, 2017

That's indeed weird; it seems the output name inside <test> is ignored, and planemo lint does not throw any warning either. Ping @jmchilton.

@jvolkening If you want to fix this test output name as part of this PR, that would be fantastic.

@jmchilton (Member) commented Feb 2, 2017

Created an issue for the checksum problem here: #3544. Always make sure tests are red before they are green; @jmchilton should have taken that advice.

@nsoranzo nsoranzo merged commit f691965 into galaxyproject:release_17.01 Feb 2, 2017

5 checks passed

- api test: Build finished. 256 tests run, 0 skipped, 0 failed.
- continuous-integration/travis-ci/pr: The Travis CI build passed.
- framework test: Build finished. 136 tests run, 0 skipped, 0 failed.
- integration test: Build finished. 24 tests run, 0 skipped, 0 failed.
- toolshed test: Build finished. 580 tests run, 0 skipped, 0 failed.
@jvolkening (Contributor, Author) commented Feb 2, 2017

When putting together the test tool for <is_valid_xml> last night, I was actually looking through the documentation to see if there was syntax to check for intentional failure. For example, something like this:

<tests>
    <test>
        <param name="input" value="simple_line.txt" />
        <output name="output" checksum="sha1$8156d7ca0f46ed7abac98f82e36cfaddb2aca041" />
        <output name="output" expect_failure="true" checksum="sha1$e6638f3c4912dc03fe7816071d56a3d385b2f649" />
    </test>
</tests>

i.e., rather than expecting the whole job to fail (an attribute for which already exists on the <test> element), specify that a specific output check is expected to fail. I actually tried to look through the code to see if I could implement something myself, but couldn't readily find the relevant spot. I don't know whether an output can be checked twice as above, or whether they would need to be separate <test>s.

@bgruening (Member) commented Feb 2, 2017

@jvolkening something like this is possible:

<test expect_exit_code="0" expect_failure="false">

Also do you know this one: https://docs.galaxyproject.org/en/master/dev/schema.html

@jvolkening (Contributor, Author) commented Feb 3, 2017

> @jvolkening something like this is possible:
>
> `<test expect_exit_code="0" expect_failure="false">`

@bgruening I had seen the expect_failure attribute on <test>, but that applies to the whole command, and the docs say no output tests are allowed when it is used. I was thinking more along the lines of testing specific output assertions, which I guess would primarily be useful for making sure the assertions themselves work properly (e.g., that they do not return true when they shouldn't). Another way to do the same thing without adding to the schema would be to write a separate suite of test tools with bad output assertions and make sure those tests fail.

Anyway, just thoughts. Thanks for all of the guidance as I'm learning the ecosystem.

> Also do you know this one: https://docs.galaxyproject.org/en/master/dev/schema.html

Sure, I've spent a lot of time there already.

@bgruening (Member) commented Feb 3, 2017

@jvolkening (Contributor, Author) commented Feb 3, 2017

But how do you test the tests? If you have

<output name="foo.tsv">
    <assert_contents>
        <has_n_columns n="3" />
    </assert_contents>
</output>

and the test passes, either your output is okay or the function checking <has_n_columns> is buggy. Intentional failure would test for the latter.
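To make the "test the tests" idea concrete, here is a hypothetical, self-contained sketch of a `has_n_columns`-style checker together with the intentional-failure step. This is not Galaxy's actual assertion framework; the function name and signature are assumptions made for illustration:

```python
# Hypothetical sketch of an output-assertion checker like <has_n_columns>.
# NOT Galaxy's actual implementation; it illustrates why intentional-failure
# tests matter: a checker that never raises would pass every tool test.

def has_n_columns(output_text, n, sep="\t"):
    """Raise AssertionError unless every non-empty line has exactly n columns."""
    for line in output_text.splitlines():
        if not line:
            continue
        cols = line.split(sep)
        if len(cols) != n:
            raise AssertionError(
                "expected %d columns, found %d in line %r" % (n, len(cols), line)
            )

good = "a\tb\tc\n1\t2\t3\n"
bad = "a\tb\n"

# Passing case: a normal tool test stops here and reports success.
has_n_columns(good, 3)

# The "test the test" step: we feed deliberately bad output and require
# the checker to reject it, proving the assertion logic can actually fail.
try:
    has_n_columns(bad, 3)
    raise SystemExit("checker is buggy: it accepted malformed output")
except AssertionError:
    pass  # checker correctly rejected the bad output
```

A buggy `has_n_columns` that always returned success would sail through the first call; only the second, intentionally failing call exposes it, which is exactly the gap a per-output `expect_failure` attribute would cover.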

@martenson martenson added this to the 17.05 milestone May 16, 2017

@martenson martenson added kind/bug and removed triage labels May 16, 2017
