
Smart shed_diff. #167

Merged: 2 commits into galaxyproject:master on May 1, 2015
Conversation

jmchilton (Member)

  • Fix shed_diff so its exit code is actually non-zero if there are differences.
  • By default, don't report differences for tool shed populated attributes (toolshed, changeset_revision). This can be disabled with --raw.
  • Tests to ensure --raw and smart differencing are working and exit codes are fixed.
  • Update shed test webapp to inject the tool shed populated fields into respective arguments.

Implements #141.

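To make the described behavior concrete, here is a minimal sketch of the "smart" differencing idea: strip the attributes the tool shed populates before comparing, and return a non-zero exit code only when real differences remain. This is an illustration, not planemo's actual shed_diff implementation; the function and constant names are assumptions.

# Illustrative sketch only (not planemo's actual code): drop the attributes
# the tool shed populates on upload before comparing two XML files, so only
# meaningful differences affect the exit code.
import sys
from xml.etree import ElementTree

SHED_POPULATED_ATTRIBUTES = ("toolshed", "changeset_revision")  # assumed name


def _normalized(path):
    tree = ElementTree.parse(path)
    for element in tree.iter():
        for attribute in SHED_POPULATED_ATTRIBUTES:
            element.attrib.pop(attribute, None)
    return ElementTree.tostring(tree.getroot())


def smart_diff(local_path, shed_path, raw=False):
    """Return 0 if the files match, 1 otherwise (mirroring diff's exit codes)."""
    if raw:
        with open(local_path, "rb") as a, open(shed_path, "rb") as b:
            matches = a.read() == b.read()
    else:
        matches = _normalized(local_path) == _normalized(shed_path)
    return 0 if matches else 1


if __name__ == "__main__":
    sys.exit(smart_diff(sys.argv[1], sys.argv[2], raw="--raw" in sys.argv))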
hexylena (Member) commented May 1, 2015

Fix shed_diff so its exit code actually is non-zero if there are differences.

+1

hexylena (Member) commented May 1, 2015

Update shed test webapp to inject the tool shed populated fields into respective arguments.

can you just subprocess.check_command(['sh', 'run_shed.sh', '--daemon'])?

@@ -29,6 +29,12 @@
" this to testtoolshed.",
default=None,
)
@click.option(
"--raw",
hexylena (Member):

unused?

jmchilton (Member, Author):

It is used inside of planemo.shed.

hexylena (Member):

Whoops. Missed that. Sorry!
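For context, the hunk above is truncated. A hedged sketch of what a complete definition and wiring of such a flag could look like follows; is_flag, the help text, and the command body are assumptions rather than the actual planemo source.

import click


@click.command()
@click.option(
    "--raw",
    is_flag=True,
    default=False,
    help="Diff files exactly as-is, without excluding the attributes the "
         "tool shed populates (toolshed, changeset_revision).",
)
def shed_diff(raw):
    # Hypothetical body: the flag is simply threaded through to the diff
    # logic (in planemo it is consumed inside planemo.shed, per the reply
    # above).
    click.echo("raw diff" if raw else "smart diff")


if __name__ == "__main__":
    shed_diff()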

hexylena (Member) commented May 1, 2015

+1

jmchilton (Member, Author):

can you just subprocess.check_command(['sh', 'run_shed.sh', '--daemon'])?

That would really increase the runtime of those tests - it takes well under a second to start the server and run them. I also like mocking out the tool shed interface because it really crystallizes exactly the API planemo is consuming.

There is one functional test that actually exercises a real tool shed (https://github.com/galaxyproject/planemo/blob/master/tests/test_shed.py), and it would certainly be cool if that test automatically started a tool shed, but I have not gotten around to it yet. It is a little harder than just running run_tests.sh, since it should configure an isolated environment for the test.
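As an aside, a generic sketch of the mock-server pattern described here is below. The endpoint path and payload are invented purely for illustration; this is not planemo's shed_app test webapp. The point is that an in-process mock starts in milliseconds and pins down exactly which HTTP calls the client is expected to make.

import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class MockShedHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Hypothetical route, for illustration only.
        if self.path == "/api/repositories":
            body = json.dumps([{"name": "example_repo"}]).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Keep test output quiet.
        pass


def test_list_repositories():
    server = HTTPServer(("localhost", 0), MockShedHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    try:
        url = "http://localhost:%d/api/repositories" % server.server_port
        repositories = json.loads(urlopen(url).read())
        assert repositories[0]["name"] == "example_repo"
    finally:
        server.shutdown()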

jmchilton added a commit that referenced this pull request May 1, 2015
jmchilton merged commit 01ddcc2 into galaxyproject:master May 1, 2015
hexylena (Member) commented May 1, 2015

That all sounds fair. Definitely harkens back to your point about "reason for travis and a reason for buildbot".

If you'd like long running tests, the IUC jenkins bot is underutilised! :)

jmchilton (Member, Author):

Thanks for the offer - it isn't so much about CI though. I definitely want that tool shed test running on Travis - another minute is not my concern. I think my other comment was about Galaxy, where that minute would be dwarfed by the hours it takes to run Galaxy's functional tests. What I love is that PLANEMO_SKIP_GALAXY_TESTS=1 nosetests takes literally a couple of seconds on my personal laptop. An hour matters in CI, a minute doesn't IMO - but a minute does matter during development. I love refactoring quickly and with impunity...

hexylena (Member) commented May 1, 2015

Gotcha, sounds good. Didn't realise that shed tests ran that quickly. Made an offhand assumption that it'd be 15-30 minutes to really test all the shed-related functionality and corner cases. (Especially if testing against shed@master and shed@dev.)
