
Use cliff for CLI layer #100

Merged
merged 8 commits into mtreinish:master from masayukig:use-cliff on Jan 10, 2018

Conversation

masayukig
Collaborator

This commit makes stestr use cliff [1] for the CLI layer. The cliff project
provides a lot of advantages for a CLI like stestr's, which has subcommands.
Instead of just using argparse, we should leverage cliff to provide a
more polished CLI experience.

[1] https://pypi.python.org/pypi/cliff

Closes Issue #62
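
For readers unfamiliar with cliff, here is a minimal sketch of the pattern the PR adopts; the class name, entry-point namespace, and command body below are illustrative, not stestr's actual code. cliff subcommands subclass cliff.command.Command and implement get_parser() and take_action(), and an App discovers them through a CommandManager entry-point namespace.

import sys

from cliff.app import App
from cliff.command import Command
from cliff.commandmanager import CommandManager


class Last(Command):
    """Show results from the last run (illustrative body)."""

    def get_parser(self, prog_name):
        # Each subcommand extends the base argparse parser cliff builds.
        parser = super(Last, self).get_parser(prog_name)
        parser.add_argument('--subunit', action='store_true',
                            help='Show output as a subunit stream')
        return parser

    def take_action(self, parsed_args):
        # The real command logic would live here.
        self.app.stdout.write('last run results...\n')


class ExampleApp(App):
    def __init__(self):
        super(ExampleApp, self).__init__(
            description='example cliff-based CLI',
            version='0.0.1',
            # Subcommands are discovered as setuptools entry points
            # registered under this namespace in setup.cfg.
            command_manager=CommandManager('stestr.example'))


def main(argv=sys.argv[1:]):
    return ExampleApp().run(argv)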

@coveralls

Coverage Status

Coverage decreased (-4.4%) to 54.445% when pulling b83be5a on masayukig:use-cliff into f5099f8 on mtreinish:master.

@masayukig
Collaborator Author

The coverage decreased... :( And I noticed that we couldn't see the source code on that site. For example, https://coveralls.io/builds/13385582/source?filename=stestr%2Fcommands%2Flast.py

@mtreinish mtreinish left a comment
Owner

So from a quick glance it looks good for the most part. I still need to do some manual testing for backwards compat (and also some performance testing, to make sure there is little to no degradation there). The only other thing is that a bunch of docs need to be updated with this. For example:

http://stestr.readthedocs.io/en/latest/internal_arch.html#cli-layer

http://stestr.readthedocs.io/en/latest/api.html#commands

You can push those doc changes up as another patch on the PR branch (to keep the doc updates separate but merge them at the same time as the PR).

from stestr.commands.run import run_command
from stestr.commands.slowest import slowest as slowest_command

__all__ = ['failing_command', 'init_command', 'last_command',
Owner

This is going to break backwards compatibility and external consumers (like I'm pretty sure I used this in os-testr somewhere). We can't remove this interface to the command functions without a deprecation period.
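
One way to honor such a deprecation period, sketched here as an illustration rather than as the patch's actual code, is to keep the old importable names as thin wrappers that warn before delegating to the new implementations:

import warnings


def _new_run_command(*args, **kwargs):
    # Stand-in for wherever the logic moves under cliff; the real
    # implementation would live in the new command class.
    raise NotImplementedError


def run_command(*args, **kwargs):
    # Old entry point kept importable for external consumers
    # (e.g. os-testr) during the deprecation window.
    warnings.warn('stestr.commands.run_command is deprecated and will '
                  'be removed in a future release',
                  DeprecationWarning, stacklevel=2)
    return _new_run_command(*args, **kwargs)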


setup.cfg Outdated
@@ -55,3 +64,6 @@ input_file = stestr/locale/stestr.pot
keywords = _ gettext ngettext l_ lazy_gettext
mapping_file = babel.cfg
output_file = stestr/locale/stestr.pot

[wheel]
universal = 1
Owner

Why is this needed? (I don't actually know what this does)

Collaborator Author

Oh, I just copied and pasted this from the tempest code. I should have removed it; I'll remove it.

@masayukig
Collaborator Author

OK, I'll push doc changes later.

@coveralls

Coverage Status

Coverage decreased (-1.4%) to 57.444% when pulling 893f53f on masayukig:use-cliff into f5099f8 on mtreinish:master.

@mtreinish
Owner

Just wanted to let you know my thinking on the timing for this. I'm planning to push a release in the near future; I'm mostly waiting on the open bug-fix reviews to land and for the --analyze-isolation bug to get fixed. After that release (which will likely be 1.1.0) I think we can start testing this for backwards compat and performance and move forward with it.

@masayukig
Collaborator Author

Sure, yeah, this is a pretty big change, so I agree with you. We should really take care with backwards compat and performance for stable testing.

@mtreinish
Owner

Ok, 1.1.0 is released now, so this should be good to get ready to land. The issues from the preliminary review still need to be addressed, and it needs to be rebased.

@coveralls

Coverage Status

Coverage decreased (-1.4%) to 60.88% when pulling 45e745d on masayukig:use-cliff into 24600a8 on mtreinish:master.

@coveralls

Coverage Status

Coverage decreased (-1.4%) to 60.88% when pulling e707d24 on masayukig:use-cliff into 24600a8 on mtreinish:master.

@coveralls

Coverage Status

Coverage decreased (-1.4%) to 60.88% when pulling 2d0e4e8 on masayukig:use-cliff into b4c2707 on mtreinish:master.

@masayukig
Collaborator Author

I just updated the branch. I'm still not sure what the best workflow with GitHub is, though.

@coveralls

Coverage Status

Coverage decreased (-1.4%) to 60.888% when pulling 03731ea on masayukig:use-cliff into ed588cd on mtreinish:master.

@coveralls

Coverage Status

Coverage decreased (-1.4%) to 60.888% when pulling 927c3ba on masayukig:use-cliff into ed588cd on mtreinish:master.

masayukig and others added 2 commits October 27, 2017 16:03
This commit makes stestr use cliff[1] for CLI layer. The cliff project
provides a lot of advantages for stestr cli which has subcommands.
Instead of just using argparse, we should leverage cliff to provide a
more polished CLI experience.

[1] https://pypi.python.org/pypi/cliff

Closes Issue mtreinish#62
This commit updates the docs for using cliff for CLI layer.
@coveralls

Coverage Status

Coverage decreased (-1.4%) to 60.886% when pulling 18d210e on masayukig:use-cliff into 3340b56 on mtreinish:master.

@masayukig
Collaborator Author

I already updated some docs and rebased on the latest master. @mtreinish, can you have a look at this?

@mtreinish mtreinish dismissed their stale review October 27, 2017 19:11

The latest version has been updated to address earlier comments. Additional review is still needed.

@coveralls

Coverage Status

Coverage decreased (-1.4%) to 60.886% when pulling 1164397 on masayukig:use-cliff into 2a03372 on mtreinish:master.

@mtreinish
Owner

Ok, I started doing some performance testing on things this afternoon. Using cliff seems to have a pretty noticeable performance penalty. I haven't tested that we've got 100% compatibility on all args yet, because this raised questions for me about whether the added functionality is worth the performance trade-off.

The raw data from my tests is below:

Using argparse:

stestr --help:

stestr --help  0.20s user 0.04s system 99% cpu 0.237 total
stestr --help  0.25s user 0.03s system 99% cpu 0.289 total
stestr --help  0.20s user 0.03s system 99% cpu 0.230 total
stestr --help  0.21s user 0.03s system 99% cpu 0.243 total
stestr --help  0.20s user 0.04s system 99% cpu 0.237 total

stestr list (stestr tests):

stestr list  1.05s user 0.16s system 98% cpu 1.225 total
stestr list  1.03s user 0.15s system 99% cpu 1.175 total
stestr list  0.62s user 0.09s system 99% cpu 0.718 total
stestr list  0.61s user 0.09s system 99% cpu 0.700 total
stestr list  0.63s user 0.12s system 99% cpu 0.748 total

stestr list (nova unit tests):

stestr list  6.35s user 0.50s system 103% cpu 6.640 total
stestr list  6.02s user 0.58s system 104% cpu 6.301 total
stestr list  6.17s user 0.60s system 104% cpu 6.463 total
stestr list  6.22s user 0.64s system 104% cpu 6.556 total
stestr list  6.11s user 0.52s system 103% cpu 6.420 total

stestr run (stestr tests):

stestr run  31.73s user 4.30s system 300% cpu 11.999 total
stestr run  31.16s user 4.14s system 283% cpu 12.453 total
stestr run  31.17s user 4.26s system 302% cpu 11.722 total
stestr run  31.16s user 4.34s system 293% cpu 12.106 total
stestr run  31.30s user 4.20s system 303% cpu 11.696 total

stestr run (nova unit tests):

stestr run  2724.81s user 24.60s system 328% cpu 13:56.26 total

stestr last (stestr tests):

stestr last  0.28s user 0.03s system 99% cpu 0.313 total
stestr last  0.26s user 0.03s system 99% cpu 0.292 total
stestr last  0.29s user 0.04s system 99% cpu 0.334 total
stestr last  0.31s user 0.05s system 99% cpu 0.364 total
stestr last  0.25s user 0.03s system 99% cpu 0.286 total

stestr last (nova unit tests):

stestr last  7.79s user 0.23s system 99% cpu 8.019 total
stestr last  7.73s user 0.15s system 99% cpu 7.891 total
stestr last  7.81s user 0.15s system 99% cpu 7.961 total
stestr last  8.09s user 0.19s system 99% cpu 8.292 total
stestr last  7.79s user 0.21s system 99% cpu 8.009 total

stestr slowest (stestr tests):

stestr slowest  0.22s user 0.02s system 99% cpu 0.250 total
stestr slowest  0.23s user 0.04s system 99% cpu 0.277 total
stestr slowest  0.21s user 0.07s system 99% cpu 0.278 total
stestr slowest  0.22s user 0.05s system 99% cpu 0.270 total
stestr slowest  0.25s user 0.04s system 99% cpu 0.290 total

stestr slowest (nova unit tests):

stestr slowest  3.59s user 0.13s system 99% cpu 3.725 total
stestr slowest  3.51s user 0.13s system 99% cpu 3.645 total
stestr slowest  3.71s user 0.11s system 99% cpu 3.827 total
stestr slowest  3.68s user 0.11s system 99% cpu 3.792 total
stestr slowest  3.60s user 0.10s system 99% cpu 3.697 total

stestr failing (stestr tests):

stestr failing  0.22s user 0.03s system 99% cpu 0.250 total
stestr failing  0.25s user 0.03s system 99% cpu 0.286 total
stestr failing  0.22s user 0.04s system 99% cpu 0.266 total
stestr failing  0.21s user 0.04s system 99% cpu 0.244 total
stestr failing  0.22s user 0.04s system 99% cpu 0.268 total

stestr failing (nova unit tests):

stestr failing  0.24s user 0.03s system 99% cpu 0.271 total
stestr failing  0.21s user 0.06s system 99% cpu 0.270 total
stestr failing  0.24s user 0.04s system 99% cpu 0.289 total
stestr failing  0.21s user 0.06s system 99% cpu 0.271 total
stestr failing  0.24s user 0.05s system 99% cpu 0.297 total

With the cliff patch applied:

stestr --help:

stestr --help  0.85s user 0.06s system 95% cpu 0.953 total
stestr --help  0.84s user 0.04s system 99% cpu 0.877 total
stestr --help  0.69s user 0.04s system 99% cpu 0.727 total
stestr --help  0.69s user 0.05s system 99% cpu 0.739 total
stestr --help  0.68s user 0.05s system 99% cpu 0.731 total

stestr list (stestr tests):

stestr list  1.45s user 0.17s system 99% cpu 1.631 total
stestr list  1.57s user 0.19s system 99% cpu 1.765 total
stestr list  1.55s user 0.16s system 99% cpu 1.715 total
stestr list  1.53s user 0.17s system 99% cpu 1.709 total
stestr list  1.55s user 0.17s system 99% cpu 1.731 total

stestr list (nova unit tests):

stestr list  10.81s user 1.03s system 104% cpu 11.353 total
stestr list  10.80s user 1.09s system 105% cpu 11.308 total
stestr list  10.81s user 1.07s system 104% cpu 11.326 total
stestr list  10.62s user 1.08s system 104% cpu 11.154 total
stestr list  10.90s user 1.01s system 105% cpu 11.283 total

stestr run (stestr tests):

stestr run  67.95s user 4.76s system 305% cpu 23.784 total
stestr run  74.04s user 5.40s system 324% cpu 24.511 total
stestr run  73.58s user 5.21s system 329% cpu 23.894 total
stestr run  73.11s user 5.05s system 335% cpu 23.279 total
stestr run  72.31s user 5.09s system 335% cpu 23.095 total

stestr run (nova unit tests):

stestr run  2679.45s user 24.36s system 324% cpu 13:52.13 total

stestr last (stestr tests):

stestr last  0.78s user 0.05s system 99% cpu 0.831 total
stestr last  0.78s user 0.06s system 99% cpu 0.840 total
stestr last  0.74s user 0.05s system 99% cpu 0.795 total
stestr last  0.63s user 0.06s system 99% cpu 0.697 total
stestr last  0.65s user 0.05s system 99% cpu 0.695 total

stestr last (nova unit tests):

stestr last  8.26s user 0.17s system 99% cpu 8.436 total
stestr last  8.31s user 0.23s system 99% cpu 8.543 total
stestr last  8.41s user 0.24s system 99% cpu 8.658 total
stestr last  8.40s user 0.21s system 99% cpu 8.612 total
stestr last  9.85s user 0.22s system 99% cpu 10.125 total

stestr slowest (stestr tests):

stestr slowest  0.65s user 0.04s system 99% cpu 0.697 total
stestr slowest  0.64s user 0.05s system 99% cpu 0.686 total
stestr slowest  0.63s user 0.05s system 99% cpu 0.675 total
stestr slowest  0.77s user 0.03s system 99% cpu 0.805 total
stestr slowest  0.63s user 0.04s system 99% cpu 0.672 total

stestr slowest (nova unit tests):

stestr slowest  4.24s user 0.12s system 99% cpu 4.363 total
stestr slowest  3.99s user 0.12s system 99% cpu 4.116 total
stestr slowest  4.04s user 0.13s system 99% cpu 4.171 total
stestr slowest  4.05s user 0.08s system 99% cpu 4.141 total
stestr slowest  4.04s user 0.11s system 99% cpu 4.155 total

stestr failing (stestr tests):

stestr failing  0.67s user 0.03s system 99% cpu 0.703 total
stestr failing  0.61s user 0.04s system 99% cpu 0.648 total
stestr failing  0.61s user 0.05s system 99% cpu 0.665 total
stestr failing  0.62s user 0.04s system 99% cpu 0.666 total
stestr failing  0.66s user 0.03s system 99% cpu 0.695 total

stestr failing (nova unit tests):

stestr failing  0.71s user 0.06s system 99% cpu 0.773 total
stestr failing  0.62s user 0.04s system 99% cpu 0.658 total
stestr failing  0.63s user 0.03s system 99% cpu 0.661 total
stestr failing  0.60s user 0.06s system 99% cpu 0.663 total
stestr failing  0.71s user 0.05s system 99% cpu 0.761 total

@masayukig
Collaborator Author

Oh, that's an interesting result. I'm thinking of running a performance test in my environment too, just in case.

@mtreinish
Owner

The interesting thing is that the nova unit tests (which I just picked as an example of a very large test suite) didn't really show a real change in performance. I only did one sample, though, because I was impatient and my laptop gets kind of hot running the tests (and occasionally thermally throttles), so it's not really reliable data. I'll try rerunning them on my desktop, which is more powerful and unlikely to have cooling issues.

The concerning result to me was that running stestr's tests (which is a more modestly sized test suite) took ~2x the time to finish. I think the results that are < 1 second don't really matter either way; if something goes from 0.2 seconds to 0.7 seconds, I don't think that's a really big deal.

@masayukig
Collaborator Author

I did similar tests and got an interesting result: the performance is almost the same with and without the patch applied. I'm not sure why I got this result, though.

w/o patch
stestr  stestr --help > /dev/null  0.19s user 0.02s system 99% cpu 0.202 total
        stestr --help > /dev/null  0.19s user 0.01s system 99% cpu 0.194 total
        stestr --help > /dev/null  0.17s user 0.02s system 99% cpu 0.189 total
        stestr --help > /dev/null  0.19s user 0.01s system 99% cpu 0.197 total
        stestr --help > /dev/null  0.17s user 0.02s system 99% cpu 0.192 total

stestr  stestr list > /dev/null  1.03s user 0.10s system 100% cpu 1.129 total
        stestr list > /dev/null  1.07s user 0.07s system 100% cpu 1.132 total
        stestr list > /dev/null  1.06s user 0.06s system 100% cpu 1.121 total
        stestr list > /dev/null  1.05s user 0.07s system 100% cpu 1.117 total
        stestr list > /dev/null  1.03s user 0.08s system 100% cpu 1.116 total

stestr  stestr run > /dev/null  25.15s user 2.09s system 290% cpu 9.373 total
        stestr run > /dev/null  25.24s user 2.28s system 309% cpu 8.885 total
        stestr run > /dev/null  25.51s user 2.27s system 315% cpu 8.820 total
        stestr run > /dev/null  22.54s user 2.00s system 277% cpu 8.830 total
        stestr run > /dev/null  24.61s user 1.95s system 297% cpu 8.937 total

stestr  stestr last > /dev/null  0.23s user 0.00s system 99% cpu 0.231 total
        stestr last > /dev/null  0.21s user 0.01s system 99% cpu 0.219 total
        stestr last > /dev/null  0.21s user 0.01s system 99% cpu 0.221 total
        stestr last > /dev/null  0.21s user 0.01s system 99% cpu 0.218 total
        stestr last > /dev/null  0.22s user 0.01s system 99% cpu 0.225 total

stestr  stestr slowest > /dev/null  0.21s user 0.01s system 99% cpu 0.218 total
        stestr slowest > /dev/null  0.19s user 0.02s system 99% cpu 0.210 total
        stestr slowest > /dev/null  0.20s user 0.01s system 99% cpu 0.210 total
        stestr slowest > /dev/null  0.20s user 0.01s system 99% cpu 0.209 total
        stestr slowest > /dev/null  0.20s user 0.01s system 99% cpu 0.212 total

stestr  stestr failing > /dev/null  0.18s user 0.03s system 99% cpu 0.211 total
        stestr failing > /dev/null  0.19s user 0.01s system 99% cpu 0.196 total
        stestr failing > /dev/null  0.20s user 0.00s system 99% cpu 0.201 total
        stestr failing > /dev/null  0.19s user 0.01s system 99% cpu 0.201 total
        stestr failing > /dev/null  0.18s user 0.02s system 99% cpu 0.199 total

nova    stestr list > /dev/null  10.97s user 1.39s system 111% cpu 11.105 total
        stestr list > /dev/null  10.95s user 1.38s system 111% cpu 11.072 total
        stestr list > /dev/null  11.03s user 1.34s system 111% cpu 11.102 total
        stestr list > /dev/null  10.98s user 1.39s system 111% cpu 11.107 total
        stestr list > /dev/null  11.01s user 1.38s system 111% cpu 11.119 total

w/ patch
stestr  stestr --help > /dev/null  0.23s user 0.03s system 99% cpu 0.254 total
        stestr --help > /dev/null  0.23s user 0.02s system 100% cpu 0.242 total
        stestr --help > /dev/null  0.23s user 0.01s system 100% cpu 0.247 total
        stestr --help > /dev/null  0.22s user 0.02s system 100% cpu 0.245 total
        stestr --help > /dev/null  0.22s user 0.02s system 99% cpu 0.241 total

stestr  stestr list > /dev/null  1.06s user 0.08s system 100% cpu 1.140 total
        stestr list > /dev/null  1.14s user 0.05s system 100% cpu 1.189 total
        stestr list > /dev/null  1.06s user 0.09s system 100% cpu 1.147 total
        stestr list > /dev/null  1.04s user 0.10s system 100% cpu 1.136 total
        stestr list > /dev/null  1.05s user 0.09s system 100% cpu 1.133 total

stestr  stestr run > /dev/null  25.34s user 2.04s system 307% cpu 8.892 total
        stestr run > /dev/null  26.83s user 2.32s system 322% cpu 9.042 total
        stestr run > /dev/null  25.70s user 2.27s system 301% cpu 9.269 total
        stestr run > /dev/null  24.16s user 1.99s system 297% cpu 8.802 total
        stestr run > /dev/null  25.39s user 2.12s system 307% cpu 8.942 total

stestr  stestr last > /dev/null  0.22s user 0.02s system 99% cpu 0.242 total
        stestr last > /dev/null  0.22s user 0.01s system 99% cpu 0.230 total
        stestr last > /dev/null  0.22s user 0.01s system 99% cpu 0.229 total
        stestr last > /dev/null  0.22s user 0.02s system 99% cpu 0.233 total
        stestr last > /dev/null  0.23s user 0.01s system 99% cpu 0.240 total

stestr  stestr slowest > /dev/null  0.21s user 0.02s system 99% cpu 0.232 total
        stestr slowest > /dev/null  0.21s user 0.01s system 99% cpu 0.221 total
        stestr slowest > /dev/null  0.21s user 0.01s system 99% cpu 0.219 total
        stestr slowest > /dev/null  0.19s user 0.02s system 99% cpu 0.218 total
        stestr slowest > /dev/null  0.18s user 0.04s system 99% cpu 0.220 total

stestr  stestr failing > /dev/null  0.21s user 0.01s system 99% cpu 0.220 total
        stestr failing > /dev/null  0.19s user 0.02s system 99% cpu 0.210 total
        stestr failing > /dev/null  0.19s user 0.02s system 99% cpu 0.211 total
        stestr failing > /dev/null  0.20s user 0.01s system 99% cpu 0.212 total
        stestr failing > /dev/null  0.21s user 0.00s system 99% cpu 0.211 total

nova    stestr list > /dev/null  11.10s user 1.28s system 111% cpu 11.129 total
        stestr list > /dev/null  11.04s user 1.44s system 111% cpu 11.218 total
        stestr list > /dev/null  11.08s user 1.35s system 111% cpu 11.159 total
        stestr list > /dev/null  11.07s user 1.34s system 111% cpu 11.147 total
        stestr list > /dev/null  11.01s user 1.41s system 111% cpu 11.154 total

@mtreinish
Owner

Hmm, I wonder if it's the stdout redirect to /dev/null; I didn't do that in my testing. Do you get the same results when printing the output to the console?

@masayukig
Collaborator Author

I got almost the same result when printing the output to my console, too. (I only did the "stestr list" tests, though.)

w/o patch (stestr tests)
stestr list  1.03s user 0.08s system 100% cpu 1.117 total
stestr list  1.03s user 0.09s system 100% cpu 1.122 total
stestr list  1.05s user 0.07s system 100% cpu 1.118 total
stestr list  1.04s user 0.07s system 100% cpu 1.114 total
stestr list  1.04s user 0.08s system 100% cpu 1.118 total


w/ patch (stestr tests)
stestr list  1.08s user 0.07s system 100% cpu 1.149 total
stestr list  1.05s user 0.08s system 100% cpu 1.124 total
stestr list  1.08s user 0.05s system 100% cpu 1.129 total
stestr list  1.06s user 0.07s system 100% cpu 1.133 total
stestr list  1.06s user 0.07s system 100% cpu 1.132 total

@masayukig
Collaborator Author

I'm wondering if we can get a test-timing graph like the ones on openstack-health. The data could be unstable, though. Would we need to manually extract the data to make a graph from https://travis-ci.org/mtreinish/stestr/pull_requests?

@mtreinish
Owner

We could probably build some external automation based on the Travis results for collecting and aggregating the data. There is an API for everything, so there should be some way to trigger a worker to populate a subunit2sql DB when a test job finishes. Alternatively, if Travis supports secrets (which I don't know whether it does or not), we could just add the subunit2sql DB population as a post-processing job in the definition.
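
As a rough illustration of that first option, something like the following could pull recent build timings from the Travis v3 API. The endpoint, header, and field names here are assumptions to verify against the Travis documentation, and feeding the results into a subunit2sql DB would still be a separate step:

import requests

API = 'https://api.travis-ci.org'
HEADERS = {'Travis-API-Version': '3'}


def recent_builds(repo_slug='mtreinish/stestr', limit=25):
    # Repo slugs are URL-encoded ('/' becomes '%2F') in v3 routes.
    url = '%s/repo/%s/builds' % (API, repo_slug.replace('/', '%2F'))
    resp = requests.get(url, headers=HEADERS, params={'limit': limit})
    resp.raise_for_status()
    # Each build record carries its state and wall-clock duration in
    # seconds, which is enough for a simple timing graph.
    return [(b['number'], b['state'], b['duration'])
            for b in resp.json()['builds']]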

@mtreinish
Owner

So the thing that is a known quantity is that there is a performance hit for stevedore doing the scan of the python namespace (so it can discover the plugins for the commands). Depending on how your python environment is set up, this can take longer; if my python environment has a lot more packages installed, the time to check them all will be longer. But your results are significantly different from what I was seeing, so I'm trying to figure out what else could be different. Were you running with py2 or py3?
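
A quick way to see the scan cost being described is to time the entry-point lookup directly. The 'stestr.cm' namespace below is a guess at what the cliff patch registers, so substitute the real one; note that much of the cost is importing pkg_resources itself, since it indexes every installed distribution:

import time

start = time.time()
import pkg_resources  # indexes every installed distribution on import
import_time = time.time() - start

start = time.time()
eps = list(pkg_resources.iter_entry_points('stestr.cm'))
scan_time = time.time() - start

print('pkg_resources import: %.3fs' % import_time)
print('found %d entry points in %.3fs' % (len(eps), scan_time))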

@masayukig
Collaborator Author

masayukig commented Nov 29, 2017

The above results are on py36, but I also ran the tests on py27 and got results similar to my py36 ones.
Here's the raw data:
https://docs.google.com/spreadsheets/d/1ddNvHps92k2LDbFbRTY5hlOGp-K3xR1UpTUX2W4_NNw/edit?usp=sharing

  • py36: openSUSE Tumbleweed/Homebuilt PC (Intel Xeon E3-1245 v3 3800 MHz (4 cores) )
  • py27: openSUSE 42.3/Dell Inc. PowerEdge R410 (Intel Xeon L5640 2261 MHz (12 cores) )

@mtreinish
Owner

Ah, ok, you're using real hardware for running your testing. Try it on your laptop and see if you get similar results to mine. My results might be totally invalid, but you're running things on machines with a lot more resources than my laptop, so I'm wondering if this only shows up on slower hardware. In the meantime I'll try running the tests on my bigger machines too.

@masayukig
Collaborator Author

OK, I'll test it on my small virtual machine later.
I actually tested on my laptop workstation (Intel Core i7-6820HQ 3600 MHz (4 cores)) too, and I got similar results to the above (not like yours).

@masayukig
Collaborator Author

OK, I'll test it on my small virtual machine later.

I did that. The differences weren't as big as I expected:

w/o patch
% for i in {1..10}; do    time stestr list; done | grep "stestr list"  # for stestr
stestr list  0.96s user 0.10s system 98% cpu 1.080 total
stestr list  1.01s user 0.10s system 99% cpu 1.118 total
stestr list  1.00s user 0.12s system 99% cpu 1.130 total
stestr list  1.04s user 0.09s system 99% cpu 1.137 total
stestr list  1.01s user 0.14s system 98% cpu 1.161 total
stestr list  1.06s user 0.14s system 98% cpu 1.211 total
stestr list  0.96s user 0.16s system 98% cpu 1.139 total
stestr list  0.92s user 0.16s system 98% cpu 1.098 total
stestr list  0.98s user 0.11s system 98% cpu 1.109 total
stestr list  0.96s user 0.13s system 98% cpu 1.099 total
% for i in {1..10}; do    time stestr list; done | grep "stestr list" # for nova
stestr list  9.20s user 0.63s system 10% cpu 1:30.65 total
stestr list  9.56s user 0.66s system 99% cpu 10.256 total
stestr list  9.20s user 0.65s system 99% cpu 9.890 total
stestr list  9.01s user 0.60s system 99% cpu 9.650 total
stestr list  8.87s user 0.58s system 99% cpu 9.482 total
stestr list  8.79s user 0.61s system 99% cpu 9.427 total
stestr list  8.86s user 0.51s system 99% cpu 9.392 total
stestr list  8.91s user 0.54s system 99% cpu 9.477 total
stestr list  8.83s user 0.56s system 99% cpu 9.414 total
stestr list  8.82s user 0.60s system 99% cpu 9.447 total

w/ patch
% for i in {1..10}; do    time stestr list; done | grep "stestr list" # for stestr
stestr list  1.12s user 0.16s system 98% cpu 1.301 total
stestr list  1.09s user 0.08s system 98% cpu 1.182 total
stestr list  1.07s user 0.11s system 98% cpu 1.190 total
stestr list  1.05s user 0.12s system 98% cpu 1.185 total
stestr list  1.09s user 0.11s system 98% cpu 1.214 total
stestr list  1.04s user 0.13s system 99% cpu 1.187 total
stestr list  1.08s user 0.10s system 99% cpu 1.186 total
stestr list  1.06s user 0.11s system 98% cpu 1.184 total
stestr list  1.10s user 0.08s system 99% cpu 1.186 total
stestr list  1.08s user 0.09s system 98% cpu 1.180 total
% for i in {1..10}; do    time stestr list; done | grep "stestr list" # for nova
stestr list  9.13s user 0.57s system 99% cpu 9.730 total
stestr list  8.99s user 0.54s system 99% cpu 9.556 total
stestr list  8.93s user 0.58s system 99% cpu 9.545 total
stestr list  8.88s user 0.62s system 99% cpu 9.526 total
stestr list  8.84s user 0.56s system 99% cpu 9.429 total
stestr list  8.88s user 0.57s system 99% cpu 9.478 total
stestr list  8.98s user 0.51s system 99% cpu 9.518 total
stestr list  8.87s user 0.56s system 99% cpu 9.458 total
stestr list  8.87s user 0.54s system 99% cpu 9.441 total
stestr list  8.90s user 0.54s system 99% cpu 9.473 total

@coveralls

Coverage Status

Coverage decreased (-1.4%) to 61.398% when pulling fe66b21 on masayukig:use-cliff into 241584e on mtreinish:master.

@coveralls

Coverage Status

Coverage decreased (-1.4%) to 61.398% when pulling 865b222 on masayukig:use-cliff into 241584e on mtreinish:master.

@masayukig masayukig mentioned this pull request Dec 28, 2017
@coveralls

Coverage Status

Coverage decreased (-1.4%) to 61.398% when pulling a637125 on masayukig:use-cliff into ae58186 on mtreinish:master.

@mtreinish mtreinish merged commit 41972d0 into mtreinish:master Jan 10, 2018
@mtreinish
Owner

Sorry this took so long for me to get back to. I did some more manual testing for backwards compat and it seems fine for the most part. There are a few small differences with exit codes. There is also the shell-mode difference: running stestr bare now opens the interactive shell, while doing it before printed the usage (and said there were missing arguments). So we should probably say the next release will be 2.0.0 to account for these user-facing changes.

As for the performance, I'm still a bit concerned about it. Searching the entire python namespace for the entry points isn't free, and I think the size of our local python environments may be accounting for the differences in our testing. I still need to do some more testing on that front, and if it turns out to be a big enough issue we can always revert this.
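
For the exit-code differences mentioned above, a small harness along these lines could compare codes between the argparse and cliff builds; the argument sets are illustrative, not an authoritative compatibility matrix, and stdin is redirected so bare stestr's new interactive shell exits on EOF instead of hanging:

import subprocess


def exit_code(args):
    return subprocess.run(
        ['stestr'] + args,
        stdin=subprocess.DEVNULL,   # make the cliff shell exit on EOF
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL).returncode


# Run once per build (argparse vs. cliff) and diff the output.
for args in ([], ['last'], ['failing'], ['run', '--no-such-flag']):
    print(args, '->', exit_code(args))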

@mtreinish
Owner

I just ran some more numbers on my desktop (which is way more powerful than my laptop), and they provide some more interesting insights:

stestr unit tests:
Argparse:

stestr run  43.19s user 6.02s system 1089% cpu 4.518 total
stestr run  42.47s user 6.05s system 1072% cpu 4.523 total
stestr run  42.96s user 6.10s system 1084% cpu 4.526 total
stestr run  41.78s user 6.20s system 1065% cpu 4.503 total
stestr run  42.93s user 5.91s system 1038% cpu 4.701 total

Cliff:

stestr run  71.92s user 6.27s system 1017% cpu 7.686 total
stestr run  72.10s user 6.07s system 1016% cpu 7.693 total
stestr run  72.31s user 6.07s system 1025% cpu 7.647 total
stestr run  73.05s user 6.31s system 991% cpu 8.002 total
stestr run  71.83s user 6.11s system 1002% cpu 7.771 total

Nova unit tests:

Argparse:

stestr run  2178.49s user 25.79s system 1391% cpu 2:38.39 total
stestr run  2196.85s user 26.15s system 1420% cpu 2:36.52 total
stestr run  2186.40s user 24.76s system 1417% cpu 2:36.01 total
stestr run  2188.69s user 24.82s system 1420% cpu 2:35.84 total
stestr run  2198.71s user 26.17s system 1408% cpu 2:38.01 total

Cliff:

stestr run  2183.32s user 40.91s system 1275% cpu 2:54.35 total
stestr run  2185.94s user 25.57s system 1411% cpu 2:36.73 total
stestr run  2186.60s user 25.58s system 1409% cpu 2:36.98 total
stestr run  2195.86s user 26.52s system 1412% cpu 2:37.39 total
stestr run  2196.66s user 26.49s system 1417% cpu 2:36.86 total

The difference in run time for the stestr test suite is still concerning to me; with the cliff patch it's ~70% slower. I'll ask some other people to do some performance comparisons to give us a better idea of what's going on.

@masayukig
Collaborator Author

OK, thanks. I'm fine with reverting if it turns out to be a significant issue.
Let's see.

@masayukig masayukig deleted the use-cliff branch January 11, 2018 07:22
@masayukig masayukig mentioned this pull request Jan 11, 2018