Command-line flag to run specs multiple times #795
Comments
afeld
Feb 19, 2013
I'm currently just using a loop in bash to do this:

```shell
for i in `seq 10`; do rspec spec; done
```

but with a lot of `puts` from the server, it's hard to pull out the test results for each run from the noise.
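One way to cut through that noise is to log each run separately and then pull out just the summary lines. A sketch under my own assumptions: `log_each_run` is a hypothetical helper name, and the log file names are arbitrary.

```shell
# Hypothetical helper: run a command N times, logging each run to its own
# file, then print the RSpec summary line ("N examples, M failures") from
# every log.
log_each_run() {
  local runs=$1; shift
  for i in $(seq "$runs"); do
    "$@" > "rspec-run-$i.log" 2>&1 || true   # keep going even when a run fails
  done
  grep -H "examples," rspec-run-*.log
}

# e.g. log_each_run 10 rspec spec
```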
afeld
Feb 20, 2013
My rspec output is separated by a gajillion lines of logging, enough to push the output of previous runs out of the shell history. I suppose when using `for` I could
- redirect output from rspec (separate from app output, if that's possible?), or
- redirect all output to a file and then grep it for the results
...though it seems like this issue might come up for enough people that `-n` (or equivalent) would be a generally useful flag. Where `--order rand` gives some confidence of idempotence between individual specs, this would do the same for multiple runs of the same test.
dchelimsky
Feb 20, 2013
Member
@afeld what sorts of intermittent failures are revealing themselves when you run examples in a loop?
afeld
Feb 20, 2013
We have a single-page-ish app, so with specs that drive Selenium there are sometimes race conditions where the test is checking for the presence of an element on the page that gets created via JS (and sometimes needs to wait on AJAX). We're using Capybara, but it's a bit opaque as to when it waits for things to complete and when it doesn't.
Is there a way to capture the rspec output specifically (in the shell or programmatically)? I'm happy to create a wrapper gem for this if it doesn't seem appropriate in core. I shall call it am_i_nuts_or_did_i_just_see_that_spec_fail.
cc: @jnicklas
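For what it's worth, the core of such a wrapper is tiny. A minimal sketch, with all names hypothetical: `tally_flaky_runs` is made up, and `run_suite` stands in for actually invoking rspec (e.g. via `system("rspec spec")`), returning true on success.

```ruby
# Hypothetical sketch of the wrapper's logic: run the suite N times and
# report whether the results were inconsistent (i.e. flaky).
def tally_flaky_runs(runs, &run_suite)
  failed = Array.new(runs) { run_suite.call }.count(false)
  { runs: runs, failed: failed, flaky: failed.between?(1, runs - 1) }
end

# Deterministic stand-in for rspec: the third run fails, the rest pass.
outcomes = [true, true, false, true, true].each
tally_flaky_runs(5) { outcomes.next }
```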
dchelimsky
Feb 20, 2013
Member
> Is there a way to capture the rspec output specifically (in the shell or programmatically)? I'm happy to create a wrapper gem for this if it doesn't seem appropriate in core.
I'd say a wrapper gem makes sense. We used to do this in rspec-rails, but I don't think it does anymore. Cucumber probably does so you can peek at that for ideas.
afeld
Feb 20, 2013
Will do. If anyone else knows of other examples that hook into rspec to retrieve the results, please let me know!
afeld
closed this
Feb 20, 2013
afeld
Feb 20, 2013
Apparently I wasn't the first person to think of this: https://github.com/oggy/respec, https://github.com/dblock/rspec-rerun and #456 all refer to re-running failed examples... I'll probably do a pull request on one of them to have an option to run X times and report which tests are flaky.
taylor
Oct 1, 2014
Try this for seeing when things fail:

```shell
for i in `seq 50` ; do rspec spec ; [[ ! $? = 0 ]] && break ; done
```
avit
Jan 20, 2015
Contributor
Running the test suite repeatedly (in the same process) would also be helpful for tracing memory leaks.
lionelperrin
Oct 9, 2015
Hello,
I just discovered this request while looking for the same option.
The ability to repeat a test suite exists in several test frameworks; for instance https://code.google.com/p/googletest/wiki/AdvancedGuide#Repeating_the_Tests.
This is useful for detecting memory leaks and, in general, for checking that resources are correctly managed (files closed as expected, no stray child processes or threads, etc.). Spawning the suite several times from a bash script is not suitable in these cases, because cleanup will be performed by the OS between runs.
In addition, as stated in this issue, the output of such a bash script is not very user friendly.
Regards,
Lionel
avit
Oct 9, 2015
Contributor
@lionelperrin the linked issue #1852 that duplicates this one also suggests running repeated tests out-of-process. I agree, this doesn't address the use case of testing for object allocations/memory leaks, or for repeatability between test suite setup & teardown.
JonRowe
Oct 12, 2015
Member
If you want to check for object allocations or memory leaks, you're better off writing a test that checks for that specifically, instantiating and measuring within that spec; that way you eliminate the extra noise that RSpec itself introduces into the environment.
|
If you want to check for object allocations / memory leaks you're better off writing a test that checks for that specifically by instantiating and measuring within that spec, that way you eliminate the extra noise that RSpec itself introduces into that environment. |
This comment has been minimized.
Show comment
Hide comment
This comment has been minimized.
JonRowe
Oct 12, 2015
Member
Additionally, you could temporarily wrap a spec you want to repeat in a Ruby iterator, so there's another easy way to do it, e.g.:

```ruby
ENV.fetch('REPEAT', 1).to_i.times do |i|
  specify('flakey spec') { ... }
end
```

Or implement it with an around hook:

```ruby
around(:example, flakey: true) do |ex|
  ENV.fetch('REPEAT', 1).to_i.times do |i|
    ex.run
  end
end
```
lionelperrin
Oct 13, 2015
Thank you for these ideas. It is true that it is possible to test allocations/leaks with a workaround.
I understand that I can't expect an implementation of a standard 'repeat' option in future versions.
I'll go for a simple repeat loop then...
avit
Oct 13, 2015
Contributor
> you're better off writing a test that checks for that specifically by instantiating and measuring within that spec

That's assuming one knows where the problem is happening. What I was suggesting could make test suites more helpful as a way to narrow that down.

An `around(:all)` block is helpful for one example group. I guess what I would be looking for is something like `around(:suite)`.
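In the absence of `around(:suite)`, the existing `before(:suite)`/`after(:suite)` config hooks can approximate a suite-level measurement. A rough sketch under my own assumptions: the `heap_growth` helper, the slack threshold, and the wiring are all hypothetical, though `GC.stat(:heap_live_slots)` is a real MRI statistic.

```ruby
# Hypothetical helper: flag suspicious heap growth across the whole run.
# The slack allowance is arbitrary; tune it for your suite.
def heap_growth(before_slots, after_slots, slack: 10_000)
  growth = after_slots - before_slots
  { growth: growth, suspicious: growth > slack }
end

# Wiring sketch (before(:suite) and after(:suite) are real RSpec config hooks):
#   RSpec.configure do |c|
#     c.before(:suite) { $slots_before = GC.stat(:heap_live_slots) }
#     c.after(:suite) do
#       report = heap_growth($slots_before, GC.stat(:heap_live_slots))
#       warn "heap grew by #{report[:growth]} live slots" if report[:suspicious]
#     end
#   end
```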
JonRowe
Oct 14, 2015
Member
That wouldn't really help: there's too much noise, since adding a spec would increase the allocations each time. You need to measure per example; you could use an `around(:example)` hook to achieve that for all examples if you really must, of course.
duane
Apr 12, 2017
This would be highly useful for profiling memory and performance issues.
cconstantine
Feb 21, 2018
I could use this as well, for memory testing.
JonRowe
Feb 21, 2018
Member
As previously stated, you're better off profiling directly, or having one test that does things multiple times to look for memory leaks and performance issues. Running specs multiple times on our side would introduce a lot of extra noise from our own instantiation.
cconstantine
Feb 22, 2018
I disagree.
I just spent two days attempting to find a memory leak that didn't have a clear origin. I was hoping to add memory profiling code to the global RSpec hooks (before/after/around callbacks), but that is really only useful if the test code runs multiple times. I could spend time adding an `N.times { test_stuff }` to each individual test, or I could add a flag to the command line.
It would also be really helpful for finding flaky tests, which is a constant struggle.
You may disagree (and, as an owner, you have final say), but I'd really rather my test framework have a useful feature than be forced to hack another solution up. Another option would be to add explicit support for memory profiling/testing. I'll add another issue/feature request for that.
dchelimsky
Feb 22, 2018
Member
Just curious @cconstantine - have you checked out https://github.com/dblock/rspec-rerun? I have no idea if it's still usable, but it might give you what you want.
cconstantine
Feb 22, 2018
That might work if I wanted to force every test to fail.
What I want is almost an inverse of that gem; keep running tests (up to N times) until it fails.
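That inverse is straightforward to sketch in the shell. The helper name `repeat_until_fail` and the cap are my own choices:

```shell
# Hypothetical helper: run a command up to N times, stopping at the first
# failing run and reporting which run it was.
repeat_until_fail() {
  local n=$1; shift
  for i in $(seq "$n"); do
    if ! "$@"; then
      echo "failed on run $i of $n" >&2
      return 1
    fi
  done
  echo "passed $n consecutive runs"
}

# e.g. repeat_until_fail 20 rspec spec
```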
JonRowe
Feb 22, 2018
Member
If you're setting up your memory profiling code in `around` hooks, it's also possible to run the examples as many times as you like:

```ruby
around(:example) do |example|
  MyMemoryProfiler.do
  n.times { example.run }
end
```

If you want to run until failure, something like:

```ruby
around(:example) do |example|
  failed = false
  until failed
    example.run rescue failed = true
  end
  # whatever logic you want to run to make it "refail" etc.
end
```
lumpidu
Mar 13, 2018
I don't understand the hesitation here to accept a really useful addition to RSpec. A command-line parameter `-n` would give us the possibility of running every spec, or a combination of specs, multiple times without having to change the specs in advance via an `around` hook. We use RSpec to test device firmware and to automatically download logs from the device in case of a failure. It is an important part of our test process to repeat tests that sometimes show failures, and depending on the changes inside the firmware under test, we repeat different specs differently. It's rather hacky to require passing environment variables to specs, or changing a single line of a spec, just to get such an important feature into an otherwise excellent tool. Also, repeating a complete spec run via a shell `for` loop is a bummer: we copy ssh keys, remove data and logs from the device, etc. in the `before(:suite)` hook.
JonRowe
Mar 13, 2018
Member
> I don't understand the hesitation here to accept a really useful addition to rspec.

- Maintenance burden.
- Foreseen internal changes required to do it.
- Unforeseen internal changes required to do it.
- Formatter changes to handle a new output status for a spec that both passed and failed.
- It's simply not a previously designed use case of RSpec; it would be hacky to implement.
xaviershay
Mar 13, 2018
Member
I'll also add:
- We already have a very wide configuration API. The further we expand it the more unwieldy it becomes for users.
- At this point we generally require new features to be implemented first as extension gems, and then to see support, before considering including them in core.
myronmarston
Mar 13, 2018
Member
> At this point we generally require new features to be implemented first as extension gems, and then to see support, before considering including them in core.
To @xaviershay's point, does someone want to take a stab at implementing this as an extension gem? If there are any APIs you'd need to do so, let us know.
afeld
Feb 19, 2013
I have some integration tests that are failing inconsistently, and as I'm fixing them I'd love to be able to run specific specs multiple times to see which still fail intermittently. Something like the following:
Thoughts?