Some problems with using shellspec for integration tests #121
Comments
Hi, I have read this issue and am considering adding two features (change working directory and new formatter). However, since they may be difficult to implement by design, it may be more appropriate to run multiple instances of ShellSpec (e.g., using Make or GNU parallel).

working directory of the test
We will need to accommodate the new directory structure. Each (App) directory may contain subdirectories, so I think we need a way to know the base directory from which to run the tests. We might be better off using other tools like Make, but then we will struggle with parallel execution and formatter output.

formatter
I think a new formatter that outputs only each specfile name with its results is a good idea. As you may have noticed, ShellSpec allows you to specify separately which formatter is used for the display output and which for the generated report files.

Order of test results
In ShellSpec, the output is displayed in the same order in parallel execution as in serial execution. This ordering is done by the parallel executor, not by a formatter, so if you want output at the moment each test starts executing, you cannot achieve this with a custom formatter alone.

Test execution time
The profiler feature of ShellSpec is implemented in a tricky way: ShellSpec counts numbers in the background while running the tests when the profiler is enabled, and each test's time is calculated from the total execution time and the count taken during that test's execution. It is possible to get the time in milliseconds with bash, ksh and zsh, but that would need to be implemented additionally.

Test title
ShellSpec allows you to give multiple top-level titles, so I think the file name is better.

If I were to implement it myself
I would make a formatter that simply alternates between the name of each file and the results of the tests it ran (with no test run time). What do you think?

How to make a custom formatter
It should work, but I've barely tested it, and I also haven't documented it, which means changes are likely to be introduced in the future. This directory may be helpful. There is also a (sparsely explained) document of the reporting protocol. I'm sorry, I know there is a lot left to explain.
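The "run multiple instances" suggestion above can be sketched as a small shell script: one runner process per `App` directory, executed in parallel, with each job `cd`-ing into its directory first (which is what the requested `-C` behaviour would do). This is a hedged sketch, not ShellSpec's API: a stub runner (`echo`) stands in for `shellspec` so the pattern itself is demonstrable, and the fixture directories are created on the fly.

```shell
#!/bin/sh
# Sketch of running one test-runner instance per App directory in parallel.
# A stub runner ("echo") replaces shellspec here so the pattern is
# demonstrable; in real use set RUNNER=shellspec.
set -e
RUNNER=${RUNNER:-echo}

# Demo fixture: two App directories, each with its own spec file.
tmp=$(mktemp -d)
mkdir -p "$tmp/App1" "$tmp/App2"
: > "$tmp/App1/test_spec.sh"
: > "$tmp/App2/test_spec.sh"

# Up to 4 jobs in parallel; each job first cds into its App directory.
# Output order is nondeterministic under -P, hence the sort.
result=$(
  find "$tmp" -name test_spec.sh |
    xargs -P 4 -I{} sh -c \
      'cd "$(dirname "{}")" && '"$RUNNER"' "$(basename "$(pwd)")/$(basename "{}")"' |
    sort
)
echo "$result"
rm -rf "$tmp"
```

The trade-off, as noted above, is that an external driver like this loses ShellSpec's unified formatter output and final summary across all suites.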
Thanks for a fast answer! Reading through it, I started to think that maybe it is more reasonable to use a combination of Make and ShellSpec, although such a mixed setup has its own problems. Still, I am reiterating on the possibility of achieving this with ShellSpec alone.

working directory of the test
My idea is as described above: `cd` to the directory of each spec file before executing it.

formatter
I know about the possibility of specifying separate formatters for the display and for the generated files.

Order of test results
Right, but couldn't the executor make a callback to the formatter with the current progress when each test starts?

Test execution time
I am not sure what accuracy is relevant here, so maybe the issue of milliseconds is not an issue. Still, I understand that with profiling on, the junit formatter prints the time of each test, even if that information is only available once all tests are done (not just the given test).

Test title
I agree the file name (or rather the relative path) is better.

If I were to implement it myself
Looks fine, although I think it doesn't make much sense for such a formatter to print more than a summary: failed, passed, skipped.

How to make a custom formatter
Thanks. Given that you already kind of agreed to take care of it yourself, and that the solution may go beyond the formatter itself, I probably will not try doing it myself after all.
I realised that trying to use a combination of Make and ShellSpec brings problems of its own.
Actually, maybe not... I found a way to work around it.
I believe that your demonstration now works with #135 and #137 merged into master.

The formatter-related implementation will come later than the next version, 0.28.0. After addressing some options, I will release 0.28.0 (this weekend?).
Problems with
Fixed in 6813789. The problem was in the path handling: I changed it to normalize paths to relative ones before processing, but I needed to make an exception when the path is the same as the project root directory.
Because the integration tests are so slow, I mainly rely on unit tests. The equivalent of integration testing is done here, separately from the main shellspec tests, but only to a small extent. Currently I am expanding the scope of the unit tests a bit; eventually I will cover the parts that cannot be unit-tested with integration tests.
I'll comment later. For now I have fixed the bug.
Thank you very much! I confirm the bug is fixed. I see that the paths in the JUnit report are also fine and that you improved the relevant description as well.
For compatibility, I changed it to a warning rather than an error for the next release: 7d6c666.
I also renamed the global options file.
@ko1nksm Thanks, looks fine!
What I want to achieve
I am considering using your great project to drive integration/system tests for a project I am responsible for in the NA61/SHINE experiment at CERN.
I have a directory structure like this:

Each `App` directory would contain a single `test_spec.sh` file defining our expectations towards the given app. There would be on the order of 100 apps. These apps may actually run quite long (from seconds to several minutes each), so it is quite important to run them in parallel. Each `test_spec.sh` file would essentially define a separate and independent test suite with multiple example blocks.

The tests would be run with GitLab CI, so both a terse output showing the progress online and the junit generator would be needed.
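On the GitLab CI side, collecting a JUnit report is a matter of configuration. The sketch below is a hedged assumption, not a verified setup: the job name and report path are placeholders, and the exact ShellSpec flags for selecting a display formatter versus a report generator should be checked against the ShellSpec documentation; only the `artifacts:reports:junit` key is standard GitLab CI.

```yaml
# Hypothetical GitLab CI job; flags and report path are assumptions to verify.
integration-tests:
  script:
    - shellspec --format tap --output junit   # display: tap; file report: junit (assumed)
  artifacts:
    when: always
    reports:
      junit: report/results_junit.xml         # assumed output location
```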
The goal is to run `shellspec` in the top directory, executing all tests in the subdirectories. It would also be nice to be able to execute it from a subdirectory.

The demonstration of my use case
I prepared an example in the https://gitlab.com/amarcine/test-shellspec repository. The apps are scripts, but in real life these would be compiled programs or sets of xml files constituting an input to the main executable of the project (available in the $PATH when `shellspec` is executed). If you grep for "NOTE", you will find my comments regarding what I would like to have and what I dislike in the current behaviour.

The problems
working directory of the test

Unfortunately my demonstration does not currently work, because each test is run from the directory in which `shellspec` was executed, while the executables under test expect to be run from the directories they reside in (App1 etc.). I understand that I could make this work by putting a `cd` with the relative path to the App directory in each `test_spec.sh` file, but this is very ugly. Furthermore, it would then not be possible to execute `shellspec` from the App directory, and the test would become location dependent (it could not be moved).

I understand that currently there is no such feature, but would it be possible to add an option to `shellspec`, e.g. `-C` like in `tar` or `make`, to `cd` to the directory of each spec file before executing it?

formatter
Right now, using any formatter which prints messages about which tests are run (documentation, tap), I have 2 problems:

For unit test suites our project uses `ctest`. I show a (truncated) example of its output below. What I like about this formatting is that I get a message when the test starts and another when it ends. I also get the progress expressed as `sequential number of the current test / number of all tests`. I only get the information about which test fails, not the failure report itself, which is fine, because that appears in the xml report. Last but not least, I get the timing for each test.

I would like to get something similar with `shellspec`. The "tests" printed should correspond to the level of parallelization, i.e. a spec file, so instead of printing the titles of all examples, I should get only the top example group title. So instead of a full report this would rather be a terse summary of all test suites run.

Looking at the documentation output I have the impression that this should be possible, but I couldn't understand how to implement a custom formatter myself. Could you give some instructions on how to do that (unless you find it a nice feature to have and implement it yourself)?
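Until such a terse formatter exists, one workaround is to post-process TAP output outside the runner. The sketch below does not touch any ShellSpec-internal API: `tap_summary` and the sample lines are hypothetical, and it only relies on standard TAP text (`ok` / `not ok` lines, `# SKIP` directives) arriving on stdin.

```shell
#!/bin/sh
# Sketch: reduce TAP-formatted results to a terse passed/failed/skipped
# summary. Post-processes plain TAP text; no ShellSpec internals involved.
tap_summary() {
  awk '
    /^ok/     { if ($0 ~ /# *(SKIP|skip)/) skipped++; else passed++ }
    /^not ok/ { failed++ }
    END { printf "passed: %d, failed: %d, skipped: %d\n", passed+0, failed+0, skipped+0 }
  '
}

# Demo with a few literal TAP lines (hypothetical test names):
summary=$(printf '%s\n' \
  'ok 1 - App1 starts' \
  'not ok 2 - App1 output matches' \
  'ok 3 - App2 # SKIP not built' |
  tap_summary)
echo "$summary"
```

In a real pipeline this would be fed from the runner's TAP formatter output, one summary per spec file.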
The `ctest -j 8` example output