Using a declarative data-driven approach to organize the test suite (like TestML) #5
I suggest we use a declarative data-driven format to organize the test cases in each individual test file, so that it is independent of the actual test runner and framework. A good test specification syntax I've been using myself for many years is TestML:

http://testml.org/specification/language//index.html

This way we are free to choose whatever way, or combination of ways, to run the tests without touching the test files themselves. It also makes it easy to add unit tests for LuaJIT's internal pipeline (like individual optimization passes of the JIT compiler and the intermediate IR form) in the future.

I hope we can draw a clean line between test case representation and test running strategies.
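For illustration only, here is a minimal sketch of what such a declarative test block might look like, loosely modeled on the `=== TEST` / `--- section` style of OpenResty's Test::Base-based test files. The section names `--- lua` and `--- out` are assumptions made for this sketch, not fixed TestML syntax:

```
=== TEST 1: tostring() on numbers
--- lua
print(tostring(42))
print(tostring(0.5))
--- out
42
0.5
```

The file then contains nothing but data: input code and expected output. How the blocks are executed and checked is left entirely to whatever harness reads the file.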
Very interesting! I was also a bit worried that a test library would influence the behaviour of the tests too much. Just the call to a testing routine can already change LuaJIT's behaviour and thus maybe conceal bugs. This approach looks very promising, IMHO.
@ladc Glad you like it. We've been using a similar approach to organize the test suite for OpenResty (NGINX + LuaJIT) for years already: https://github.com/openresty/lua-nginx-module/tree/master/t It works out pretty well. But OpenResty relies on the Perl variant of TestML, which still requires a short Perl code prologue in each of the test files. For LuaJIT's test suite, I hope we can use TestML without any Perl in the test files, so that we can run them as-is with whatever test runners we can think of (Lua/shell, Perl, Python, etc.). Just for reference, I've been writing about the declarative data-driven testing approach used in OpenResty here: https://openresty.gitbooks.io/programming-openresty/content/testing/index.html
@agentzh Thanks for the references. It seems like the tests would also be easy to parallelize with this approach. But indeed, I guess we want to avoid a dependency on Perl :-) BTW, I think busted and co. are valuable tools for testing a library, but a framework for testing a compiler/interpreter should be external and should not require one to modify the code.
@ladc No, we won't depend on Perl. What I'm saying is that OpenResty just sticks with the Perl variant of TestML. TestML itself is just a dead-simple DSL. It can and should be agnostic to the test harness implementations.
@ladc We could ship a Lua/shell test harness by default with the test suite, but also allow 3rd-party test harnesses with much more powerful features to directly run the same test suite without any modifications. That's what I'm saying. Also, I could quickly code up a Perl harness so that we have something runnable very quickly. We can always replace the Perl harness with a pure Lua/shell equivalent later, without reworking any test files. This is the beauty and power of data-driven design and separation of concerns, isn't it? ;)
@agentzh OK, then I'm all for it! I guess it wouldn't be too hard to write a Lua test harness.
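As a rough illustration of how little code a basic pure-Lua runner would need, here is a minimal sketch. It assumes the hypothetical `=== TEST` / `--- lua` / `--- out` block format from the example above; it is not anyone's actual prototype:

```lua
-- minimal_harness.lua: a sketch of a pure-Lua runner for the
-- hypothetical `=== TEST` / `--- lua` / `--- out` block format above.
-- Usage: luajit minimal_harness.lua spec-file.t

local function parse_blocks(path)
  local blocks, cur, section = {}, nil, nil
  for line in io.lines(path) do
    local name = line:match("^=== TEST %d+: (.*)$")
    if name then
      cur = { name = name, lua = {}, out = {} }
      section = nil
      blocks[#blocks + 1] = cur
    elseif cur and line == "--- lua" then
      section = "lua"
    elseif cur and line == "--- out" then
      section = "out"
    elseif cur and section then
      cur[section][#cur[section] + 1] = line
    end
  end
  return blocks
end

local path = assert(arg[1], "usage: luajit minimal_harness.lua <spec-file>")
for i, b in ipairs(parse_blocks(path)) do
  -- capture print() output in-process; a harness could instead run each
  -- block in a fresh subprocess and parse its stdout (see below)
  local captured, real_print = {}, print
  print = function(...)
    local parts = {}
    for j = 1, select("#", ...) do parts[j] = tostring(select(j, ...)) end
    captured[#captured + 1] = table.concat(parts, "\t")
  end
  local chunk = loadstring(table.concat(b.lua, "\n"))
  local ok = chunk and pcall(chunk)
  print = real_print
  local got = table.concat(captured, "\n")
  local want = table.concat(b.out, "\n")
  io.write(string.format("%s %d - %s\n",
    (ok and got == want) and "ok" or "not ok", i, b.name))
end
```

The output follows a TAP-like "ok / not ok" convention, so existing TAP consumers could aggregate the results; that choice is incidental to the sketch.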
@ladc It's just harder for me to use Lua than Perl for this. Perl is born for text processing anyway :) I'll commit a quick Perl prototype today or tomorrow so that we can play with it immediately :)
@agentzh Great, thanks!
Can you show an example of what some LuaJIT tests would look like in TestML? It's hard to compare using the nginx tests, because they're testing something higher-level.
@fsfod Sure. It should not be too far from this test file for lua-cjson: https://github.com/openresty/lua-cjson/blob/master/tests/agentzh.t, but without everything above the […]
Is print overridden, or are you parsing everything out of the console?
@fsfod It depends on the concrete test harness, of course. For this particular harness used in OpenResty's fork of lua-cjson, the expected output is parsed directly from stdout (it could be done another way if we wished, of course).
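A sketch of the stdout-parsing variant, for contrast with the in-process print override shown earlier. It assumes a `luajit` binary on PATH and a POSIX shell; all names are illustrative:

```lua
-- Run a test chunk in a fresh LuaJIT subprocess and capture its stdout,
-- so the harness itself cannot perturb the VM under test (sketch).
local function run_chunk(lua_code)
  local tmp = os.tmpname()
  local f = assert(io.open(tmp, "w"))
  f:write(lua_code)
  f:close()
  local p = assert(io.popen("luajit " .. tmp .. " 2>&1"))
  local output = p:read("*a")
  p:close()
  os.remove(tmp)
  return output
end

assert(run_chunk("print(1 + 2)") == "3\n")
```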
@fsfod We need to be careful about using native […] Well, it depends on the actual test case context, I think :) We might need both.
I think a mixed approach would be best: […]

Here's my usual work-flow when some test fails: […]

One has to understand that we're testing an interpreter + compiler. If something goes wrong, the result is often undefined/random, or the VM crashes. Even for an […] That's why I don't think a print/expect style is a good idea for testing LuaJIT. Also, it becomes impractical for checking results inside loops. You really don't want to embed hundreds or thousands of lines of expected results. Thus you need asserts, anyway. That would mean two different mechanisms, which increases complexity.

Related: moving everything over to […]

I found it tricky enough to test for multiple architectures and their variants (soft-float/hard-float, ABI differences), because this requires matching […]

I still think the test suite needs to be refactored, because only loosely related tests have been thrown together. OTOH, keeping strongly related tests together is easier with a declarative approach. The test runner helps in isolating the sub-tests, if needed. The test runner could also have a 'hard' (slow) mode, which shuffles the sub-tests. A similar analysis is needed for crashes, where the runner needs to locate the failing sub-tests by running only specific parts.
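To illustrate the point about loops: with an assert-style check, a loop over thousands of iterations stays a few lines of test code, whereas a print/expect style would need one expected-output line per iteration. A trivial sketch:

```lua
-- assert-style: ten thousand checked results, no expected-output lines
local t = {}
for i = 1, 10000 do t[i] = i * i end
for i = 1, 10000 do
  assert(t[i] == i * i, "mismatch at index " .. i)
end
-- a print/expect style would need 10000 lines of expected output here
```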