Support for multiple language runners, or sharing specs across projects #827
Any plans on implementing this?
In the case of multiple step implementations for the same set of specifications (e.g. testing a mobile app written in Swift and Kotlin for iOS and Android), this currently means using a script to generate/replace the manifest.
Proposal: allow the option to install multiple language runners, for example Swift and Kotlin.
This will modify the manifest as follows:
The first one is the default runner; it is used by the plain run command.
To run the Kotlin runner, pass an option to the run command.
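The code snippets originally attached to this proposal did not survive; a rough sketch of the idea is below. Note that the multi-entry `Languages` list and the `--language` flag are hypothetical, proposed extensions, not part of Gauge today (the real `manifest.json` has a single `Language` string):

```shell
# Hypothetical manifest for the proposal: "Languages" as a list instead
# of today's single "Language" string. The first entry is the default.
cat > manifest.json <<'EOF'
{
  "Languages": ["swift", "kotlin"],
  "Plugins": ["html-report"]
}
EOF

# Default run would use the first listed runner (swift):
#   gauge run specs
# A hypothetical flag would select the kotlin runner instead:
#   gauge run --language kotlin specs
```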
@zabil This seems like a nice solution. Here is the exact use case I have in mind that I would love for Gauge to support: my company has both iOS and Android apps. As a quality engineer, I would like to:
To put the icing on the cake, if reports could be adapted to show test results depending on the platform they were run on, that would be the next level of awesomeness. I know this is probably a far-fetched idea given the current backlog and roadmap, but if it becomes supported at some point in the future, this will be a dream framework for me (and probably many others).
I have a different scenario, but one where I think this solution would help me as well. We have a lot of different products, many of them Windows-heavy C# ones, while others are web-based. At the moment we use Gauge with the C# runner and Selenium for our web tests, but I've started looking into Taiko and it seems great. So what I would like to do is:
I created a POC project that supports multiple language runners by proxying through a main language runner (Java). Not a great approach in practice, though.
There are two features requested here:
I am interested in feature number 2. My take on this would be that we have the configuration as mentioned by @zabil:
If the command is run something like this, we execute both language runners: the main Gauge application will broadcast the step execution to the runners, and at most one should return success for the step. If the step is not found by either runner, the test fails; if the step is found on more than one runner, the test also fails.
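The arbitration rule proposed above (exactly one runner must claim a step) can be sketched as follows. This is an illustrative assumption, not Gauge's actual dispatch code; each "runner" is faked as a list of step names it implements:

```shell
# Two fake runners, each declaring the steps it implements.
# "check_title" is deliberately implemented by both to show the
# ambiguity failure; "missing_step" is implemented by neither.
js_steps="open_browser check_title"
py_steps="call_api check_title"

# Count how many runners implement a given step.
matches_for() {
  step="$1"; count=0
  for steps in "$js_steps" "$py_steps"; do
    case " $steps " in *" $step "*) count=$((count + 1)) ;; esac
  done
  echo "$count"
}

# Broadcast a step and apply the rule: pass only on exactly one match.
run_step() {
  n="$(matches_for "$1")"
  if [ "$n" -eq 1 ]; then
    echo "PASS: $1"
  elif [ "$n" -eq 0 ]; then
    echo "FAIL: $1 not implemented by any runner"
  else
    echo "FAIL: $1 implemented by $n runners (ambiguous)"
  fi
}

run_step open_browser
run_step check_title
run_step missing_step
```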
Closing this as an old issue. However, any PRs to fix this will be merged.
Hi, we are currently trying to migrate our automation framework to Gauge + Taiko, where we have a requirement for UI automation in JavaScript and backend component automation in Python. It looks like there is already some discussion of the same requirement in this thread. Is there any plan for rolling out this feature in an upcoming release? If not, is there any workaround for my use case? Thanks in advance!
Hi @RanjithkumarE - as of now there are no plans to build this feature. Honestly, Gauge is meant for user journey/acceptance testing, which, if you think about it, is agnostic of the components of the application. That said, in your case you can always have two test suites, one in JavaScript and another in Python.
Thank you for your input! Two suites in the sense of two different projects with different manifest files, or is there a way to have implementations in different languages within the same project?
I meant two independent projects. There isn't presently a mechanism to have different implementation languages in one Gauge project.
Hi @RanjithkumarE, I use Gauge for a very specific project which needs to do the same thing: run test suites in two different languages. In my case, we have a bash script that writes the manifest file just before running the tests; that way, the manifest is created according to the language we need to use to run the tests. Such a workaround allowed two suites to coexist in the same project, but as @sriv mentioned, it's also important to evaluate whether that approach makes sense in your context. In case you want to check out the bash script and the rest of the project, here it is: https://github.com/aceleradora-TW/trilha-de-exercicios/blob/master/gaugectl
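A minimal sketch of this manifest-rewriting workaround is below. It is an assumption-laden simplification of the linked gaugectl script, not a copy of it; the runner names and the `html-report` plugin are examples:

```shell
# Sketch of a wrapper that rewrites manifest.json before each run so
# Gauge picks the desired runner. Defaults to "js" when no argument
# is given; "js" and "python" are example runner names.
lang="${1:-js}"

case "$lang" in
  js)     runner="js" ;;
  python) runner="python" ;;
  *)      echo "usage: gaugectl {js|python}" >&2; exit 1 ;;
esac

# Overwrite the manifest with the chosen language.
cat > manifest.json <<EOF
{
  "Language": "$runner",
  "Plugins": ["html-report"]
}
EOF

# gauge run specs   # commented out: assumes gauge is installed
```

The trade-off is that only one runner is active per invocation, so a single scenario still cannot span both languages; the script merely lets the two suites share one project and one set of specs.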
Thank you @yrachid, I have gone through your workaround, and it's really nice. We also thought of doing something like this during our initial research. However, we have use cases where an end-to-end test would require both JavaScript (Taiko UI tests) and Python (backend) implementations, which would require the manifest to support a list-like structure for multiple language implementations, handled internally by Gauge.
(Tagging @olejkavn) If this feature were to be implemented, there are some design decisions to be made. Some initial thoughts:
And so far I haven't gone into the IDE support. Gauge's VS Code support is via the Language Server Protocol, so in order to support features like go-to-definition, it would need to know much more than it does today. Honestly, I would need to experiment before I can even say whether this is possible. If we can thrash out some of these, we can probably come up with an approach to this.
Expected behavior
Ability to test projects that contain heterogeneous components (multiple runtime languages) with compatible Gauge language runners, preferably with scenarios that span languages, since the system under test is itself interconnected.
In a similar vein, the ability to test potential component designs in different languages without having to copy/paste all the test specifications/data between Gauge projects, or hack the Gauge project language in the manifest.
Actual behavior
When testing projects that contain multiple language components (e.g. heterogeneous microservices), or when trying to provide a language-neutral test framework to multiple dev teams (e.g. to test multiple PoCs) where tests require language-specific implementations, Gauge forces the use of multiple test projects. This prevents cross-language test scenarios and/or causes significant repetition of test specs between projects, violating the DRY principle.
Possibly related
#160