
Support for multiple language runners, or sharing specs across projects #827

Closed · ghost opened this issue Oct 2, 2017 · 14 comments

ghost commented Oct 2, 2017

Expected behavior

Ability to test projects that contain heterogeneous components (multiple runtime languages) with the compatible Gauge language runners, preferably with scenarios that span languages, since the system under test is itself made up of interconnected components.

Similarly, the ability to test potential component designs in different languages without having to copy/paste all the test specifications/data between Gauge projects, or hack the Gauge project language in the manifest.

Actual behavior

When testing projects that contain components in multiple languages (e.g. heterogeneous microservices), or when trying to provide a language-neutral test framework to multiple dev teams (e.g. to test multiple PoCs), and the tests require language-specific implementations, Gauge forces the use of multiple test projects. This prevents cross-language test scenarios and/or causes significant repetition of test specs between projects, violating the DRY principle.

Possibly related

#160

@singhverse

Any plans on implementing this?

zabil (Member) commented Feb 13, 2019

hack the Gauge project language in the manifest.

In the case of multiple step implementations for the same set of specifications, e.g. testing a mobile app written in Swift and Kotlin for iOS and Android:

This means using a script to generate/replace manifest.json before running the Gauge specifications. A Gauge project can have multiple step-implementation directories and switch manifests. (Gauge does not support Swift or Kotlin; this is just a sample.)

manifest-swift.json

{
  "Language": "swift",
  "Plugins": [
    "html-report"
  ]
}

manifest-kotlin.json

{
  "Language": "kotlin",
  "Plugins": [
    "html-report"
  ]
}
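
A minimal sketch of such a switch script (the wrapper name run-with.sh and its argument handling are illustrative; it assumes the two manifest files above sit in the project root alongside manifest.json):

#!/usr/bin/env bash
# run-with.sh -- copy the per-language manifest into place, then run the specs.
# Usage: ./run-with.sh <swift|kotlin> [extra "gauge run" arguments...]
set -euo pipefail

lang="${1:?usage: ./run-with.sh <swift|kotlin> [gauge run args...]}"
shift

manifest="manifest-${lang}.json"
if [[ ! -f "$manifest" ]]; then
  echo "No manifest found for language '${lang}' (expected ${manifest})" >&2
  exit 1
fi

# Overwrite the active manifest so Gauge picks up the matching runner.
cp "$manifest" manifest.json

# Run the specs, passing through any extra flags (e.g. --tags).
gauge run specs "$@"

For example, ./run-with.sh swift runs the specs against the Swift implementations, and ./run-with.sh kotlin against the Kotlin ones.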

Proposal

Allow the option to install multiple language runners. For example

$ gauge install swift
$ gauge install kotlin

Will modify the manifest as follows

{
  "Language": ["swift", "kotlin"],
  "Plugins": [
    "html-report"
  ]
}

The first one is the default runner; it runs with the command

$ gauge run specs

To run the Kotlin runner, pass an option to the run command

$ gauge run specs --language kotlin

@singhverse

@zabil This seems like a nice solution. Here is the exact use case I have in mind that I would love for Gauge to support:

My company has both iOS and Android apps. As a quality engineer I would like to:

  1. Define the expected behavior once and for all in spec files and never have to worry about maintaining it in 2-3 places anymore.
  2. Tag the specs/scenarios with all applicable target platforms, say "iOS", "Android", "Api", for tests implemented on iOS, Android and services respectively.
  3. Write the step implementations in their respective languages in the same project: Swift for iOS, Kotlin/Java for Android and Python for services.
  4. While running the specs, I would specify the tags to tell Gauge which implementations (one or more) to run, and Gauge would be intelligent enough to use the appropriate runner(s).

To put the icing on the cake, if reports could be adapted to show test results by the platform they were run on, it would be the next level of awesomeness.

I know that this is probably a far-fetched idea at this point given the current backlog and roadmap, but if this becomes supported (at some point in the future), it will be a dream framework for me (and probably many more).
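
For what it's worth, part of item 2 above is already possible inside a single-runner project: Gauge supports scenario tags and the --tags flag on gauge run. Combined with a manifest-switching wrapper like the run-with.sh sketched in the earlier comment, per-platform runs could be approximated today (the tag names and wrapper are illustrative, and as noted earlier the Swift/Kotlin runners themselves are hypothetical):

# Run only the scenarios tagged "iOS" with the Swift manifest in place.
./run-with.sh swift --tags "iOS"

# Run only the scenarios tagged "Android" with the Kotlin manifest in place.
./run-with.sh kotlin --tags "Android"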

@Christoffer-Green

I have a different scenario, but one where I think this solution would help me as well.

We have a lot of different products, many of them Windows-heavy C# applications, while others are web based. At the moment we use Gauge with the C# runner and use Selenium for our web tests, but I've started looking into Taiko and it seems great. So what I would like to do is:

  1. Use C# for configuring systems and checking prerequisites (this way we can use internal NuGet packages against our backend).
  2. Use Taiko to run the actual tests.

trung (Contributor) commented Jun 18, 2019

I created a POC project that supports multiple language runners by proxying via a main language runner (Java). Not a great way to do it in practice, though.

@luiscarrasco

There are two features requested here:

  1. The ability to run the step implementations in different languages while using the same project, and
  2. The ability to implement a step in a language of choice to leverage the tools available in that language.

I am interested in feature number 2. My take on this would be that we have the configuration as mentioned by @zabil:

{ "Language": ["swift", "kotlin"], "Plugins": [ "html-report" ] }

If the command is run with something like this, both language runners are executed:

$ gauge run specs --multilang

The main Gauge application would then broadcast the step execution to the runners, and at most one should report success in running the step. If the step is not found in any runner, the test fails; if the step is found in more than one runner, the test also fails.

zabil (Member) commented Jul 17, 2020

Closing this as an old issue. However, any PRs to fix this will be merged.

zabil closed this as completed Jul 17, 2020
@RanjithkumarE

Hi,

We are currently trying to migrate our automation framework to Gauge + Taiko, where we need UI automation in JavaScript and backend component automation in Python. It looks like there is already some discussion of the same requirement in this thread.

Is there any plan for rolling out this feature in an upcoming release? If not, is there any workaround for my use case?

Thanks in advance !

sriv (Member) commented Aug 26, 2020

hi @RanjithkumarE - as of now there are no plans to build this feature. Honestly, Gauge is meant for user journey/acceptance testing, which, if you think about it, is agnostic of the components of the application.

That said, in your case you can always have two test suites, one in javascript and another in python.

@RanjithkumarE

hi @RanjithkumarE - as of now there are no plans to build this feature. Honestly, Gauge is meant for user journey/acceptance testing, which, if you think about it, is agnostic of the components of the application.

That said, in your case you can always have two test suites, one in javascript and another in python.

Thank you for your inputs !

Two suites in the sense of two different projects with different manifest files, or do we have a way to control different implementations in different languages in the same project?

sriv (Member) commented Aug 26, 2020

I meant two independent projects. There isn't a mechanism to have different implementation languages in one Gauge project presently.
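
A minimal sketch of what driving two such independent projects could look like, assuming sibling directories named ui-tests (a JavaScript/Taiko project) and api-tests (a Python project); the directory names are illustrative:

# Run each Gauge project from its own directory, one after the other.
(cd ui-tests && gauge run specs)
(cd api-tests && gauge run specs)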

yrachid commented Aug 26, 2020

Hi @RanjithkumarE ,

I use Gauge for a very specific project which needs to do the same thing: run test suites in two different languages. In my case, we have a bash script that writes the manifest file just before running the tests; that way, the manifest is created according to the language we need for that run. Such a workaround allowed two suites to coexist in the same project, but as @sriv mentioned, it's also important to evaluate whether that approach makes sense in your context.

In case you want to check out the bash script and the rest of the project, here it is: https://github.com/aceleradora-TW/trilha-de-exercicios/blob/master/gaugectl

@RanjithkumarE

Thank you @yrachid,

I have gone through your workaround; it's really nice. We even thought of doing something like this during our initial research. But we have a use case where an end-to-end test would require both JavaScript (Taiko UI tests) and Python (backend) components, which would require the manifest to support a list-like structure for multiple language implementations, and that would also need to be handled by Gauge internally.

sriv (Member) commented Sep 30, 2020

(Tagging @olejkavn )

If this feature were to be implemented, there are some design decisions to be made. Some initial thoughts:

  • How would a project specify which runners it would use? (This could be defined in manifest.json)
  • If there are multiple runners, how should Gauge decide precedence?
  • How would Gauge know which runner holds a step?
    • Currently, Gauge sends a request to the runner, and the runner responds indicating whether it could find an implementation.
  • How would Gauge identify duplicate step implementations?
  • Datastores - how would Gauge handle Scenario/Spec/Suite datastores when there are multiple runners involved? This would warrant some out-of-proc persistent storage, which would bring in other overheads.

And so far I haven't gone into IDE support. Gauge's VS Code support is via the Language Server Protocol, so in order to support features like go-to-definition, it would need to know much more than it does today. Honestly, this is something I would need to try before I could even say whether it is possible.

If we can thrash out some of these, we can probably come up with an approach to this.
