
Dependent test #774

Closed

master-bytes-krafter opened this issue May 20, 2019 · 16 comments
Labels
enhancement ✨ Suggestions for adding new features or improving existing ones. framework 🏗️ Pertains to the core structure and components of the Kotest framework.

Comments

@master-bytes-krafter

Hello
Is there an idiomatic way to write tests or groups of tests that depend on the outcome of other tests (i.e. tests that are pointless to run if some other test failed)?
This is particularly interesting for integration-testing scenarios, and some testing frameworks provide this feature (e.g. TestNG).

Regards

@sksamuel
Member

Do you mean the @Test(dependsOnMethods = { "initEnvironmentTest" }) that TestNG provides?
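
(For readers unfamiliar with it, the TestNG mechanism being referenced looks roughly like this — a minimal Kotlin sketch with illustrative test names:)

```kotlin
import org.testng.annotations.Test

class EnvironmentTests {

    @Test
    fun initEnvironmentTest() {
        // Bring up the environment; if this fails, dependants are skipped.
    }

    // TestNG marks this test as skipped (not failed) when
    // initEnvironmentTest did not pass, so the expensive body never runs.
    @Test(dependsOnMethods = ["initEnvironmentTest"])
    fun expensiveScenarioTest() {
        // Costly assertions that are pointless against a broken environment.
    }
}
```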

@master-bytes-krafter
Author

Hello, yes exactly, the same mechanics as dependsOnMethods or dependsOnGroups. The main goal here is to avoid running some (potentially costly) tests when we know they will fail anyway because the system is in shambles :)
Regards

@sksamuel
Member

Seems like a good idea. The annotation based syntax won't work for kotlintest though, perhaps as part of the test config.

"this is my test".config(dependencies = "some other test") {

or "This is my test").dependencies("some other test").config(...) {

@master-bytes-krafter
Author

master-bytes-krafter commented May 22, 2019

Yes, that would be perfect.
Now, how do you want to proceed? Do you want me to contribute and create a PR/MR to kotlintest, or is it something you would like to implement yourself?

@sksamuel
Member

sksamuel commented May 22, 2019 via email

@LeoColman LeoColman added the enhancement ✨ Suggestions for adding new features or improving existing ones. label May 22, 2019
@master-bytes-krafter
Author

master-bytes-krafter commented May 24, 2019 via email

@master-bytes-krafter
Author

Hello all,
FYI, I started working on the implementation of this feature. One question: how quickly do you review and merge pull requests?
Regards

@sksamuel
Member

sksamuel commented Jun 7, 2019 via email

@sksamuel sksamuel added this to the 3.4 milestone Jun 9, 2019
@sksamuel
Member

sksamuel commented Jun 9, 2019

If we have something workable we can target 3.4 for this.

@sksamuel sksamuel mentioned this issue Jun 9, 2019
@master-bytes-krafter
Author

After testing for two days, here are my thoughts:

  • This feature is not that simple to implement: spec dependencies need to be handled differently from test dependencies, and top-level tests do not have a config (not 100% sure though).
  • However, I found a possible workaround that is acceptable for my use case and works right now. It relies on the fact that test order and spec execution order can be controlled, that there is a single-instance mode, and that there are test listeners, so dependency management can be handled on a test-by-test basis (see the sketch after this comment). The only limitation is that I can fail() a test based on some condition but cannot skip it; that is not a major issue, since the goal is to avoid running the time-consuming part of a test.

What do you think? Maybe this feature request could be reframed as: add a way to specify dynamically at runtime whether a test should be skipped (by passing a function returning a boolean in the test configuration?).
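
(For illustration, a minimal sketch of the workaround described above, assuming Kotest 4.x-style APIs; the spec class, helper names, and bookkeeping are all illustrative, not part of the framework:)

```kotlin
import io.kotest.assertions.fail
import io.kotest.core.spec.style.StringSpec
import io.kotest.core.test.TestCaseOrder

class PipelineSpec : StringSpec() {

    // Run tests in declaration order so prerequisites execute first.
    override fun testCaseOrder() = TestCaseOrder.Sequential

    // Names of tests that have failed so far (illustrative bookkeeping).
    private val failed = mutableSetOf<String>()

    // Records a failure for [name] before rethrowing it.
    private fun tracked(name: String, block: () -> Unit) =
        try { block() } catch (t: Throwable) { failed += name; throw t }

    // Fails fast and cheaply when a prerequisite has already failed,
    // so the expensive body below the check never runs.
    private fun requires(vararg deps: String) {
        val broken = deps.filter { it in failed }
        if (broken.isNotEmpty()) fail("prerequisites failed: $broken")
    }

    init {
        "init environment" {
            tracked("init environment") {
                // expensive environment setup goes here
            }
        }
        "run scenario" {
            requires("init environment")
            // costly scenario, reached only if the setup test passed
        }
    }
}
```

With the default single-instance isolation mode, the `failed` set survives across tests within the spec, which is what makes this bookkeeping work.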

@LeoColman
Member

We recently added a way to skip tests at runtime by throwing SkipTestException (#522).

It's pending release, but you can use the snapshot for the moment if needed.
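
(A minimal sketch of that mechanism; the health check is a made-up stand-in, and the import shown is the KotlinTest 3.4-era package — the class moved in later Kotest versions:)

```kotlin
import io.kotlintest.SkipTestException
import io.kotlintest.specs.StringSpec

// Hypothetical health check standing in for a real probe of the system.
private fun environmentIsHealthy(): Boolean =
    System.getenv("ENV_UP") == "true"

class RuntimeSkipSpec : StringSpec({
    "expensive integration scenario" {
        // Throwing SkipTestException marks the test as ignored at runtime
        // instead of running (and failing) the costly body below.
        if (!environmentIsHealthy()) throw SkipTestException("environment is down")
        // costly assertions go here
    }
})
```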

@sksamuel
Member

sksamuel commented Jun 20, 2019

@master-bytes-krafter you can use a TestExtension if you want to control whether a test is executed at runtime. Note: That is different from a TestListener.
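
(A sketch of that approach using the Kotest 4.x-style TestCaseExtension signature — the condition and extension name are illustrative, and the exact interface has shifted between versions:)

```kotlin
import io.kotest.core.extensions.TestCaseExtension
import io.kotest.core.test.TestCase
import io.kotest.core.test.TestResult

// Decides at runtime whether a test body runs at all: if the (made-up)
// condition is not met, the test is reported as ignored without executing.
class SkipWhenEnvDownExtension : TestCaseExtension {
    override suspend fun intercept(
        testCase: TestCase,
        execute: suspend (TestCase) -> TestResult
    ): TestResult =
        if (System.getenv("ENV_UP") == "true") execute(testCase)
        else TestResult.Ignored
}
```

It can then be registered on a spec (e.g. by overriding `extensions()` in Kotest 4.x-style specs) or globally via project config.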

@sksamuel sksamuel mentioned this issue Jul 13, 2019
@sksamuel sksamuel modified the milestones: 3.4, 3.5 Jul 13, 2019
@sksamuel
Member

sksamuel commented Sep 4, 2019

@Kerooker this is a must have for 4.0 IMO.

@sksamuel sksamuel modified the milestones: 3.5, 4.0 Sep 4, 2019
@sksamuel sksamuel added the framework 🏗️ Pertains to the core structure and components of the Kotest framework. label Sep 4, 2019
@sksamuel sksamuel mentioned this issue Sep 4, 2019
@master-bytes-krafter
Author

Hello, sorry for the delay. FYI, we found a very convenient way to do this with the current version of kotlintest. It does rely on ordering files and tests in a certain way and requires writing a little code (roughly 30 lines), but we can live with it for now.
Of course it would be great if this were part of kotlintest, but our analysis concluded that, for a third-party contributor, the investment needed to implement it ourselves would be significant, so we cannot take it on for now.

@sksamuel sksamuel removed this from the 4.0 milestone Jan 12, 2020
@sksamuel
Member

Marking as won't fix.

@Burtan

Burtan commented Nov 24, 2022

Any plans for this feature?
