
Ability to run spec files in a specific order #390

Open
cristopher-rodrigues opened this issue Jan 24, 2017 · 68 comments
Labels
E2E (issue related to end-to-end testing), existing workaround, stage: proposal 💡 (no work has been done on this issue), type: feature (new feature that does not currently exist)

Comments

@cristopher-rodrigues

How can I run all tests in a specific order without renaming the files with a number prefix to determine the execution order?

Is there another way to run the tests in a custom order?

For now, I'm renaming the files using the prefixes 1_xxx.js, 2_xxx.js

@jennifer-shehane jennifer-shehane added the type: feature New feature that does not currently exist label Jan 24, 2017
@brian-mann
Member

No, you shouldn't need to run tests in any specific order. That is usually indicative of a testing anti-pattern, whereby you are relying on state being built up across all the tests.

Can you explain your situation / use case?

@brian-mann brian-mann added type: question and removed type: feature New feature that does not currently exist labels Jan 25, 2017
@cristopher-rodrigues
Author

cristopher-rodrigues commented Jan 27, 2017

For example: before my other tests, I need to create a user. But what I really want is https://docs.cypress.io/docs/configuration#section-global

// ***********************************************************
// This example support/index.js is processed and
// loaded automatically before your other test files.
//
// This is a great place to put global configuration and
// behavior that modifies Cypress.
//
// You can change the location of this file or turn off
// automatically serving support files with the
// 'supportFile' configuration option.
//
// You can read more here:
// https://on.cypress.io/guides/configuration#section-global
// ***********************************************************

// Import commands.js and defaults.js
// using ES2015 syntax:
import "./commands"
import "./defaults"

// Alternatively you can use CommonJS syntax:
// require("./commands")
// require("./defaults")

Thanks @brian-mann!

@fvanwijk
Contributor

fvanwijk commented Jan 31, 2017

Ordering tests could be convenient if you want to run the often failing tests first. It saves you some time in CI.
Another use case is when you have a lot of scenarios and you want to organize them in a way that makes more sense than alphabetically.

@digitaldavenyc

digitaldavenyc commented May 24, 2017

@brian-mann I have a use case for why it is really important to be able to run tests in order. We are using Cypress for functional live tests.

When running live tests, we are dealing with real customer data that is somewhat dependent on state. In order to run our tests, we need to hit real endpoints and perform actual CRUD operations. If we cannot control the order in which test packages are called, the tests will fail, since clearing out and creating data in order is a big part of live testing.

I believe this is somewhat related to this issue: #263

@digitaldavenyc

Right now we are resorting to labelling our test folders 01_Users, 02_Invoices, etc. to control the test ordering, which just makes me very sad.

@charlestonsoftware

As @digitaldavenyc points out, full-blown web apps often test complex system interactions.

Should every test be able to run independently and in a vacuum? Sure. In theory. Real world and theoretical science do not always get along. It may be anti-pattern but coloring 100% within "the pattern" lines means my test scripts take 27 hours to complete instead of 27 minutes.

Use Case--

My app allows users to add locations to a data set on the back end. On the front end it renders a map of locations.

Front-End test: Make sure 10 locations are returned from Charleston SC

This is heavily dependent on the data set that includes two primary elements: location data + user-set options such as "default radius to be searched".

I could write the test to first pull the options data (default: 5 mile radius), THEN pull all location data (5k entries), THEN let the script loop through all 5,000 to calculate the distance to create the baseline of valid results, THEN run the front-end query and make sure it returns the same list my test calculated.

I've now created another point of failure, as my test script code is more complex and prone to errors. Not to mention it takes a LOT longer to run by not being able to make data assumptions.

OR

OR I could write a test that ASSUMES my "load_5k_locations_spec.js" has executed and passed. The test now pulls the option data, loads a "valid_5_miles_from_charleston.json" fixture, runs the front-end query, and compares it against the displayed location divs.

An order of magnitude faster AND far less complex test script code.

NOW... take the above and run the test for five different radius options.

I'd rather pull a valid_5_mile.json, valid_10_mile.json, etc. and compare against an assumed set of data that can ONLY be a valid assumption if I am certain my "load_5k_locations_spec.js" ran BEFORE all my "5miles_from_charleston.js" and "10miles_from_charleston.js" scripts ran.

Bonus Points: Have a Cy.passed( "load_5k_locations_spec" ) command that returns true|false if the specified test passed on the prior run; this makes it easy to skip a test entirely if a prior test run failed.

No, these are not perfect "by the pattern" rulebook implementations, but in the real world people have deadlines. Making tools that are malleable enough to meet users' needs, versus doing things "strictly by the book", is what makes them powerful.

I'm fairly certain chain saws are not designed to be used to make ice sculptures - but people do it. If you try to do the same and cut your arm off because you don't have the skills to use that tool in a manner it was not intended, consider it a "learning curve".

@bahmutov
Contributor

I strongly suggest NOT going down the "real world requires running spec files in order" path. It is just JavaScript, and you can easily set state before each test the way you want, without relying on the previous test. In fact, we are working on automatic test balancing, so different specs will be running on different CI machines and in a different order (if you use the --parallel flag). So no test order will be guaranteed.

Instead, split tests into functions and just call these functions to set everything up.

@surfjedi

surfjedi commented Sep 5, 2018

I agree with @cristopher-rodrigues that running tests in order has major advantages, speed-wise. You can save time by not reloading the login page (or making requests for it) each time.

@jennifer-shehane jennifer-shehane added the stage: proposal 💡 No work has been done of this issue label Sep 14, 2018
@jennifer-shehane jennifer-shehane changed the title I run all tests in a order Ability to run tests in a specific order Feb 14, 2019
@jennifer-shehane jennifer-shehane added type: feature New feature that does not currently exist and removed type: question labels Feb 14, 2019
@jennifer-shehane jennifer-shehane changed the title Ability to run tests in a specific order Ability to run spec files in a specific order Feb 14, 2019
@RadhikaVytla

How to run login spec before create user spec?

@araymer

araymer commented Mar 26, 2019

It is, indeed, a huge pain to run e2e tests with an SSO redirect (which can't be removed) if you cannot make sure your login procedure runs as the first test to set a valid token. Otherwise, you have to write logic in every single test file to handle that, which seems silly.

@jameseg

jameseg commented Apr 12, 2019

@RadhikaVytla you could run the login_spec first as an isolated CI job, and make all other test spec jobs depend on that job succeeding. Could save you some time if login isn't working.

@jameseg

jameseg commented Apr 12, 2019

@araymer can you set the token through an auth API call? If so, just use a beforeEach (write it once); this way each test can use its own separate auth token.

@jennifer-shehane
Member

@araymer You can also move this to the support file within a beforeEach as described here: https://on.cypress.io/writing-and-organizing-tests#Support-file
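As a sketch, that shared login could live in the support file so every spec starts authenticated without depending on a login spec having run first. The `/api/login` endpoint, env-variable names, response shape, and cookie name below are placeholders for whatever your app actually uses:

```javascript
// cypress/support/index.js
// Log in once per test via the API instead of the UI, so no spec
// relies on another spec's execution order.
beforeEach(() => {
  cy.request('POST', '/api/login', {
    username: Cypress.env('username'),
    password: Cypress.env('password'),
  }).then((resp) => {
    // Placeholder: store the token however your app expects it.
    cy.setCookie('auth_token', resp.body.token);
  });
});
```

Because the support file loads before every spec, this runs regardless of which spec file Cypress happens to pick first.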

@lazurey

lazurey commented Jun 12, 2019

Being able to specify the order could be very useful when testing a process composed of several steps. I'd like each step to be a standalone test, but the steps need to be executed in a specified order. Currently I can only follow the advice of naming them with 01, 02, ... prefixes.

@id-dan

id-dan commented Jun 24, 2019

Hi, all!
I have the same problem. I want to run tests in a specific order:
1. SignUp, 2. Login, 3. ShareTool, 4. Publication, and then 5. Account.

However, the tests run like in this screenshot:
[screenshot of test run order]

"ShareTool.feature" runs first, and then the next tests run according to the folder structure in the Cypress project:
[screenshot of project structure]

@loxator

loxator commented Jun 26, 2019

Just found an easy hack: add a number before your file or folder name and Cypress will run those in that order.

For example, @id-dan you can do something like 01-SignUp/01-SignUp.feature/, 01-SignUp/02-Login.feature/.....05-Account/... and so on.

Cypress should pick up the files in that order and run them.

@Julyanajs

Thx @loxator your suggestion solved my problem! =)

@malimccalla

This would be a great feature. I currently have two test files that take a lot longer to run and are flakier than others, resulting in occasional failures for non-legitimate reasons.

Due to their file names (upgrade_page.test.ts & settings_page.test.ts), both these files run at the end of a 15+ minute e2e suite. I could save time and money with my CI provider if I were able to run these tests first, so I could bail out of the remaining tests if they fail.

@fusionshoes

Any news on this, @brian-mann? Cypress's best practice of having each test run standalone is excellent and perfectly architected, but there's definitely added convenience in being able to order tests, and I'm curious whether you and your guru coders are considering it?
Thanks!

@majackson

majackson commented Oct 22, 2019

To reiterate, this feature would be really useful in order to run often-failing/fragile tests first, thereby ensuring that failures are flagged as quickly as possible. Some workarounds for this have been described in this issue thread (such as numbering tests in intended order), but this isn't really a solution, as it relies on a cumbersome manual renaming of test files. If test ordering could be specified by some configuration file, this file could be generated and updated by subsequent test runs, putting frequently-failing tests first.

I would love to get an update on whether this feature is being considered.
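The config-generation idea above could be sketched as a small Node helper that reorders spec paths by recorded failure counts before each run. The function name and the `failureCounts` shape are illustrative, not a Cypress API; the ordered list would then be fed to whatever ordering mechanism you use:

```javascript
// Order spec files so that the most frequently failing ones run first.
// `failureCounts` maps spec path -> number of recent failures; specs with
// no recorded history keep their relative position (Array.prototype.sort
// is stable), so they trail the known-flaky ones.
function orderSpecsByFailures(specs, failureCounts) {
  return [...specs].sort(
    (a, b) => (failureCounts[b] || 0) - (failureCounts[a] || 0)
  );
}
```

A CI step could update `failureCounts` from the previous run's results and write the reordered list back into the configuration consumed by the next run.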

@jholland918

jholland918 commented Oct 25, 2019

No, you shouldn't need to run tests in any specific order. That is usually indicative of a testing anti pattern

That's correct for unit tests; however, integration and end-to-end tests are a different beast. I appreciate the additional power Cypress gives us for injecting state directly into the page, and the cy.request feature, to avoid having to do things like ordered tests. But sometimes you just need ordered tests, and the whole "tests should be able to run in any order" stance is simply a cargo-cult mindset that ignores context.

@jennifer-shehane
Member

We have not closed this issue as wontfix, so it is still under consideration as a proposed feature. We do read the comments and are always willing to reconsider our opinions on the best way to run tests once more use cases come to light.

As noted above, there is a complication in combining test file ordering with Cypress parallelization. We have our own algorithm that runs the longest-running spec files first in order to spread out the load and get the shortest runtime across machines (with new spec files running very first, since we have no previous data on how long they will take to run). So any type of user-defined ordering would have to be reconciled with the algorithm we use with our --parallel flag, which makes this complicated to implement.

We would also have to come up with a new API to define the ordering of files.

If you have specific specs that need to run before others, I would consider using our --spec flag. So, in your CI, have a job that runs first with cypress run --spec 'setup_db.js', for example. Then, after this job is complete, trigger a new job that runs cypress run with all specs. This may not work in all situations, but it's a way to run a single file (or multiple files) in order and then also run the rest of the files in parallel later to save time.
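As a concrete sketch of that suggestion, a CI pipeline would run two sequential jobs; the spec path below is illustrative:

```shell
# Job 1: run the ordering-sensitive spec on its own.
npx cypress run --spec 'cypress/integration/setup_db.js'

# Job 2, triggered only after job 1 succeeds: run all specs
# (optionally parallelized with --record --parallel).
npx cypress run
```

How the second job is triggered only on success of the first depends on your CI provider (job dependencies, stages, or workflow steps).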

We have limited resources, so we have to make hard decisions on what we work on. This issue is still in the 'proposal' stage, which means no work has been done on this issue as of today, so we don't have an estimate on when this will be delivered.

@craig-dae

@jennifer-shehane I have tried that. However, it has a problem that maybe you can help solve.

IIRC, you can't run a spec file that is not contained in integration. I forget the error, but it didn't seem to work.

This means that if I explicitly run spec files in phase 1 that are contained in integration, they will be re-run in phase 2. I don't want them to be re-run. Is there a way to mark a test as "do-not-run" when you run all tests?

Also, a "do-not-run" option would be great for another reason. We have a very large project that had a part of it put on hold, so we could focus on other parts. This leaves a section of code and tests that we don't want to delete, but we also don't want to be forced to continue to maintain. I would like the tests left in integration, but I have no desire to fix them if they start failing, and I don't want to run them. But if we come back to this part of our app, we would like to begin using them again. Our current strategy is to just move all the tests to an archive folder outside of integration.

@bahmutov
Contributor

bahmutov commented Oct 21, 2020 via email

@craig-dae

craig-dae commented Oct 21, 2020

@bahmutov didn't even realize that existed. I see it now. Yes, I'll add that. Thanks!

So, this will still run if I run it explicitly with --spec?

@ddehart

ddehart commented Jan 8, 2021

I stumbled upon this feature request searching for a solution for an adjacent use case.

I have tests that are heavily dependent upon an API responding in ways that I expect. I'd like to be able to write a test that's sort of like a pre-flight checklist using cy.intercept that asserts that the API responds as I expect before running any other tests and then bails on running the entire suite if that test doesn't pass.

I think this is also related to #518, so I'd need both the ability to specify that the pre-flight check runs first and that the entire run aborts if it fails.

@craig-dae

@ddehart You know, that's a great point. Just today, we had a meeting about using Lighthouse to test our app for accessibility. There is a lighthouse/cypress plugin I was interested in using. If I go this route, I want my lighthouse tests to be completely separate from my integration testing, but I still want to use cypress to do it.

I would like a way of running integration tests first, followed by lighthouse tests.

A way of doing this might be to follow the Ansible pattern. In Ansible, you can apply tags at the playbook or task level. I could tag all my integration tests as integration, and lighthouse tests as lighthouse, and then do something like cypress run --tags=lighthouse, which would run everything with the lighthouse tag and ignore everything else. I could also have a tag called always which will always run, regardless of what tags I choose.

I think that would work smoothly with cypress.

@bahmutov
Contributor

bahmutov commented Jan 8, 2021

@craig-dae I would place different types of tests in different subfolders and then run them using the --spec argument. For example, if you want to run the regular UI tests first, then the Lighthouse tests:

- npx cypress run --spec 'cypress/integration/ui/**/*.js'
- npx cypress run --spec 'cypress/integration/lighthouse/**/*.js'

Note: there is also https://github.com/bahmutov/cypress-select-tests, but it relies on rewriting tests, and in general I would not rely on it.

@craig-dae

@bahmutov that might actually do it for me. That looks like it will work in github actions. I think that would also solve @ddehart's problem. Thanks!

@jwetter

jwetter commented Feb 12, 2021

Just a little piece of history as it relates to fitting users needs.

Intel used to dictate to its customers the products it provided. Those products would normally fit 70-90% of a customer's needs. Intel executives figured that because they produced a superior product, it didn't matter if customers had to implement the remaining 10-30% themselves. Competitors decided to listen to customers and give them 95-100% of their needs, since it only took minor tweaks. Intel's market share plummeted, forcing them to change. They still haven't recovered the lost market.

The point I am trying to make is that while Cypress does a lot of things really well, users still need some structured order for certain testing scenarios. For instance, I have some tests that need to be run in order (some elements of integration where I check record creation and modification), but most of mine I would want run in parallel. To be against the idea of having some test execution in a specific order is to acknowledge that, while Cypress is a great design, it is not designed for long-term usage, because it is inflexible to the needs of its users. Relying on plugins is not a good route, as sooner or later they cease being maintained, which means a loss of efficiency in test execution or, worse, incompatibility due to structural changes that are likely to occur in Cypress.

@craig-dae

@jwetter I mean, while that all makes sense, @bahmutov literally just gave you a solution in the previous comment for your outlier scenario.

Cypress is opinionated about how testing should work, rather than being checkbox heroes. This is why I like them. Their opinions have frustrated me, mostly because they have constantly ended up being correct. They've steered me to far better testing practices than I'd be engaging in if I were able to force them to bend to my incorrect ways of doing things.

But again, your use-case sounds like it is solvable pretty trivially, by organizing the phases of your tests by top-level directories in cypress/integration, and then running them using the command level wildcard examples above. The only problem I can think of with it is how it might behave in combination with the Dashboard?

@bahmutov
Contributor

@craig-dae and @jwetter using the wildcard pattern and Dashboard also works - just use the group parameter to put separate runs into the same logical recorded run

- npx cypress run --spec 'cypress/integration/ui/**/*.js' --record --group ui
- npx cypress run --spec 'cypress/integration/lighthouse/**/*.js' --record --group lighthouse

@jwetter

jwetter commented Feb 12, 2021

@craig-dae and @bahmutov
I have already implemented the workarounds listed here to suit my needs. The steps taken are still just workarounds, and while they currently work for my needs, the cautionary story I point out in my post remains.

To be clear, I only posted because I think this tool has great promise to become an industry standard. The reason Selenium is still an industry standard is because they changed their software to meet the needs of their users where reasonable. Cypress could beat Selenium but only if it is meeting the needs of the users.

@jeanlescure

jeanlescure commented Feb 12, 2021

With this issue surpassing the 4-year mark, and given that users keep providing sensible use cases, it would be all-around beneficial to begin solving it by formalizing the testFiles option as the official way of running tests in order.

The following tasks would accomplish the aforementioned goal:

  • Acknowledge that being able to use the testFiles option to run tests in order is a side-effect of this line of code as well as this other one right below the previous, by writing code comments above each of these lines. The comments should specify that the side-effect is welcome and should not be tampered with (for example adding a .sort() in either of those would introduce an inevitably painful backwards compatibility problem to 4+ years worth of users that have implemented the testFiles option to run tests in order)
  • Update the description on this schema to mention that this is the official way of running tests in order
  • Add a spec here that verifies that, when given this.setup({ testFiles: ['./b.js', './a.js', '**/23/*.jsx', '**/15/*.jsx'] }), this.config contains ['./b.js', './a.js', '**/23/*.jsx', '**/15/*.jsx'] on the testFiles property (i.e. expecting the array order not to change)
  • Finally update the testFiles entry in official documentation to formalize it as the official way of running tests in order

P.S. I originally intended to do the tasks myself and submit a PR, but I haven't had the time. So, in submitting this reply, don't think I expect the Cypress team to complete the tasks; it is truly an invitation to any colleague and/or hobbyist to take the lead in making sure that the "unofficial" fix (which has earned me 200+ points on Stack Overflow so far 😄) becomes a properly recognized feature 🚀
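For reference, the workaround being formalized here is simply listing spec files in the desired run order under testFiles in the pre-v10 cypress.json; the file names below are illustrative, and paths are resolved relative to the integration folder:

```json
{
  "testFiles": [
    "login_spec.js",
    "create_user_spec.js",
    "**/*_spec.js"
  ]
}
```

Whether the array order is preserved at runtime is exactly the unofficial side effect discussed above, so treat this as the de facto behavior rather than a documented guarantee.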

@diggabyte

diggabyte commented Mar 16, 2021

Not all ordered tests are anti-pattern. We have a very valid use case.

We distribute and balance specs across workers. We use run duration and passing status to update the balancing after each pass (very much like Cypress Dashboard). The value in doing so is that we place failing tests first on subsequent runs for faster feedback for developers when running under CI.

Very often tests that pass locally might fail in CI. So devs might have to wait for many other specs to run before receiving feedback on a test they are trying to fix. This is a huge waste of time.

Perhaps provide a flag to force cypress to respect ordering when not using Cypress Dashboard tooling so it doesn't interfere with all the --parallel logic? Not everyone who needs this option uses Cypress Dashboard (in fact, most of them probably don't).

In other words, something like an --ordered flag that can't be used with --parallel flag could be a reasonable compromise.

@craig-dae

craig-dae commented Mar 16, 2021

I very much want to echo @diggabyte's point: any tests that have failed or been flaky recently should be run first, followed by the order Cypress currently uses (longest first, I think?).

Cypress is great. This causes me to use it more. The more I use it, the more I rely on it. The longer my tests get. The more parallel boxes I run it on (8 now). The more impatiently I wait to see if my tests pass. The more I beat my head against my keyboard when a frequently-failing test gets run on the bottom half of my list of tests.

If Cypress just did this by default (flaky and failing tests first), @diggabyte would not have to manually order his tests (for the reason he stated). Seems like a relatively cheap LOE to add a lot of value.

Update: Actually, it turns out that this is already a feature if you get the Business-level subscription. You don't get it on the Team subscription, which is what we have. Makes sense I guess. They gotta get paid, and if you're using Cypress THAT much, $300/mo is not unreasonable.

@NPC

NPC commented Mar 16, 2021

Is there currently a way to define that if a test fails, then the whole run should fail? It could be a good addition to ordering — I'd place general (bird-eye-view) tests first, and if they passed — move on to more detailed ones.

Currently our full set takes about an hour, which basically means that if something fails — we'll only know about it the next day (still better than hearing about it from our end, so I'm thankful for what Cypress allows us to do). Ability to order tests AND configure cypress to stop on first error (or specify which tests are “critical” — there may already be a way to do it?) would help improve the responsiveness of this process.
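A commonly shared workaround for the stop-on-first-failure part of this request is to halt the runner from the support file after any failure. Cypress.runner is an internal, undocumented object, so this is a fragile sketch rather than a supported API, and it only aborts the remaining tests in the current spec file, not the whole cypress run:

```javascript
// cypress/support/index.js
// After any failed test, stop executing the rest of the current spec.
afterEach(function () {
  // `this.currentTest` is the Mocha test object; requires a regular
  // function (not an arrow function) so `this` is bound correctly.
  if (this.currentTest.state === 'failed') {
    Cypress.runner.stop();
  }
});
```

Marking only "critical" specs this way (e.g. by checking a spec-level flag before calling stop) would approximate the bird's-eye-view-first behavior described above.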

@MuckT

MuckT commented Apr 26, 2021

@NPC check out cypress-fast-fail; however, sometimes it doesn't play nice with other plugins.

Tests can be ordered in folders like this:

test-folder
  test.00.spec.js
  test.01.spec.js
  test.02.spec.js

@jennifer-shehane
Member

@NPC As part of Cypress’s pricing, we included the ability to cancel test runs when a test fails. This setting is accessible from the Dashboard for organizations starting at the Business Plan.

This offers a solution for those running tests using the Cypress Dashboard - and also ensures a parallelized run is cancelled so that all parallel running specs will also be cancelled to save time on the run.

To get this feature, you will need to update to Cypress 6.8.0 and also be a member of an organization subscribed to a Business Plan.


This feature was implemented with parallelized runs in the Dashboard in mind, since this was the hardest use case to address. We had to build this feature specifically to continue to receive all of the tests in a cancelled run, to ensure proper reporting. Now that we have a mechanism to cancel runs across these channels of communication, we can consider a way to initiate cancelling test runs when a test fails from the Test Runner when not recording to the Dashboard. (Likely this would be implemented by some CLI flag or a config option like cancelOnFailures: true.)

See this issue for cancelling test runs when a test fails from the Test Runner when not recording to the Dashboard.

@AlliterativeAlice

I would also appreciate being able to do this without having to resort to the workaround of changing my test file names, so that I can run the tests that fail most often first.

@impurist

I've been wanting a deterministic test-order escape hatch for several years now.
While I appreciate that Cypress has a testing philosophy, that philosophy does not fit the requirements of many users.
It is ivory-tower thinking, and it's a huge black mark against Cypress.
We're forced into poor naming-convention workarounds when a simple config option allowing a deterministic order is all we need.
Escape hatches are required.

@bahmutov
Contributor

For Cypress v10 just list the specs in the order you want them to run

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // baseUrl, etc
    supportFile: false,
    fixturesFolder: false,
    setupNodeEvents(on, config) {
      config.specPattern = [
        'cypress/e2e/spec2.cy.js',
        'cypress/e2e/spec3.cy.js',
        'cypress/e2e/spec1.cy.js',
      ]
      return config
    },
  },
})

Of course, if you want to parallelize them using Cypress Dashboard it will change the order based on timings / failed tests first / new specs first.

@nagash77 nagash77 added the E2E Issue related to end-to-end testing label May 16, 2023
@gowtham-ncompass

gowtham-ncompass commented Sep 18, 2023

For Cypress v10 just list the specs in the order you want them to run

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // baseUrl, etc
    supportFile: false,
    fixturesFolder: false,
    setupNodeEvents(on, config) {
      config.specPattern = [
        'cypress/e2e/spec2.cy.js',
        'cypress/e2e/spec3.cy.js',
        'cypress/e2e/spec1.cy.js',
      ]
      return config
    },
  },
})

Of course, if you want to parallelize them using Cypress Dashboard it will change the order based on timings / failed tests first / new specs first.

Is there a way I can run a particular spec file twice?

const { defineConfig } = require('cypress')

module.exports = defineConfig({
  e2e: {
    // baseUrl, etc
    supportFile: false,
    fixturesFolder: false,
    setupNodeEvents(on, config) {
      config.specPattern = [
       'cypress/e2e/spec1.cy.js',
        'cypress/e2e/spec2.cy.js',
        'cypress/e2e/spec3.cy.js',
        'cypress/e2e/spec1.cy.js',
      ]
      return config
    },
  },
})

For the above config, spec1.cy.js runs only once. Is there any way to run the same file twice or more?

@skiKrumbRob

Has there been any update on an official test-ordering solution since the last conversation over a year ago? We have a need for this on our project as well. @jennifer-shehane, is it still Cypress's official stance that this will not be implemented?

@bahmutov
Contributor

@skiKrumbRob why is the solution with an explicit spec order not working for you? If you need to run specs in a specific order in parallel and have concrete requirements and an example, open an issue in https://github.com/bahmutov/cypress-split and it would be simple to implement.

@skiKrumbRob

skiKrumbRob commented Oct 27, 2023

@bahmutov Forgive my learning brain here, but I digested this full thread yesterday trying to figure out if/how ordering can be done, and perhaps my eyes glazed over that bit by the end of this monster thread. I'm pretty new to Cypress and learning on the fly. Is there any documentation for the explicit spec order that I could look through to get a better grip on where to set it up, etc.?
And thanks for responding.

@kalcefer

Hello, I have this folder structure:
[screenshot of folder structure]

Inside the folders, the same structure of test suites.

Inside a folder, the tests are performed in the correct numbering order:

01-02-03...10-11-12...20-21-22

But on a full run, the folders are always executed in a strange order:

01-02-03-04-05-06-08-10-11-12-13-07-09

Inside folders 05, 07, and 09 there are also folders with the same naming structure.

All tests are independent of each other, so the order does not affect them.

But the tests differ in complexity and weight, and I would like to understand where my mistake is.
