[feature] introduce subtests #3946

Closed
andykais opened this issue Jun 6, 2019 · 2 comments
Labels
status: wontfix · type: feature

Comments

andykais commented Jun 6, 2019

Is your feature request related to a problem or a nice-to-have? Please describe.
I often face a choice between writing many short tests with a lot of boilerplate, or fewer large tests where it's hard to tell which part specifically failed.

Consider the following scenario:

describe('some tests', () => {
  it('test1', () => {
    const someObject1 = fromSomeCall()
    expect(someObject1).to.matchSnapshot()
  })
  it('test2', () => {
    const someObject2 = fromSomeOtherCall()
    expect(someObject2).to.matchSnapshot()
  })
})

This is good because I can test the two calls individually: if one fails, I know exactly what went wrong from the test name. However, if I also want to check that these two objects are in fact equal, I have to rewrite the test like so:

describe('some tests', () => {
  it('test1', () => {
    const someObject1 = fromSomeCall()
    expect(someObject1).to.matchSnapshot()
    const someObject2 = fromSomeOtherCall()
    expect(someObject2).to.deep.equal(someObject1)
  })
})

However, if this test now fails, it is harder to tell exactly what went wrong.

Describe the solution you'd like
I would like to see something like the subtests in pytest or node-tap. In mocha we like the idea of completely independent tests, which is great because we can run a single test and iterate quickly. However, I still think there is value in attaching more metadata to individual tests. What I am proposing is sub-sections within a test that cannot be run separately (the whole test still has to run), but that tell us more precisely where a test failed.
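For reference, node-tap's built-in subtests nest like this (a rough sketch using node-tap's t.test API; fromSomeCall is the same placeholder as above):

const t = require('tap')

t.test('some tests', async t => {
  // each subtest is reported under its own name, but only ever
  // runs as part of the parent test
  await t.test('test fromSomeCall result', async t => {
    const someObject1 = fromSomeCall()
    t.matchSnapshot(someObject1, 'fromSomeCall result')
  })
})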

Here is the syntax I imagine:

describe('some tests', () => {
  it('test1', () => {
    const someObject1 = subtest('test fromSomeCall result', () => {
      const result = fromSomeCall()
      expect(result).to.matchSnapshot()
      return result
    })
    subtest('test fromSomeOtherCall result matches fromSomeCall', () => {
      const someObject2 = fromSomeOtherCall()
      expect(someObject2).to.deep.equal(someObject1)
    })
  })
})
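Nothing like this exists in mocha today, but as a minimal userland sketch (a hypothetical helper, not a mocha API), subtest could simply run the section immediately and tag any assertion failure with the section name:

function subtest(name, fn) {
  try {
    // run the section right away and pass its return value through,
    // so results can be shared between subtests
    return fn()
  } catch (error) {
    // prefix the failure with the subtest name so the report shows
    // which section of the test failed
    error.message = `[subtest: ${name}] ${error.message}`
    throw error
  }
}

This only improves the failure message; surfacing subtests as separate entries would need actual reporter support from the framework.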

Describe alternatives you've considered
Currently I just use plenty of comments in my tests, but that information isn't immediately useful when reading test failures on CI, and it doesn't necessarily section tests out well. The other alternative is the one I already described: breaking tests up further. But that makes it harder to test different but related pieces of code together without writing more overlapping tests.

andykais added the type: feature label Jun 6, 2019
andykais commented Jun 6, 2019

[edit] I have discovered the package https://www.npmjs.com/package/mocha-steps, which accomplishes some of what I want: tests are coupled together. The piece it is missing, though, is a shared variable scope.
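Roughly, mocha-steps usage looks like this (a sketch assuming mocha-steps is loaded via --require mocha-steps, which exposes step, and the same placeholder fromSomeCall/fromSomeOtherCall and snapshot matcher as above). step behaves like it, but once one step fails the remaining steps in the suite are skipped; shared state still has to be threaded through the enclosing describe by hand, which is the missing piece:

const { expect } = require('chai')

describe('some tests', () => {
  let someObject1 // shared across steps by hand

  // `step` comes from mocha-steps: later steps are skipped if this fails
  step('test fromSomeCall result', () => {
    someObject1 = fromSomeCall()
    expect(someObject1).to.matchSnapshot()
  })

  step('test fromSomeOtherCall result matches fromSomeCall', () => {
    const someObject2 = fromSomeOtherCall()
    expect(someObject2).to.deep.equal(someObject1)
  })
})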

JoshuaKGoldberg (Member) commented

I personally would love this feature!... But per #5027 we're not trying to make any significant changes. Wontfixing.

If anybody feels strongly this should exist, do yell at me and we can take another look. Cheers!

JoshuaKGoldberg closed this as not planned Dec 27, 2023
JoshuaKGoldberg added the status: wontfix label Dec 27, 2023