
Allow to skip tests programmatically #7245

Closed
medikoo opened this issue Oct 23, 2018 · 48 comments

@medikoo commented Oct 23, 2018

🚀 Feature Proposal

Mocha supports skipping tests programmatically (in both before and it) as:

describe("Some test", () => {
  it("Should skip", function () {
    if (true) {
      this.skip(); // test marked as skipped, no further part run
    }
    notInvoked();
  });
});

Motivation

It's very useful for cases where, during test setup, we find out whether the test can be pursued, e.g. we need some external data but it's unavailable, so we decide to skip the tests.

Is this somewhere on a roadmap?

@palmerj3 (Contributor)

This seems like a bad idea to me. Currently you can do it.skip() to explicitly skip a particular test and it's not even executed.

Skipping programmatically, and only running a portion of your test suite as a result, doesn't seem like it's serving the purpose of tests. A test failure at that point would be beneficial so the problem(s) could be fixed. And if they can't be fixed, then marking a test as skipped explicitly, like I've shown above, is an appropriate reaction.

@medikoo (Author) commented Oct 23, 2018

Skipping programmatically, and only running a portion of your test suite as a result, doesn't seem like it's serving the purpose of tests. A test failure at that point would be beneficial so the problem(s) could be fixed

It serves integration tests, where tests depend on external factors. Unavailability of some external resource shouldn't indicate a problem with the project (reported as a failure) but the fact that the test cannot be pursued (hence skipped).

@palmerj3 (Contributor)

I'll leave this here for others to discuss. But personally I don't think this is a great idea.

Your hypothetical example would not give confidence that any given change in a code base caused problems, thus why I said a test failure is beneficial.

@medikoo (Author) commented Oct 23, 2018

Your hypothetical example would not give confidence that any given change in a code base caused problems, thus why I said a test failure is beneficial.

Yes, it's not for the case where we want to confirm our project is free of bugs on its own (that should be solved with unit tests or mocked integration tests).

It's about the case where we test integration with an external resource (such tests might run on a different schedule). Failing both on resource unavailability and on errors in handling it makes the tests unreliable, as it produces false positives and in turn increases the risk of ignoring the latter type of issue.

@palmerj3 (Contributor)

For that case you could consider using jest.retryTimes

@medikoo (Author) commented Oct 23, 2018

For that case you could consider using jest.retryTimes

I don't want to retry, I want to abort without fail (and also signal that test was not really run)

@mattphillips (Contributor)

@medikoo I agree with @palmerj3

I think being able to dynamically disable tests kind of misses the point of testing.

Instead of disabling because of unavailability of some resource I would argue that you probably want to be alerted to this with the failure and then resolve the real problem of the resource not being available.

jest.retryTimes should help with this if it's just a case of the resource being flakey but if it is completely down then you have a bigger problem IMO and want to know about it 😄

@medikoo (Author) commented Oct 23, 2018

then resolve the real problem of the resource not being available.

When speaking of external resource I mean something I do not own or control, so it's not an issue I can resolve.

And uptime monitoring of external resources, which alerts or informs me whether a given service is accessible, is a different thing, which I don't see as part of integration tests.

@SimenB (Member) commented Oct 23, 2018

This is part of jasmine (pending global function), but I think it was an explicit choice not to port it to Circus.

@aaronabramov might have thoughts on that?

@aaronabramov (Contributor)

what if the external source starts failing all the time? then you'll just have a skipped test that will never run.

i think for very specific scenarios you can just use:

test('stuff', () => {
  if (serviceUnavailable) {
    logSomething();
    return;
  }

  // else test something
});

but i agree with @palmerj3 that having this in the core doesn't look like a great idea

@palmerj3 closed this as completed Nov 1, 2018
@medikoo (Author) commented Nov 1, 2018

@aaronabramov it's what we do now (return and log); still, as we have a large number of tests, those logs usually go unnoticed.

If the tests were skipped instead, any skip that happened would be noticed in the final summary.

@ryanmark commented Nov 21, 2018

This is a pretty common use case. Sometimes writing tests is hard and takes a long time to do it correctly. Sometimes writing a test to work only in certain circumstances is achievable in far less time and better than writing no tests at all or permanently skipping tests.

So for all the devs with deadlines, here is a hacky workaround:

const someTestName = 'some test';
const someTestCB = () => {
  it('Should skip', function () {
    notInvoked();
  });
};

if (process.env.RUN_ALL_TESTS === 'yes') describe(someTestName, someTestCB);
else describe.skip(someTestName, someTestCB);

@SimenB (Member) commented Nov 21, 2018

Another possibility

if (thing) test.only('skipping all other things', () => {
  console.warn('skipping tests');
});

// ...all other things

Jest itself uses this to skip certain tests on Windows

@wolever commented Mar 19, 2019

To weigh in on this: this is already a feature supported by Mocha which is especially useful in beforeAll blocks when a precondition is being checked. For example, there are a number of tests in my suite which should only run if an external service is available, and with Mocha this is trivial:

describe('External service tests', () => {
  before(async function() {
    try {
      await fetch('https://external-service')
    } catch (e) {
      console.log('external service not running; skipping tests:', e)
      this.skip()
    }
  })
  … the rest of the suite …
})

Based on the responses here (and elsewhere on Google), the only options for this kind of test are:

  1. Build some other infrastructure for running them. This is undesirable for the obvious reason that "build a whole separate test runner" is… not ideal.
  2. Wrap each test in a function so it reports a pass if the service is not running, but otherwise does nothing. This is undesirable because tests will be reported as passing when they have not, in fact, passed.

@wolever commented Mar 19, 2019

And to address a couple of the common issues that have been raised with this kind of testing:

But what if the tests never run because the service is always down?

This is a business decision: I've decided that the cost of "test suite fails every time $service goes down" is higher than the cost of "certain portions of the test suite are not exercised until someone responds to the pagerduty and fixes the broken service".

Why not retry until the service comes back?

  1. Business decision (see above)
  2. For Reasons (which you will have to trust me are Good), developers don't have consistent access to all the services, and it's simplest to skip the tests they can't run.

@okorz001 commented Apr 10, 2019

Jest does not allow you to skip a test once it's begun, but you can always extract that logic and conditionally use it or it.skip to define your problematic tests. This is arguably a better practice than inline skipping since this results in consistent enabling/disabling of related tests. (I suppose it's clunky if you only have a single test though.)

For example:

const {USERNAME, PASSWORD} = process.env
const withAuth = USERNAME && PASSWORD ? it : it.skip
if (withAuth === it.skip) {
    console.warn('USERNAME or PASSWORD is undefined, skipping authed tests')
}

withAuth('do stuff with USERNAME and PASSWORD', () => {
    // ...
})
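
The conditional binding above generalizes into a tiny helper. Below is a sketch; `testIf` is a hypothetical name, and `it`/`it.skip` are stubbed with plain functions so the snippet is self-contained and runnable outside a test runner:

```javascript
// Stand-ins for Jest's `it` / `it.skip`, so this sketch runs outside a runner.
const registered = [];
const it = (name, fn) => registered.push({ name, skipped: false });
it.skip = (name, fn) => registered.push({ name, skipped: true });

// testIf: pick `it` or `it.skip` based on a condition (the withAuth pattern).
const testIf = (condition) => (condition ? it : it.skip);

const hasAuth = false; // pretend credentials are missing
testIf(hasAuth)('does stuff with USERNAME and PASSWORD', () => {});
testIf(true)('always runs', () => {});

console.log(registered.map((t) => `${t.name}: skipped=${t.skipped}`).join('\n'));
```

In a real suite you would drop the stubs and use the globals Jest injects; the selection logic is unchanged.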

@kaiyoma commented Apr 15, 2019

What if you want to skip certain tests on certain OSes? That seems like a pretty valid reason for programmatically skipping tests.

@SimenB (Member) commented Apr 16, 2019

@kaiyoma see above

Another possibility

if (thing) test.only('skipping all other things', () => {
  console.warn('skipping tests');
});

// ...all other things

Jest itself uses this to skip certain tests on Windows

@Asday commented May 13, 2019

describe('something wonderful I imagine', () => {
  it('can set up properly', () => {
    setUp()
  })

  it('can do something after setup', () => {
    skipIf('can set up properly').failed()

    setUp()
    doSomethingThatDependsOnSetUpHavingWorked()
  })
})

Idea being that I want exactly one test to fail, telling me "hey dingus, you broke this part", not that test plus all the others that depend on whatever it's testing going wrong.

Basically I want test dependencies in JS.

@ClaytonSmith

I agree with @wolever 100%.

@palmerj3 and @aaronabramov: Your reasoning for not providing this feature is predicated on false assumptions about the business needs of our test applications. Your assumptions are understandable in the context of application self-testing, but for external resource tests the model breaks down fast.

@dandv (Contributor) commented Jun 23, 2019

My use case for conditionally skipping tests is when the resource is only available during certain times of the day/week. For example, testing the API consistency of a live stock market data service doesn't make sense on weekends, so those tests should be skipped.

Yes, I assume the risk that the API response format changed over the weekend, but that's a business decision, as others have mentioned.

@okorz001's withAuth workaround is nifty, but breaks IDEs. VS Code, WebStorm etc. won't recognize withAuth as a test, and won't enable individual test running and status:

[screenshot omitted: IDE not recognizing withAuth tests]

@Asday commented Jun 23, 2019

I'm assuming you have a very good reason to not mock the API calls in tests, so I won't ask.

Can't you just perform your check within the tests?

const hasAuth = () => USER && PASSWORD

describe('something wonderful', () => {
  it('does something with auth', () => {
    if (!hasAuth()) { it.skip() }

    // ...
  })
})

@SimenB (Member) commented Jun 24, 2019

describe('auth tests', () => {
  if (!(USER && PASSWORD)) {
    it.only('', () => {
      console.warn('Missing username and password, skipping auth tests');
    });
  }

  // actual auth tests
});

// all non-auth tests

You could have a helper as well, sort of like invariant.

import {skipTestOnCondition} from './my-test-helpers'

describe('auth tests', () => {
  skipTestOnCondition(!(USER && PASSWORD));

  // actual auth tests
});

// all non-auth tests

If you don't like describe blocks, just split the tests out into multiple files and have the check at the top level.

Again, Jest does something very similar: https://github.com/facebook/jest/blob/3f5a0e8cdef4983813029e511d691f8d8d7b15e2/packages/test-utils/src/ConditionalTest.ts


I don't think we need to increase the API surface of Jest for this, when it's trivial to implement in user land
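
A userland helper along those lines might look like the sketch below. `skipTestOnCondition` follows the name used in the comment above, but the body is an assumption; Jest's `it`/`it.only` are stubbed so the snippet runs standalone:

```javascript
// Stand-ins for Jest's globals, just enough to demonstrate the helper shape.
const log = [];
const it = Object.assign(
  (name, fn) => log.push(`run:${name}`),
  { only: (name, fn) => log.push(`only:${name}`) }
);

// skipTestOnCondition: when `condition` is true, register a single it.only
// placeholder so Jest would skip every other test in the block.
function skipTestOnCondition(condition, reason = 'precondition failed') {
  if (condition) {
    it.only(`skipped block: ${reason}`, () => console.warn(reason));
  }
}

skipTestOnCondition(true, 'missing USER or PASSWORD');
it('actual auth test', () => {});

console.log(log.join('\n'));
```

With the real globals, the `it.only` registration makes the runner ignore the rest of the block, which is exactly the trick described above.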

@Asday commented Jun 24, 2019

I don't think we need to increase the API surface of Jest for this, when it's trivial to implement in user land

I do still want this, which isn't trivial in userland:

#7245 (comment)

@SimenB (Member) commented Jun 24, 2019

Seems also like something you can use --bail to achieve - it won't execute tests after a failing one (in a single file). That's also an entirely different feature request than what I interpret this issue to be about - you want access to other test's state from within a test.

@jasonnathan

In our case, we have different teams working on different aspects of the pipeline. We use ENV flags to decide if a service is available for integration testing. For teams to work independently, we wanted to see if we could run full test suites based on a given ENV set up in the pipeline, or skip them altogether.

Using your suggested approach above, @medikoo, it would mean a member from team X would have to go back and touch code when team Y completes their service.

If the mocked specifications worked well and were well tested in the first place, there shouldn't be a need to do this at all.

Please consider this use case.

@elialgranti

Please reconsider this feature request:

Unit testing frameworks are used not only for unit testing; they are also valuable in integration testing. Obviously not all tests are suitable for all environments. Jest already provides skip() as a way to "comment out" parts of the test suite without the actual cumbersome commenting out of code. Having a predicate in the skip method would make it less cumbersome to switch parts of the test suite on and off to suit the environment (Windows vs. Unix, local dev vs. build server, etc.) the tests are running on.

@wolever commented Sep 16, 2019

@elialgranti There has been more discussion on this (and it seems like core devs are in favour of it) in #8604

@dandv (Contributor) commented Oct 6, 2019

@SimenB: I've tried the it.only structure you suggested for synchronous test skipping but it's failing the non-auth tests outside the describe as well:

[screenshot omitted: non-auth tests outside the describe reported as failed]

https://repl.it/repls/SillyPastDictionary

Filed #9014.

@andreabisello

pytest permits skipping tests programmatically because of missing preconditions.

If a test cannot pass because of missing preconditions, it will give a false positive.

For example, I need to test that pressing a button switches my lamp on.

If there is no electric power, the test needs to be skipped; otherwise you will get a false positive.

There will be another test somewhere that checks there is electrical power.

For now, in Jest, I need to avoid the assertions that cannot succeed:

test("something", async () => {
  const precondition = something;

  if (precondition) {
    // do stuff
  }
});

but this is boilerplate.
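
One way to cut that boilerplate is to wrap the precondition check once. This is only a sketch: `guardedTest` and `powerIsOn` are hypothetical names, and Jest's `test` is stubbed (it runs bodies immediately) so the example is self-contained:

```javascript
// Stand-in for Jest's `test` global; it runs the body immediately here.
const results = [];
const test = (name, fn) => { fn(); results.push(name); };

// guardedTest: wrap the precondition check once so each test body
// doesn't have to repeat the `if (precondition)` boilerplate.
const guardedTest = (precondition) => (name, fn) =>
  test(name, async () => {
    if (!(await precondition())) return; // precondition missing: bail out quietly
    await fn();
  });

const powerIsOn = async () => true; // e.g. probe the lamp's power supply
const lampTest = guardedTest(powerIsOn);

lampTest('pressing the button switches the lamp on', async () => {
  // assertions would go here
});

console.log(results);
```

The guard lives in one place, but note the thread's caveat still applies: the test is reported as passed, not skipped, when the precondition fails.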

@nickpalmer

Another reason to have a feature for this: We have some VERY long running tests on sections of a system that doesn't change much. We want to keep these running on CI, where we don't care if they take a long time, but devs shouldn't have to worry about them while developing. Many other test runners have ways of classifying tests (small, big, ci, etc...) and then you can pick which tests you want to run while developing.

@moltar commented May 3, 2020

Another use case is the ability to run some tests only in CI.

E.g. I have some canary integration tests that run against a production system, using secrets, which are stored in CI only. Developers, including open source devs, simply just don't have access to these keys. So the tests would fail on their systems.

@justingrant

Another use case similar to @moltar's is when a server may or may not have the capability to run a particular test. For example, I'm writing tests to verify that Daylight Saving Time is handled correctly, but if they run in a local timezone without DST (which I can detect programmatically), I want to skip the test and let users know that it was skipped.

Here's how I'm doing it now, which seems to be working pretty well.

const tz = Intl.DateTimeFormat().resolvedOptions().timeZone || process.env.tz
const dstTransitions = getDstTransitions(2020)
const dstOnly = dstTransitions.start && dstTransitions.end ? it : it.skip
dstOnly(`works across DST start & end in local timezone: ${tz}`, function() { 
  . . .

@GursheeshSingh commented May 6, 2020

#3652 (comment)

const testIfCondition = mySkipCondition ? test.skip : test;
describe('Conditional Test', () => {
  testIfCondition('Only test on condition', () => {
    ...
  });
});

@joshuapinter

We ended up doing something like this:

function describe( name, callback ) {
  if ( name.toLowerCase().includes( "bluetooth" ) && BLUETOOTH_IS_DISABLED  )
    return this.describe.skip( name, callback );
  else
    return this.describe( name, callback );
}

Not perfect but works well and is unobtrusive. This prevents the usage of describe.each but I'm happy to get feedback on how to make this function handle those situations as well.
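
One possible way to keep `describe.each` usable with such a wrapper is to copy the original function's properties onto it. This is a sketch, not the author's code: `jestDescribe`, the stubbed globals, and the `BLUETOOTH_IS_DISABLED` flag are assumptions mirroring the comment above:

```javascript
// Stand-ins for Jest's `describe`, `describe.skip`, and `describe.each`.
const calls = [];
const jestDescribe = Object.assign(
  (name, cb) => calls.push(`run:${name}`),
  {
    skip: (name, cb) => calls.push(`skip:${name}`),
    each: (table) => (name, cb) =>
      table.forEach((row) => calls.push(`run:${name} ${row}`)),
  }
);

const BLUETOOTH_IS_DISABLED = true; // assumed flag, as in the comment above

// Conditional wrapper, with the original's properties copied back on
// so `describe.each` and `describe.skip` remain usable.
function describe(name, callback) {
  if (name.toLowerCase().includes('bluetooth') && BLUETOOTH_IS_DISABLED) {
    return jestDescribe.skip(name, callback);
  }
  return jestDescribe(name, callback);
}
Object.assign(describe, jestDescribe);

describe('Bluetooth pairing', () => {});
describe('WiFi pairing', () => {});
describe.each(['a', 'b'])('table case', () => {});

console.log(calls.join('\n'));
```

Note that `.each` here just passes through unconditionally; making it honor the Bluetooth check too would need the same condition inside the `each` wrapper.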

@xenoterracide

My use case for this is tests relying on external services that we have no control over. Obviously some part of the test should be mocked, but it would also be good to actually test the request to the service.

@KelvinSan

Was this ever resolved?

@tv42 commented Aug 20, 2020

I only want to run integration tests against service Foo when I've started service Foo and indicated it to my tests with $FOO_PORT. Every single other test framework makes that very convenient.

@glassdimlygr commented Aug 27, 2020

Has anyone successfully gotten tests to skip based on an async condition? I have found that Jest parses tests before my async condition resolves, even with various assiduous logic in beforeAll or a setup file. This seems like a trivial task, but for some reason it's actually hard. This is what I did to check whether $FOO_PORT was in use and run tests conditionally.

I'd still like to see this feature in core.

@orgalaf commented Feb 10, 2021

Pretty annoying that this isn't accepted as a feature. The arguments against feel a bit weak. Especially as there is a clear use case for it. In any case if you're still looking for a solution there is a pretty simple pattern here: https://stackoverflow.com/questions/58264344/conditionally-run-tests-in-jest/66143240#66143240

@mikaelkundert commented Feb 23, 2021

My use case is running a matrix of tests. I use describe.each() and test.each(), and I want a report of the cases that are todo or skipped.

  • Why todo? Because some tests are still waiting for their expects to be written for that matrix point
  • Why skipped? You might have a matrix point where the setup and the action under test are the same, so there is no point executing it

If you have 20 preconditions with 20 tests to go through, you can't expect someone to write 400 test cases by copy-pasting similar code and manually naming every scenario individually.

Having this.skip() and this.todo() (or a similar way to specify this within the test body) would help tremendously in getting a good report of tests, including the ones that are todo/skipped.
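
One userland approximation of that report is to partition the matrix before handing it to `test.each` / `test.skip.each`, so skipped points are still listed. A sketch, with Jest's table functions stubbed so it runs standalone; the `noop` field is a hypothetical marker:

```javascript
// Stand-ins for Jest's `test.each` / `test.skip.each`.
const report = [];
const test = {
  each: (table) => (name) =>
    table.forEach((c) => report.push(`run:${name} ${c.id}`)),
  skip: {
    each: (table) => (name) =>
      table.forEach((c) => report.push(`skip:${name} ${c.id}`)),
  },
};

// The matrix; `noop` marks points where setup and the tested action coincide.
const matrix = [
  { id: 'a', noop: false },
  { id: 'b', noop: true },
  { id: 'c', noop: false },
];

// Partition once, then register each half with the matching variant,
// so skipped points still show up in the report.
test.each(matrix.filter((c) => !c.noop))('matrix point');
test.skip.each(matrix.filter((c) => c.noop))('matrix point');

console.log(report.join('\n'));
```

This only covers conditions known at registration time, not mid-test skipping, which is what the feature request is about.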

@jmclean-cnexus

FWIW, you can work around this using jest-circus. You have to mutate the state of a test case, so be careful. See below:

  1. Create a CustomEnvironment for jest:
const NodeEnvironment = require('jest-environment-node');
const { stateService } = require('./dist/app/utils/state.service');

/**
 * @jest-environment ./jest-environment
 */
class CustomEnvironment extends NodeEnvironment {
    constructor(config, context) {
        super(config);
        this.global.stateService = stateService
    }

    async handleTestEvent(event, state) {
        if (event.name === 'run_describe_start') {
          stateService.setValueOf('skip',false)
        }
        if (event.name === 'test_start') {
          try {
            if(event.test.status !== 'skip' ) this.checkForPreviousFailures(state.currentlyRunningTest)
          } catch(e) {
            stateService.setValueOf('skip', true)
          }
        }
        if (event.name === 'test_done') {
          if(event.test.errors.some(e => e[0].name === 'DependencyError')) {
            event.test.errors = []
            event.test.status = 'skip'
          }
        }
      }

    checkForPreviousFailures(state) {
      if(state.parent.children.find(child => child.errors.length > 0)) throw new Error('Found an Error')
    }
}
module.exports = CustomEnvironment
  2. Create a state service:
// A Simple Key-Value store for JS
export class StateService {
    private _store: any = {};

    constructor() { }

    get store() {
        return this._store;
    }

    getValueOf(key: string) {
        if(this._store[key] !== undefined) return this._store[key];
        else throw new Error(`${key} does not exist within the current store`); 
    }

    setValueOf(key: string, value: any) {
        this._store[key] = value;
    }

    appendTo(key: string, value: any, create: boolean = true) {
        if(this._store[key]) {
            if(!Array.isArray(this._store[key])) throw `key=${key} is not an array type, thus it cannot be appended`
            this._store[key].push(value);
        } else {
            if(create) {
                this._store[key] = [value]
            } else {
                throw new Error(`${key} does not exist within the current store`)
            }
        }
    }
    
}

export const stateService = new StateService();
  3. Write a dependency service:
import { StateService } from './state.service';
// @ts-ignore
let stateService: StateService = global.stateService;

export class DependencyError extends Error {
    constructor(message: string) {
        super(message);
        this.name = "DependencyError"
    }
}

export const checkDependency = () => {
    if(stateService.getValueOf('skip')) throw new DependencyError("A dependency check has failed")
}
  4. Write your test case:
import { checkDependency } from "../../utils/dependency.service"

describe('Domain1', () => {
    describe('E2E flow 1', () => {
        it('should be true', () => {
            expect(true).toBeTruthy()
        })
        it('should fail', () => {
            checkDependency()
            expect(true).toBeFalsy()
        })
        it('should skip this', () => {
            checkDependency()
            expect(true).toBeTruthy()
        })
        it('should skip this too', () => {
            checkDependency()
            expect(true).toBeTruthy()
        })
    })
    describe('E2E flow 2', () => {
        it('should be true', () => {
            expect(true).toBeTruthy()
        })
        it('should be true', () => {
            checkDependency()
            expect(true).toBeTruthy()
        })
        it('should be true', () => {
            checkDependency()
            expect(true).toBeTruthy()
        })
        it('should be true', () => {
            checkDependency()
            expect(true).toBeTruthy()
        })
    })
})
  5. Observe the results:
 FAIL  src/api/app/e2e/positive/dependency.spec.ts
  Domain1
    E2E flow 1
      ✓ should be true (3 ms)
      ✕ should fail (2 ms)
      ○ skipped should skip this
      ○ skipped should skip this too
    E2E flow 2
      ✓ should be true
      ✓ should be true
      ✓ should be true
      ✓ should be true

  ● Domain1 › E2E flow 1 › should fail

    expect(received).toBeFalsy()

    Received: true

       8 |         it('should fail', () => {
       9 |             checkDependency()
    > 10 |             expect(true).toBeFalsy()
         |                          ^
      11 |         })
      12 |         it('should skip this', () => {
      13 |             checkDependency()

      at Object.<anonymous> (src/api/app/e2e/positive/dependency.spec.ts:10:26)

Test Suites: 1 failed, 1 total
Tests:       1 failed, 2 skipped, 5 passed, 8 total
Snapshots:   0 total
Time:        2.123 s
Ran all test suites matching /src\/api\/app\/e2e\/positive\/dependency.spec.ts/i.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

@ali-hellani commented Mar 5, 2021

There are two ways to disable tests in Jest which I believe are not documented yet (!! Really !!)

With a conditional statement, you can call pending() anywhere inside the test function; the other option is to not provide a function body for the test (only a name). Here is an example:

testIf(element.active)(testName, async () => {
      if (parents && parents.filter((e) => e in testsFailed).length > 0) {
        await Helpers.log(
          testName + ' ==> ' + '(Skipped test due to a failed dependent test)',
          'warning'
        )
        pending()
      }

      await saveTestState(element.function, async () => {
        try {
          await FuncInstance[functionName](element, extra)
        } catch (error) {
          testsFailed = { ...testsFailed, [functionName]: true }
          throw new Error(error)
        }
      })
})

[screenshots omitted: resulting test output with the dependent test reported as pending]

@josiah-roberts

This is also extremely useful in theory-based testing that uses the assume paradigm. Allowing the caller to generate test cases and then if (!someTestCaseConstraintCheck) skip(); is very useful.

@github-actions (bot)

This issue has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.
Please note this issue tracker is not a help forum. We recommend using StackOverflow or our discord channel for questions.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators May 10, 2021