Please note that we have a code of conduct; please follow it in all your interactions with the project.

Before you submit an issue, please search the issue tracker; an issue for your problem may already exist, and the discussion might inform you of readily available workarounds.

If you found a bug, please provide steps to reproduce it, or ideally fork the repository and add a failing test that demonstrates what is wrong. This will help us understand and fix the issue faster.
Before you submit your pull request, consider the following guidelines:
- Search GitHub for an open or closed PR that relates to your submission. You don't want to duplicate effort.
- Fork the project and install NPM dependencies.
- Run the tests before you start working, to be sure they all pass and your setup works correctly:

  ```bash
  yarn test
  ```

- Be sure to include appropriate test cases. Tests make it clear what the PR fixes and ensure that the changes won't break over time.
- Commit your changes using a descriptive commit message that follows the defined commit message conventions. Adherence to these conventions is necessary because release notes are generated automatically from these messages.
- Push the code to your forked repository and create a pull request on GitHub.
- If a project contributor suggests changes:
    - Make the required updates.
    - Re-run all test suites to ensure the tests are still passing.
    - Commit and push the changes. Don't rebase after you get a review, so it is clear what you changed in the last commit. The PR will be squash merged, so its history is irrelevant.
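The exact commit message conventions are defined in the project's documentation; purely as an illustration (the Conventional-Commits-style format and the scope name below are assumptions, not confirmed here), a descriptive message might look like:

```text
feat(basic-crawler): add an option to retry failed requests
```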
That's it! Thank you for your contribution!
This project now uses yarn v4 to manage dependencies. You will need to install it; the easiest way is by using corepack:

```bash
corepack enable
```
Our proxy tests use different loopback addresses to ensure traffic correctness. Unlike Windows and Linux, macOS comes with only one loopback address, 127.0.0.1. Therefore, it is necessary to run the following once per system startup:

```bash
sudo ifconfig lo0 alias 127.0.0.2 up
sudo ifconfig lo0 alias 127.0.0.3 up
sudo ifconfig lo0 alias 127.0.0.4 up
```
Arch Linux is not officially supported by Playwright, which causes problems in tests. You need to install the dependencies manually:

```bash
yay -S libffi7 icu66 libwebp052 flite-unpatched
sudo ln -s /usr/lib/libpcre.so /usr/lib/libpcre.so.3
```
There are a few small differences between how testing works in jest and vitest. Mostly, they relate to what to do, and what not to do anymore.

You will need to use this `tsconfig.json` in the `test` folder of the package (say, if you were adding a test to `packages/core` and there wasn't a `tsconfig.json` file already there):

```json
{
    "extends": "../../../tsconfig.json",
    "include": ["**/*", "../../**/*"],
    "compilerOptions": {
        "types": ["vitest/globals"]
    }
}
```
Mocks are pretty much the same in jest and vitest. One crucial difference is that you no longer need to unmock modules in an `afterAll` block, as they are mocked per test file.

Before, with jest:

```ts
jest.mock('node:os', () => {
    const original: typeof import('node:os') = jest.requireActual('node:os');

    return {
        ...original,
        platform: () => 'darwin',
        freemem: jest.fn(),
    };
});

afterAll(() => {
    jest.unmock('node:os');
});
```

Now, with vitest:

```ts
vitest.mock('node:os', async (importActual) => {
    const original = await importActual<typeof import('node:os')>();

    return {
        ...original,
        platform: () => 'darwin',
        freemem: vitest.fn(),
    };
});
```
Given the following two samples:

```ts
// Example 1
import os from 'node:os';

console.log(os.platform());
```

```ts
// Example 2
import { platform } from 'node:os';

console.log(platform());
```

You will need to mock the module based on how you import it in the source code. This means that if you import the default export, you will need to add a `default` property to the mocked object. Otherwise, you will need to mock the module as is.
So, for example 1:

```ts
vitest.mock('node:os', async (importActual) => {
    const original = await importActual<
        typeof import('node:os') & { default: typeof import('node:os') }
    >();

    const platformMock = () => 'darwin';
    const freememMock = vitest.fn();

    return {
        ...original,
        platform: platformMock,
        freemem: freememMock,
        // Specifically, you'll need to also mock the `default` property of the module, as seen below
        default: {
            ...original.default,
            platform: platformMock,
            freemem: freememMock,
        },
    };
});
```
And for example 2:

```ts
vitest.mock('node:os', async (importActual) => {
    const original = await importActual<typeof import('node:os')>();

    const platformMock = () => 'darwin';
    const freememMock = vitest.fn();

    return {
        ...original,
        platform: platformMock,
        freemem: freememMock,
    };
});
```
In previous jest code, we had to cast mocked functions as `jest.MockedFunction`. This is technically still needed, but vitest gives us a utility function that does the cast for us: `vitest.mocked()`. It doesn't do anything at runtime, but it helps with type inference.

```ts
import os from 'node:os';

const mockedPlatform = vitest.mocked(os.platform);
```
You no longer need to reset spies to their original implementation; this is done automatically for you via vitest's `restoreMocks` option.

That said, if you create spies in a `beforeAll`/`beforeEach` hook, you might need to call `vitest.setConfig({ restoreMocks: false });` at the start of your file, as otherwise your spies will be reset before your tests run.
In previous jest code, you could do something like this:

```ts
const spy = jest.spyOn(os, 'platform').mockReturnValueOnce('darwin');

expect(os.platform()).toBe('darwin');
expect(spy).toHaveBeenCalledTimes(1);

const spy2 = jest.spyOn(os, 'platform').mockReturnValueOnce('linux');

expect(os.platform()).toBe('linux');
expect(spy).toHaveBeenCalledTimes(2);
```

This is no longer valid in vitest. You will need to reuse the same spy instance instead:

```ts
const spy = vitest.spyOn(os, 'platform').mockReturnValueOnce('darwin');

expect(os.platform()).toBe('darwin');
expect(spy).toHaveBeenCalledTimes(1);

spy.mockReturnValueOnce('linux');

expect(os.platform()).toBe('linux');
expect(spy).toHaveBeenCalledTimes(2);
```
In jest, we were able to do the following to adjust timeouts at runtime:

```ts
if (os.platform() === 'win32') {
    jest.setTimeout(100_000);
}
```

In vitest, you need to call the `vitest.setConfig` function instead (and specify what to change):

```ts
if (os.platform() === 'win32') {
    vitest.setConfig({
        testTimeout: 100_000,
    });
}
```
In jest, we were able to call the callback provided to the hooks to signal that the hook has executed successfully:

```ts
beforeAll((done) => {
    // Do something

    done();
});
```

In vitest, this is no longer provided, but it can be substituted with a promise:

```ts
beforeAll(async () => {
    await new Promise<void>((resolve) => {
        // Do something

        resolve();
    });
});
```
Important

Certain projects, like puppeteer, declare `const enum`s in their typings. These are enums that do not actually exist at runtime, but whose values `tsc` (which is what we currently use to compile Crawlee) can inline directly into the compiled code. You should avoid importing `const enum`s, as vitest will not inline them like `tsc` does and will throw an error, unless the enum is also present at runtime (check by importing the module and seeing if it's exported anywhere).
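As a minimal sketch of the workaround (the enum below is hypothetical, not an actual puppeteer export), use the literal value that `tsc` would have inlined instead of importing the `const enum`:

```ts
// Suppose a dependency's typings declare a const enum with no runtime
// counterpart:
//
//     declare const enum TargetType { Page = 'page' }   // hypothetical
//
// Under tsc, `TargetType.Page` compiles to the literal 'page', but vitest
// does not perform this inlining, so importing the enum throws at runtime.
// Use the literal value directly, and note which member it mirrors:
const targetType = 'page'; // mirrors the hypothetical TargetType.Page

console.log(targetType);
```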
Some tests may want to check error stack traces for the presence of class names (a prime example is our tests for logging the stack traces for certain logger levels). In jest, you were able to do this:

```ts
expect(/at BasicCrawler\.requestHandler/.test(stackTrace)).toBe(true);
```

In vitest, at the time of writing (2023/10/12), class names get an `_` prepended to them. To solve this, just add `_?` to your regular expression (this will match both with and without the `_`):

```ts
expect(/at _?BasicCrawler\.requestHandler/.test(stackTrace)).toBe(true);
```
In the interest of fostering an open and welcoming environment, we as contributors and maintainers pledge to making participation in our project and our community a harassment-free experience for everyone, regardless of age, body size, disability, ethnicity, gender identity and expression, level of experience, nationality, personal appearance, race, religion, or sexual identity and orientation.
Examples of behavior that contributes to creating a positive environment include:
- Using welcoming and inclusive language
- Being respectful of differing viewpoints and experiences
- Gracefully accepting constructive criticism
- Focusing on what is best for the community
- Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
- The use of sexualized language or imagery and unwelcome sexual attention or advances
- Trolling, insulting/derogatory comments, and personal or political attacks
- Public or private harassment
- Publishing others' private information, such as a physical or electronic address, without explicit permission
- Other conduct which could reasonably be considered inappropriate in a professional setting
Project maintainers are responsible for clarifying the standards of acceptable behavior and are expected to take appropriate and fair corrective action in response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or reject comments, commits, code, wiki edits, issues, and other contributions that are not aligned to this Code of Conduct, or to ban temporarily or permanently any contributor for other behaviors that they deem inappropriate, threatening, offensive, or harmful.
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or its community. Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event. Representation of a project may be further defined and clarified by project maintainers.
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at support@apify.com. All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances. The project team is obligated to maintain confidentiality with regard to the reporter of an incident. Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good faith may face temporary or permanent repercussions as determined by other members of the project's leadership.
This Code of Conduct is adapted from the Contributor Covenant, version 1.4, available at http://contributor-covenant.org/version/1/4, and from PurpleBooth.