
Separating tests into independent packages #820

Closed · moltar opened this issue Jan 11, 2022 · 3 comments

@moltar
Owner

moltar commented Jan 11, 2022

Having all validator packages installed in a single Node package (project) may lead to wrong results, or even errors, when validators share dependencies but declare loose version ranges.

A potential solution is to convert this repo into a monorepo with a separate package for each validator.

But the ideal solution should not break Dependabot / Renovate.
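A rough sketch of what that could look like, assuming npm/yarn workspaces (the root package name and directory layout below are hypothetical, not a worked-out design):

```json
{
  "name": "benchmarks-root",
  "private": true,
  "workspaces": [
    "packages/*"
  ]
}
```

Each packages/&lt;validator&gt; directory (e.g. packages/zod) would then carry its own package.json listing only that validator plus the shared benchmark harness, so two validators can no longer be forced onto the same resolved version of a shared transitive dependency. Both Dependabot and Renovate can be pointed at the individual package directories, so per-validator update PRs should keep working.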

@hoeck thoughts?

@moltar
Owner Author

moltar commented Jan 11, 2022

We could use Turborepo for monorepo management.
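For reference, a minimal turbo.json in the Turborepo 1.x format might look like the sketch below; the build and benchmark task names are assumptions and would have to match the scripts defined in each workspace package:

```json
{
  "pipeline": {
    "build": {
      "dependsOn": ["^build"],
      "outputs": ["dist/**"]
    },
    "benchmark": {
      "dependsOn": ["build"]
    }
  }
}
```

Turborepo would run and cache these tasks per package, which should keep full-repo benchmark runs reasonably cheap even with one package per validator.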

@hoeck
Collaborator

hoeck commented Jan 17, 2022

To be honest, that sounds like over-engineering to me.

Having looked at the code of some of the faster packages (gotta learn some new tricks 😁), I found that none of them use any third-party dependencies in their hot structure-checking sections. So an isolated-dependencies setup would have no effect on the benchmark top scorers, or at least I do not expect any directly measurable effect.

Before putting in all the work to restructure the whole repo, I'd try to measure the effect of the dependencies first, for example by running the existing benchmarks with only a single package installed vs. with all packages installed.

In the short run I would like to add some more feature-specific benchmarks: optional data, error performance, arrays of objects, unions, intersections, and enums.

In the long run I'd like to see a "real world API" benchmark that covers all possible features, with a parameterized input data set (e.g. size, shape, number of faulty records) that each package is checked against.
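To make the parameterized part concrete, a generator along the following lines could feed every package the same input; this is only a sketch, and the ApiRecord shape, the option names, and makeDataset are all hypothetical rather than existing benchmark code:

```ts
// Hypothetical sketch: parameterized input for a "real world API" benchmark.
interface DatasetOptions {
  size: number;        // number of records to generate
  faultyRatio: number; // fraction of records that get an invalid field
}

interface ApiRecord {
  id: number;
  name: string;
  tags: string[];
  status: 'active' | 'disabled';
  address?: { street: string; zip: string };
}

export function makeDataset({ size, faultyRatio }: DatasetOptions): unknown[] {
  return Array.from({ length: size }, (_, i) => {
    const record: ApiRecord = {
      id: i,
      name: `user-${i}`,
      tags: ['a', 'b'],
      status: i % 2 === 0 ? 'active' : 'disabled',
      ...(i % 3 === 0 ? { address: { street: 'Main St', zip: '12345' } } : {}),
    };
    // Deliberately break a field in a fraction of the records so the
    // error-reporting path of each validator is exercised too.
    return i < size * faultyRatio ? { ...record, id: String(i) } : record;
  });
}
```

Each package's benchmark case would then validate the same generated records, and faultyRatio controls how much of the run exercises the error path.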

@moltar
Owner Author

moltar commented Jan 18, 2022

> To be honest, that sounds like over-engineering to me.

That is most likely the case! 😁 I have this bad habit called "over-engineering".

Closing!

moltar closed this as completed Jan 18, 2022