
More combination modes #12

Open
tyt2y3 opened this issue Jul 11, 2021 · 1 comment · May be fixed by #50

tyt2y3 commented Jul 11, 2021

My crate currently has 10 features, so the full matrix explodes to 2^10 = 1024 combinations.
It'd be useful to be able to test some meaningful subsets of all the possible combinations.

I'd propose some modes:

- **one-by-one**: enable each feature one by one
- **all-features**: all features enabled
- **pseudo-random**: pseudo-randomly pick N combinations
- **random**: truly randomly pick N combinations
- **mix-and-match**: divide features into M groups, and randomly pick R features from each group

For example, if we divide 10 features into 2 groups of 5 and pick up to 3 from each group, then:

((5 choose 3) + (5 choose 2) + (5 choose 1)) ^ 2 = 625 combinations.

After that, we can still randomly pick N from these possible combinations.
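The arithmetic above can be checked with a short sketch (the function names `choose` and `mix_and_match_count` are hypothetical illustrations, not part of any existing crate):

```rust
/// Binomial coefficient C(n, k), computed iteratively.
/// Each intermediate product of i consecutive factors is divisible by i,
/// so the integer division is exact.
fn choose(n: u64, k: u64) -> u64 {
    if k > n {
        return 0;
    }
    (1..=k).fold(1, |acc, i| acc * (n - i + 1) / i)
}

/// Number of mix-and-match combinations: pick between 1 and `max_pick`
/// features from each of `groups` equal groups of `group_size` features.
fn mix_and_match_count(groups: u32, group_size: u64, max_pick: u64) -> u64 {
    let per_group: u64 = (1..=max_pick).map(|r| choose(group_size, r)).sum();
    per_group.pow(groups)
}

fn main() {
    // 10 features in 2 groups of 5, picking up to 3 from each:
    // (C(5,3) + C(5,2) + C(5,1))^2 = (10 + 10 + 5)^2 = 625
    println!("{}", mix_and_match_count(2, 5, 3)); // prints 625
}
```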

This would yield a more reasonable set of combinations than purely random picking, since I observe that crates with many features usually divide their feature flags into functional categories. As an analogy, people choose different wheels and engines when building a car.

What do you think about this?

@tyt2y3 tyt2y3 changed the title More combinations mode More combination modes Jul 11, 2021
demmerichs (Contributor) commented:

Some thoughts on this one:

When you want to group by category/functionality, see #42. I am working on a PR that will allow specifying quite flexible rules for grouping and relating feature flags.

I can see the need for different testing configurations. Staying with the analogy: if we change something in the engine, we only want to test the feature combinations relating to the engine. For this it would be useful to have multiple named settings groups that change feature-set selection just by providing a group name on the CLI. This would also allow configuring a restricted/reduced feature set for fast testing, and a complete/large feature matrix for important milestone testing that is allowed to run longer.

Regarding the random subsampling, I am not a big fan of that. The whole purpose of testing feature combinations, and having control over which are tested, is to give developers confidence that they did not miss a bug. Randomly checking just a few combinations does not provide this confidence.
