#[must_use] for benchmarking components #285

Open · Tracked by #264
ggwpez opened this issue Jan 31, 2022 · 17 comments

Labels
C1-mentor: A task where a mentor is available. Please indicate in the issue who the mentor could be.
D0-easy: Can be fixed primarily by duplicating and adapting code by an intermediate coder.
I5-enhancement: An additional feature request.
T1-FRAME: This PR/Issue is related to core FRAME, the framework.

Comments

ggwpez (Member) commented Jan 31, 2022

Some faulty benchmarks ignore their complexity components, as described in #400.
To avoid this in the future, we can introduce a #[must_use] for complexity params.
This can obviously only be used in cases where the implementation would be faulty if they were ignored.

E.g. the following benchmark's result must depend on n, otherwise it is faulty:

#[must_use]
let n in 0..100;

for _ in 0..n {
    sleep(1 * SEC);
}

An error could be emitted when a weight is derived that does not depend on a #[must_use] component.
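For contrast, a hypothetical faulty variant (not from the issue): if the loop's work has no observable effect, the compiler may optimize it away entirely, so the measured weight would not depend on n even though the component was declared:

#[must_use]
let n in 0..100;

// The accumulator is never observed, so the optimizer is free to delete
// the whole loop; the measured time then stays constant in n.
let mut acc = 0u64;
for i in 0..n {
    acc += i as u64;
}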

KiChjang changed the title from "#[must_must] for benchmarking components" to "#[must_use] for benchmarking components" on Feb 1, 2022
ECJ222 commented Mar 12, 2022

Hi @ggwpez can I attempt this issue?

shawntabrizi (Contributor) commented:
I think we shouldn't tackle this until we transition the entire benchmarking macro into an attribute macro:

paritytech/substrate#10848

Are you good with macros @ECJ222? Do you think you can take this on?

ECJ222 commented Mar 13, 2022

Hi @shawntabrizi

I have not done much with macros yet, but I will attempt the issue; it would be good experience.

Do you want me to make the migrations in all the benchmark files, or do you have any in mind that I should start with?

shawntabrizi (Contributor) commented:

This is going to be quite a large issue unless you are familiar with macro development. But yeah, I would start by building something that works with the pallet-balances benchmarks.

ECJ222 commented Mar 14, 2022

Okay @shawntabrizi, I will jump on it.

shawntabrizi (Contributor) commented:

@ggwpez do you think this is still needed given the improvements to the benchmarking pipeline accuracy?

ggwpez (Member, Author) commented Aug 11, 2022

> do you think this is still needed given the improvements to the benchmarking pipeline accuracy?

This is more of a Dev-UX thing, but I'm not sure anymore how much sense it makes.
Annotating all benchmark components with #[must_use] or #[maybe_unused] is also annoying.
Just printing a warning when a component is not used would also work.

shawntabrizi (Contributor) commented:

I'll leave it open. Doesn't sound like something that would be bad to have.

ECJ222 commented Jan 10, 2023

> do you think this is still needed given the improvements to the benchmarking pipeline accuracy?
>
> This is more of a Dev-UX thing, but I'm not sure anymore how much sense it makes. Annotating all benchmark components with #[must_use] or #[maybe_unused] is also annoying. Just printing a warning when a component is not used would also work.

Is this still needed @ggwpez?

ggwpez (Member, Author) commented Jan 10, 2023

I think we could add this purely to the new benchmarking syntax, which is being finalized here: paritytech/substrate#12924
Or rather: have it implicitly warn on unused variables like Clippy does. Not sure what we would need to do for that, but a second look would be good. cc @sam0x17
So maybe it's not needed anymore, I am not sure.

sam0x17 (Contributor) commented Jan 10, 2023

If this is still needed I can easily add it in a follow-up PR to paritytech/substrate#12924

ggwpez (Member, Author) commented Jan 10, 2023

> If this is still needed I can easily add it in a follow-up PR to paritytech/substrate#12924

Let's keep your MR intact as it is right now and evaluate how useful this would be after it's done.
PS: My smooth brain did not read the word "follow-up", nvm 🤦‍♂️

sam0x17 (Contributor) commented Jan 10, 2023

Yup that's what I'm saying ;)

sam0x17 (Contributor) commented Feb 1, 2023

Any syntax ideas for this, @ggwpez?

something like:

#[benchmark(must_use)]
fn my_benchmark(x: Linear<1, 10>) {
    ...
}

Or are there scenarios where we want them to be able to specify must use on one component but not another, like:

#[benchmark(must_use(y))]
fn my_benchmark(x: Linear<1, 10>, y: Linear<1, 100>) {
    ...
}

Also, I assume we are going to want to actually scan the benchmark function in the must_use case to ensure the parameter is being used, if I am understanding correctly? If that is the case, this would be a really good place to use syn's visitor pattern support 🤔
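For illustration, a minimal sketch of such a scan (assuming the syn crate with the "full" and "visit" features enabled; ParamUsage and component_is_used are hypothetical names, not existing FRAME code):

use syn::visit::{self, Visit};
use syn::{Ident, ItemFn};

// Walks a function body and records whether a given identifier appears in it.
struct ParamUsage<'a> {
    param: &'a Ident,
    used: bool,
}

impl<'a, 'ast> Visit<'ast> for ParamUsage<'a> {
    fn visit_ident(&mut self, ident: &'ast Ident) {
        if ident == self.param {
            self.used = true;
        }
        visit::visit_ident(self, ident);
    }
}

// Returns true if `param` is mentioned anywhere in the benchmark function body.
fn component_is_used(func: &ItemFn, param: &Ident) -> bool {
    let mut usage = ParamUsage { param, used: false };
    usage.visit_block(&func.block);
    usage.used
}

Note this only checks the AST for a mention of the parameter; as the next comment points out, the semantic requirement is really about the derived weight formula.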

ggwpez (Member, Author) commented Feb 1, 2023

Ah wait, the must_use should actually require that the component appears in the output weight formula.
So it's not really a syntax thing; it's mostly just passing it to the backend and then processing it there.
Maybe must_depend_on or some better name is possible?
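A hypothetical backend check along those lines (illustrative only; ComponentSlope and check_must_use are made-up names, not actual FRAME API): after the regression fits a slope per component, any must_use component whose fitted slope is zero would be rejected:

// Fitted weight contribution per unit of one component.
struct ComponentSlope {
    name: &'static str,
    slope_per_unit: f64,
}

// Errors if a must_use component does not appear in the derived weight
// formula, i.e. its fitted slope is (effectively) zero.
fn check_must_use(must_use: &[&str], slopes: &[ComponentSlope]) -> Result<(), String> {
    for name in must_use {
        let slope = slopes
            .iter()
            .find(|s| s.name == *name)
            .map_or(0.0, |s| s.slope_per_unit);
        if slope.abs() < f64::EPSILON {
            return Err(format!(
                "component `{name}` is marked must_use, but the derived weight does not depend on it"
            ));
        }
    }
    Ok(())
}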

sam0x17 (Contributor) commented Feb 1, 2023

I think must_use is concise and good wording-wise based on what you are saying it is for (and has the same meaning as must_depend_on, must_involve, etc.), but another question that occurs to me is which case is more common: the "must use" case, or the case where they are allowed to not use the parameter? If that other case is very uncommon, maybe we should assume must_use for everything unless you opt out of it with something like can_ignore(x)?

ggwpez (Member, Author) commented Feb 1, 2023

Yeah, we can easily prototype either in the backend and then see how often that occurs.
But I would assume that in most cases it is a bug in the benchmark, for example here in the baseline addition, which ignores its component because of compiler optimizations.
I can't remember any legit cases where it is supposed to be ignored…
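For reference, the standard way to stop the compiler from eliminating such work is core::hint::black_box; a minimal sketch of a fixed addition benchmark body (illustrative, not the actual baseline code):

use core::hint::black_box;

// black_box forces the compiler to treat the value as used, so the
// additions cannot be constant-folded or removed and the measured
// time actually scales with n.
fn baseline_addition(n: u32) -> u64 {
    let mut acc = 0u64;
    for i in 0..n {
        acc = acc.wrapping_add(black_box(i as u64));
    }
    black_box(acc)
}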

juangirini transferred this issue from paritytech/substrate on Aug 24, 2023
the-right-joyce added the I5-enhancement and C1-mentor labels and removed the J0-enhancement label on Aug 25, 2023
the-right-joyce added the T1-FRAME and D0-easy labels and removed the Z6-mentor label on Aug 25, 2023
claravanstaden pushed a commit to Snowfork/polkadot-sdk that referenced this issue Dec 8, 2023
Projects
Status: Backlog