#[must_use] for benchmarking components #285

Comments
Hi @ggwpez, can I attempt this issue?
I think we shouldn't tackle this until we transition the entire benchmarking macro into an attribute macro. Are you good with macros, @ECJ222? Do you think you can take this on?
I have not done much with macros yet, but I will attempt the issue; it would be good experience. Do you want me to make the migrations in all the benchmark files, or do you have any in mind I should start with?
This is going to be quite a large issue unless you are familiar with macro development. But yeah, I would start by building something that works with the pallet-balances benchmarks.
Okay @shawntabrizi, I will jump on it.
@ggwpez, do you think this is still needed, given the improvements to the benchmarking pipeline accuracy?
This is more of a dev-UX thing, but I'm not sure anymore how much sense it makes.
I'll leave it open; it doesn't sound like something that would be bad to have.
Is this still needed, @ggwpez?
I think we could add this purely to the new benchmarking syntax, which is being finalized in paritytech/substrate#12924.
If this is still needed, I can easily add it in a follow-up PR to paritytech/substrate#12924.
Let's keep your MR intact as it is right now and evaluate how useful this would be after it's done.
Yup, that's what I'm saying ;)
Any syntax ideas for this, @ggwpez? Something like:

```rust
#[benchmark(must_use)]
fn my_benchmark(x: Linear<1, 10>) {
    ...
}
```

Or are there scenarios where we want them to be able to specify must-use on one component but not another, like:

```rust
#[benchmark(must_use(y))]
fn my_benchmark(x: Linear<1, 10>, y: Linear<1, 100>) {
    ...
}
```

Also, I assume we are going to want to actually scan the benchmark function in the …
Ah wait, the …
I think …
Yea, we can easily prototype either in the backend and then see how often that occurs. |
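As a rough illustration of what such a backend prototype could look at, here is a minimal sketch of metadata the attribute might expand to, so the pipeline knows which components to verify after a run. The `ComponentMeta` struct and function names below are purely illustrative assumptions, not the real `frame_benchmarking` API:

```rust
// Hypothetical expansion of `#[benchmark(must_use(y))]` applied to
// `fn my_benchmark(x: Linear<1, 10>, y: Linear<1, 100>)`.
// All names here are illustrative, not the real frame_benchmarking API.

#[derive(Debug)]
struct ComponentMeta {
    name: &'static str,
    low: u32,
    high: u32,
    must_use: bool,
}

/// Metadata the macro could generate for the benchmark above, marking
/// which complexity components the derived weight must depend on.
fn my_benchmark_components() -> Vec<ComponentMeta> {
    vec![
        ComponentMeta { name: "x", low: 1, high: 10, must_use: false },
        ComponentMeta { name: "y", low: 1, high: 100, must_use: true },
    ]
}

fn main() {
    // The pipeline would only run the dependence check for flagged components.
    let must_use: Vec<&str> = my_benchmark_components()
        .iter()
        .filter(|c| c.must_use)
        .map(|c| c.name)
        .collect();
    assert_eq!(must_use, vec!["y"]);
    println!("components to verify: {:?}", must_use);
}
```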
Some faulty benchmarks ignore their complexity components, as described in #400.

To avoid this in the future, we can introduce a `#[must_use]` for complexity params. This can obviously only be used in cases where the implementation would be faulty if the components are ignored.

E.g. if a benchmark's result must depend on `n` but the derived weight does not, the benchmark is faulty. An error could be emitted when a weight is derived that does not depend on a `#[must_use]` component.
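One possible way to emit such an error, sketched here as an assumption about the detection logic rather than the actual benchmarking pipeline: after a run, fit the measured time against the `#[must_use]` component's values and flag the benchmark if the fitted slope contributes a negligible fraction of the observed time. The function and threshold below are illustrative:

```rust
/// Decide whether the measured times actually depend on a component.
/// `samples` holds (component_value, observed_time_ns) pairs from a run.
/// Illustrative sketch only; not the real benchmarking-pipeline code.
fn component_is_used(samples: &[(u64, u64)], rel_threshold: f64) -> bool {
    let n = samples.len() as f64;
    let (mut sx, mut sy, mut sxy, mut sxx) = (0.0, 0.0, 0.0, 0.0);
    for &(x, y) in samples {
        let (x, y) = (x as f64, y as f64);
        sx += x;
        sy += y;
        sxy += x * y;
        sxx += x * x;
    }
    // Least-squares slope of time over the component value.
    let slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    let mean_y = sy / n;
    let range = (samples.iter().map(|&(x, _)| x).max().unwrap()
        - samples.iter().map(|&(x, _)| x).min().unwrap()) as f64;
    // "Used" if the slope explains a non-negligible share of the mean time
    // across the component's range; otherwise the component was ignored.
    (slope * range).abs() > rel_threshold * mean_y
}

fn main() {
    // Time grows with the component: the check passes.
    let good = [(1, 1_000), (5, 5_200), (10, 10_100)];
    // Time is flat: the component is ignored, so the benchmark is faulty.
    let bad = [(1, 1_000), (5, 1_010), (10, 990)];
    assert!(component_is_used(&good, 0.05));
    assert!(!component_is_used(&bad, 0.05));
    println!("checks passed");
}
```

With metadata from the attribute, the pipeline could run this check only for flagged components and turn a failure into a hard error instead of a silently flat weight.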