
Parachain-staking benchmarks code related questions #1950

Closed
Chengcheng-S opened this issue Nov 15, 2022 · 9 comments · Fixed by #1951

Chengcheng-S commented Nov 15, 2022

Hi moonbeam team!
I'm having some problems executing the parachain-staking benchmark code and hope you can help.

  1. When executing the cargo test -p pallet-parachain-staking --features runtime-benchmarks command, the terminal returned some errors:
  Compiling pallet-parachain-staking v3.0.0 (/mnt/d/RustCode/moonbeam/pallets/parachain-staking)
error[E0599]: no function or associated item named `test_benchmark_hotfix_remove_delegation_requests` found for struct `pallet::Pallet` in the current scope
   --> pallets/parachain-staking/src/benchmarks.rs:1297:31
    |
1297 |             assert_ok!(Pallet::<Test>::test_benchmark_hotfix_remove_delegation_requests());
    |                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |                                        |
    |                                        function or associated item not found in `pallet::Pallet<Test>`
    |                                        help: there is an associated function with a similar name: `execute_delegation_request`
    |
   ::: pallets/parachain-staking/src/lib.rs:100:5
    |
100  |     pub struct Pallet<T>(PhantomData<T>);
    |     -------------------- function or associated item `test_benchmark_hotfix_remove_delegation_requests` not found for this struct

error[E0599]: no function or associated item named `test_benchmark_hotfix_update_candidate_pool_value` found for struct `pallet::Pallet` in the current scope
   --> pallets/parachain-staking/src/benchmarks.rs:1304:31
    |
1304 |             assert_ok!(Pallet::<Test>::test_benchmark_hotfix_update_candidate_pool_value());
    |                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |                                        |
    |                                        function or associated item not found in `pallet::Pallet<Test>`
    |                                        help: there is an associated function with a similar name: `test_benchmark_join_candidates`
    |
   ::: pallets/parachain-staking/src/lib.rs:100:5
    |
100  |     pub struct Pallet<T>(PhantomData<T>);
    |     -------------------- function or associated item `test_benchmark_hotfix_update_candidate_pool_value` not found for this struct

error[E0599]: no function or associated item named `test_benchmark_round_transition_on_initialize` found for struct `pallet::Pallet` in the current scope
   --> pallets/parachain-staking/src/benchmarks.rs:1507:31
    |
1507 |             assert_ok!(Pallet::<Test>::test_benchmark_round_transition_on_initialize());
    |                                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
    |                                        |
    |                                        function or associated item not found in `pallet::Pallet<Test>`
    |                                        help: there is an associated function with a similar name: `test_benchmark_base_on_initialize`
    |
   ::: pallets/parachain-staking/src/lib.rs:100:5
    |
100  |     pub struct Pallet<T>(PhantomData<T>);
    |     -------------------- function or associated item `test_benchmark_round_transition_on_initialize` not found for this struct
  2. It takes too long to generate a weight file via the parachain-staking benchmark code.
2022-11-15 14:41:41 Running Benchmark: parachain_staking.join_candidates(1 args) 2/50 1/1
2022-11-15 14:41:50 Running Benchmark: parachain_staking.join_candidates(1 args) 4/50 1/1
2022-11-15 14:41:56 Running Benchmark: parachain_staking.join_candidates(1 args) 5/50 1/1
2022-11-15 14:42:04 Running Benchmark: parachain_staking.join_candidates(1 args) 6/50 1/1
2022-11-15 14:42:14 Running Benchmark: parachain_staking.join_candidates(1 args) 7/50 1/1
2022-11-15 14:42:25 Running Benchmark: parachain_staking.join_candidates(1 args) 8/50 1/1
2022-11-15 14:42:37 Running Benchmark: parachain_staking.join_candidates(1 args) 9/50 1/1
2022-11-15 14:42:51 Running Benchmark: parachain_staking.join_candidates(1 args) 10/50 1/1
2022-11-15 14:43:07 Running Benchmark: parachain_staking.join_candidates(1 args) 11/50 1/1
2022-11-15 14:43:24 Running Benchmark: parachain_staking.join_candidates(1 args) 12/50 1/1
2022-11-15 14:43:43 Running Benchmark: parachain_staking.join_candidates(1 args) 13/50 1/1
2022-11-15 14:44:03 Running Benchmark: parachain_staking.join_candidates(1 args) 14/50 1/1
2022-11-15 14:44:25 Running Benchmark: parachain_staking.join_candidates(1 args) 15/50 1/1
2022-11-15 14:44:54 Running Benchmark: parachain_staking.join_candidates(1 args) 16/50 1/1
2022-11-15 14:45:34 Running Benchmark: parachain_staking.join_candidates(1 args) 17/50 1/1
2022-11-15 14:46:17 Running Benchmark: parachain_staking.join_candidates(1 args) 18/50 1/1
2022-11-15 14:47:10 Running Benchmark: parachain_staking.join_candidates(1 args) 19/50 1/1
2022-11-15 14:48:07 Running Benchmark: parachain_staking.join_candidates(1 args) 20/50 1/1
2022-11-15 14:49:07 Running Benchmark: parachain_staking.join_candidates(1 args) 21/50 1/1
2022-11-15 14:50:10 Running Benchmark: parachain_staking.join_candidates(1 args) 22/50 1/1
2022-11-15 14:51:17 Running Benchmark: parachain_staking.join_candidates(1 args) 23/50 1/1
2022-11-15 14:52:24 Running Benchmark: parachain_staking.join_candidates(1 args) 24/50 1/1
2022-11-15 14:53:41 Running Benchmark: parachain_staking.join_candidates(1 args) 25/50 1/1
2022-11-15 14:54:55 Running Benchmark: parachain_staking.join_candidates(1 args) 26/50 1/1
2022-11-15 14:56:12 Running Benchmark: parachain_staking.join_candidates(1 args) 27/50 1/1
2022-11-15 14:57:27 Running Benchmark: parachain_staking.join_candidates(1 args) 28/50 1/1
2022-11-15 14:58:25 Running Benchmark: parachain_staking.join_candidates(1 args) 29/50 1/1
2022-11-15 14:59:47 Running Benchmark: parachain_staking.join_candidates(1 args) 30/50 1/1
2022-11-15 15:01:17 Running Benchmark: parachain_staking.join_candidates(1 args) 31/50 1/1
2022-11-15 15:02:52 Running Benchmark: parachain_staking.join_candidates(1 args) 32/50 1/1
2022-11-15 15:04:27 Running Benchmark: parachain_staking.join_candidates(1 args) 33/50 1/1
2022-11-15 15:05:29 Running Benchmark: parachain_staking.join_candidates(1 args) 34/50 1/1
2022-11-15 15:06:22 Running Benchmark: parachain_staking.join_candidates(1 args) 35/50 1/1
2022-11-15 15:07:17 Running Benchmark: parachain_staking.join_candidates(1 args) 36/50 1/1
2022-11-15 15:08:13 Running Benchmark: parachain_staking.join_candidates(1 args) 37/50 1/1
2022-11-15 15:09:11 Running Benchmark: parachain_staking.join_candidates(1 args) 38/50 1/1
2022-11-15 15:10:10 Running Benchmark: parachain_staking.join_candidates(1 args) 39/50 1/1
2022-11-15 15:11:11 Running Benchmark: parachain_staking.join_candidates(1 args) 40/50 1/1
2022-11-15 15:12:14 Running Benchmark: parachain_staking.join_candidates(1 args) 41/50 1/1
2022-11-15 15:13:18 Running Benchmark: parachain_staking.join_candidates(1 args) 42/50 1/1
2022-11-15 15:14:24 Running Benchmark: parachain_staking.join_candidates(1 args) 43/50 1/1
2022-11-15 15:15:31 Running Benchmark: parachain_staking.join_candidates(1 args) 44/50 1/1
2022-11-15 15:16:40 Running Benchmark: parachain_staking.join_candidates(1 args) 45/50 1/1
2022-11-15 15:17:50 Running Benchmark: parachain_staking.join_candidates(1 args) 46/50 1/1
2022-11-15 15:19:02 Running Benchmark: parachain_staking.join_candidates(1 args) 47/50 1/1
2022-11-15 15:20:15 Running Benchmark: parachain_staking.join_candidates(1 args) 48/50 1/1
2022-11-15 15:21:30 Running Benchmark: parachain_staking.join_candidates(1 args) 49/50 1/1
2022-11-15 15:22:47 Running Benchmark: parachain_staking.join_candidates(1 args) 50/50 1/1

Executing the benchmark code takes a very long time. I guess this is caused by my computer's low specification; here is my machine's configuration:

➜  RustCode lscpu
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   48 bits physical, 48 bits virtual
CPU(s):                          16
On-line CPU(s) list:             0-15
Thread(s) per core:              2
Core(s) per socket:              8
Socket(s):                       1
Vendor ID:                       AuthenticAMD
CPU family:                      25
Model:                           80
Model name:                      AMD Ryzen 9 5900HX with Radeon Graphics
Stepping:                        0
CPU MHz:                         3293.728
BogoMIPS:                        6587.45
Virtualization:                  AMD-V
Hypervisor vendor:               Microsoft
Virtualization type:             full
L1d cache:                       256 KiB
L1i cache:                       256 KiB
L2 cache:                        4 MiB
L3 cache:                        16 MiB
➜  RustCode free -m
              total        used        free      shared  buff/cache   available
Mem:          32036         176       29626           0        2234       31412
Swap:          8192           0        8192
@girazoki (Collaborator)

For 2, you have the reference hardware here: https://wiki.polkadot.network/docs/maintain-guides-how-to-validate-polkadot. Take into account that it is still possible that the benchmark code takes a little bit of time, as it is calculating a weight model based on the inputs.

For 1, I think we can take care of it @nbaztec
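As background on why this step takes a while: the benchmark runner measures each extrinsic at several values of its input components and then fits a weight model to those measurements. A minimal sketch of that idea (illustrative only, not the actual frame-benchmarking code; the sample numbers below are made up):

```rust
// Sketch: fit a linear weight model t = base + slope * c by least squares,
// where c is a benchmark component (e.g. candidate count) and t the
// measured execution time. The real runner does this per component.
fn fit_linear(samples: &[(f64, f64)]) -> (f64, f64) {
    let n = samples.len() as f64;
    let (sx, sy) = samples
        .iter()
        .fold((0.0, 0.0), |(a, b), &(x, y)| (a + x, b + y));
    let sxx: f64 = samples.iter().map(|&(x, _)| x * x).sum();
    let sxy: f64 = samples.iter().map(|&(x, y)| x * y).sum();
    let slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
    let base = (sy - slope * sx) / n;
    (base, slope)
}

fn main() {
    // Hypothetical measurements: component value -> execution time.
    let samples = [(2.0, 120.0), (10.0, 200.0), (25.0, 350.0), (50.0, 600.0)];
    let (base, slope) = fit_linear(&samples);
    println!("weight(c) = {:.1} + {:.1} * c", base, slope);
    assert!((base - 100.0).abs() < 1e-6);
    assert!((slope - 10.0).abs() < 1e-6);
}
```

In a real run, --steps controls how many component values are sampled and --repeat how many measurements are taken at each value, which is why those flags dominate the total run time.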

@Chengcheng-S (Author)

> For 2, you have the reference hardware here: https://wiki.polkadot.network/docs/maintain-guides-how-to-validate-polkadot. Take into account that it is still possible that the benchmark code takes a little bit of time, as it is calculating a weight model based on the inputs.
>
> For 1, I think we can take care of it @nbaztec

thanks

@Chengcheng-S (Author)

Thank you for your help. When I generate a weight file through the benchmark code, the terminal keeps printing the following message:

2022-11-15 18:01:20 reward for delegator '0xb765ce81e3e7cdc75a314b7ac4ac94d9eb06a96b' set to zero due to pending revoke request
(the line above repeats continuously)
Is this normal? Is there a way to stop this log from flooding the terminal? Looking forward to your reply.

@Chengcheng-S (Author)

@nbaztec @girazoki Sorry to take up your time. Running the cargo test -p pallet-parachain-staking --features runtime-benchmarks command on the modified code still produces an error; you may need to re-check the parachain-staking benchmark code:

test benchmarks::tests::bench_schedule_leave_candidates ... ok
test benchmarks::benchmark_tests::test_benchmarks ... FAILED

failures:

---- benchmarks::benchmark_tests::test_benchmarks stdout ----
failing benchmark tests:
get_rewardable_delegators: "CannotDelegateLessThanOrEqualToLowestBottomWhenFull"
select_top_candidates: "CannotDelegateLessThanOrEqualToLowestBottomWhenFull"
set_auto_compound: "InsufficientBalance"
delegate_with_auto_compound: "InsufficientBalance"
thread 'benchmarks::benchmark_tests::test_benchmarks' panicked at 'assertion failed: !anything_failed', pallets/parachain-staking/src/benchmarks.rs:1498:1


failures:
    benchmarks::benchmark_tests::test_benchmarks

test result: FAILED. 310 passed; 1 failed; 0 ignored; 0 measured; 0 filtered out; finished in 8.33s

@nbaztec (Contributor) commented Nov 15, 2022

I'll take a look. Thanks for informing.

@nbaztec (Contributor) commented Nov 15, 2022

> Thank you for your help. When I generate a weight file through benchmark code, the terminal keeps refreshing the
>
> 2022-11-15 18:01:20 reward for delegator '0xb765ce81e3e7cdc75a314b7ac4ac94d9eb06a96b' set to zero due to pending revoke request
>
> Is this normal? Is there a way to avoid flushing this log all the time?

Hi @Jinsipang, these logs are normal: they are emitted from the runtime to call attention to an important event. But perhaps they should be marked as debug, since they describe intended behavior. Unfortunately, this would only be fixed in the next runtime.
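To illustrate the change being suggested here (a toy logger sketch, not the actual Moonbeam/Substrate logging code): demoting the message from warn to debug means a node running at the usual default filter level no longer prints it.

```rust
// Toy log-level filter demonstrating why re-tagging a message as Debug
// silences it under a typical Info-level default filter.
// Variant order gives Error < Warn < Info < Debug via derived PartialOrd.
#[derive(Clone, Copy, PartialEq, PartialOrd, Debug)]
enum Level {
    Error,
    Warn,
    Info,
    Debug,
}

// Print the message only if its level passes the filter; report whether it printed.
fn emit(filter: Level, level: Level, msg: &str) -> bool {
    if level <= filter {
        println!("{:?}: {}", level, msg);
        true
    } else {
        false
    }
}

fn main() {
    let filter = Level::Info; // a typical node default
    // Tagged as Warn, the message reaches the terminal on every occurrence:
    assert!(emit(filter, Level::Warn, "reward for delegator set to zero due to pending revoke request"));
    // Re-tagged as Debug, it is filtered out by default:
    assert!(!emit(filter, Level::Debug, "reward for delegator set to zero due to pending revoke request"));
}
```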

@Chengcheng-S (Author)

> Hi @Jinsipang these logs are actually normal logs emitted from the runtime to bring to attention an important event. But perhaps they should be marked as debug, since they are the intended behavior. This would unfortunately, be fixed in the next runtime.

Thank you for your reply. I hope this information helps you fix the problem in your project.

@notlesh (Contributor) commented Nov 16, 2022

The reference hardware is significantly slower than your machine, by the way. So expect it to take quite some time in either case. I think it takes around 1 hour for a full "production" run for us.

However, you can run a much quicker version (at the expense of accuracy) by using something like --steps 2 --repeat 2.
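For reference, a quick low-accuracy run along those lines might look like the following (the binary path, chain spec name, and output file are assumptions to adjust for your build; --steps and --repeat come from the suggestion above):

```shell
# Quick, low-accuracy benchmark run; trade precision for speed.
# Hypothetical invocation; check your node's `benchmark pallet --help`
# for the exact flags supported by your version.
./target/release/moonbeam benchmark pallet \
  --chain dev \
  --pallet pallet_parachain_staking \
  --extrinsic '*' \
  --steps 2 \
  --repeat 2 \
  --output weights.rs
```

Fewer steps means fewer sampled component values and fewer repeats means fewer measurements per value, so the resulting weight model is rougher but the run finishes in minutes rather than hours.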

@Chengcheng-S (Author)

> The reference hardware is significantly slower than your machine, by the way. So expect it to take quite some time in either case. I think it takes around 1 hour for a full "production" run for us.
>
> However, you can run a much quicker version (at the expense of accuracy) by using something like --steps 2 --repeat 2.

@notlesh Thanks for your reply. In practice it takes much longer than that: a complete run of the parachain-staking benchmark code previously took me more than 4 hours. I am now trying to optimize this part of the code to improve the running time without affecting the benchmark results.
Also, as you said, changing the run parameters (e.g. --steps 2 --repeat 5) makes the results less accurate.
