cleanup: Clippy in state-viewer, neard, nearcore, near-o11y #8925
Conversation
Avoided clippy warnings that want me to rename enum values or change function signatures. Overall clippy effort is tracked in #8145
`assert_no_deprecated_config()` was introduced in #7584, more than 6 months ago. Should be safe to delete.
…the cold storage (#8882)
- Make the cold columns list contain all columns, so that if a new one gets added it will need to be added to the is_cold method too (strong typing; see the sketch below).
- Raised some TODOs that need to be addressed - any input from you, dear reviewers, would be appreciated.

This is a bottom-up approach to make sure all the data that should be copied to cold storage does get copied to cold storage. The top-down approach of testing all view client RPC endpoints will follow. This PR was produced by:
- Going through all columns as listed in the gc_col method in store.rs. For each GC-ed column I checked whether it is covered in is_cold.
- For any columns that are missing I looked at go/cold-store in the "cold storage content" section. It does contain some reasons why certain GC-ed columns should not be copied to cold storage. Some seem to be on point, while some may be outdated and not take into consideration how the view client relies on the cold storage, which is fair given that the view client implementation is quite a recent development.
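A minimal sketch of that strong-typing idea, assuming a simplified, hypothetical column enum rather than the real nearcore DBCol list: with an exhaustive match and no wildcard arm, adding a column without classifying it fails to compile.

```rust
// Hedged sketch of the "strong typing" idea: an exhaustive match over the
// column enum means the compiler forces a decision whenever a new column is
// added. The `DBCol` variants below are a simplified, hypothetical subset,
// not the actual nearcore column list.
#[derive(Clone, Copy)]
enum DBCol {
    Block,
    ChunkHashesByHeight,
    StateChanges,
}

/// Returns whether a column should be copied to cold storage.
/// No `_ => ...` fallback: adding a `DBCol` variant without extending this
/// match is a compile error, so no column can be silently forgotten.
fn is_cold(col: DBCol) -> bool {
    match col {
        DBCol::Block => true,
        DBCol::StateChanges => true,
        DBCol::ChunkHashesByHeight => false,
    }
}

fn main() {
    assert!(is_cold(DBCol::Block));
    assert!(!is_cold(DBCol::ChunkHashesByHeight));
}
```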
Adding a localnet test to make sure that a node can sync from cold storage. Introducing _migrate_to_split_storage, a.k.a. moving code around, and using it in the old migration test (literally nothing changed). The main part of this PR is test_archival_node_sync -- spinning up a legacy archival node, a split storage archival node, plus a validator and an RPC node (to make the split storage node split). The test flow is as follows:
1. Migrate the split storage node
2. Kill the legacy archival node
3. Produce a bunch more epochs (twice as many as the number of epochs kept by the split storage node)
4. Kill the validator
5. Restart the legacy archival node
6. Check that it is syncing a bit

Will restart Nayduck when it feels better, but for now no red flags, one green flag: https://nayduck.near.org/#/run/2934
No particular reason to do this now; staying up to date is just good practice. I reviewed all the relevant changelogs, and nothing of particular interest to us was changed. I also ran a few performance checks; it doesn't look like anything changed. Crates updated:
- sha1 0.10.4 -> 0.10.5
- sha2 0.10.2 -> 0.10.6
- sha3 0.10.1 -> 0.10.6
- digest 0.10.3 -> 0.10.6
- secp256k1 0.24.2 -> 0.24.3

These are the newest stable versions of all crates except secp256k1. I will look into updating secp256k1 to 0.27.0 in another PR because that update could include breaking changes according to semver. Co-authored-by: Akhilesh Singhania <akhi3030@gmail.com>
This is a first step toward reducing dependencies in near-primitives. See #8888 for context. `near-o11y` has too many dependencies to be pulled into crates like `near-primitives` that only need it for pretty printing. The new crate `near-fmt` is lightweight: it only depends on `near-primitives-core` and contains only the pretty printing code.
Updating a couple of dependencies allows us to remove old duplicates. None of the "breaking" API changes affect us.
- redis 0.21.5 -> 0.23 gets rid of an old "sha1"
- csv 1.1.1 -> 1.2.1 gets rid of an old "itoa"

Also, remove the unnecessary exception for "num-rational". And while I was already going through dependencies and changelogs, also update "secp256k1" to the latest version.
* Actually call near-vm when using VMKind::NearVm
* Handle review comments
* cargo fmt
Co-authored-by: robin-near <111538878+robin-near@users.noreply.github.com>
…nager and ShardTracker. (#8778) The ShardsManager does not need the Runtime at all. It only needs EpochManager, plus two functions, something like `cares_about_shard_{next_}epoch`, which come from ShardTracker. The ChunksTestFixture has been migrated to no longer use KeyValueRuntime. Instead, it constructs an EpochManager directly, so it contains all the proper validator selection, producer selection, etc. logic. This is great, because it makes tests much more realistic, but carries a drawback: more work is required to set up tests. For this reason I've introduced a MockChainForShardsManager, which provides some functionality to set up the blockchain for testing. Still, it's way less obscure than KeyValueRuntime - we're running real logic after all. Most of the original tests still pass under the migrated fixture, except two forwarding tests which are too difficult to set up correctly with the fixture (and actually it's not clear if the original setup was correct), so I've migrated them to a test under tests/basic.rs using the TestLoop framework, which is more flexible, and ensured that they are set up and tested correctly. Added two more TestLoop-based tests under tests/multi.rs that test more multi-instance behavior. I used Tarpaulin to check the coverage of just these tests; they seem to achieve 87% coverage of chain/chunks/... which is quite good.
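To make the described dependency concrete, here is a minimal sketch of the shape of that interface; the trait name, signatures, and types are hypothetical stand-ins, not the actual nearcore API.

```rust
// Hedged sketch of the dependency shape described above: ShardsManager only
// needs shard-tracking answers, not the whole Runtime. Names and signatures
// here are illustrative, not the actual nearcore definitions.
use std::sync::Arc;

type ShardId = u64;
type CryptoHash = [u8; 32];

trait ShardTrackerLike {
    /// Whether this node cares about the shard in the current epoch.
    fn cares_about_shard_this_epoch(&self, parent_hash: &CryptoHash, shard_id: ShardId) -> bool;
    /// Whether this node will care about the shard in the next epoch.
    fn cares_about_shard_next_epoch(&self, parent_hash: &CryptoHash, shard_id: ShardId) -> bool;
}

struct ShardsManager {
    shard_tracker: Arc<dyn ShardTrackerLike>,
    // ... epoch manager handle, request pools, etc.
}

impl ShardsManager {
    fn should_request_chunk(&self, parent_hash: &CryptoHash, shard_id: ShardId) -> bool {
        // Request a chunk if we track the shard now or will track it next epoch.
        self.shard_tracker.cares_about_shard_this_epoch(parent_hash, shard_id)
            || self.shard_tracker.cares_about_shard_next_epoch(parent_hash, shard_id)
    }
}

// Trivial tracker used only to exercise the sketch.
struct TrackAllShards;
impl ShardTrackerLike for TrackAllShards {
    fn cares_about_shard_this_epoch(&self, _: &CryptoHash, _: ShardId) -> bool { true }
    fn cares_about_shard_next_epoch(&self, _: &CryptoHash, _: ShardId) -> bool { true }
}

fn main() {
    let mgr = ShardsManager { shard_tracker: Arc::new(TrackAllShards) };
    assert!(mgr.should_request_chunk(&[0u8; 32], 0));
}
```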
Actual test changes will happen when we make VMConfig::test() return a protocol version with contract preparation v2, that is, when we introduce the near-vm protocol upgrade.
Similarly to #8901, make tests also run with near-vm if they’re testing a recent enough protocol version (which means the changes will actually happen with the near-vm introduction protocol changes)
* [Debug UI] Dockerize debug UI project for deployment.
* Update README with comment about production deployment.
…ar-primitives. (#8909) See #8896. This time the module doesn't really belong anywhere, but I think near-async is a reasonable place to put it instead of creating yet another crate. It sort of has something to do with asynchrony, I suppose, and near-async is a low enough crate that dependencies from other crates shouldn't be an issue.
* Migrate Chain and Chunks Info Page #8350

Problem: Need to migrate the chain and chunks info page to the TypeScript + React framework.

What has changed:
- Fixed a bug on the old page that prevented floating chunks from being displayed
- Migrated the page with exactly the same format
- Used multiple navigation bars

Testing:
- Ran the linter
- Attached before and after screenshots of the UI on the GitHub issue and pull request
- Manually viewed the pages in the UI
```
cd nearcore
make debug
nearup run localnet --binary-path ~/Github/near/nearcore/target/debug/
open http://localhost:3030/debug/pages/chain_n_chunk_info
cd nearcore/tools/debug-ui
npm install
npm start
open http://localhost:3000/localhost:3030/chain_and_chunk_info
open http://localhost:3000/localhost:3030/chain_and_chunk_info/chain_info_summary
open http://localhost:3000/localhost:3030/chain_and_chunk_info/floating_chunks
open http://localhost:3000/localhost:3030/chain_and_chunk_info/blocks
```

* Addressed review 1 comments
- Fixed the enum format and printing of the enum
- Used `&&` instead of a conditional operator with a useless else branch
- Added return types to some functions
- Initialized numShards as a constant
- Used template literals (string interpolation) instead of string concatenation
* Adds functionality to get state parts as files from S3
* Fixes an off-by-one-block error in state dumping to S3
  * In State Dump
  * In state-viewer
* Latency metric for functions `fn apply_state_part()` and `fn obtain_state_part()`
* New sub-sub-command `neard view-state state-parts read-state-header` to read state header stored in the DB.
This is a solution to #8908. Now when we try to prepay the gas for some operation and run into the gas limit, both the gas and compute usage for that operation will be paid partially, where compute cost is calculated as `gas_burnt / gas_cost * compute_cost`.
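A minimal sketch of that proportional charging, assuming `gas_burnt` is the part of `gas_cost` that could be prepaid before hitting the limit; the names are illustrative, not the actual runtime API, and the multiplication is done before the division to avoid integer truncation.

```rust
// Hedged sketch of the partial compute charging described above. `gas_burnt`
// is assumed to be the portion of `gas_cost` that was actually prepaid.
fn partial_compute_usage(gas_burnt: u64, gas_cost: u64, compute_cost: u64) -> u64 {
    if gas_cost == 0 {
        return 0;
    }
    // Scale the compute cost by the fraction of gas that was actually burnt.
    // u128 keeps the intermediate multiplication from overflowing.
    ((gas_burnt as u128 * compute_cost as u128) / gas_cost as u128) as u64
}

fn main() {
    // Only half of the gas could be prepaid, so only half of the compute
    // cost is counted.
    assert_eq!(partial_compute_usage(500, 1_000, 2_000), 1_000);
}
```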
My previous PR #8762 was reverted by #8780 because release automation was broken. It looks like the cause of the broken automation was the `near-test-contract` crate. We had a discussion about this package and the reasons to publish it in the first place in the very first PR. Now we have proof that a dependency mentioned in the `dev-dependencies` section of `Cargo.toml` does not need to be published to crates.io. Once I figured that out I created PR #8779, but it went unnoticed and instead all the changes were reverted in #8780. As agreed with @jakmeier, I am recreating the same changes, but with `near-test-contract` disabled from publishing, so that the necessary crates can be published in the future.
Replace the Children::encode_into and decode methods with implementations of the BorshSerialize and BorshDeserialize traits and, with that, change RawTrieNode into an enum for which the borsh implementations can be derived. While the first step is quite straightforward, the second requires the RawTrieNode::Branch variant to be split into two: BranchNoValue and BranchWithValue. The main benefit of this is less code (the commit has a delta of -68 lines) and an API consistent with other serialised objects. However, this doesn't come without downsides. Firstly, having separate BranchNoValue and BranchWithValue variants means that it's now a bit more awkward to handle a branch as a single case, with common code paths having to be refactored. Secondly, one might imagine in the future a new representation for nodes which embed a value. For example {4u8}{key_len:u32}{key}{value} might be added for a leaf. This would essentially mean that the value comprises the rest of the encoded bytes. The issue for borsh is that it would like to know the length of {value} before it reads it. Either a redundant {value_len:u32} would need to be included or some hacks employed (namely not using borsh for RawTrieNodeWithSize and then consuming the rest of the reader when decoding RawTrieNode). Lastly, I'm not certain about the performance implications. Borsh might actually be slower than a hand-implemented decoder which operates on a slice; whether this is actually the case I haven't measured.
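For illustration, a rough sketch of what a derived-borsh enum of this shape could look like; the field types and the Children representation are simplified stand-ins rather than the actual nearcore definitions.

```rust
// Hedged sketch of the enum shape described above. Requires the `borsh`
// crate with derive support; types are simplified for the example.
use borsh::{BorshDeserialize, BorshSerialize};

type CryptoHash = [u8; 32];

// Simplified: the real Children type stores up to 16 optional child hashes.
#[derive(BorshSerialize, BorshDeserialize, Debug, PartialEq)]
struct Children(Vec<Option<CryptoHash>>);

// Splitting Branch into BranchNoValue / BranchWithValue lets borsh derive the
// (de)serialisation: each variant has a fixed field layout, so no manual
// "is there a value?" flag needs to be encoded by hand.
#[derive(BorshSerialize, BorshDeserialize, Debug, PartialEq)]
enum RawTrieNode {
    Leaf(Vec<u8>, u32, CryptoHash),
    BranchNoValue(Children),
    BranchWithValue((u32, CryptoHash), Children),
    Extension(Vec<u8>, CryptoHash),
}

fn main() -> std::io::Result<()> {
    let node = RawTrieNode::BranchNoValue(Children(vec![None; 16]));
    let mut bytes = Vec::new();
    node.serialize(&mut bytes)?;
    assert_eq!(node, RawTrieNode::try_from_slice(&bytes)?);
    Ok(())
}
```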
As it fails with:
```
thread 'tests::test_contract_precompilation' panicked at 'Compilation result should be non-empty', runtime/runtime/src/lib.rs:2550:14
```
presumably because the compilation cache is not populated on aarch64-apple. Prior art: #8412
This PR re-introduces changes from #8805 together with #8892 after fixing #8908: The PR addresses #8265 by introducing aggregation of compute usage across all operations performed during chunk application and limiting this compute usage to 1s. This should not change the behavior of nodes in the short run because compute costs match gas costs which is validated by the assert, so any discrepancy should be caught on canary nodes.
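As a rough sketch of the aggregation-and-limit idea, assuming a made-up per-chunk compute budget and operation type (neither matches the real runtime):

```rust
// Hedged sketch: accumulate compute usage across the operations applied in a
// chunk and stop once a per-chunk compute budget is exhausted. The budget
// value, units, and types are illustrative only.
const MAX_COMPUTE_PER_CHUNK: u64 = 1_000_000_000_000; // hypothetical "1s" budget

struct Operation {
    compute_cost: u64,
}

/// Applies operations until the chunk's compute budget is exhausted and
/// returns how many were applied.
fn apply_chunk(ops: &[Operation]) -> usize {
    let mut compute_used: u64 = 0;
    let mut applied = 0;
    for op in ops {
        if compute_used + op.compute_cost > MAX_COMPUTE_PER_CHUNK {
            break; // remaining operations are deferred, not applied this chunk
        }
        compute_used += op.compute_cost;
        applied += 1;
    }
    applied
}

fn main() {
    let ops: Vec<Operation> =
        (0..5).map(|_| Operation { compute_cost: 300_000_000_000 }).collect();
    // Only 3 of the 5 operations fit into the hypothetical per-chunk budget.
    assert_eq!(apply_chunk(&ops), 3);
}
```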
This closes up the near-vm work. I believe #8886 (comment) is the only remaining point to be checked in former PRs, in addition to all the code added in this PR. Note that there are still things in #8323 that are not in this PR, either because I noticed they actually should not be merged (e.g. the wasmparser bump would be a protocol upgrade, so it will wait until we get limited replayability), or because they are actually orthogonal to this landing.
Part of #8827. Adds `flat_storage_creation_enabled` config flag to enable/disable flat storage creation. The default value is `true`. This is needed for SREs to be able to manually enable/disable flat storage creation when taking state backup.
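A small sketch of how such a default-true flag can be wired up with serde; the struct name and field placement are illustrative, not the actual neard config layout.

```rust
// Hedged sketch: a config flag named as in the PR, defaulting to `true` when
// absent from the config file. Requires serde (derive) and serde_json; the
// surrounding struct is hypothetical.
use serde::Deserialize;

fn default_true() -> bool {
    true
}

#[derive(Deserialize, Debug)]
struct StoreConfig {
    /// Controls whether the node creates flat storage in the background;
    /// SREs can set this to false while taking a state backup.
    #[serde(default = "default_true")]
    flat_storage_creation_enabled: bool,
}

fn main() {
    // The flag defaults to true when it is missing from the config file.
    let cfg: StoreConfig = serde_json::from_str("{}").unwrap();
    assert!(cfg.flat_storage_creation_enabled);

    let cfg: StoreConfig =
        serde_json::from_str(r#"{"flat_storage_creation_enabled": false}"#).unwrap();
    assert!(!cfg.flat_storage_creation_enabled);
}
```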
Hm, so when we first kick-started the effort rather than going crate-by-crate we made a call that we'd want to decide on clippy lints lint-by-lint and fix them across the codebase. This is so that these fixes can be enforced easily. I think we can avoid the "across the codebase" portion by adjusting our Another part to this strategy is that we'd rather make calls on whether a given lint is too pedantic or not once & do so nearcore-wide. This requires gathering feedback from a broader reviewer pool than just one or two people. It is usually counterproductive to group multiple decision points together into a single discussion, though. (For example, I do personally think that a builder pattern is a huge improvement over functions with 7 parameters; but maybe somebody has arguments against it that aren't just a gut feel. On the other hand, if there are no uses of the |
Avoided clippy warnings that want me to rename enum values or change function signatures. The clippy effort is tracked in #8145. Can't say that any of the findings were critical, but the following warnings in my personal opinion make the code most readable:
* Prefer `is_empty()`
* Derive `Default` for enums
* Use short-hand field initializers: `MyStruct { x }` instead of `MyStruct { x: x }`
* Prefer `sort_by_key()`
* Prefer `!x` to `x == false`
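A small illustration of these cleanups on made-up code; the types and data are invented for the example, not taken from nearcore.

```rust
// Hedged illustration of the clippy-style cleanups listed above.
#[derive(Default, Debug)] // deriving Default for the enum instead of hand-writing it
enum Phase {
    #[default]
    Idle,
    Syncing,
}

struct Peer {
    id: String,
    height: u64,
    phase: Phase,
}

fn summarize(peers: &mut Vec<Peer>, id: String, height: u64, done: bool) {
    // Short-hand field init: `Peer { id, height, .. }` instead of `id: id, height: height`.
    peers.push(Peer { id, height, phase: Phase::default() });

    // `!peers.is_empty()` instead of `peers.len() > 0`.
    if !peers.is_empty() {
        // `sort_by_key` instead of `sort_by(|a, b| a.height.cmp(&b.height))`.
        peers.sort_by_key(|p| p.height);
    }

    // `!done` instead of `done == false`.
    if !done {
        println!("{} peers known", peers.len());
    }
}

fn main() {
    let mut peers = Vec::new();
    summarize(&mut peers, "peer0".to_string(), 42, false);
}
```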