CONTRIBUTING.md
Table of contents

Note: Generate a new chapter with openssl rand -hex 3

Pull Requests

All Pull Requests should first be made into the 'main' branch. In the future, the GitHub Actions CI badge build status shown in the README may depend on the outcome of building Pull Requests from a branch.

Skipping CI

To skip running the CI unnecessarily for simple changes such as updating the documentation, include [ci skip] or [skip ci] in your Git commit message.
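For example (the commit message below is hypothetical; the marker check mirrors what GitHub Actions does with the head commit message):

```shell
# Hypothetical docs-only commit message containing a skip marker.
msg="Update CONTRIBUTING.md [skip ci]"

# GitHub Actions skips the workflow run when the head commit
# message contains [ci skip] or [skip ci]; checked here with grep.
if echo "$msg" | grep -qE '\[(ci skip|skip ci)\]'; then
  echo "CI skipped"
fi
```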

Linting

Check formatting with Rust Format. Note: if you need a specific version, replace +nightly with a dated toolchain such as +nightly-2022-03-16:

cargo +nightly fmt --all -- --check

To apply Rust Format to your changes prior to creating a PR (see Linting):

cargo +nightly fmt --all

Optionally run Clippy

cargo clippy --release -- -D warnings

Optionally run check

cargo check

Debugging

Simple Debugging

TODO - Replace with use of log::debug with native::debug. See DataHighway-DHX/node#41

  • Add to the Cargo.toml of the runtime module:
...
    "log/std",
...
[dependencies.log]
version = "0.4.8"
  • Add to my-module/src/lib.rs
use log::{error, info, debug, trace};
...
log::debug!("hello {:?}", world); // Only shows in terminal in debug mode
log::info!("hello {:?}", world); // Shows in terminal in release mode

Note: The use of debug::native::info!("hello {:?}", world); does not appear to work anymore since Substrate updates in Feb 2021.

Detailed Debugging

RUST_LOG=debug RUST_BACKTRACE=1 ./target/release/datahighway ... \
  ... \
  -lruntime=debug

Refer to the Substrate Debugging documentation here

Testing

Run All Tests

cargo test -p datahighway-parachain-runtime

Run Integration Tests Only

cargo test -p datahighway-parachain-runtime

Run Specific Integration Tests

Example

cargo test -p datahighway-parachain-runtime --test <INSERT_INTEGRATION_TEST_FILENAME>
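For instance, to run only one test function within an integration test file and show its stdout (the file name integration_tests and the filter transfer_works below are hypothetical placeholders):

```shell
# Run only tests matching `transfer_works` inside
# tests/integration_tests.rs (both names are illustrative).
cargo test -p datahighway-parachain-runtime \
    --test integration_tests transfer_works -- --nocapture
```

The trailing `-- --nocapture` passes the flag through to the test binary so that `println!` and log output are shown even for passing tests.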

Benchmarking

Run the following:

./scripts/benchmark_all_pallets.sh
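To benchmark a single pallet rather than all of them, Substrate's standard benchmarking CLI can be invoked directly. This is a sketch, assuming the collator was built with the runtime-benchmarks feature; the pallet name here is illustrative and flag spellings may differ between Substrate versions:

```shell
# Benchmark every extrinsic of one pallet (pallet name is illustrative).
./target/release/datahighway-collator benchmark pallet \
    --chain dev \
    --pallet pallet_balances \
    --extrinsic '*' \
    --steps 50 \
    --repeat 20 \
    --output ./runtime/src/weights/
```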

Try-Runtime

  • Run Collator nodes

  • Build whilst specifying the try-runtime feature

cargo build --release --features=try-runtime
  • Run Try-Runtime so on-runtime-upgrade will invoke all OnRuntimeUpgrade hooks in pallets and the runtime
RUST_LOG=runtime=trace,try-runtime::cli=trace,executor=trace \
./target/release/datahighway-collator \
try-runtime \
--chain <chain-spec> \
--execution Wasm \
--wasm-execution Compiled \
--uri <ws/s port> \
--block-at <block-hash> \
on-runtime-upgrade \
live

Notes:

  • Ensure that the Collator node was run with:
--rpc-max-payload 1000 \
--rpc-cors=all \
  • The --chain argument must be provided
  • Provide a --uri and a --block-at hash from the testnet where the Collator node was launched. The defaults are wss://127.0.0.1:9944 and the latest finalized block, respectively.
  • live means we are going to scrape a live testnet, as opposed to loading a saved file.

References:

Memory Profiling

curl -L https://github.com/koute/memory-profiler/releases/download/0.6.1/memory-profiler-x86_64-unknown-linux-gnu.tgz -o memory-profiler-x86_64-unknown-linux-gnu.tgz
tar -xf memory-profiler-x86_64-unknown-linux-gnu.tgz

export MEMORY_PROFILER_LOG=info
export MEMORY_PROFILER_LOGFILE=profiling_%e_%t.log
export MEMORY_PROFILER_OUTPUT=profiling_%e_%t.dat
export MEMORY_PROFILER_CULL_TEMPORARY_ALLOCATIONS=1

It should only be run on a testnet. See paritytech/subport#257. Purge local chain from previous tests, then:

LD_PRELOAD=<INSERT_PATH_TO_MEMORY_PROFILER>/libmemory_profiler.so \
./target/release/datahighway-collator <INSERT_TESTNET_ARGS>
./memory-profiler-cli server *.dat

View output at http://localhost:8080/

Reference:

Continuous Integration

GitHub Actions is used for Continuous Integration. View the latest CI Build Status of the 'develop' branch, from which all Pull Requests are made into the 'master' branch.

Note: We do not watch Pull Requests from the 'master' branch, as they would come from Forked repos.

Reference: https://help.github.com/en/actions/configuring-and-managing-workflows/configuring-a-workflow

Linting

Clippy

Run Manually

Stable
cargo clippy --release -- -D warnings
Nightly

The following is a temporary fix. See rust-lang/rust-clippy#5094 (comment)

rustup component add clippy --toolchain nightly-2020-12-12-x86_64-unknown-linux-gnu
rustup component add clippy-preview --toolchain nightly-2020-12-12-x86_64-unknown-linux-gnu
cargo +nightly-2020-12-12 clippy-preview -Zunstable-options

Clippy and Continuous Integration (CI)

Clippy is currently disabled in CI for the following reasons.

A configuration file clippy.toml for accepting or ignoring specific Clippy lints is not available (see rust-lang/cargo#5034), so it currently takes a long time to manually ignore each type of Clippy error in each file.

To manually ignore a Clippy error it is necessary to add the following attribute, where redundant_pattern_matching is the Clippy lint in this example:

#![cfg_attr(feature = "cargo-clippy", allow(clippy::redundant_pattern_matching))]

Rust Format

RustFmt should be used for styling Rust code. The styles are defined in the rustfmt.toml configuration file, which was generated by running rustfmt --print-config default rustfmt.toml and then making custom tweaks according to https://rust-lang.github.io/rustfmt/

Install RustFmt

rustup component add rustfmt --toolchain nightly-2020-12-12-x86_64-unknown-linux-gnu

Check Formatting Changes Before Applying Them

Review the formatting changes that RustFmt will apply, to identify anything that you do not agree with:

cargo +nightly fmt --all -- --check

Apply Formatting Changes

cargo +nightly fmt --all

Code Editor Configuration

Add Vertical Rulers in VS Code

Add "editor.rulers": [80,120] to settings.json, as recommended here: https://stackoverflow.com/a/45951311/3208553
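In context, the relevant fragment of settings.json would look like this (VS Code settings files accept comments, since they are parsed as JSONC):

```json
{
  // Vertical rulers at 80 and 120 characters.
  "editor.rulers": [80, 120]
}
```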

EditorConfig

Install an EditorConfig Plugin for your code editor to detect and apply the configuration in .editorconfig.

Create new runtime modules

substrate-module-new <module-name> <author>

FAQ

The latest FAQ is still recorded in the DataHighway standalone codebase here, or modified in subsequent PRs. It will be migrated into the DataHighway/documentation codebase.

Technical Support