feat: adds spellchecker workflow, and corrects misspelled words (#559)
## What ❔

- Finishes the work started by @Deniallugo in #437
- Adds spellchecker workflow to prevent further misspellings 
- Corrects existing misspelled words 

## Why ❔

- Ensures comments and inline documentation do not contain misspelled words, for improved readability


## Checklist


- [x] PR title corresponds to the body of PR (we generate changelog entries from PRs).
- [ ] Tests for the changes have been added / updated.
- [x] Documentation comments have been added / updated.
- [x] Code has been formatted via `zk fmt` and `zk lint`.
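
To mirror the new CI check locally before pushing, the same command used by the workflow can be run directly (a minimal sketch; installing via `cargo install` is an assumption here, while the workflow installs the tool through `taiki-e/install-action`):

```bash
# One-time local install of the spellchecker (CI uses taiki-e/install-action instead)
cargo install cargo-spellcheck

# Same invocation as the check-spelling workflow;
# --code 1 makes the command exit with code 1 when misspellings are found
cargo spellcheck --cfg=./spellcheck/era.cfg --code 1
```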

---------

Signed-off-by: Danil <deniallugo@gmail.com>
Co-authored-by: Danil <deniallugo@gmail.com>
dutterbutter and Deniallugo committed Nov 29, 2023
1 parent e8fd805 commit beac0a8
Showing 124 changed files with 1,029 additions and 328 deletions.
1 change: 1 addition & 0 deletions .github/pull_request_template.md
@@ -18,3 +18,4 @@
- [ ] Tests for the changes have been added / updated.
- [ ] Documentation comments have been added / updated.
- [ ] Code has been formatted via `zk fmt` and `zk lint`.
- [ ] Spellcheck has been run via `cargo spellcheck --cfg=./spellcheck/era.cfg --code 1`.
24 changes: 24 additions & 0 deletions .github/workflows/check-spelling.yml
@@ -0,0 +1,24 @@
name: Check Spelling

on:
push:
branches:
- main
pull_request:

env:
CARGO_TERM_COLOR: always

jobs:
spellcheck:
runs-on: ubuntu-latest
steps:
- name: Install cargo-spellcheck
uses: taiki-e/install-action@v2
with:
tool: cargo-spellcheck

- uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac # v4

- name: Run cargo-spellcheck
run: cargo spellcheck --cfg=./spellcheck/era.cfg --code 1
2 changes: 1 addition & 1 deletion core/CHANGELOG.md
@@ -244,7 +244,7 @@
* **prover-fri:** added picked-by column in prover fri related tables ([#2600](https://github.com/matter-labs/zksync-2-dev/issues/2600)) ([9e604ab](https://github.com/matter-labs/zksync-2-dev/commit/9e604abf3bae11b6f583f2abd39c07a85dc20f0a))
* update verification keys, protocol version 15 ([#2602](https://github.com/matter-labs/zksync-2-dev/issues/2602)) ([2fff59b](https://github.com/matter-labs/zksync-2-dev/commit/2fff59bab00849996864b68e932739135337ebd7))
* **vlog:** Rework the observability configuration subsystem ([#2608](https://github.com/matter-labs/zksync-2-dev/issues/2608)) ([377f0c5](https://github.com/matter-labs/zksync-2-dev/commit/377f0c5f734c979bc990b429dff0971466872e71))
* **vm:** Multivm tracer support ([#2601](https://github.com/matter-labs/zksync-2-dev/issues/2601)) ([4a7467b](https://github.com/matter-labs/zksync-2-dev/commit/4a7467b1b1556bfd795792dbe280bcf28c93a58f))
* **vm:** MultiVM tracer support ([#2601](https://github.com/matter-labs/zksync-2-dev/issues/2601)) ([4a7467b](https://github.com/matter-labs/zksync-2-dev/commit/4a7467b1b1556bfd795792dbe280bcf28c93a58f))

## [8.7.0](https://github.com/matter-labs/zksync-2-dev/compare/core-v8.6.0...core-v8.7.0) (2023-09-19)

2 changes: 1 addition & 1 deletion core/bin/block_reverter/src/main.rs
@@ -33,7 +33,7 @@ enum Command {
/// L1 batch number used to rollback to.
#[arg(long)]
l1_batch_number: u32,
/// Priority fee used for rollback ethereum transaction.
/// Priority fee used for rollback Ethereum transaction.
// We operate only by priority fee because we want to use base fee from ethereum
// and send transaction as soon as possible without any resend logic
#[arg(long)]
2 changes: 1 addition & 1 deletion core/bin/external_node/src/config/mod.rs
@@ -105,7 +105,7 @@ pub struct OptionalENConfig {
/// Max possible size of an ABI encoded tx (in bytes).
#[serde(default = "OptionalENConfig::default_max_tx_size")]
pub max_tx_size: usize,
/// Max number of cache misses during one VM execution. If the number of cache misses exceeds this value, the api server panics.
/// Max number of cache misses during one VM execution. If the number of cache misses exceeds this value, the API server panics.
/// This is a temporary solution to mitigate API request resulting in thousands of DB queries.
pub vm_execution_cache_misses_limit: Option<usize>,
/// Inbound transaction limit used for throttling.
2 changes: 1 addition & 1 deletion core/lib/basic_types/src/lib.rs
@@ -77,7 +77,7 @@ impl TryFrom<U256> for AccountTreeId {
}
}

/// ChainId in the ZkSync network.
/// ChainId in the zkSync network.
#[derive(Copy, Clone, Debug, Serialize, PartialEq, Eq, PartialOrd, Ord, Hash)]
pub struct L2ChainId(u64);

2 changes: 1 addition & 1 deletion core/lib/config/src/configs/api.rs
@@ -57,7 +57,7 @@ pub struct Web3JsonRpcConfig {
pub estimate_gas_acceptable_overestimation: u32,
/// Max possible size of an ABI encoded tx (in bytes).
pub max_tx_size: usize,
/// Max number of cache misses during one VM execution. If the number of cache misses exceeds this value, the api server panics.
/// Max number of cache misses during one VM execution. If the number of cache misses exceeds this value, the API server panics.
/// This is a temporary solution to mitigate API request resulting in thousands of DB queries.
pub vm_execution_cache_misses_limit: Option<usize>,
/// Max number of VM instances to be concurrently spawned by the API server.
2 changes: 1 addition & 1 deletion core/lib/config/src/configs/chain.rs
@@ -76,7 +76,7 @@ pub struct StateKeeperConfig {
pub close_block_at_geometry_percentage: f64,
/// Denotes the percentage of L1 params used in L2 block that triggers L2 block seal.
pub close_block_at_eth_params_percentage: f64,
/// Denotes the percentage of L1 gas used in l2 block that triggers L2 block seal.
/// Denotes the percentage of L1 gas used in L2 block that triggers L2 block seal.
pub close_block_at_gas_percentage: f64,

pub fee_account_addr: Address,
2 changes: 1 addition & 1 deletion core/lib/constants/src/crypto.rs
@@ -26,7 +26,7 @@ pub const MAX_NEW_FACTORY_DEPS: usize = 32;
pub const PAD_MSG_BEFORE_HASH_BITS_LEN: usize = 736;

/// The size of the bootloader memory in bytes which is used by the protocol.
/// While the maximal possible size is a lot higher, we restric ourselves to a certain limit to reduce
/// While the maximal possible size is a lot higher, we restrict ourselves to a certain limit to reduce
/// the requirements on RAM.
pub const USED_BOOTLOADER_MEMORY_BYTES: usize = 1 << 24;
pub const USED_BOOTLOADER_MEMORY_WORDS: usize = USED_BOOTLOADER_MEMORY_BYTES / 32;
2 changes: 1 addition & 1 deletion core/lib/constants/src/ethereum.rs
@@ -12,7 +12,7 @@ pub const GUARANTEED_PUBDATA_PER_L1_BATCH: u64 = 4000;

/// The maximum number of pubdata per L1 batch. This limit is due to the fact that the Ethereum
/// nodes do not accept transactions that have more than 128kb of pubdata.
/// The 18kb margin is left in case of any inpreciseness of the pubdata calculation.
/// The 18kb margin is left in case of any impreciseness of the pubdata calculation.
pub const MAX_PUBDATA_PER_L1_BATCH: u64 = 110000;

// TODO: import from zkevm_opcode_defs once VM1.3 is supported
6 changes: 3 additions & 3 deletions core/lib/contracts/src/lib.rs
@@ -194,7 +194,7 @@ pub struct SystemContractsRepo {
}

impl SystemContractsRepo {
/// Returns the default system contracts repo with directory based on the ZKSYNC_HOME environment variable.
/// Returns the default system contracts repository with directory based on the ZKSYNC_HOME environment variable.
pub fn from_env() -> Self {
let zksync_home = std::env::var("ZKSYNC_HOME").unwrap_or_else(|_| ".".into());
let zksync_home = PathBuf::from(zksync_home);
@@ -336,7 +336,7 @@ impl BaseSystemContracts {
BaseSystemContracts::load_with_bootloader(bootloader_bytecode)
}

/// BaseSystemContracts with playground bootloader - used for handling 'eth_calls'.
/// BaseSystemContracts with playground bootloader - used for handling eth_calls.
pub fn playground() -> Self {
let bootloader_bytecode = read_playground_batch_bootloader_bytecode();
BaseSystemContracts::load_with_bootloader(bootloader_bytecode)
@@ -364,7 +364,7 @@ impl BaseSystemContracts {
BaseSystemContracts::load_with_bootloader(bootloader_bytecode)
}

/// BaseSystemContracts with playground bootloader - used for handling 'eth_calls'.
/// BaseSystemContracts with playground bootloader - used for handling eth_calls.
pub fn estimate_gas() -> Self {
let bootloader_bytecode = read_bootloader_code("fee_estimate");
BaseSystemContracts::load_with_bootloader(bootloader_bytecode)
8 changes: 4 additions & 4 deletions core/lib/dal/src/connection/mod.rs
@@ -72,13 +72,13 @@ impl<'a> ConnectionPoolBuilder<'a> {
}
}

/// Constructucts a new temporary database (with a randomized name)
/// Constructs a new temporary database (with a randomized name)
/// by cloning the database template pointed by TEST_DATABASE_URL env var.
/// The template is expected to have all migrations from dal/migrations applied.
/// For efficiency, the postgres container of TEST_DATABASE_URL should be
/// For efficiency, the Postgres container of TEST_DATABASE_URL should be
/// configured with option "fsync=off" - it disables waiting for disk synchronization
/// whenever you write to the DBs, therefore making it as fast as an inmem postgres instance.
/// The database is not cleaned up automatically, but rather the whole postgres
/// whenever you write to the DBs, therefore making it as fast as an in-memory Postgres instance.
/// The database is not cleaned up automatically, but rather the whole Postgres
/// container is recreated whenever you call "zk test rust".
pub(super) async fn create_test_db() -> anyhow::Result<url::Url> {
use rand::Rng as _;
2 changes: 1 addition & 1 deletion core/lib/dal/src/contract_verification_dal.rs
@@ -91,7 +91,7 @@ impl ContractVerificationDal<'_, '_> {
/// Returns the next verification request for processing.
/// Considering the situation where processing of some request
/// can be interrupted (panic, pod restart, etc..),
/// `processing_timeout` parameter is added to avoid stucking of requests.
/// `processing_timeout` parameter is added to avoid stuck requests.
pub async fn get_next_queued_verification_request(
&mut self,
processing_timeout: Duration,
2 changes: 1 addition & 1 deletion core/lib/dal/src/lib.rs
@@ -85,7 +85,7 @@ mod tests;

/// Storage processor is the main storage interaction point.
/// It holds down the connection (either direct or pooled) to the database
/// and provide methods to obtain different storage schemas.
/// and provide methods to obtain different storage schema.
#[derive(Debug)]
pub struct StorageProcessor<'a> {
conn: ConnectionHolder<'a>,
2 changes: 1 addition & 1 deletion core/lib/dal/src/storage_dal.rs
@@ -43,7 +43,7 @@ impl StorageDal<'_, '_> {
.unwrap();
}

/// Returns bytecode for a factory dep with the specified bytecode `hash`.
/// Returns bytecode for a factory dependency with the specified bytecode `hash`.
pub async fn get_factory_dep(&mut self, hash: H256) -> Option<Vec<u8>> {
sqlx::query!(
"SELECT bytecode FROM factory_deps WHERE bytecode_hash = $1",
2 changes: 1 addition & 1 deletion core/lib/dal/src/transactions_dal.rs
@@ -969,7 +969,7 @@ impl TransactionsDal<'_, '_> {
}
}

/// Returns miniblocks with their transactions that state_keeper needs to reexecute on restart.
/// Returns miniblocks with their transactions that state_keeper needs to re-execute on restart.
/// These are the transactions that are included to some miniblock,
/// but not included to L1 batch. The order of the transactions is the same as it was
/// during the previous execution.
2 changes: 1 addition & 1 deletion core/lib/dal/src/witness_generator_dal.rs
@@ -527,7 +527,7 @@ impl WitnessGeneratorDal<'_, '_> {
/// Saves artifacts in node_aggregation_job
/// and advances it to `waiting_for_proofs` status
/// it will be advanced to `queued` by the prover when all the dependency proofs are computed.
/// If the node aggregation job was already `queued` in case of connrecunt run of same leaf aggregation job
/// If the node aggregation job was already `queued` in case of connector run of same leaf aggregation job
/// we keep the status as is to prevent data race.
pub async fn save_leaf_aggregation_artifacts(
&mut self,
10 changes: 5 additions & 5 deletions core/lib/eth_client/src/lib.rs
@@ -21,7 +21,7 @@ use zksync_types::{
};

/// Common Web3 interface, as seen by the core applications.
/// Encapsulates the raw Web3 interction, providing a high-level interface.
/// Encapsulates the raw Web3 interaction, providing a high-level interface.
///
/// ## Trait contents
///
@@ -34,7 +34,7 @@ use zksync_types::{
///
/// Most of the trait methods support the `component` parameter. This parameter is used to
/// describe the caller of the method. It may be useful to find the component that makes an
/// unnecessary high amount of Web3 calls. Implementations are advices to count invocations
/// unnecessary high amount of Web3 calls. Implementations are advice to count invocations
/// per component and expose them to Prometheus.
#[async_trait]
pub trait EthInterface: Sync + Send {
@@ -139,7 +139,7 @@ pub trait EthInterface: Sync + Send {
/// An extension of `EthInterface` trait, which is used to perform queries that are bound to
/// a certain contract and account.
///
/// THe example use cases for this trait would be:
/// The example use cases for this trait would be:
/// - An operator that sends transactions and interacts with zkSync contract.
/// - A wallet implementation in the SDK that is tied to a user's account.
///
@@ -149,10 +149,10 @@ pub trait EthInterface: Sync + Send {
/// implementation that invokes `contract` / `contract_addr` / `sender_account` methods.
#[async_trait]
pub trait BoundEthInterface: EthInterface {
/// ABI of the contract that is used by the implementor.
/// ABI of the contract that is used by the implementer.
fn contract(&self) -> &ethabi::Contract;

/// Address of the contract that is used by the implementor.
/// Address of the contract that is used by the implementer.
fn contract_addr(&self) -> H160;

/// Chain ID of the L1 network the client is *configured* to connected to.
6 changes: 3 additions & 3 deletions core/lib/eth_signer/src/json_rpc_signer.rs
@@ -85,7 +85,7 @@ impl EthereumSigner for JsonRpcSigner {
}
}

/// Signs typed struct using ethereum private key by EIP-712 signature standard.
/// Signs typed struct using Ethereum private key by EIP-712 signature standard.
/// Result of this function is the equivalent of RPC calling `eth_signTypedData`.
async fn sign_typed_data<S: EIP712TypedStructure + Sync>(
&self,
@@ -192,7 +192,7 @@ impl JsonRpcSigner {
self.address.ok_or(SignerError::DefineAddress)
}

/// Specifies the Ethreum address which sets the address for which all other requests will be processed.
/// Specifies the Ethereum address which sets the address for which all other requests will be processed.
/// If the address has already been set, then it will all the same change to a new one.
pub async fn detect_address(
&mut self,
@@ -376,7 +376,7 @@ mod messages {
Self::create("eth_sign", params)
}

/// Signs typed struct using ethereum private key by EIP-712 signature standard.
/// Signs typed struct using Ethereum private key by EIP-712 signature standard.
/// The address to sign with must be unlocked.
pub fn sign_typed_data<S: EIP712TypedStructure + Sync>(
address: Address,
2 changes: 1 addition & 1 deletion core/lib/eth_signer/src/pk_signer.rs
@@ -41,7 +41,7 @@ impl EthereumSigner for PrivateKeySigner {
Ok(signature)
}

/// Signs typed struct using ethereum private key by EIP-712 signature standard.
/// Signs typed struct using Ethereum private key by EIP-712 signature standard.
/// Result of this function is the equivalent of RPC calling `eth_signTypedData`.
async fn sign_typed_data<S: EIP712TypedStructure + Sync>(
&self,
2 changes: 1 addition & 1 deletion core/lib/mempool/src/mempool_store.rs
@@ -29,7 +29,7 @@ pub struct MempoolStore {
/// Next priority operation
next_priority_id: PriorityOpId,
stashed_accounts: Vec<Address>,
/// Number of l2 transactions in the mempool.
/// Number of L2 transactions in the mempool.
size: u64,
capacity: u64,
}
2 changes: 1 addition & 1 deletion core/lib/merkle_tree/src/pruning.rs
@@ -89,7 +89,7 @@ impl<DB: PruneDatabase> MerkleTreePruner<DB> {
/// Sets the sleep duration when the pruner cannot progress. This time should be enough
/// for the tree to produce enough stale keys.
///
/// The default value is 60s.
/// The default value is 60 seconds.
pub fn set_poll_interval(&mut self, poll_interval: Duration) {
self.poll_interval = poll_interval;
}
2 changes: 1 addition & 1 deletion core/lib/merkle_tree/src/recovery.rs
@@ -8,7 +8,7 @@
//! afterwards will have the same outcome as if they were applied to the original tree.
//!
//! Importantly, a recovered tree is only *observably* identical to the original tree; it differs
//! in (currently unobservable) node versions. In a recovered tree, all nodes will initially have
//! in (currently un-observable) node versions. In a recovered tree, all nodes will initially have
//! the same version (the snapshot version), while in the original tree, node versions are distributed
//! from 0 to the snapshot version (both inclusive).
//!
4 changes: 2 additions & 2 deletions core/lib/merkle_tree/src/storage/patch.rs
@@ -344,7 +344,7 @@ impl WorkingPatchSet {
}
}

/// Computes hashes and serializes this changeset.
/// Computes hashes and serializes this change set.
pub(super) fn finalize(
self,
manifest: Manifest,
@@ -597,7 +597,7 @@ impl WorkingPatchSet {
Some(Node::Internal(node)) => {
let (next_nibble, child_ref) = node.last_child_ref();
nibbles = nibbles.push(next_nibble).unwrap();
// ^ `unwrap()` is safe; there can be no internal nodes on the bottommost tree level
// ^ `unwrap()` is safe; there can be no internal nodes on the bottom-most tree level
let child_key = nibbles.with_version(child_ref.version);
let child_node = db.tree_node(&child_key, child_ref.is_leaf).unwrap();
// ^ `unwrap()` is safe by construction
6 changes: 3 additions & 3 deletions core/lib/merkle_tree/src/types/mod.rs
@@ -42,7 +42,7 @@ impl TreeEntry {
}
}

/// Returns `true` iff this entry encodes lack of a value.
/// Returns `true` if and only if this entry encodes lack of a value.
pub fn is_empty(&self) -> bool {
self.leaf_index == 0 && self.value_hash.is_zero()
}
@@ -63,7 +63,7 @@ pub struct TreeEntryWithProof {
/// Proof of the value authenticity.
///
/// If specified, a proof is the Merkle path consisting of up to 256 hashes
/// ordered starting the bottommost level of the tree (one with leaves) and ending before
/// ordered starting the bottom-most level of the tree (one with leaves) and ending before
/// the root level.
///
/// If the path is not full (contains <256 hashes), it means that the hashes at the beginning
@@ -152,7 +152,7 @@ pub struct TreeLogEntryWithProof<P = Vec<ValueHash>> {
/// Log entry about an atomic operation on the tree.
pub base: TreeLogEntry,
/// Merkle path to prove log authenticity. The path consists of up to 256 hashes
/// ordered starting the bottommost level of the tree (one with leaves) and ending before
/// ordered starting the bottom-most level of the tree (one with leaves) and ending before
/// the root level.
///
/// If the path is not full (contains <256 hashes), it means that the hashes at the beginning
2 changes: 1 addition & 1 deletion core/lib/multivm/src/glue/mod.rs
@@ -11,7 +11,7 @@ pub(crate) mod history_mode;
pub mod tracers;
mod types;

/// This trait is a workaround on the Rust'c [orphan rule](orphan_rule).
/// This trait is a workaround on the Rust's [orphan rule](orphan_rule).
/// We need to convert a lot of types that come from two different versions of some crate,
/// and `From`/`Into` traits are natural way of doing so. Unfortunately, we can't implement an
/// external trait on a pair of external types, so we're unable to use these traits.