
Implement codespell as a workflow #1554

Open
wants to merge 11 commits into base: master
Changes from 5 commits
3 changes: 3 additions & 0 deletions .codespellrc
@@ -0,0 +1,3 @@
[codespell]
quiet-level = 2
ignore-words = .github/workflows/ignore-words.txt
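The `.codespellrc` above is an INI-style file, so its shape is easy to sanity-check. A minimal sketch, assuming only the two keys shown in this PR, using the standard-library `configparser` to read the same text (codespell itself does the real parsing):

```python
import configparser

# INI-style config text, copied from the .codespellrc added in this PR.
config_text = """\
[codespell]
quiet-level = 2
ignore-words = .github/workflows/ignore-words.txt
"""

parser = configparser.ConfigParser()
parser.read_string(config_text)

# quiet-level is an integer verbosity suppression level; ignore-words points
# at a newline-separated word list that codespell will not flag.
quiet_level = parser.getint("codespell", "quiet-level")
ignore_words = parser.get("codespell", "ignore-words")
print(quiet_level, ignore_words)
```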
28 changes: 28 additions & 0 deletions .github/workflows/codespell.yml
@@ -0,0 +1,28 @@
name: Spell Check

on:
pull_request:
push:
branches:
- main
- master

jobs:
spellcheck:
name: Run codespell
runs-on: ubuntu-latest

steps:
- name: Checkout code
uses: actions/checkout@v2

- name: Set up Python
uses: actions/setup-python@v2
with:
python-version: '3.x'

- name: Install codespell
run: pip install codespell

- name: Run codespell
run: codespell
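The last step works as a CI gate because GitHub Actions fails a step whenever its command exits non-zero, and codespell exits with 65 (EX_DATAERR) when it finds misspellings. A rough sketch of that mechanism, standing in for codespell with a hypothetical command that exits 65 (since the tool may not be installed here):

```python
import subprocess
import sys

# Stand-in for `run: codespell` in the workflow above: a command that exits
# with status 65, the code codespell uses when misspellings are found.
fake_codespell = [sys.executable, "-c", "import sys; sys.exit(65)"]

result = subprocess.run(fake_codespell)

# A CI runner applies exactly this check: any non-zero exit fails the step.
step_failed = result.returncode != 0
print(result.returncode, step_failed)
```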
Collaborator

It will not fail the CI if we have some spelling mistakes. Could you also apply suggestions from codespell to the whole codebase, please?

Author

Added codespell to CI, should fail if there are spelling mistakes (exit error 65).

> Could you also apply suggestions from codespell to the whole codebase, please?

Sure, do you mean every repo of FuelLabs?

Collaborator
@xgreenx Dec 20, 2023

I meant fuel-core. I tried to run codespell locally, and it seems to produce a lot of warnings.

Never mind, I forgot to use your configuration.

8 changes: 8 additions & 0 deletions .github/workflows/ignore-words.txt
@@ -0,0 +1,8 @@
crate
inout
implementor
implementors
ser
fot
mis-match
re-use
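The `ignore-words.txt` above lists words codespell must not flag, one per line. A rough illustration of how such a list suppresses findings, with hypothetical candidate corrections (this is not codespell's actual implementation):

```python
# One word per line, as in the ignore-words.txt added by this PR (subset).
ignore_words_text = """\
crate
inout
fot
"""
ignored = {line.strip().lower() for line in ignore_words_text.splitlines() if line.strip()}

# Hypothetical findings as (flagged word, suggested correction) pairs.
candidates = [("fot", "for"), ("teh", "the"), ("crate", "create")]

# Only candidates absent from the ignore list are reported.
reported = [(word, fix) for word, fix in candidates if word.lower() not in ignored]
print(reported)
```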
4 changes: 2 additions & 2 deletions CONTRIBUTING.md
@@ -105,7 +105,7 @@ This is a rough outline of what a contributor's workflow looks like:
- If the PR contains any breaking changes, add the breaking label to your PR.
- If you are part of the FuelLabs Github org, please open a PR from the repository itself.
- Otherwise, push your changes to a branch in your fork of the repository and submit a pull request.
- Make sure mention the issue, which is created at step 1, in the commit message.
- Make sure to mention the issue, which is created at step 1, in the commit message.
- Your PR will be reviewed and some changes may be requested.
- Once you've made changes, your PR must be re-reviewed and approved.
- If the PR becomes out of date, you can use GitHub's 'update branch' button.
@@ -120,7 +120,7 @@ Thanks for your contributions!

For beginners, we have prepared many suitable tasks for you. Checkout our [Help Wanted issues](https://github.com/FuelLabs/fuel-core/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22) for a list.

If you are planning something big, for example, relates to multiple components or changes current behaviors, make sure to open an issue to discuss with us before going on.
If you are planning something big, for example, relates to multiple components or changes in current behaviors, make sure to open an issue to discuss with us before going on.

The Client team actively develops and maintains several dependencies used in Fuel Core, which you may be also interested in:

4 changes: 2 additions & 2 deletions Makefile.toml
@@ -2,9 +2,9 @@
# https://github.com/sagiegurari/cargo-make/blob/0.36.0/src/lib/descriptor/makefiles/stable.toml

# This is a configuration file for the cargo plugin `cargo-make`. We use this plugin because of it's handling around
# cargo workspaces. Specifically, each task is run on workspace members indepedently, avoiding potential issues that
# cargo workspaces. Specifically, each task is run on workspace members independently, avoiding potential issues that
# arise from feature unification (https://doc.rust-lang.org/cargo/reference/features.html#feature-unification).
# Feature unification allows two unrelated crates with the same depedency to enable features on eachother.
# Feature unification allows two unrelated crates with the same dependency to enable features on each other.
# This is problematic when a crate is built independently (when publishing / being consumed from crates.io),
# and it implicitly depended on features enabled by other crates in the same workspace.
# While feature resolver v2 attempted to resolve this problem, it still comes up in certain scenarios.
2 changes: 1 addition & 1 deletion crates/chain-config/src/serialization.rs
@@ -143,7 +143,7 @@ macro_rules! impl_hex_number {
let pad =
SIZE.checked_sub(bytes.len())
.ok_or(D::Error::custom(format!(
"value cant exceed {WORD_SIZE} bytes"
"value can't exceed {WORD_SIZE} bytes"
)))?;

if pad != 0 {
2 changes: 1 addition & 1 deletion crates/fuel-core/src/database/storage.rs
@@ -244,7 +244,7 @@ pub trait ToDatabaseKey {
where
Self: 'a;

/// Coverts the key into database key that supports byte presentation.
/// Converts the key into database key that supports byte presentation.
fn database_key(&self) -> Self::Type<'_>;
}

2 changes: 1 addition & 1 deletion crates/fuel-core/src/graphql_api/service.rs
@@ -158,7 +158,7 @@ impl RunnableTask for Task {
}
}

// Need a seperate Data Object for each Query endpoint, cannot be avoided
// Need a separate Data Object for each Query endpoint, cannot be avoided
#[allow(clippy::too_many_arguments)]
pub fn new_service(
config: Config,
2 changes: 1 addition & 1 deletion crates/fuel-core/src/state/rocks_db.rs
@@ -71,7 +71,7 @@ impl ShallowTempDir {
Self { path }
}

/// Returns the path of teh directory.
/// Returns the path of the directory.
pub fn path(&self) -> &PathBuf {
&self.path
}
2 changes: 1 addition & 1 deletion crates/services/consensus_module/poa/src/deadline_clock.rs
@@ -141,7 +141,7 @@ impl DeadlineClock {
}

/// Clears the timeout, so that now event is produced when it expires.
/// If the event has alread occurred, it will not be removed.
/// If the event has already occurred, it will not be removed.
pub async fn clear(&self) {
self.control
.send(ControlMessage::Clear)
2 changes: 1 addition & 1 deletion crates/services/p2p/src/config/fuel_upgrade.rs
@@ -35,7 +35,7 @@ impl From<[u8; 32]> for Checksum {
/// When two nodes want to establish a connection they need to
/// exchange the Hash of their respective Chain Id and Chain Config.
/// The connection is only accepted if their hashes match.
/// This is used to aviod peers having same network name but different configurations connecting to each other.
/// This is used to avoid peers having same network name but different configurations connecting to each other.
#[derive(Debug, Clone)]
pub(crate) struct FuelUpgrade {
checksum: Checksum,
2 changes: 1 addition & 1 deletion crates/services/p2p/src/gossipsub/topics.rs
@@ -42,7 +42,7 @@ impl GossipsubTopics {
}
}

/// Given a `GossipsubBroadcastRequest` retruns a `GossipTopic`
/// Given a `GossipsubBroadcastRequest` returns a `GossipTopic`
/// which is broadcast over the network with the serialized inner value of `GossipsubBroadcastRequest`
pub fn get_gossipsub_topic(
&self,
6 changes: 3 additions & 3 deletions crates/services/p2p/src/p2p_service.rs
@@ -1160,7 +1160,7 @@ mod tests {
}

// Simulates 2 p2p nodes that connect to each other and consequently exchange Peer Info
// On sucessful connection, node B updates its latest BlockHeight
// On successful connection, node B updates its latest BlockHeight
// and shares it with Peer A via Heartbeat protocol
#[tokio::test]
#[instrument]
@@ -1373,7 +1373,7 @@ mod tests {
p2p_config.bootstrap_nodes = node_b.multiaddrs();
let mut node_c = build_service_from_config(p2p_config.clone()).await;

// Node C does not connecto to Node A
// Node C does not connect to Node A
// it should receive the propagated message from Node B if `GossipsubMessageAcceptance` is `Accept`
node_c.swarm.ban_peer_id(node_a.local_peer_id);

@@ -1417,7 +1417,7 @@ mod tests {

// Node B received the correct message
// If we try to publish it again we will get `PublishError::Duplicate`
// This asserts that our MessageId calculation is consistant irrespective of which Peer sends it
// This asserts that our MessageId calculation is consistent irrespective of which Peer sends it
let broadcast_request = broadcast_request.clone();
matches!(node_b.publish_message(broadcast_request), Err(PublishError::Duplicate));

2 changes: 1 addition & 1 deletion crates/services/p2p/src/peer_manager.rs
@@ -260,7 +260,7 @@ impl PeerManager {
.choose(&mut range)
}

/// Handles the first connnection established with a Peer
/// Handles the first connection established with a Peer
fn handle_initial_connection(
&mut self,
peer_id: &PeerId,
2 changes: 1 addition & 1 deletion crates/services/p2p/src/peer_report.rs
@@ -86,7 +86,7 @@ pub struct PeerReportBehaviour {
heartbeat: Heartbeat,
identify: Identify,
pending_events: VecDeque<PeerReportEvent>,
// regulary checks if reserved nodes are connected
// regularly checks if reserved nodes are connected
health_check: Interval,
decay_interval: Interval,
}
4 changes: 2 additions & 2 deletions crates/services/producer/src/block_producer.rs
@@ -107,7 +107,7 @@ where
gas_limit: max_gas,
};

// Store the context string incase we error.
// Store the context string in case we error.
let context_string =
format!("Failed to produce block {height:?} due to execution failure");
let result = self
@@ -121,7 +121,7 @@
}

// TODO: Support custom `block_time` for `dry_run`.
/// Simulate a transaction without altering any state. Does not aquire the production lock
/// Simulate a transaction without altering any state. Does not acquire the production lock
/// since it is basically a "read only" operation and shouldn't get in the way of normal
/// production.
pub async fn dry_run(
2 changes: 1 addition & 1 deletion crates/services/relayer/README.md
@@ -11,7 +11,7 @@ Ethereum blocks are considered final after two epochs. Each epoch contains 32 sl

Second finality that we have is related to fuel block attestation time limit, how long are we going to wait until challenge comes. It should be at least longer than ethereum finality. Not relevant for first version.

* Problem: Validator deposit to ethereum gets reverted by block reorg. (Eth clients usually have priority for reverted txs but this does not mean it cant happen). It can potentially rearrange order of transactions
* Problem: Validator deposit to ethereum gets reverted by block reorg. (Eth clients usually have priority for reverted txs but this does not mean it can't happen). It can potentially rearrange order of transactions
* Solution: Introduce sliding window, only deposits that are at least eth finality long can be finalized and included in validators leader selection.

* Problem: How to choose when bridge message event gets enabled for use in fuel, at what exact fuel block does this happen? (Note that we have sliding window)
2 changes: 1 addition & 1 deletion crates/services/txpool/src/lib.rs
@@ -55,7 +55,7 @@ impl TxInfo {
pub fn new(tx: ArcPoolTx) -> Self {
let since_epoch = std::time::SystemTime::now()
.duration_since(std::time::UNIX_EPOCH)
.expect("Now is bellow of the `UNIX_EPOCH`");
.expect("Now is below of the `UNIX_EPOCH`");

Self {
tx,
2 changes: 1 addition & 1 deletion crates/types/src/services/txpool.rs
@@ -186,7 +186,7 @@ pub enum TransactionStatus {
/// Why this happened
reason: String,
},
/// Transaction was included in a block, but the exection was reverted
/// Transaction was included in a block, but the execution was reverted
Failed {
/// Included in this block
block_id: BlockId,
2 changes: 1 addition & 1 deletion deployment/Dockerfile
@@ -36,7 +36,7 @@ ENV BUILD_FEATURES=$FEATURES
COPY --from=planner /build/recipe.json recipe.json
RUN echo $CARGO_PROFILE_RELEASE_DEBUG
RUN echo $BUILD_FEATURES
# Build our project dependecies, not our application!
# Build our project dependencies, not our application!
RUN xx-cargo chef cook --release --no-default-features --features "${BUILD_FEATURES}" -p fuel-core-bin --recipe-path recipe.json
# Up to this point, if our dependency tree stays the same,
# all layers should be cached.
2 changes: 1 addition & 1 deletion deployment/e2e-client.Dockerfile
@@ -16,7 +16,7 @@ RUN cargo chef prepare --recipe-path recipe.json
FROM chef as builder
ENV CARGO_NET_GIT_FETCH_WITH_CLI=true
COPY --from=planner /build/recipe.json recipe.json
# Build our project dependecies, not our application!
# Build our project dependencies, not our application!
RUN cargo chef cook --release -p fuel-core-e2e-client --features p2p --recipe-path recipe.json
# Up to this point, if our dependency tree stays the same,
# all layers should be cached.
4 changes: 2 additions & 2 deletions docs/fee_calculations.md
@@ -9,7 +9,7 @@ to include an additional cost to op codes that write new data to storage or to
transactions that add new contracts to the chain.

There are a number of ways we might calculate this value; we have decided to go
with a simple calculatoin based on our target storage growth and working
with a simple calculation based on our target storage growth and working
backward from there.

#### Pessimistic Estimate
@@ -23,7 +23,7 @@ This gives us this graph:
| 500,000,000,000 | 10,000,000 | 31536000 | **15,855** | **630.72** |

This is a harsh estimate that isn't taking into account the additional base cost of tx
execution and the cost of any additional op codes. It is also assuming that
execution and the cost of any additional op codes. It is also assumed that
all blocks would be maxing out the storage.

#### Generous Estimate
4 changes: 2 additions & 2 deletions docs/poa/flows.md
@@ -1,7 +1,7 @@
# Flows

## PoA Primary Production Flow
When the node is configured with a POA key, produce blocks and notify network.
When the node is configured with a POA key, produces blocks and notifies network.

```mermaid
sequenceDiagram
@@ -99,7 +99,7 @@ sequenceDiagram
S->>+POA: verify signed block header
POA->>+R: await new block da height
R-->>-POA:
note right of POA: verify signature against current authority key
note right of POA: verify the signature against current authority key
POA->>-S:
S->>+BI: commit sealed block
BI->>+R: check_da_height for message inclusion
4 changes: 2 additions & 2 deletions tests/tests/trigger_integration/interval.rs
@@ -83,7 +83,7 @@ async fn poa_interval_produces_empty_blocks_at_correct_rate() {
round_time_seconds <= secs_per_round
&& secs_per_round
<= round_time_seconds + 2 * (rounds as u64) / round_time_seconds,
"Round time not within treshold"
"Round time not within threshold"
);
}

@@ -168,7 +168,7 @@ async fn poa_interval_produces_nonempty_blocks_at_correct_rate() {
round_time_seconds <= secs_per_round
&& secs_per_round
<= round_time_seconds + 2 * (rounds as u64) / round_time_seconds,
"Round time not within treshold"
"Round time not within threshold"
);

// Make sure all txs got produced