Merged

Changes from all commits
1 change: 1 addition & 0 deletions src/SUMMARY.md
@@ -29,6 +29,7 @@

- [Medium Severity](medium/README.md)

- [Append entries efficiently](medium/Append_Entries_Efficiently.md)
- [Remove deprecated storage getters](medium/Remove_Deprecated_Storage_Getters.md)
- [Avoid hardcoded parameters and values](medium/Avoid_Hardcoded_Parameters_and_Values.md)
- [Include tests for edge cases](medium/Include_Tests_for_Edge_Cases.md)
3 changes: 2 additions & 1 deletion src/TABLE.md
@@ -4,7 +4,7 @@
| <span style="color:red;">Critical</span> | Use appropriate origin checks | Open access on extrinsics without checks may allow unauthorized actions that can compromise security | Add access control checks to limit access to specific users or roles |
| <span style="color:red;">Critical</span> | Avoid unbounded iteration | Unbounded iterations over large data structures can lead to resource exhaustion and potential denial of service | Implement limits or use a bounded storage map for these iterations |
| <span style="color:red;">Critical</span> | Unchecked input data | Lack of input validation can lead to unexpected behaviors and potential vulnerabilities | Validate input data before processing to ensure safe and predictable behavior |
| <span style="color:red;">Critical</span> | Avoid unwrap usage inside runtime | Using `unwrap()` or `expect()` without proper error handling can lead to runtime panics and crashes | Handle errors gracefully with `Result` or `Option` types to prevent panics |
| <span style="color:red;">Critical</span> | Use benchmarking for accurate dynamic weights | Using hardcoded weights for extrinsics can lead to inaccurate resource estimations and performance issues. | Implement benchmarking to dynamically assess the weights of functions, ensuring they accurately reflect execution costs |
| <span style="color:orange;">High</span> | Make proper usage of XCM `Junctions` | Misuse of junction types (especially GeneralIndex) for purposes beyond their intended entity representation can lead to incorrect path routing | Use junctions strictly for their intended purpose of representing entities in Location paths; propose RFCs for new needs |
| <span style="color:orange;">High</span> | Properly setup XCM `Barrier` | Improperly configured XCM executor barriers can allow unauthorized free execution from any origin | Implement restrictive barriers with explicit authorization for unpaid execution and clear documentation of intended uses |
@@ -19,6 +19,7 @@
| <span style="color:orange;">High</span> | Avoid redundant storage access in mutations | Using both `try_mutate` and `insert` leads to unnecessary storage accesses | Use `try_mutate` or `try_mutate_exists` to read, modify, and write in a single step |
| <span style="color:orange;">High</span> | Prevent unnecessary reads and writes in storage access | Frequent reads and writes to storage without optimization can degrade performance | Use efficient storage access methods such as `try_mutate` to combine reads and writes |
| <span style="color:orange;">High</span> | Implement `try-state` Hook | The absence of `try-state` hooks prevents runtime sanity checks, making it harder to ensure that the storage state is sensible after upgrades | Implement the `try-state` hook to perform thorough state checks without altering storage |
| <span style="color:gold;">Medium</span> | Append entries efficiently | Using `try_mutate` has a severe penalty when appending elements to a `StorageValue` | Use `try_append` instead of `try_mutate` whenever possible |
| <span style="color:gold;">Medium</span> | Implement proper XCM fee management | Using the FeeManager unit type without consideration leads to unintended fee burning rather than proper fee handling | Implement proper FeeManager that either deposits or distributes fees, with clear handling of fee-exempt locations |
| <span style="color:gold;">Medium</span> | Remove deprecated storage getters | Using deprecated storage getters may lead to compatibility issues in future versions | Replace deprecated getters with the recommended methods in updated frameworks |
| <span style="color:gold;">Medium</span> | Avoid hardcoded parameters and values | Hardcoding parameters can reduce flexibility and adaptability to different environments | Use configurable parameters to enhance adaptability |
30 changes: 18 additions & 12 deletions src/critical/Avoid_Unbounded_Iteration.md
@@ -12,11 +12,13 @@ The following example illustrates an iteration over all items in `big_data_set`

```diff
 #[pallet::storage]
-pub type UnboundedData<T: Config> = StorageValue<_, Vec<u32>>;
+pub type UnboundedData<T: Config> = StorageMap<_, Blake2_128Concat, T::AccountId, u32>;
 
-let big_data_set = UnboundedData::<T>::get();
-for item in big_data_set {
-    // Process each item with no limit
+fn iterate_over_data() {
+    let big_data_set = UnboundedData::<T>::iter();
+    for (acc, data) in big_data_set {
+        // Process each item with no limit
+    }
 }
```

@@ -30,11 +32,13 @@ Use a bounded iterator or limit the number of items processed in each iteration.
```diff
 const MAX_ITEMS: usize = 20;
 
 #[pallet::storage]
-pub type UnboundedData<T: Config> = StorageValue<_, Vec<u32>>;
+pub type UnboundedData<T: Config> = StorageMap<_, Blake2_128Concat, T::AccountId, u32>;
 
-let big_data = UnboundedData::<T>::get();
-for item in big_data.iter().take(MAX_ITEMS) {
-    // Process a limited number of items safely
+fn iterate_over_data() {
+    let big_data = UnboundedData::<T>::iter();
+    for (acc, data) in big_data.take(MAX_ITEMS) {
+        // Process a limited number of items safely
+    }
 }
```

@@ -46,11 +50,13 @@ Alternatively, use bounded data structures to enforce strict size limits at the

```diff
 #[pallet::storage]
-pub type BoundedData<T: Config> = StorageValue<_, BoundedVec<u32, T::MaxEntries>>;
+pub type BoundedData<T: Config> = StorageValue<_, BoundedBTreeMap<T::AccountId, u32, T::MaxEntries>>;
 
-let bounded_data = BoundedData::<T>::get();
-for item in bounded_data {
-    // Iterates over a data structure with bounded size.
+fn iterate_over_data() {
+    let bounded_data = BoundedData::<T>::get();
+    for (acc, data) in bounded_data {
+        // Iterates over a data structure with bounded size.
+    }
 }
```

11 changes: 5 additions & 6 deletions src/high/Be_Careful_With_Storage_Growth.md
@@ -12,6 +12,7 @@ The following code allows adding entries without any limit, leading to uncontrol

```diff
 #[pallet::storage]
+#[pallet::unbounded]
 pub type Entries<T: Config> = StorageValue<_, Vec<u32>>;
 
 fn add_entry(entry: u32) {
```
@@ -32,15 +33,13 @@ pub type Entries<T: Config> = StorageValue<_, BoundedVec<u32, T::MaxEntries>>;

```diff
 #[pallet::error]
 pub enum Error<T> {
-    /// MaxEntries limit reached
+    /// MaxEntries limit reached.
     TooManyEntries,
 }
 
-fn add_entry_limited(entry: u32) -> Result<(), Error> {
-    Entries::<T>::try_mutate(|entries| {
-        entries.try_push(entry).map_err(|_| Error::<T>::TooManyEntries)?;
-        Ok(())
-    })
+fn add_entry_limited(entry: u32) -> Result<(), DispatchError> {
+    Entries::<T>::try_append(entry).map_err(|_| Error::<T>::TooManyEntries)?;
+    Ok(())
 }
```
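Outside a Substrate runtime, the capacity-limited append that `BoundedVec` provides can be sketched in plain Rust. The `MAX_ENTRIES` constant, `Error` enum, and `try_append` helper below are illustrative stand-ins for the FRAME types, not FRAME APIs:

```rust
const MAX_ENTRIES: usize = 3;

#[derive(Debug, PartialEq)]
enum Error {
    TooManyEntries,
}

// Push `entry` only if the capacity bound is respected, mirroring how
// `BoundedVec::try_push` rejects writes past `MaxEntries`.
fn try_append(entries: &mut Vec<u32>, entry: u32) -> Result<(), Error> {
    if entries.len() >= MAX_ENTRIES {
        return Err(Error::TooManyEntries);
    }
    entries.push(entry);
    Ok(())
}

fn main() {
    let mut entries = Vec::new();
    for i in 0..5 {
        match try_append(&mut entries, i) {
            Ok(()) => println!("stored {i}"),
            Err(e) => println!("rejected {i}: {e:?}"),
        }
    }
    // Only the first MAX_ENTRIES values were accepted.
    assert_eq!(entries, vec![0, 1, 2]);
}
```

The same pattern scales to the pallet version: the caller gets an explicit error instead of silently unbounded growth.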

48 changes: 48 additions & 0 deletions src/medium/Append_Entries_Efficiently.md
@@ -0,0 +1,48 @@
# Append Entries Efficiently

**Severity**: <span style="color:gold;">Medium</span>

## Description

Using `try_append` instead of `try_mutate` for a Substrate `StorageValue` has several advantages. Specifically,
`try_append` is more efficient because it avoids reading and rewriting the entire storage value, resulting in lower
computation and storage costs when appending a single item. This makes it particularly useful for operations where the
goal is simply to append an entry without other modifications. By contrast, `try_mutate` reads the entire storage value
into memory, modifies it, and writes it back, which can be both computationally and storage-intensive for large
collections.

## What should be avoided

Using `try_mutate` has a severe penalty when compared to `try_append` and should be avoided whenever possible.

```rust
#[pallet::storage]
pub type Entries<T: Config> = StorageValue<_, BoundedVec<u32, T::MaxEntries>>;

#[pallet::error]
pub enum Error<T> {
    /// MaxEntries limit reached.
    TooManyEntries,
}

fn add_entry(entry: u32) -> Result<(), DispatchError> {
    // Adds a new entry by reading the whole storage item and editing it. This is inefficient.
    Entries::<T>::try_mutate(|entries| {
        entries.try_push(entry).map_err(|_| Error::<T>::TooManyEntries)?;
        Ok(())
    })
}
```

## Best practice

Use `try_append` for improved readability and efficiency.

```rust
fn add_entry(entry: u32) -> Result<(), DispatchError> {
    // Adds a new entry efficiently.
    Entries::<T>::try_append(entry).map_err(|_| Error::<T>::TooManyEntries)?;
    Ok(())
}
```
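The asymmetry this page describes can be made concrete with a back-of-the-envelope cost model in plain Rust. The byte counts below are illustrative, not actual Substrate weights, and the two helper functions are hypothetical names introduced here for the sketch:

```rust
// Rough cost model: `try_mutate` decodes the whole stored vector and
// re-encodes it after the push, so the bytes touched grow with the
// collection's length; `try_append` only writes the encoded new element
// (plus a small length-prefix update), so its cost stays flat.
fn mutate_bytes_touched(len: usize, elem_size: usize) -> usize {
    len * elem_size + (len + 1) * elem_size // full read + full rewrite
}

fn append_bytes_touched(elem_size: usize) -> usize {
    elem_size // only the new element is written
}

fn main() {
    let elem = core::mem::size_of::<u32>();
    for len in [10usize, 1_000, 100_000] {
        println!(
            "len = {len:>6}: mutate touches {:>9} bytes, append touches {} bytes",
            mutate_bytes_touched(len, elem),
            append_bytes_touched(elem),
        );
    }
}
```

Under this model, appending to a 100,000-element list of `u32`s rewrites hundreds of kilobytes with `try_mutate` but only four bytes with `try_append`, which is why the table entry rates the penalty as severe.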