
feat: refactor out file rs functions into sub dirs with imports and mods (concept only, do not merge) #1335

Closed
18 changes: 9 additions & 9 deletions .github/workflows/benchmark-prs.yml
@@ -35,7 +35,7 @@ jobs:

- name: Download 95mb file to be uploaded with the safe client
shell: bash
-run: wget https://sn-node.s3.eu-west-2.amazonaws.com/the-test-data.zip
+run: wget https://sn-node.s3.eu-west-2.amazonaws.com/the-test-chunks.zip

# As normal user won't care much about initial client startup,
# but be more alerted on communication speed during transmission.
@@ -68,13 +68,13 @@ jobs:

- name: Fund cli wallet
shell: bash
-run: target/release/safe --log-output-dest=data-dir wallet get-faucet 127.0.0.1:8000
+run: target/release/safe --log-output-dest=chunks-dir wallet get-faucet 127.0.0.1:8000
env:
SN_LOG: "all"

- name: Start a client instance to compare memory usage
shell: bash
-run: target/release/safe --log-output-dest=data-dir files upload the-test-data.zip --retry-strategy quick
+run: target/release/safe --log-output-dest=chunks-dir files upload the-test-chunks.zip --retry-strategy quick
env:
SN_LOG: "all"

@@ -146,7 +146,7 @@ jobs:
name: 'Memory Usage of Client during uploading large file'
tool: 'customSmallerIsBetter'
output-file-path: client_memory_usage.json
-# Where the previous data file is stored
+# Where the previous chunks file is stored
external-data-json-path: ./cache/client-mem-usage.json
# Workflow will fail when an alert happens
fail-on-alert: true
@@ -188,8 +188,8 @@ jobs:
# What benchmark tool the output.txt came from
tool: 'customBiggerIsBetter'
output-file-path: files-benchmark.json
-# Where the previous data file is stored
-external-data-json-path: ./cache/benchmark-data.json
+# Where the previous chunks file is stored
+external-data-json-path: ./cache/benchmark-chunks.json
# Workflow will fail when an alert happens
fail-on-alert: true
# GitHub API token to make a commit comment
@@ -203,7 +203,7 @@ jobs:

- name: Start a client to carry out download to output the logs
shell: bash
-run: target/release/safe --log-output-dest=data-dir files download --retry-strategy quick
+run: target/release/safe --log-output-dest=chunks-dir files download --retry-strategy quick

- name: Start a client to simulate criterion upload
shell: bash
@@ -271,7 +271,7 @@ jobs:
with:
tool: 'customSmallerIsBetter'
output-file-path: node_memory_usage.json
-# Where the previous data file is stored
+# Where the previous chunks file is stored
external-data-json-path: ./cache/node-mem-usage.json
# Workflow will fail when an alert happens
fail-on-alert: true
@@ -371,7 +371,7 @@ jobs:
with:
tool: 'customSmallerIsBetter'
output-file-path: swarm_driver_long_handlings.json
-# Where the previous data file is stored
+# Where the previous chunks file is stored
external-data-json-path: ./cache/swarm_driver_long_handlings.json
# Workflow will fail when an alert happens
fail-on-alert: true
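The `customSmallerIsBetter` / `customBiggerIsBetter` tools above consume a JSON array of `{name, unit, value}` entries from `output-file-path`, compared against the history cached at `external-data-json-path`. A minimal sketch of producing such a file (the metric name and value are hypothetical stand-ins, not what the real workflow measures):

```shell
# Hypothetical sketch: emit the JSON array shape consumed by
# benchmark-action/github-action-benchmark's "custom*" tools.
# The real workflow derives the value from client logs; 512 is a stand-in.
peak_mb=512
cat > client_memory_usage.json <<EOF
[
  {
    "name": "client peak memory usage",
    "unit": "MB",
    "value": $peak_mb
  }
]
EOF
cat client_memory_usage.json
```

With `fail-on-alert: true`, the action fails the run when the new value regresses past the alert threshold relative to that cached history.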
6 changes: 3 additions & 3 deletions .github/workflows/generate-benchmark-charts.yml
@@ -43,7 +43,7 @@ jobs:

- name: Download 95mb file to be uploaded with the safe client
shell: bash
-run: wget https://sn-node.s3.eu-west-2.amazonaws.com/the-test-data.zip
+run: wget https://sn-node.s3.eu-west-2.amazonaws.com/the-test-chunks.zip

- name: Build node and client
run: cargo build --release --features local-discovery --bin safenode --bin safe --bin faucet
@@ -92,7 +92,7 @@ jobs:
shell: bash
run: cat files-benchmark.json

-# gh-pages branch is updated and pushed automatically with extracted benchmark data
+# gh-pages branch is updated and pushed automatically with extracted benchmark chunks
- name: Store cli files benchmark result
uses: benchmark-action/github-action-benchmark@v1
with:
@@ -105,7 +105,7 @@

- name: Start a client instance to compare memory usage
shell: bash
-run: cargo run --bin safe --release -- --log-output-dest=data-dir files upload the-test-data.zip --retry-strategy quick
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir files upload the-test-chunks.zip --retry-strategy quick
env:
SN_LOG: "all"

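For a local dry run of these benchmark jobs, the 95 MB S3 download can be replaced by a generated file of the same size; the benchmark only needs a payload of that size, not specific contents (a sketch, using the pre-rename `the-test-data.zip` name):

```shell
# Generate a 95 MB random payload as a stand-in for the S3 download,
# so the upload benchmark can be exercised without network access.
dd if=/dev/urandom of=the-test-data.zip bs=1M count=95 status=none
ls -l the-test-data.zip
```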
6 changes: 3 additions & 3 deletions .github/workflows/memcheck.yml
@@ -107,7 +107,7 @@ jobs:

- name: Download 95mb file to be uploaded with the safe client
shell: bash
-run: wget https://sn-node.s3.eu-west-2.amazonaws.com/the-test-data.zip
+run: wget https://sn-node.s3.eu-west-2.amazonaws.com/the-test-chunks.zip

# The resources file we upload may change, and with it mem consumption.
# Be aware!
@@ -159,7 +159,7 @@ jobs:
- name: Assert we've reloaded some chunks
run: rg "Existing record loaded" $RESTART_TEST_NODE_DATA_PATH

-- name: Chunks data integrity during nodes churn
+- name: Chunks chunks integrity during nodes churn
run: cargo test --release -p sn_node --test data_with_churn -- --nocapture
env:
TEST_DURATION_MINS: 5
@@ -212,7 +212,7 @@ jobs:
# exit 1
# fi

-- name: Verify data replication using rg
+- name: Verify chunks replication using rg
shell: bash
timeout-minutes: 1
# get the counts, then the specific line, and then the digit count only
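The replication step's comment describes the shape of its check: count marker lines in the node logs, then extract the digit count. A minimal local sketch of that pipeline, using `grep -c` in place of the workflow's `rg` and a fabricated log file:

```shell
# Fabricated log file standing in for a node's log directory.
printf 'Existing record loaded\nstartup\nExisting record loaded\n' > node.log
# Count occurrences of the marker line, as the workflow's rg pipeline does.
count=$(grep -c "Existing record loaded" node.log)
echo "reloaded records: $count"
# Fail the step if nothing was reloaded.
if [ "$count" -eq 0 ]; then
  echo "no records reloaded" >&2
  exit 1
fi
```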
50 changes: 25 additions & 25 deletions .github/workflows/merge.yml
@@ -283,37 +283,37 @@ jobs:
timeout-minutes: 5

- name: Start a client to upload cost estimate
-run: cargo run --bin safe --release -- --log-output-dest=data-dir files estimate "./resources"
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir files estimate "./resources"
env:
SN_LOG: "all"
timeout-minutes: 15

- name: Start a client to upload files
-run: cargo run --bin safe --release -- --log-output-dest=data-dir files upload "./resources" --retry-strategy quick
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir files upload "./resources" --retry-strategy quick
env:
SN_LOG: "all"
timeout-minutes: 15

- name: Start a client to download files
-run: cargo run --bin safe --release -- --log-output-dest=data-dir files download --retry-strategy quick
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir files download --retry-strategy quick
env:
SN_LOG: "all"
timeout-minutes: 2

- name: Start a client to create a register writable by the owner only
-run: cargo run --bin safe --release -- --log-output-dest=data-dir register create -n baobao
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir register create -n baobao
env:
SN_LOG: "all"
timeout-minutes: 10

- name: Start a client to get a register writable by the owner only
-run: cargo run --bin safe --release -- --log-output-dest=data-dir register get -n baobao
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir register get -n baobao
env:
SN_LOG: "all"
timeout-minutes: 2

- name: Start a client to edit a register writable by the owner only
-run: cargo run --bin safe --release -- --log-output-dest=data-dir register edit -n baobao wood
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir register edit -n baobao wood
env:
SN_LOG: "all"
timeout-minutes: 10
@@ -323,27 +323,27 @@ jobs:
- name: Start a client to create a register writable by anyone
id: register-address
if: matrix.os != 'windows-latest'
-run: echo "$(cargo run --bin safe --release -- --log-output-dest=data-dir register create -n trycatch -p | rg REGISTER_ADDRESS )" >> $GITHUB_OUTPUT
+run: echo "$(cargo run --bin safe --release -- --log-output-dest=chunks-dir register create -n trycatch -p | rg REGISTER_ADDRESS )" >> $GITHUB_OUTPUT
env:
SN_LOG: "all"
timeout-minutes: 10

- name: Start a client to create a register writable by anyone
id: register-address-windows
if: matrix.os == 'windows-latest'
-run: echo "$(cargo run --bin safe --release -- --log-output-dest=data-dir register create -n trycatch -p | rg REGISTER_ADDRESS )" >> $ENV:GITHUB_OUTPUT
+run: echo "$(cargo run --bin safe --release -- --log-output-dest=chunks-dir register create -n trycatch -p | rg REGISTER_ADDRESS )" >> $ENV:GITHUB_OUTPUT
env:
SN_LOG: "all"
timeout-minutes: 10

- name: Start a client to get a register writable by anyone (current client is the owner)
-run: cargo run --bin safe --release -- --log-output-dest=data-dir register get -n trycatch
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir register get -n trycatch
env:
SN_LOG: "all"
timeout-minutes: 2

- name: Start a client to edit a register writable by anyone (current client is the owner)
-run: cargo run --bin safe --release -- --log-output-dest=data-dir register edit -n trycatch wood
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir register edit -n trycatch wood
env:
SN_LOG: "all"
timeout-minutes: 10
@@ -356,28 +356,28 @@ jobs:
#
- name: Start a client to get a register writable by anyone (new client is not the owner)
if: matrix.os != 'windows-latest'
-run: cargo run --bin safe --release -- --log-output-dest=data-dir register get ${{ steps.register-address.outputs.REGISTER_ADDRESS }}
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir register get ${{ steps.register-address.outputs.REGISTER_ADDRESS }}
env:
SN_LOG: "all"
timeout-minutes: 2

- name: Start a client to edit a register writable by anyone (new client is not the owner)
if: matrix.os != 'windows-latest'
-run: cargo run --bin safe --release -- --log-output-dest=data-dir register edit ${{ steps.register-address.outputs.REGISTER_ADDRESS }} water
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir register edit ${{ steps.register-address.outputs.REGISTER_ADDRESS }} water
env:
SN_LOG: "all"
timeout-minutes: 10

- name: Start a client to get a register writable by anyone (new client is not the owner)
if: matrix.os == 'windows-latest'
-run: cargo run --bin safe --release -- --log-output-dest=data-dir register get ${{ steps.register-address-windows.outputs.REGISTER_ADDRESS }}
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir register get ${{ steps.register-address-windows.outputs.REGISTER_ADDRESS }}
env:
SN_LOG: "all"
timeout-minutes: 2

- name: Start a client to edit a register writable by anyone (new client is not the owner)
if: matrix.os == 'windows-latest'
-run: cargo run --bin safe --release -- --log-output-dest=data-dir register edit ${{ steps.register-address-windows.outputs.REGISTER_ADDRESS }} water
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir register edit ${{ steps.register-address-windows.outputs.REGISTER_ADDRESS }} water
env:
SN_LOG: "all"
timeout-minutes: 10
@@ -689,7 +689,7 @@ jobs:
echo "SAFE_PEERS has been set to $SAFE_PEERS"
fi

-- name: Chunks data integrity during nodes churn
+- name: Chunks chunks integrity during nodes churn
run: cargo test --release -p sn_node --features="local-discovery" --test data_with_churn -- --nocapture
env:
TEST_DURATION_MINS: 5
@@ -732,7 +732,7 @@ jobs:
# exit 1
# fi

-- name: Verify data replication using rg
+- name: Verify chunks replication using rg
shell: bash
timeout-minutes: 1
# get the counts, then the specific line, and then the digit count only
@@ -749,7 +749,7 @@ jobs:
fi

# Only error out after uploading the logs
-- name: Don't log raw data
+- name: Don't log raw chunks
if: matrix.os != 'windows-latest' # causes error
shell: bash
timeout-minutes: 10
@@ -762,7 +762,7 @@

verify_data_location_routing_table:
if: "!startsWith(github.event.head_commit.message, 'chore(release):')"
-name: Verify data location and Routing Table
+name: Verify chunks location and Routing Table
runs-on: ${{ matrix.os }}
strategy:
matrix:
Expand All @@ -787,7 +787,7 @@ jobs:
run: cargo build --release --features local-discovery --bin safenode --bin faucet
timeout-minutes: 30

-- name: Build data location and routing table tests
+- name: Build chunks location and routing table tests
run: cargo test --release -p sn_node --features=local-discovery --test verify_data_location --test verify_routing_table --no-run
env:
# only set the target dir for windows to bypass the linker issue.
@@ -821,7 +821,7 @@ jobs:
CARGO_TARGET_DIR: ${{ matrix.os == 'windows-latest' && './test-target' || '.' }}
timeout-minutes: 5

-- name: Verify the location of the data on the network
+- name: Verify the location of the chunks on the network
run: cargo test --release -p sn_node --features="local-discovery" --test verify_data_location -- --nocapture
env:
CHURN_COUNT: 3
@@ -864,7 +864,7 @@ jobs:
echo "Node dir count is $node_count"

# Only error out after uploading the logs
-- name: Don't log raw data
+- name: Don't log raw chunks
if: matrix.os != 'windows-latest' # causes error
shell: bash
timeout-minutes: 10
@@ -962,7 +962,7 @@ jobs:
timeout-minutes: 5

- name: Start a client to upload
-run: ~/safe --log-output-dest=data-dir files upload "ubuntu-16.04.7-desktop-amd64.iso" --retry-strategy quick
+run: ~/safe --log-output-dest=chunks-dir files upload "ubuntu-16.04.7-desktop-amd64.iso" --retry-strategy quick
env:
SN_LOG: "all"
timeout-minutes: 30
@@ -1077,7 +1077,7 @@ jobs:
timeout-minutes: 5

- name: Start a client to upload first file
-run: cargo run --bin safe --release -- --log-output-dest=data-dir files upload "./test_data_1.tar.gz" --retry-strategy quick
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir files upload "./test_data_1.tar.gz" --retry-strategy quick
env:
SN_LOG: "all"
timeout-minutes: 5
@@ -1111,7 +1111,7 @@ jobs:
timeout-minutes: 6

- name: Use same client to upload second file
-run: cargo run --bin safe --release -- --log-output-dest=data-dir files upload "./test_data_2.tar.gz" --retry-strategy quick
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir files upload "./test_data_2.tar.gz" --retry-strategy quick
env:
SN_LOG: "all"
timeout-minutes: 10
@@ -1163,7 +1163,7 @@ jobs:
timeout-minutes: 25

- name: Use second client to upload third file
-run: cargo run --bin safe --release -- --log-output-dest=data-dir files upload "./test_data_3.tar.gz" --retry-strategy quick
+run: cargo run --bin safe --release -- --log-output-dest=chunks-dir files upload "./test_data_3.tar.gz" --retry-strategy quick
env:
SN_LOG: "all"
timeout-minutes: 10
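The register-address steps in merge.yml capture one line of CLI output as a step output by appending a `KEY=value` line to the file named by `$GITHUB_OUTPUT`; later steps read it back as `steps.<id>.outputs.REGISTER_ADDRESS`. The mechanism can be simulated locally (the address value is a hypothetical stand-in for what `safe register create -p` prints):

```shell
# Simulate the per-step output file that GitHub Actions provides.
GITHUB_OUTPUT=$(mktemp)
# Stand-in for: cargo run --bin safe ... register create -n trycatch -p | rg REGISTER_ADDRESS
cli_output="REGISTER_ADDRESS=8a1b2c3d"
echo "$cli_output" >> "$GITHUB_OUTPUT"
# Actions parses KEY=value lines from this file into step outputs.
cat "$GITHUB_OUTPUT"
```

Note the Windows variant of the step writes to `$ENV:GITHUB_OUTPUT` instead, since the shell there is PowerShell.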