
Pay per chunk #647

Closed · wants to merge 5 commits
Conversation

@joshuef (Contributor) commented Aug 16, 2023

Starting to simplify payments to a per-chunk basis.

A first step in #638.

Description

Summary generated by Reviewpad on 16 Aug 23 09:51 UTC

This pull request includes multiple file diffs with various changes, such as renaming types and fields, importing and removing modules, updating function signatures and return types, and modifying method implementations. These changes seem to be focused on improving code clarity, making updates for better functionality, and enhancing the payment transaction handling.

Please review the changes thoroughly to ensure they align with the intended goals and do not introduce any unintended issues or conflicts. Let me know if you need further details or assistance with the code review.

@reviewpad reviewpad bot added the Large Large sized PR label Aug 16, 2023

trace!("Making payment for of {amount_to_pay:?} nano tokens to each node in close group at {content_addr:?}.");

// TODO: This needs to go out to each CLOSEGROUP of addresses

A maintainer (Member) commented:
You mean paying each individual holder?

Hmm... I don't think we need to contact each holder individually. Just get_closest(chunk_addr), then transfer some amount to each fetched close_peer, and send the transfer to the network; that should be enough?

The catch is that the payment_proof then needs to hold 8 spend_ids, and on validation the holder only needs to validate the spend to itself (or still ALL, for security?).

@joshuef (Contributor, author) replied:

Each CLOSE_GROUP for each record address needs to return a price, which should then be paid.

On validation, each node would just validate its own spend, I think?

The maintainer replied:

> Each CLOSE_GROUP for each record address needs to return a price, which should then be paid.

OK. I had thought all nodes in a close_group would charge the same price.

> On validation, each node would just validate its own spend, I think?

Yeah, but that can be done when the record arrives via put_record, instead of contacting each holder directly and individually.
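To make the flow being discussed concrete, here is a minimal sketch of a per-chunk payment along the lines suggested above. All names in it (get_closest, transfer_to, send_transfer, take_pending_transfer, and the surrounding types) are illustrative stand-ins, not the actual sn_client API:

```rust
// Hypothetical sketch of the per-chunk payment flow discussed above.
// None of these helper names are the real sn_client API.
async fn pay_for_chunk(
    client: &Client,
    wallet: &mut Wallet,
    chunk_addr: ChunkAddress,
    amount_per_node: Token,
) -> Result<Vec<DbcId>> {
    // Fetch the CLOSE_GROUP (e.g. 8) peers closest to the chunk address.
    let close_peers = client.get_closest(chunk_addr).await?;

    // Create one spend per holder; on validation each holder only
    // checks the spend addressed to itself.
    let mut spend_ids = Vec::with_capacity(close_peers.len());
    for peer in close_peers {
        spend_ids.push(wallet.transfer_to(peer, amount_per_node)?);
    }

    // Broadcast the transfer to the network once, rather than contacting
    // every holder individually; each holder can then verify its own
    // spend when the record arrives via put_record.
    client.send_transfer(wallet.take_pending_transfer()).await?;
    Ok(spend_ids)
}
```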

reason_hash: Hash,
) -> Result<TransferOutputs> {
// We need to select the necessary number of dbcs from those that we were passed.
let (dbcs_to_spend, change_amount) = select_inputs(available_dbcs, storage_payment)?;

// TODO: Clear all this up when we've removed this

panic!("Expected insufficient amount error")
}
}
// TODO: Add a check on cost when we're paying DBCs

self.store_cost().as_nano() * (2.0f64.powf((num_of_addrs / 100 + 1) as f64)) as u64,
);
for content_addr in addrs_to_pay {
// TODO: parallelise this

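For reference, the exponential factor in the line above doubles the per-chunk cost for every 100 addresses paid for in a single batch. A standalone illustration of the arithmetic (not code from the PR):

```rust
// Standalone illustration of the batch-cost multiplier used above:
// cost = store_cost * 2^(num_of_addrs / 100 + 1).
fn cost_multiplier(num_of_addrs: usize) -> u64 {
    2.0f64.powf((num_of_addrs / 100 + 1) as f64) as u64
}

fn main() {
    assert_eq!(cost_multiplier(0), 2); // 2^1: even one address pays double
    assert_eq!(cost_multiplier(100), 4); // 2^2
    assert_eq!(cost_multiplier(250), 8); // 2^3
}
```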
@@ -292,7 +275,7 @@ impl Client {
pub(super) async fn store_chunk(
&self,
chunk: Chunk,
-    payment: PaymentProof,
+    payment: Vec<DbcId>,
A maintainer (Member) commented:

I thought there was only one DbcId for the payment of one chunk?

@joshuef (Contributor, author) replied:

I think you may need to reissue several DBCs sometimes if you have a lot of dust?
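A toy version of input selection makes this concrete: a wallet holding many small-denomination DBCs ("dust") may need several inputs to cover one chunk's cost, hence a Vec<DbcId> rather than a single id. This is an illustrative sketch only, not the PR's select_inputs:

```rust
// Illustrative only: a greedy input selection over (id, value) pairs.
// The real select_inputs in this PR may differ.
type DbcId = u64; // stand-in for the real DbcId type

fn select_inputs(available: &[(DbcId, u64)], amount: u64) -> Option<(Vec<DbcId>, u64)> {
    let mut chosen = Vec::new();
    let mut total = 0u64;
    for &(id, value) in available {
        if total >= amount {
            break;
        }
        chosen.push(id);
        total += value;
    }
    // Return the selected inputs plus the change owed back to the wallet.
    (total >= amount).then(|| (chosen, total - amount))
}

fn main() {
    // Three dust DBCs are needed to cover a payment of 5.
    let wallet = [(1, 2), (2, 2), (3, 2)];
    assert_eq!(select_inputs(&wallet, 5), Some((vec![1, 2, 3], 1)));
}
```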

(Three further review threads on sn_client/src/wallet.rs were resolved.)
@joshuef joshuef changed the title feat!(protocol): remove chunk merkletree to simplify payment Pay per chunk Aug 16, 2023
@joshuef joshuef marked this pull request as ready for review August 16, 2023 10:14
@reviewpad (bot) commented Aug 16, 2023

Reviewpad Report

‼️ Errors

  • Unconventional commit detected: 'feat!(protocol): remove chunk merkletree to simplify payment

starting to simplify payments to a per chunk basis

A first step in #638' (53a1653)

  • Unconventional commit detected: 'feat!(protocol): get price and pay for each chunk individually' (bc38914)
  • Unconventional title detected: 'Pay per chunk' illegal 'a' character in commit message type: col=01

@joshuef (Contributor, author) commented Aug 16, 2023

This closes #638

@@ -776,6 +775,8 @@
/// Retry up to `PUT_RECORD_RETRIES` times if we can't verify the record is stored
async fn put_record_with_retries(&self, record: Record) -> Result<()> {
let mut retries = 0;

// TODO: Move this put retry loop up above store cost checks so we can re-put if storecost failed.

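The shape of that retry loop, sketched with hypothetical helpers (put_record_once and verify_record_stored are stand-ins for whatever the client actually calls):

```rust
// Sketch only: the retry shape from the diff above, using stand-in
// helpers. PUT_RECORD_RETRIES bounds the number of attempts.
async fn put_record_with_retries(&self, record: Record) -> Result<()> {
    let mut retries = 0;
    loop {
        self.put_record_once(record.clone()).await?;
        // Retry up to PUT_RECORD_RETRIES times if we can't verify
        // the record is stored.
        if self.verify_record_stored(&record).await.is_ok() {
            return Ok(());
        }
        retries += 1;
        if retries >= PUT_RECORD_RETRIES {
            return Err(Error::RecordNotStoredAfterRetries);
        }
    }
}
```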
Previously we were hanging, waiting on inactivity, before we continued; now we push the loop forward and can use bootstrap to get us going.
@joshuef joshuef added this pull request to the merge queue Aug 17, 2023
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Aug 17, 2023
@joshuef joshuef added this pull request to the merge queue Aug 17, 2023
@github-merge-queue github-merge-queue bot removed this pull request from the merge queue due to failed status checks Aug 17, 2023
@joshuef (Contributor, author) commented Aug 18, 2023

Closing, as #649 builds atop this and is working better.

@joshuef joshuef closed this Aug 18, 2023
Labels: Large (Large sized PR)
2 participants