chore: adjust create_streams fee to 1 TRUF per transaction (#1373)
Conversation
⚠️ Review failed: the pull request is closed.
ℹ️ Recent review info — Configuration used: defaults | Review profile: CHILL | Plan: Pro
📒 Files selected for processing (13)
📝 Walkthrough: This PR consolidates the write-fee model across three stream actions into a flat per-transaction write fee.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 5 passed
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
internal/migrations/003-primitive-insertion.sql (1)
40-69: ⚠️ Potential issue | 🔴 Critical | ⚡ Quick win — Re-enable batch size limit before deploying flat-fee model.
Lines 42-45 show a commented-out batch size cap (`if $num_records > 10`) with a TODO stating it "MUST be re-enabled once ingestors are updated." The flat 1 TRUF fee (lines 66-69) now makes unbounded batches economically attractive: inserting 1000 records costs the same as inserting 1 record. This combination creates an immediate DoS risk. The batch size limit must be re-enabled (or a new, appropriate limit set) before this fee change is deployed. Otherwise, malicious or cost-optimizing users can submit arbitrarily large `insert_records` calls for 1 TRUF, exhausting node resources.

Also applies to: line 79
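A minimal sketch of the restored guard, assuming the action-body syntax matches the commented-out check the review quotes; the limit of 10 and the exact error text are placeholders, not taken from the migration:

```sql
-- Sketch only: re-enabled batch-size cap for insert_records.
-- The limit (10) mirrors the commented-out check the review cites;
-- choose a new safe limit once ingestors are updated.
-- NULL-safe comparison per the review's guidance.
if $num_records IS DISTINCT FROM NULL AND $num_records > 10 {
    ERROR('insert_records: batch size exceeds maximum of 10 records');
}
```

Placing this check before the fee-collection block ensures oversized batches are rejected before any fee is charged.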
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/migrations/003-primitive-insertion.sql` around lines 40-69: the batch-size cap for insert_records was commented out and must be re-enabled to prevent DoS under the new flat-fee ($total_fee) model. Restore the conditional that checks $num_records (e.g., "if $num_records > 10") and ERROR with a clear message (same style as existing checks), or choose a new safe limit instead of 10; ensure the check uses the same NULL-safe patterns as the other validations (use IS DISTINCT FROM where needed) and apply the same re-enable at the other occurrence referenced (around the later occurrence of $num_records) so all entry points enforce the cap before proceeding to fee collection.

internal/migrations/004-composed-taxonomy.sql (1)
33-44: ⚠️ Potential issue | 🟠 Major | 🏗️ Heavy lift — Add an upper bound on the number of children to prevent resource exhaustion.
Similar to `create_streams`, the flat 1 TRUF fee incentivizes submitting taxonomies with very large child arrays. While line 43 ensures at least 1 child, there's no upper bound on `$num_children`. A taxonomy with hundreds of children now costs the same 1 TRUF as one with a single child, creating a potential resource-exhaustion vector (the FOR loop at lines 81-114 processes each child sequentially, performing lookups and inserts). Consider adding a maximum child-count validation before the fee collection block.
Also applies to: 47-66
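An upper bound of the kind suggested could look like the following sketch; the limit of 100 and the error wording are assumptions for illustration, not values taken from the migration:

```sql
-- Sketch only: bound the taxonomy child count before any fee
-- collection or per-child processing. 100 is a placeholder limit;
-- pick a value consistent with the cap used by create_streams.
if $num_children IS NULL OR $num_children < 1 OR $num_children > 100 {
    ERROR('taxonomy: number of children must be between 1 and 100');
}
```

Running this check ahead of the FOR loop means an oversized child array fails fast instead of performing hundreds of lookups and inserts for a single flat fee.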
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/migrations/004-composed-taxonomy.sql` around lines 33 - 44, Add a strict upper bound check for the number of children to prevent resource exhaustion: validate $num_children (the variable computed from array_length($child_stream_ids)) is between 1 and a defined MAX_CHILDREN before any fee collection or processing occurs (i.e., before the FOR loop and before fee collection), and raise an ERROR if it exceeds the limit; introduce a clear constant name like MAX_CHILDREN (choose an appropriate value consistent with create_streams) and apply the same check for the other child-array validation block that compares $child_stream_ids, $child_data_providers and $weights so all paths enforce the cap.
🧹 Nitpick comments (2)
internal/migrations/003-primitive-insertion.prod.sql (1)
30-31: ⚡ Quick win — Update comment to reflect fee calculation no longer uses record count.
The comment states `$num_records` is "used for both fee calculation and validation," but after the flat-fee change, the fee calculation (line 62) no longer depends on record count—it's always 1 TRUF per transaction. The variable is now only used for validation (lines 43–56).

📝 Suggested comment fix
```diff
- -- Get record count (used for both fee calculation and validation)
+ -- Get record count (used for validation)
  $num_records INT := array_length($data_provider);
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@internal/migrations/003-primitive-insertion.prod.sql` around lines 30-31: the comment above the declaration of $num_records (initialized from array_length($data_provider)) is outdated. Update it to state that $num_records is used only for validation (not fee calculation), since fees are now a flat 1 TRUF per transaction: locate the declaration of $num_records and change the comment to "used for validation only" or similar, removing the reference to fee calculation.

tests/streams/insert_records_fee_test.go (1)
34-41: ⚡ Quick win — Use the shared write-fee constant here to avoid cross-test drift.
This file hardcodes `1 TRUF` while sibling fee tests now read from `feefund.WriteFeeWei`. Reusing the shared constant keeps policy changes centralized and prevents silent divergence across suites.

Proposed patch
```diff
 import (
 	"context"
 	"fmt"
 	"math/big"
 	"strings"
 	"testing"
@@
 	testutils "github.com/trufnetwork/node/tests/streams/utils"
 	testerc20 "github.com/trufnetwork/node/tests/streams/utils/erc20"
+	"github.com/trufnetwork/node/tests/streams/utils/feefund"
 	"github.com/trufnetwork/node/tests/streams/utils/setup"
@@
-	// Flat 1 TRUF per insert_records transaction (issue #3805): one call
-	// charges the same fee regardless of how many records it carries.
-	insertFeeAmount = "1000000000000000000" // 1 TRUF with 18 decimals per tx
+	// Flat 1 TRUF per insert_records transaction (issue #3805): one call
+	// charges the same fee regardless of how many records it carries.
+	insertFeeAmount = feefund.WriteFeeWei
```

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@tests/streams/insert_records_fee_test.go` around lines 34 - 41, Replace the hardcoded "1 TRUF" fee with the shared feefund.WriteFeeWei value: remove or stop using insertFeeAmount and instead call mustParseInsertBigInt(feefund.WriteFeeWei.String() or feefund.WriteFeeWei if already a string) when initializing oneTRUFInsert; ensure any place referencing insertFeeAmount uses feefund.WriteFeeWei so the test reads the centralized write-fee constant (symbols: insertFeeAmount, oneTRUFInsert, mustParseInsertBigInt, feefund.WriteFeeWei).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@internal/migrations/001-common-actions.sql`:
- Line 57: Add an explicit batch-size guard in the create_streams procedure:
before the fee collection block and right after computing
array_length($stream_ids) (e.g., where $num_streams is derived), check if
$num_streams exceeds a shared MAX_BATCH_SIZE constant and raise an ERROR like
"create_streams: batch size exceeds maximum of [MAX_BATCH_SIZE] streams"; use
the same MAX_BATCH_SIZE value used by other batch operations (see pattern in
003-primitive-insertion.sql) to keep limits consistent and prevent resource
exhaustion during validation and writes.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 0ef5c7eb-c947-483e-af63-2b3f35a6e67a
📒 Files selected for processing (19)
- extensions/tn_settlement/settlement_integration_test.go
- internal/migrations/001-common-actions.prod.sql
- internal/migrations/001-common-actions.sql
- internal/migrations/003-primitive-insertion.prod.sql
- internal/migrations/003-primitive-insertion.sql
- internal/migrations/004-composed-taxonomy.prod.sql
- internal/migrations/004-composed-taxonomy.sql
- tests/streams/allow_zeros_test.go
- tests/streams/insert_records_fee_test.go
- tests/streams/primitive_batch_insert_alignment_test.go
- tests/streams/roles/permission_gates_test.go
- tests/streams/stream_creation_fee_test.go
- tests/streams/taxonomy_fee_test.go
- tests/streams/transaction_events_ledger_test.go
- tests/streams/utils/feefund/feefund_default.go
- tests/streams/utils/feefund/feefund_kwiltest.go
- tests/streams/utils/procedure/execute.go
- tests/streams/utils/setup/common.go
- tests/streams/utils/setup/primitive.go
@holdex pr submit-time 3h
resolves: https://github.com/truflation/website/issues/3805