
Drips sender can prevent receiver from squeezing by spamming the drips history #274

Open
code423n4 opened this issue Feb 3, 2023 · 12 comments
Labels
bug, disagree with severity, downgraded by judge, grade-a, primary issue, QA (Quality Assurance)

Comments

@code423n4
Contributor

Lines of code

https://github.com/code-423n4/2023-01-drips/blob/9fd776b50f4be23ca038b1d0426e63a69c7a511d/src/Drips.sol#L422-L433
https://github.com/code-423n4/2023-01-drips/blob/9fd776b50f4be23ca038b1d0426e63a69c7a511d/src/Drips.sol#L342-L368

Vulnerability details

Squeezing drips from a sender requires providing the sequence of drips configurations (see NatSpec description in L337-L338):

/// It can start at an arbitrary past configuration, but must describe all the configurations
/// which have been used since then including the current one, in the chronological order.

A receiver who wants to squeeze drips from a sender may be faced with a large number of history entries to provide if the sender has, for whatever reason, spammed the drips history with many new configurations. Drips configurations can contain zero receivers, which lowers the gas cost of creating a new one.

The dripsHistory parameter of the DripsHub.squeezeDrips function uses memory instead of calldata. If the provided array is large enough, copying it to memory may exceed the block gas limit and cause the transaction to fail with an out-of-gas error, leaving the receiver unable to squeeze drips from the sender.

Impact

A drips sender can prevent a drips receiver from squeezing funds in the current cycle by spamming that cycle with many drips history configurations.

Proof of Concept

Drips.sol#L342-L368

342: function _squeezeDrips(
343:     uint256 userId,
344:     uint256 assetId,
345:     uint256 senderId,
346:     bytes32 historyHash,
347:     DripsHistory[] memory dripsHistory
348: ) internal returns (uint128 amt) {
349:     uint256 squeezedNum;
350:     uint256[] memory squeezedRevIdxs;
351:     bytes32[] memory historyHashes;
352:     uint256 currCycleConfigs;
353:     (amt, squeezedNum, squeezedRevIdxs, historyHashes, currCycleConfigs) =
354:         _squeezeDripsResult(userId, assetId, senderId, historyHash, dripsHistory);
355:     bytes32[] memory squeezedHistoryHashes = new bytes32[](squeezedNum);
356:     DripsState storage state = _dripsStorage().states[assetId][userId];
357:     uint32[2 ** 32] storage nextSqueezed = state.nextSqueezed[senderId];
358:     for (uint256 i = 0; i < squeezedNum; i++) {
359:         // `squeezedRevIdxs` are sorted from the newest configuration to the oldest,
360:         // but we need to consume them from the oldest to the newest.
361:         uint256 revIdx = squeezedRevIdxs[squeezedNum - i - 1];
362:         squeezedHistoryHashes[i] = historyHashes[historyHashes.length - revIdx];
363:         nextSqueezed[currCycleConfigs - revIdx] = _currTimestamp();
364:     }
365:     uint32 cycleStart = _currCycleStart();
366:     _addDeltaRange(state, cycleStart, cycleStart + 1, -int256(amt * _AMT_PER_SEC_MULTIPLIER));
367:     emit SqueezedDrips(userId, assetId, senderId, amt, squeezedHistoryHashes);
368: }

Drips.sol#L392-L434

392: function _squeezeDripsResult(
393:     uint256 userId,
394:     uint256 assetId,
395:     uint256 senderId,
396:     bytes32 historyHash,
397:     DripsHistory[] memory dripsHistory
398: )
399:     internal
400:     view
401:     returns (
402:         uint128 amt,
403:         uint256 squeezedNum,
404:         uint256[] memory squeezedRevIdxs,
405:         bytes32[] memory historyHashes,
406:         uint256 currCycleConfigs
407:     )
408: {
409:     {
410:         DripsState storage sender = _dripsStorage().states[assetId][senderId];
411:         historyHashes = _verifyDripsHistory(historyHash, dripsHistory, sender.dripsHistoryHash);
412:         // If the last update was not in the current cycle,
413:         // there's only the single latest history entry to squeeze in the current cycle.
414:         currCycleConfigs = 1;
415:         // slither-disable-next-line timestamp
416:         if (sender.updateTime >= _currCycleStart()) currCycleConfigs = sender.currCycleConfigs;
417:     }
418:     squeezedRevIdxs = new uint256[](dripsHistory.length);
419:     uint32[2 ** 32] storage nextSqueezed =
420:         _dripsStorage().states[assetId][userId].nextSqueezed[senderId];
421:     uint32 squeezeEndCap = _currTimestamp();
422:     for (uint256 i = 1; i <= dripsHistory.length && i <= currCycleConfigs; i++) {
423:         DripsHistory memory drips = dripsHistory[dripsHistory.length - i];
424:         if (drips.receivers.length != 0) {
425:             uint32 squeezeStartCap = nextSqueezed[currCycleConfigs - i];
426:             if (squeezeStartCap < _currCycleStart()) squeezeStartCap = _currCycleStart();
427:             if (squeezeStartCap < squeezeEndCap) {
428:                 squeezedRevIdxs[squeezedNum++] = i;
429:                 amt += _squeezedAmt(userId, drips, squeezeStartCap, squeezeEndCap);
430:             }
431:         }
432:         squeezeEndCap = drips.updateTime;
433:     }
434: }
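
For illustration, a sender could grow the history with a simple loop of configuration updates. The sketch below is hypothetical: IDripsLike and its setDrips signature are simplified stand-ins rather than the audited DripsHub API, and the only point it demonstrates is that each call appends one more DripsHistory entry (here with zero receivers) that the receiver must later supply when squeezing.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Hypothetical, simplified interface; NOT the audited DripsHub API.
interface IDripsLike {
    struct DripsReceiver {
        uint256 userId;
        uint256 config;
    }

    function setDrips(
        uint256 assetId,
        DripsReceiver[] calldata currReceivers,
        int128 balanceDelta,
        DripsReceiver[] calldata newReceivers
    ) external returns (int128 realBalanceDelta);
}

contract HistorySpammer {
    IDripsLike public immutable hub;

    constructor(IDripsLike hub_) {
        hub = hub_;
    }

    // Appends `n` empty configurations to the sender's drips history within
    // the current cycle. Each call creates one more DripsHistory entry that
    // the receiver must later supply (and hash) when squeezing.
    function spam(uint256 assetId, uint256 n) external {
        IDripsLike.DripsReceiver[] memory empty = new IDripsLike.DripsReceiver[](0);
        for (uint256 i = 0; i < n; i++) {
            hub.setDrips(assetId, empty, 0, empty);
        }
    }
}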

Tools Used

Manual review

Recommended mitigation steps

Consider limiting the number of drips configurations within a cycle to a reasonable value, and using calldata instead of memory for the dripsHistory parameter of Drips._squeezeDrips() (updating the DripsHub contract accordingly) to avoid the gas-expensive copy of the history array into memory.
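
A minimal sketch of the first suggestion follows. It is illustrative only: the CycleConfigCap contract, _checkCycleConfigCap, and _MAX_CYCLE_CONFIGS are made-up names, while _currCycleStart() and currCycleConfigs mirror the bookkeeping already visible in _squeezeDripsResult above; the check would have to be wired into the sender's configuration-update path in Drips.sol.

// SPDX-License-Identifier: MIT
pragma solidity ^0.8.17;

// Illustrative sketch only; not the audited code. Caps how many drips
// configurations a sender may create within a single cycle.
abstract contract CycleConfigCap {
    // Hypothetical limit; a real value should be tuned against cycleSecs
    // and the worst-case squeezeDrips gas cost.
    uint256 internal constant _MAX_CYCLE_CONFIGS = 100;

    // Provided by the inheriting Drips implementation.
    function _currCycleStart() internal view virtual returns (uint32);

    // To be called from the sender's configuration-update path, right before
    // a new history entry is appended.
    function _checkCycleConfigCap(uint32 lastUpdateTime, uint32 currCycleConfigs)
        internal
        view
    {
        // An update from an earlier cycle resets the counter to 1, so only
        // configurations created within the current cycle count toward the cap.
        if (lastUpdateTime >= _currCycleStart()) {
            require(currCycleConfigs < _MAX_CYCLE_CONFIGS, "Too many configs in cycle");
        }
    }
}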

code423n4 added the 2 (Med Risk) and bug labels on Feb 3, 2023
code423n4 added a commit that referenced this issue on Feb 3, 2023
@c4-judge
Contributor

c4-judge commented Feb 9, 2023

GalloDaSballo marked the issue as primary issue

c4-judge added the primary issue label on Feb 9, 2023
@xmxanuel

xmxanuel commented Feb 9, 2023

There is no real incentive for a sender to do this. A sender could also just stop sending to a specific receiver.
A reasonable limit would also be related to cycleSecs.

Maybe the only scenario I could think of is something like:

  • the sender streams a huge amount of a specific token; the stream is already over, but the cycle is not yet finished
  • the sender fears the receiver would dump the token on the market and would benefit from the delay
  • in such a case, the sender could try to delay the receiver by spamming
  • however, the receiver is not required to provide the entire history linked list (the hashes up to the latest entry are enough)

I think even then it would just be more expensive, not impossible, to collect.

@CodeSandwich

CodeSandwich commented Feb 12, 2023

[disagree with severity: QA]
The proposed attack could only postpone receiving funds until the end of the cycle, and the cost of performing it is questionable. Each added drips history entry can be skipped by the receiver at a minuscule cost: beyond reading the parameters, it only adds 160 bytes (5 words) to calldata and a single hash operation. It would be really hard and costly to force filling an entire block's worth of gas with just these operations.
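
A rough back-of-envelope supporting this, assuming post-EIP-2028 calldata pricing (16 gas per non-zero byte) and a block gas limit of roughly 30M, neither of which is stated in this thread:

  • per skipped entry: at most 160 bytes × 16 gas/byte ≈ 2,560 gas of calldata, plus one keccak256 over 5 words (30 + 6 × 5 = 60 gas) and some decoding overhead, i.e. roughly 3,000 gas
  • entries needed to exhaust a whole block: ~30,000,000 / ~3,000 ≈ 10,000
  • each of those entries must first have been created by the sender via storage writes, which costs far more per entry than it costs the receiver to skip it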

Using calldata is not a great idea: generally, reading an argument more than once makes calldata stop paying off. See ethereum/solidity#12103.

@c4-sponsor

CodeSandwich marked the issue as disagree with severity

c4-sponsor added the disagree with severity label on Feb 13, 2023
@GalloDaSballo

@berndartmueller Can you please let me know if you believe the denial can be performed indefinitely, or whether the tokens will be collectable at the end of the cycle?

@berndartmueller
Member

@GalloDaSballo The denial can only be performed within the cycle. The tokens can be claimed at the end of the cycle (or, to be precise, in the next cycle).

This issue describes a denial of service against the squeezing functionality. I submitted it as Medium severity because the intended functionality of the protocol is affected.

@GalloDaSballo

From the deployments we can infer that a cycle will be 1 week (604,800 seconds); we could assume that on mainnet this may change to up to a month, but I think this gives us an idea of the maximum delay.

I'd say the finding could be judged either as QA (Low) or Medium.

If the spammer could have been anyone, then we could have argued in favour of Medium severity, as anyone could have DoS'd up to 1 week of drips. However, this grief can only be performed by the sender, who could simply cancel the drip.

For this reason, after considering Medium severity, I think the most appropriate rating is Low.

@GalloDaSballo

GalloDaSballo commented Feb 23, 2023

L +3

c4-judge added the downgraded by judge and QA (Quality Assurance) labels and removed the 2 (Med Risk) label on Feb 23, 2023
@c4-judge
Contributor

GalloDaSballo changed the severity to QA (Quality Assurance)

c4-judge added the grade-c and unsatisfactory labels on Feb 28, 2023
@c4-judge
Contributor

GalloDaSballo marked the issue as grade-c

c4-judge reopened this on Feb 28, 2023
c4-judge added the grade-a label and removed the grade-c and unsatisfactory labels on Feb 28, 2023
@c4-judge
Contributor

GalloDaSballo marked the issue as grade-a

@GalloDaSballo

After re-reading the finding, I realized that this can be performed, but not as a front-run.

Because of the higher cost of SSTOREing vs SLOADing, claiming will always be cheaper for the claimant than DoSsing is for the attacker. The only way to perform the attack would be to also DoS the n blocks needed to spam the history up to the point where copying it to memory would revert.

For those reasons I confirm QA for this report.
