
Optimize CoinJoin Fees #1457

Merged: nopara73 merged 7 commits into zkSNACKs:master from nopara73:feeopt on May 22, 2019

Conversation
@nopara73 (Collaborator) commented May 21, 2019

Closes #1155

Motivation

There are 2 changes here.

Change 1: Multiply by 0.7

As described and implemented in issue #1155, we halved the coinjoin fee target based on the number of unconfirmed coinjoins. In this PR I multiply by 0.7 instead of halving. This may be somewhat controversial in itself, but I believe Change 2 justifies it.
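
For illustration, a minimal sketch of the adjustment; the names and the exact shape of the loop are my reading of #1155, not the production code:

```csharp
using System;

public static class FeeTargetAdjustment
{
    // Sketch: shrink the confirmation target as unconfirmed coinjoins pile up,
    // which raises the suggested feerate. Multiplying by 0.7 per unconfirmed
    // coinjoin replaces the earlier halving (0.5). Illustrative names only.
    public static int Adjust(int configuredTarget, int unconfirmedCoinJoinCount)
    {
        double target = configuredTarget;
        for (int i = 0; i < unconfirmedCoinJoinCount; i++)
        {
            target *= 0.7; // previously: target *= 0.5
        }
        return Math.Max(2, (int)Math.Round(target)); // assumed floor of 2 blocks
    }
}
```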

Change 2: Optimize CJ Fee Target

The problem with optimizing the CJ fee target when a round is created is that we don't yet know whether an unconfirmed coin will be registered in the round. Thus we need to optimize the fee target at the transaction-building phase.
Currently we have a fee optimization for the case where the fee for the fee target drops between round creation and final CJ propagation: we split the final fee difference among the base denomination's active outputs.
The optimization in this PR fits into that existing optimization so that not only the final fee is optimized, but also the final fee target, provided no unconfirmed output is spent.
In that case we don't use the higher, adjusted feerate; instead we fall back to the low feerate and pass it to the original fee optimization algorithm.
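
As a rough sketch, the decision at transaction-building time could look like this (hypothetical names; `FeeRate` is NBitcoin's feerate type):

```csharp
using NBitcoin;

public static class FeeTargetFallback
{
    // Sketch: if the coinjoin ends up spending only confirmed outputs, the
    // higher feerate that was adjusted for unconfirmed coinjoins is not
    // needed, so fall back to the feerate of the original (low) fee target
    // and hand it to the existing fee-splitting optimization.
    public static FeeRate ChooseFinalFeeRate(bool spendsUnconfirmed, FeeRate adjustedFeeRate, FeeRate originalFeeRate)
    {
        return spendsUnconfirmed ? adjustedFeeRate : originalFeeRate;
    }
}
```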

Change 2, Note 1

I don't actually send an RPC command for every CJ output to check confirmations; instead I send one RPC command (getrawmempool) that tells us all the unconfirmed txids. If there's an intersection between our CJ's transactions and those, then an unconfirmed tx is involved, so we cannot fall back.
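
A sketch of that check, assuming NBitcoin's `RPCClient.GetRawMempoolAsync` (a wrapper over Bitcoin Core's getrawmempool that returns the mempool txids); this is not the PR's actual code:

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using NBitcoin;
using NBitcoin.RPC;

public static class UnconfirmedCheck
{
    // Sketch: a single getrawmempool round trip instead of one RPC call per
    // output. If any transaction our coinjoin spends is still in the mempool,
    // the coinjoin would spend an unconfirmed output, so we cannot fall back.
    public static async Task<bool> SpendsUnconfirmedAsync(RPCClient rpc, IEnumerable<uint256> spentTxIds)
    {
        uint256[] mempoolTxIds = await rpc.GetRawMempoolAsync();
        var mempool = new HashSet<uint256>(mempoolTxIds);
        return spentTxIds.Any(txid => mempool.Contains(txid));
    }
}
```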

Change 2, Note 2

As I said, Change 2 also justifies Change 1, because Change 2 will result in a high number of unconfirmed coinjoins that aren't built on each other. Normally the coordinator doesn't know this and would end up suggesting high fees all the time. Even though Change 2 somewhat optimizes those fees away, we should lowball the fees more at the beginning, because Change 2 is not an individual fee optimization but rather a communist one: it doesn't take into account who should pay how much; it splits the difference among the active outputs equally.
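
A toy sketch of that equal split (`Money` is NBitcoin's amount type; the rounding behavior is an assumption, not confirmed from the PR):

```csharp
using NBitcoin;

public static class FeeSplit
{
    // Sketch: divide the fee surplus equally among the active outputs of the
    // base denomination, regardless of who registered which inputs. Integer
    // division leaves any remainder as extra fee (an assumed detail).
    public static Money RefundPerActiveOutput(Money feeSurplus, int activeOutputCount)
    {
        if (activeOutputCount <= 0)
        {
            return Money.Zero;
        }
        return Money.Satoshis(feeSurplus.Satoshi / activeOutputCount);
    }
}
```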

Side Effect

Another problem is that the base denomination is going down way too fast, sometimes resulting in overcharging (the would-be change falls under the 0.7% sanity limit) and often resulting in toxic change (the would-be change passes the 0.7% sanity limit). This PR will help slow the rate at which the base denomination is lowered.

nopara73 added some commits May 21, 2019

@nopara73 nopara73 requested a review from lontivero May 21, 2019

@nopara73 nopara73 added this to In Progress in v1.1.5 via automation May 21, 2019

@nopara73 (Collaborator, Author) commented May 21, 2019

@lontivero Can you review this PR? I think it's pretty straightforward; it took me longer to write the explanation than the actual code. :)

@lontivero (Contributor) left a comment:

Looks good to me.

nopara73 added some commits May 21, 2019

@nopara73 (Collaborator, Author) commented May 21, 2019

@lontivero I implemented what you requested about the configurability. I also figured I could make it even smarter and ensure the optimization happens even when the coinjoin would be spending unconfirmed transactions, based on the number of dependents.

Can you make sure to review this commit? 24fda31
And in that commit can you very carefully review the GetAllDependentsAsync RPC extension method?

@lontivero (Contributor) commented May 22, 2019

I've reviewed it and I think the algorithm is okay and makes a lot of sense. However, I think you can achieve the same by querying the MempoolEntry::AncestorCount property (it contains the length of the chain of unconfirmed transactions in the mempool for a given transaction).


I'll take the opportunity to share something about the RPC call. What worries me a bit is not the number of RPC calls it could perform but the frequency of those calls, because I know the Wasabi backend tends to stress the bitcoin node from time to time. That is clear from the log file you shared with me a while ago, where the node logs a few hundred of these warnings per day:

2019-01-05T18:19:20Z WARNING: request rejected because http work queue depth exceeded, it can be increased with the -rpcworkqueue= setting

That is because, by default, the Bitcoin Core RPC work queue can hold 16 work items, and sometimes the Wasabi backend makes requests faster than the node can process them. The point is that more RPC calls at high frequency could become a problem in the future. Anyway, as the warning says, the limit can be increased (Bitcoin Core doesn't enforce any upper limit on the value, but of course everything has a limit, right?).

I would suggest increasing the rpcworkqueue setting to 32 (or 64) as a first step and watching for those WARNING lines in the debug.log file (they could be considered errors from our point of view because the request is rejected).
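
For example, in bitcoin.conf (the same setting can be passed as a -rpcworkqueue=64 command-line argument):

```
# bitcoin.conf: raise the RPC work queue from the default of 16
rpcworkqueue=64
```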

@nopara73 (Collaborator, Author) commented May 22, 2019

I've reviewed it and I think the algorithm is okay and makes a lot of sense. However, I think you can achieve the same by querying the MempoolEntry::AncestorCount property (it contains the length of the chain of unconfirmed transactions in the mempool for a given transaction).

The problem with that is that we need the AncestorCount before we propagate the coinjoin, so we can only check the would-be inputs to the coinjoin, where the ancestor counts can contain duplicates.
For example:

commonAncestorTx - inputTx1
commonAncestorTx - inputTx2

Then we'd get an AncestorCount of 2 for both inputTx1 and inputTx2. If we just summed these up and deducted 2, we'd get 2 at the end, so commonAncestorTx would be counted twice, and there's no way to know whether it's the same ancestor unless we go through the dependents.
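
To illustrate the deduplication problem, here is a sketch that counts distinct unconfirmed ancestors by walking the mempool dependency graph, so a shared parent like commonAncestorTx is counted once. The `getDependsAsync` callback is hypothetical (imagine it wrapping the "depends" field of Bitcoin Core's getmempoolentry, which lists a transaction's in-mempool parents); it is not the PR's GetAllDependentsAsync extension method:

```csharp
using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using NBitcoin;

public static class AncestorWalk
{
    // Sketch: collect *distinct* unconfirmed ancestors of the would-be inputs
    // into a set, so a common ancestor is only counted once even when several
    // inputs descend from it.
    public static async Task<int> CountDistinctUnconfirmedAncestorsAsync(
        IEnumerable<uint256> unconfirmedInputTxIds,
        Func<uint256, Task<IEnumerable<uint256>>> getDependsAsync)
    {
        var visited = new HashSet<uint256>();
        var stack = new Stack<uint256>(unconfirmedInputTxIds);
        while (stack.Count > 0)
        {
            uint256 txid = stack.Pop();
            if (!visited.Add(txid))
            {
                continue; // shared ancestor: already counted
            }
            foreach (uint256 parent in await getDependsAsync(txid))
            {
                stack.Push(parent);
            }
        }
        return visited.Count; // includes the input txs themselves
    }
}
```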

I would suggest increasing the rpcworkqueue setting to 32 (or 64) as a first step and watching for those WARNING lines in the debug.log file (they could be considered errors from our point of view because the request is rejected).

I wasn't aware of that. ACK for 64. Adding it to the documentation in this PR.

@nopara73 nopara73 merged commit 49c8ce6 into zkSNACKs:master May 22, 2019

4 checks passed

CodeFactor: No issues found.
Wasabi.Linux #20190522.2: succeeded
Wasabi.Osx #20190522.2: succeeded
Wasabi.Windows #20190522.2: succeeded

v1.1.5 automation moved this from In Progress to Done May 22, 2019

@nopara73 nopara73 deleted the nopara73:feeopt branch May 22, 2019
