Fix RPC Batch Transactions: Process the rest of the transactions if one fails... #14578
Issue is with "sendmany" RPC call (I may not have been clear, sorry)
In both cases, the batch RPC call fails: one due to an invalid address, the other due to batching more than 10-15 transactions. You will have to forgive me; I lost the reference to the original article/issue where I read that batching too many transactions would cause a failure.
Expecting: One transaction in the batch to fail, the rest to execute normally
Actual: All transactions in the batch fail, causing programs that depend on them, such as NOMP, to fail as well, which means no payments to miners!
Using a fork of a fork of (probably a fork of) Bitcoin Core. This affects all forks of Bitcoin, literally all of them. Hopefully they will merge the fix to the batch RPC send.
Machine: Vultr VPS ($10/Mth - 1 CPU, 2048 MB RAM, 40GB disk)
I will make workarounds in NOMP. Then I will search for where I read about batch transactions and post a reference link here. You may already have an open issue (I checked briefly, but can't find one).
I tried to find the reference links, but I can't. I will keep searching, but I have bugs to fix in EasyNOMP.
You should be able to easily reproduce the two problems (both related to batch transactions)
You should get a failure on the entire batch of payments in both cases.
This is a security issue for pools, as an attacker can deny payment services by simply mining one or two shares to an invalid receiving address.
What should happen is that the daemon processes each transaction it can, and fails only the one transaction with the problem. Batch transactions should also support at least 150 transactions to be worth having a "batching" feature (think of how many transactions a pool would send!).
NOMP is also responsible, as it never had address validation prior to making payments. Address validation should be done on both sides. I will be patching this in NOMP shortly.
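Validating on the pool side can be done with Bitcoin Core's `validateaddress` RPC before any payment is attempted. A minimal sketch, assuming a generic `rpc_call(method, params)` JSON-RPC client (the client itself is a stand-in, not part of NOMP or Bitcoin Core):

```python
# Sketch: screen payout addresses with `validateaddress` before sendmany.
# `rpc_call` is a placeholder for whatever JSON-RPC client the pool uses.

def filter_valid_payouts(rpc_call, payouts):
    """Split {address: amount} into (valid, invalid) via validateaddress."""
    valid, invalid = {}, {}
    for address, amount in payouts.items():
        result = rpc_call("validateaddress", [address])
        if result.get("isvalid"):
            valid[address] = amount
        else:
            invalid[address] = amount
    return valid, invalid
```

The invalid map can then be logged and excluded, so one bad miner address never blocks everyone else's payment.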
The limit is due to the maximum size of Bitcoin transactions, and depends on the number of inputs your transaction has. Certainly 150 outputs is not a problem in general, but if you need thousands of inputs to pay that much, then yes. Sorry, nothing Bitcoin Core can do about that.
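To make the size argument concrete, here is rough back-of-the-envelope arithmetic using approximate legacy P2PKH sizes (~148 bytes per input, ~34 bytes per output) and Bitcoin Core's ~100,000-byte standardness limit. These are assumptions for illustration, not exact consensus constants:

```python
# Rough estimate of why the practical cap is driven by inputs, not outputs.
P2PKH_INPUT_BYTES = 148    # approx. size of one legacy P2PKH input
P2PKH_OUTPUT_BYTES = 34    # approx. size of one legacy P2PKH output
TX_OVERHEAD_BYTES = 10     # version, locktime, counts (approx.)
MAX_STANDARD_TX_BYTES = 100_000

def estimated_tx_bytes(n_inputs, n_outputs):
    return (TX_OVERHEAD_BYTES
            + n_inputs * P2PKH_INPUT_BYTES
            + n_outputs * P2PKH_OUTPUT_BYTES)

def max_outputs(n_inputs):
    """Approximate output count that still fits in a standard transaction."""
    remaining = (MAX_STANDARD_TX_BYTES - TX_OVERHEAD_BYTES
                 - n_inputs * P2PKH_INPUT_BYTES)
    return max(remaining // P2PKH_OUTPUT_BYTES, 0)
```

With a handful of inputs, thousands of outputs fit easily; with hundreds of dusty inputs, the budget evaporates, which matches sipa's point.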
Batches are not atomic. They're executed one by one. Changing that would require a major redesign of how the RPC system works.
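"Executed one by one" can be illustrated with a minimal server-side loop over a JSON-RPC batch: each request gets its own result or error object, so one failing entry does not abort the rest. This is a generic sketch of JSON-RPC batch semantics, not Bitcoin Core's actual dispatch code:

```python
# Each batch entry is handled independently and gets its own result/error.
def handle_batch(requests, methods):
    responses = []
    for req in requests:
        try:
            result = methods[req["method"]](*req.get("params", []))
            responses.append({"id": req["id"], "result": result, "error": None})
        except Exception as exc:
            responses.append({"id": req["id"], "result": None,
                              "error": {"code": -1, "message": str(exc)}})
    return responses
```

Note the distinction this thread later arrives at: the batch wrapper already continues past failures; it is a single `sendmany` call that is atomic.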
Perhaps that is reasonable for
Really, I think there is no solution except validating your inputs.
@sipa - thank you for your speedy reply!
So what is the upper limit on how many transactions can fit into one batch RPC call? I need to know so I can manually batch my transactions...
I meant to say "you should NOT get a failure on the entire batch, it should process the other transactions in the batch and return the failed transaction in the JSON return value"
Example of 'sendmany' partial failure response:
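The original example did not survive in this copy of the thread. A hypothetical shape for such a partial-failure response (this is NOT an existing Bitcoin Core format, just an illustration of what the commenter is asking for) might look like:

```json
{
  "txid": "4a5e1e4b...",
  "sent": { "1ValidMinerAddr...": 0.05 },
  "failed": {
    "1BadMinerAddr...": { "amount": 0.02, "error": "Invalid address" }
  }
}
```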
I must be unaware of how the RPC system works and ties into other systems. Was it intentionally designed so that the entire batch fails if one single address is wrong?
So should I just switch it back to send one single transaction at a time then? Or is there a way to tell which transaction is the one that failed the whole batch?
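Falling back to one call per miner is straightforward to sketch: each payment is attempted independently and failures are captured per address. As above, `rpc_call` is a placeholder client assumed to raise an exception on RPC errors:

```python
# Sketch: one sendmany call per miner, so a single bad address only
# fails its own payment instead of the whole batch.
def pay_individually(rpc_call, payouts):
    """Return ({address: txid}, {address: error_message})."""
    sent, failed = {}, {}
    for address, amount in payouts.items():
        try:
            # sendmany(dummy_account, {address: amount})
            sent[address] = rpc_call("sendmany", ["", {address: amount}])
        except Exception as exc:
            failed[address] = str(exc)
    return sent, failed
```

The trade-off is more transactions (and more fees) than a single multi-output `sendmany`, which is exactly why the batching question matters for pools.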
@sipa I am using the sendmany RPC call. I think there was some confusion; I am sorry if I was not clear about that.
I just looked at the RPC page for sendmany http://chainquery.com/bitcoin-api/sendmany
I see what you mean: the original return value is only a transaction ID, and changing it to a JSON result would break every single program that is set up to expect a transaction ID as the result.
sendmany return information:
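The return section did not survive in this copy; from the Bitcoin Core `sendmany` help text, the result is (paraphrased):

```
Result:
"txid"    (string) The transaction id for the send. Only 1 transaction is
          created regardless of the number of addresses.
```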
The thing is, this is why there is such a thing as a "Hard Fork" and a "Soft Fork".
One way to make it work without breaking compatibility is to bring in the new command in stages and deprecate the old one over time...
Example: sendmany operates as normal until its deprecation date, sendmany-new operates the new way until sendmany's deprecation date, and then sendmany-new gets its own deprecation date later on...
That gives backwards compatibility, notice, and time for developers to switch over to the new version of sendmany, which has better return information telling them which tx failed and why. It also allows fixing the issues with sendmany failing in the two specific instances in the OP.
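The staged-deprecation idea can be sketched as a dispatch table with per-method cutoff dates. This is purely illustrative (Bitcoin Core's RPC table and deprecation mechanism work differently; the handlers and dates here are invented):

```python
# Illustrative sketch of staged deprecation: old and new commands coexist,
# and the legacy name stops being served after its cutoff date.
from datetime import date

def legacy_sendmany(params):
    return "txid-only"                             # old behaviour: bare txid

def sendmany_v2(params):
    return {"txid": "txid-only", "failed": {}}     # richer result object

RPC_TABLE = {
    "sendmany":     {"handler": legacy_sendmany,
                     "deprecated_after": date(2019, 6, 1)},   # invented date
    "sendmany-new": {"handler": sendmany_v2,
                     "deprecated_after": None},
}

def dispatch(name, params, today):
    entry = RPC_TABLE[name]
    cutoff = entry["deprecated_after"]
    if cutoff is not None and today > cutoff:
        raise KeyError(f"{name} was deprecated on {cutoff.isoformat()}")
    return entry["handler"](params)
```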
Is this not possible?
P.S. I believe I have patched EasyNOMP; I am just testing it now.
I was having trouble understanding this issue, because I thought it was referring to JSON-RPC batch requests, but it sounds like this is just talking about calling the sendmany RPC.
In #14578 (comment), leshacat seems to be asking for sendmany to process the outputs it can and report only the failed ones.
Maybe in the future it would be good to have a note in the sendmany help text with something to clarify that it only makes a single transaction.
@leshacat, I would agree with sipa that "there is no solution except validating your inputs." If your application isn't checking that addresses are valid as soon as it receives them, and is instead failing at the point of trying to send a payment, that would only make invalid-address issues harder to debug while also making your payment code more complex...
Re: #14578 (comment), backwards compatibility is not a major concern here. There are many ways we can extend APIs while remaining backwards compatible, and it would make sense to first figure out what the ideal API would actually look like before getting into the weeds of backwards compatibility.
@ryanofsky yes, I am getting the feeling the problem is not too many transactions in the batch, but too many inputs and outputs in the transaction. So the posting I read must have been someone midway through tracking down the cause of the issue.
If this is an issue of too many inputs/outputs rather than too many transactions in the batch, I will work on this from my end. I think I found a fix for it in z-nomp. This adds context to why you cannot simply change this.
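If the real constraint is transaction size, one pool-side mitigation is to split a large payout map into several smaller `sendmany` calls. A minimal sketch, assuming a chunk size chosen by the operator (the 150-output figure is just the number from this thread, not a protocol constant):

```python
# Sketch: split a large {address: amount} payout map into chunks so each
# sendmany call stays under a chosen output count.
def chunk_payouts(payouts, max_outputs=150):
    """Yield {address: amount} dicts with at most max_outputs entries each."""
    items = list(payouts.items())
    for i in range(0, len(items), max_outputs):
        yield dict(items[i:i + max_outputs])
```

Each chunk can then be sent (and retried or reported) independently.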
Either way, I intended to work on it from my end; I only raised the issue because I believe validation should be done on both sides, and that batch processing should support large batches on both sides as well.
Thanks for the quick replies, and sorry for consuming resources and time from your busy days :) Obviously thanks for BTC :)