Too aggressive caching of transaction validation result? #1330
Comments
I think you are trying to send two transactions with the same sequence number. op_underfunded means that the sequence number was consumed for the given account and you have to use the next one. Please reopen if using the next sequence number didn't work.
@vogel but that's the thing: if it's a pre-authorized transaction, you can't change the sequence number.
In that case we need another solution, but it should be discussed on the stellar-protocol project.
With the current protocol you have to ensure that the transaction can be executed, or the sequence number will be consumed and the pre-authorized transaction will be useless. One solution is to create an account that holds all the needed funds and only allows a predefined set of transactions to be executed on it, with proper settings of thresholds and signers.
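The thresholds-and-signers pattern described above can be modeled roughly as follows. This is an illustrative sketch in plain Python, not Stellar SDK code; the names (meets_threshold, the weight values) are hypothetical.

```python
# Illustrative model (not Stellar SDK code): an account that only allows
# operations signed with enough combined signer weight.

def meets_threshold(signer_weights, signers, threshold):
    """Return True if the combined weight of `signers` reaches `threshold`."""
    total = sum(signer_weights.get(s, 0) for s in signers)
    return total >= threshold

# Escrow-style account: master key weight 0, two extra weight-1 signers,
# threshold 2 so a single weight-1 signer cannot act alone. A pre-authorized
# transaction hash can be added as a signer with enough weight on its own.
weights = {"master": 0, "signer_a": 1, "signer_b": 1, "preauth_tx_hash": 2}

assert not meets_threshold(weights, ["signer_a"], 2)          # too little weight
assert meets_threshold(weights, ["signer_a", "signer_b"], 2)  # enough combined
assert meets_threshold(weights, ["preauth_tx_hash"], 2)       # pre-auth tx signer
```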
@vogel just so that we're on the same page: even if the transaction fails to be applied, the account sequence number is still incremented?
It depends, but for all op_ errors, yes, the sequence number is incremented.
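The behavior described above can be illustrated with a toy model: once a transaction passes sequence checks, its sequence number is consumed even if an operation inside it later fails with an op_ error. The function and result-code handling here are hypothetical, not stellar-core internals.

```python
# Toy model: the sequence number is consumed on any op_-level outcome,
# success or failure; only pre-consumption rejections leave it untouched.

def apply_tx(account, tx_seq, op_succeeds):
    """Apply a one-operation transaction and return a result code."""
    if tx_seq != account["seq"] + 1:
        return "tx_bad_seq"          # rejected before consuming the sequence
    account["seq"] += 1              # sequence consumed from here on
    return "tx_success" if op_succeeds else "op_underfunded"

acct = {"seq": 10}
assert apply_tx(acct, 11, op_succeeds=False) == "op_underfunded"
assert acct["seq"] == 11                                      # consumed despite failure
assert apply_tx(acct, 11, op_succeeds=True) == "tx_bad_seq"   # same seq can't be reused
```

This is why a pre-authorized transaction with a fixed sequence number becomes useless after one failed attempt, as discussed above.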
In the following scenario, the transaction validation result caching seems to be too aggressive: a transaction fails with op_underfunded, and resubmitting it still returns op_underfunded even though the funds are present.
The same caching problem happens when the prepared tx can yield different results based on the signers of the tx. For instance, if one signer doesn't have enough weight and the tx fails, that failure is cached, and when the signer who does have enough weight tries to submit it, the failure is still returned.
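The signer-dependent caching problem can be sketched as follows. All names are hypothetical; the point is that keying a validation cache on the transaction alone is too coarse when the outcome depends on who signed it.

```python
# Sketch: caching a validation result by tx hash alone replays a stale
# failure even when a later submission carries sufficient signer weight.

cache = {}

def validate(tx_hash, signer_weight, threshold=2):
    if tx_hash in cache:             # too-aggressive cache: ignores signers
        return cache[tx_hash]
    result = "tx_success" if signer_weight >= threshold else "tx_bad_auth"
    cache[tx_hash] = result
    return result

assert validate("abc123", signer_weight=1) == "tx_bad_auth"   # weak signer fails
assert validate("abc123", signer_weight=3) == "tx_bad_auth"   # cached failure wins

# Keying the cache on the signature set as well avoids the stale result:
cache.clear()

def validate_by_signers(tx_hash, signer_weight, threshold=2):
    key = (tx_hash, signer_weight)
    if key not in cache:
        cache[key] = "tx_success" if signer_weight >= threshold else "tx_bad_auth"
    return cache[key]

assert validate_by_signers("abc123", 1) == "tx_bad_auth"
assert validate_by_signers("abc123", 3) == "tx_success"
```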
Here's a test case for the op_underfunded scenario:

How long would one have to wait right now for the cache entry to expire to work around this problem?
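For the expiry question above, what matters is the cache entry's time-to-live. Here is a minimal TTL cache sketch; the class, its API, and the 30-second lifetime are hypothetical, not stellar-core's actual eviction policy.

```python
import time

# Minimal TTL cache sketch: entries expire after `ttl` seconds, so a
# retried transaction is revalidated once the stale result ages out.

class TTLCache:
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}              # key -> (value, inserted_at)

    def put(self, key, value, now=None):
        now = time.monotonic() if now is None else now
        self.store[key] = (value, now)

    def get(self, key, now=None):
        now = time.monotonic() if now is None else now
        entry = self.store.get(key)
        if entry is None:
            return None
        value, inserted_at = entry
        if now - inserted_at > self.ttl:
            del self.store[key]      # expired: caller must revalidate
            return None
        return value

cache = TTLCache(ttl=30)
cache.put("txhash", "op_underfunded", now=0.0)
assert cache.get("txhash", now=10.0) == "op_underfunded"   # still cached
assert cache.get("txhash", now=40.0) is None               # expired, revalidate
```

Under a design like this, the wait to work around a stale entry would simply be the configured ttl; the issue asks what that value currently is.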