
Transaction confirmation API #486

Closed
theoreticalbts opened this issue Oct 4, 2016 · 23 comments

Comments

@theoreticalbts
Contributor

The crash bug in #355 is due to the confirmation notification code. We currently notify clients when a transaction is confirmed. There are several related issues to consider:

  • The current implementation suffers from at least one crash bug, Crash in network_broadcast_api::on_applied_block #355.
  • To ease certain scaling / load-balancing problems in the steemit.com infrastructure, we want to eliminate server-push notifications wherever possible, and prefer request/response.
  • The full state transition model of a transaction is not adequately captured by the current API.
  • The CLI wallet and the web node depend on the current notifications, and any update to the API will have to be coordinated with these components (essentially we must add a new API, deprecate the old API but keep both APIs active while consumers are upgraded, then remove the old API).
@theoreticalbts theoreticalbts self-assigned this Oct 4, 2016
@theoreticalbts
Contributor Author

Let's talk about the state transition model. In general, a transaction can be in one of the following states:

  • (S1) Failed initial push to local node
  • (S2) Accepted by the local node
  • (S3) Invalid by the local node
  • (S4) Included in a block
  • (S5) Expired
  • (S6) TaPoS failed

Each of these requires separate analysis.

  • (S1) A transaction which fails its initial push to the local node can only be included if it is broadcast to some other node. If (a) the client is the originator of the transaction, (b) the client never shares / shared the transaction with any other node, (c) the client never re-submits the transaction, then a transaction which fails in this way cannot be included.
  • (S2) A transaction which is accepted by the local node may or may not be included. The transaction may expire before being included by a witness, some other transaction may get into the blockchain first which invalidates the transaction (i.e. if two different transactions transfer the same account's entire balance, each may be accepted by the local node where it is submitted, but only one will make it into the chain), or a fork may occur (e.g. invalidating TaPoS or other state).
  • (S3) A transaction which is invalid by the local node may or may not be included. E.g. if Alice has 1 STEEM and there are three transactions (T1) Alice -> Bob, (T2) Alice -> Charlie, (T3) Dan -> Alice, then a client may push T1 which is initially accepted, then the node receives a block containing T2 and reports T1 has been invalidated, only to later receive another block containing T3 followed by T1, so T1 has gone from being accepted, to invalid, to included.
  • (S4) A transaction which is included in a block may be un-included if the blockchain forks. In general, it may then end up following any path through any of the other possible states on the new fork.
  • (S5) A transaction which is expired may be un-expired if the blockchain forks. In general, it may then end up following any path through any of the other possible states on the new fork.
  • (S6) A transaction which fails TaPoS may un-fail TaPoS if the blockchain forks. In general, it may then end up following any path through any of the other possible states on the new fork.

So basically a transaction's state cannot be considered finalized until its inclusion, expiration, or TaPoS invalidation has occurred and has passed the last irreversible block.
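The lifecycle above can be condensed into a small state model. The following is an illustrative sketch only; the enum and function names are mine, not actual steemd types.

```cpp
#include <cstdint>

// Illustrative names only -- these are not actual steemd types.
enum class tx_state
{
   failed_initial_push,   // S1
   accepted_by_node,      // S2
   invalid_by_node,       // S3
   included_in_block,     // S4
   expired,               // S5
   tapos_failed           // S6
};

// A transaction's state is final only once its inclusion (S4), expiration
// (S5), or TaPoS failure (S6) occurred at or before the last irreversible
// block; S1-S3 can always change on a fork or a re-broadcast.
inline bool is_final( tx_state s, uint32_t event_block, uint32_t last_irreversible_block )
{
   switch( s )
   {
      case tx_state::included_in_block:
      case tx_state::expired:
      case tx_state::tapos_failed:
         return event_block <= last_irreversible_block;
      default:
         return false;
   }
}
```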

@theoreticalbts
Contributor Author

The reason I'm talking about this is because we need to precisely define, in terms of the actual transaction lifecycle through the above states S1-S6, exactly (1) what semantics the current API provides, and (2) what semantics API consumers need / expect for particular applications.

@theoreticalbts
Contributor Author

The first question -- what semantics the current API provides -- is by far the simpler question to answer. The current API simply provides an error when a transaction is immediately rejected (S1), success when a transaction is initially accepted (S2), and a callback (server-push) notification the first time a transaction enters (S4) -- i.e. is included in a block -- from any other state.

The current API does not provide any notification of any other state transition. Clients have no way of being notified of any other state transitions -- there is no way to process forking [1], expiration [2], and non-immediate rejection/acceptance.

[1] Forking can in principle be handled by delayed node, but I'm not sure if any client actually does this.

[2] Expiration can in principle be handled by client-side logic, but I'm not sure if any client actually does this. In particular, it seems a bit much to ask clients to handle the tricky issue of a fork potentially un-expiring expired transactions.

@theoreticalbts
Contributor Author

theoreticalbts commented Oct 4, 2016

The simplest possible API which we can provide is to simply allow a client to issue a polling query like this:

check_transaction_state( txid, ref_block_num, ref_block_prefix, expiration )

This call will return a JSON data structure including:

  • Which state S1-S6 the transaction is currently in
  • An is_irreversible flag which, if true, means the transaction can never enter a different state

Whenever a client submits a transaction, the client would then make the polling call every 3s or so until the tx enters some state irreversibly. The downside is that having a large number of polling calls for every transaction greatly increases the call load on the server for steemit.com.

Since the polling call's implementation wouldn't require full blockchain state, but rather only the txid's of recent blocks and the last irreversible block number, perhaps the best option is an out-of-process implementation which can easily be scaled up. An out-of-process implementation might also be able to provide a more event-driven API than pure polling, such as long polling or websocket notification.
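A client driver for such a polling call might look like the following sketch. The response shape and the check_transaction_state call are assumptions based on the proposal above, not an existing API.

```cpp
#include <functional>
#include <string>

// Hypothetical response shape for the proposed check_transaction_state call.
struct tx_state_result
{
   std::string state;               // "S1" .. "S6"
   bool        is_irreversible = false;
};

// Poll until the transaction enters some state irreversibly, giving up
// after max_polls attempts.  The caller supplies the query function; a real
// client would issue the JSON-RPC call every ~3 seconds.
inline tx_state_result poll_until_final(
   const std::function< tx_state_result() >& check_transaction_state,
   int max_polls )
{
   tx_state_result r;
   for( int i = 0; i < max_polls; ++i )
   {
      r = check_transaction_state();
      if( r.is_irreversible )
         break;
      // in a real client: sleep ~3s here between polls
   }
   return r;
}
```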

Thoughts?

@theoreticalbts
Contributor Author

theoreticalbts commented Oct 4, 2016

I had a conversation with @bytemaster about this, and he had some good insights, which I'll attempt to summarize.

A better breakdown of states would be: Initially rejected (R0), tentatively rejected (TR), tentatively confirmed (TC), irreversibly rejected (IR), irreversibly confirmed (IC).

A transaction which is IR or IC cannot change state, a transaction which is TR or TC can change state, a transaction which is R0 cannot change state for practical purposes [1].

[1] It could change state if the originator submitted it to a different node in the past, or will submit it to a different node in the future. Basically you treat R0 as IR if you trust the originator not to do multiple submissions [2], and TR if you don't.

[2] Clients can trust the originator to have this property for transactions they originated, as long as they connect to a trusted server and throw away tx's they submit which get an R0 response.
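As a sketch, the finality rule of this breakdown, including the trust-dependent treatment of R0 from footnote [1], can be written as:

```cpp
// Illustrative names only -- not actual steemd types.
enum class confirmation_state { R0, TR, TC, IR, IC };

// IR and IC can never change; TR and TC always can; R0 is final for
// practical purposes only if we trust the originator never to submit the
// transaction to another node.
inline bool may_change_state( confirmation_state s, bool trust_originator )
{
   switch( s )
   {
      case confirmation_state::IR:
      case confirmation_state::IC:
         return false;
      case confirmation_state::R0:
         return !trust_originator;
      default:                        // TR, TC
         return true;
   }
}
```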

@theoreticalbts
Contributor Author

Transaction polling

@theoreticalbts
Contributor Author

theoreticalbts commented Nov 27, 2017

I haven't really written any code for this, releasing my assignment.

Implementing this may help with "SMT creators use TaPoS to ensure SMT creation is atomic" (#1795), and may also help steemit.com and third-party apps create an improved UI for tracking and dealing with expired transactions.

@mvandeberg
Contributor

For practical purposes, there needs to be a 6th transaction state Unknown (U) that represents a transaction that is not initially rejected, tentatively rejected, nor irreversibly rejected, and has not yet been included in a block. This state represents the case when the transaction may need to be rebroadcast.

The API call should accept the transaction and return an enum of these states.

This call should live in network_broadcast_api.

@theoreticalbts
Contributor Author

there needs to be a 6th transaction state Unknown (U) that represents a transaction that is not initially rejected, tentatively rejected, nor irreversibly rejected

This depends on how the API is designed. I originally envisioned the transaction query states to be return values from the broadcast API. That gives the semantics "if the transaction is in state U, broadcast it if you don't know about it," which means the RPC caller can never observe state U: by the time the call returns, the transaction has already moved to accepted or to failed local push. If you want to allow a separate query-only operation which asks "what is the status of transaction <txid>?", that's fine, but then the caller must also provide TaPoS and expiration if you want to be able to tell the client that the transaction has expired or failed TaPoS.

@mvandeberg
Contributor

Ok, I see. You were assuming that this would be an additional return type to broadcast_transaction. Currently, the return type is void, so it would require creating a new return type.

I envisioned creating a new API method so that the semantics of the original method remain unchanged. They would otherwise change because a failure to push the transaction currently results in an exception being thrown, which is returned as a jsonrpc error message; instead, we would return the state. I am fine with your approach so long as the details of an error are still returned. Knowing why an op is rejected is critically important to the use of this API.

I propose the following return struct.

struct broadcast_transaction_return
{
   transaction_state state;
   optional< fc::exception > error;
};

To support this API, we need to record locally when each block is included (to determine irreversibility). Currently, we only track the set of unexpired included transactions. That needs to be expanded to the entire 64k range of TaPoS blocks. With the existing transaction_object definition, this is quite expensive due to the inclusion of the packed transaction. However, the packed_trx field is not needed. It is used in database::get_recent_transaction, which is called by the p2p code. However, if the transaction_object records the block in which the transaction is included, and perhaps the transaction's position in that block, we can look it up via get_block. This is an addition of 8 bytes to the object, which is smaller than our smallest transaction.

This should decrease reindex times by a decent amount, because packed_trx is transitory state and is dynamically allocated. I imagine these savings more than make up for the added costs (both memory and computation) of keeping an index of all transaction IDs for the last 64k blocks, with the including block number in each entry.

My recommendation for the transaction_object:

class transaction_object : public object< transaction_object_type, transaction_object >
{
   public:
      template< typename Constructor, typename Allocator >
      transaction_object( Constructor&& c, allocator< Allocator > a )
      {
         c( *this );
      }

      transaction_object() {}

      id_type              id;

      transaction_id_type  trx_id;
      uint32_t             block;
      uint32_t             trx_in_block;
};

The refactor of the transaction_object and use in the p2p code should be done in another issue that blocks this issue.

@theoreticalbts
Contributor Author

So I thought some more about this, and there are some subtle problems we have to consider. The only things the state database can tell you are:

  • The current head block / LIB
  • What block the transaction was included in [1]
  • If the transaction is in the transaction pool

In particular, given an architecture of clients making stateless calls to random nodes, the only way to distinguish between failed transactions and transactions that we don't know about is during the initial broadcast of a transaction. There is no way to detect a transaction that becomes invalidated due to a block (i.e. the example where transfer Alice -> Bob becomes invalid when transfer Alice -> Charlie is included in a block).

So the only information we can give to clients is:

  • We can report an exception (as we currently do) if the transaction fails its initial push.
  • We can report whether a transaction is / isn't included in the pool or a block.
  • We can report whether a transaction included in a block is before LIB (and is therefore irreversible).
  • We can report whether a transaction not currently included in a block could be included (not considering forks) by comparing its expiration time to the head block time (HBT).
  • We can report whether a transaction not currently included in a block could be included (considering forks) by comparing expiration time to LIB time.

[1] The state database doesn't track transactions outside the TaPoS window. We may want to have logic in Jussi to query Steemd for blocks in the TaPoS window, and also query SBDS for historical blocks.

[2] I think this ticket may predate our decision that all future APIs should be stateless.
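Putting the above together, the full set of statuses a stateless node could report might be derived as in this sketch; the struct fields and status strings are illustrative, not a proposed wire format.

```cpp
#include <cstdint>
#include <string>

// Only the facts the state database actually knows, plus the TaPoS and
// expiration supplied by the caller.  Illustrative shape.
struct tx_facts
{
   bool     included        = false;
   uint32_t included_block  = 0;
   bool     in_pool         = false;
   uint32_t lib             = 0;   // last irreversible block number
   int64_t  expiration      = 0;   // tx expiration, unix seconds
   int64_t  head_block_time = 0;
   int64_t  lib_time        = 0;   // timestamp of the LIB
};

inline std::string classify( const tx_facts& f )
{
   if( f.included )
      return f.included_block <= f.lib ? "irreversibly_included" : "included";
   if( f.in_pool )
      return "pending";
   if( f.expiration <= f.lib_time )
      return "irreversibly_expired";  // cannot be included even across forks
   if( f.expiration <= f.head_block_time )
      return "expired_on_this_fork";  // a fork could still un-expire it
   return "unknown";                  // may need re-broadcast
}
```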

@theoreticalbts
Contributor Author

Currently, we only track the set of unexpired included transactions. That needs to be expanded to the entire 64k range of TaPoS blocks.

Let's not change the dupe checking code. If the dupe checking tx cache isn't sufficient for the functionality we want to implement, let's put the additional needed objects in a plugin.

@mvandeberg
Contributor

Let's not change the dupe checking code. If the dupe checking tx cache isn't sufficient for the functionality we want to implement, let's put the additional needed objects in a plugin.

This does not change the fact that the current implementation can be modified to reduce reindex time and state size.

I think this ticket may predate our decision that all future API's should be stateless.

If we do not perform writes, then it is stateless. My original idea for the API was to simply query the current state without actually pushing the transaction. Perhaps it is best to go back to the idea of separate push and query calls.

@theoreticalbts
Contributor Author

Let's consider how you're going to use this information if you're a client.

I picture the client will implement:

  • (a) An error message when a transaction is declined
  • (b) Some alert or UI update which occurs when a transaction is included in a block or becomes irreversible
  • (c) Some alert or UI update which occurs when a transaction disappears from the pool without being included in a block, or disappears from a block due to a fork
  • (d) A list display of submitted transactions and their status
  • (e) A function to update expiration / TaPoS, re-sign, and re-submit a failed transaction, atomically (meaning the old and new versions of the transaction cannot both be accepted, even in the presence of forking)
  • (f) Combining the query with a query to some blockchain service.

The client implements a transaction broadcast, followed by a query poll.

  • (a) Check for an exception when the transaction is submitted.
  • (b-c) Updates are generated locally according to changes in the polling result
  • (d) This is simply a view of the internal data structure which supports other features.
  • (e) This is a very subtle problem, I'll discuss it separately.
  • (f) This is a very subtle problem, I'll discuss it separately.

Re-submission algorithm

The problem here is that the client must set TaPoS so that the old/new transactions cannot both be included. If we want to support multiple re-submissions, we have to consider the possibility that there are multiple old transactions, and the new transaction must be newer than any of them.

This means the new transaction's TaPoS must refer to a block later than the expiration of every old transaction, on a chain which does not include any of the old transactions. So in order for the client to provide this kind of atomicity, it must atomically query a list of transactions. The query returns a report that each transaction in the list is (or is not) included, together with the block ID and block number of some block whose timestamp is greater than the maximum expiration among the list.

So how does the resubmission actually work?

  • A client queries all old versions of the transaction in a single list.
  • If the return value indicates any of the old versions is included, it declines to sign.
  • If the return value indicates none of the old versions is included, it sets TaPoS based on the returned block number / ID.
  • If the returned block ID is block_id_type() (all zeros), then some old transaction is still not expired (according to the head block time of the node that answered the query), so the client declines to sign.

So now let's discuss how the query needs to compute the returned block number / ID. The only requirement is that the returned block is newer than the expiration of every transaction. (If the expiration of some transaction is still in the future according to head block time, then you return a block ID of all zeros.) It suffices to return the head block, but this is subject to forks. I suggest the algorithm returns whichever is later: the LIB, or the first block newer than the expiration of every transaction.
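The client-side decision in the re-submission algorithm can be sketched as follows; the report struct is an assumption about what the atomic list query would return, not an existing type.

```cpp
#include <array>
#include <cstdint>

using block_id_type = std::array< uint8_t, 20 >;   // all zeros == "not safe yet"

// Hypothetical result of atomically querying all old versions of the tx.
struct old_versions_report
{
   bool          any_included  = false;  // did any old version get included?
   uint32_t      ref_block_num = 0;      // block to base the new TaPoS on
   block_id_type ref_block_id{};         // all zeros if some old tx is unexpired
};

enum class resubmit_decision { sign_with_new_tapos, decline };

inline resubmit_decision decide_resubmit( const old_versions_report& r )
{
   if( r.any_included )
      return resubmit_decision::decline;  // an old version already succeeded
   if( r.ref_block_id == block_id_type{} )
      return resubmit_decision::decline;  // some old tx not yet expired
   return resubmit_decision::sign_with_new_tapos;
}
```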

Combining algorithm

As discussed above, we probably want to implement a combining API which combines the result of a historical query (to SBDS, say, which indexes irreversible blocks) with the result of a current query (to a steemd node, which only indexes very recent blocks). The combining API implementation needs to know that:

  • The historical query chain is a prefix of the current query chain
  • There is no gap between the last block tracked by the historical query chain and the first block tracked by the current query chain

So to implement this, what do we need?

  • The steemd query needs to provide a field to assert that a particular block in the query chain exists and is equal to or later than the earliest block that has transaction indexing available.
  • The historical query needs to provide its head block along with its result.

The combining query fills the steemd query field from the head block of the historical query. Note that because of atomicity concerns, we cannot implement these as separate requests! Here's one possible failure mode:

  • The history node is at block 1000.
  • The history node says that a transaction was not included.
  • The transaction is included in block 1001.
  • The history node advances to block 1002.
  • The history node says that it's at block 1002.
  • The steemd node says the transaction wasn't included after block 1002.
  • The combining query now thinks the transaction wasn't included at/before 1002 and wasn't included after block 1002, so it incorrectly reports to its client that the transaction wasn't included.

This failure mode can be avoided if the history node returns its current block number as part of the same API call, instead of in a separate call. Similar reasoning shows the steemd query must behave the same way.
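The soundness condition for the combining query can be sketched as a check that the two half-answers actually cover the chain with no gap: the block the steemd query asserts must be the head block reported in the same atomic historical response. Names and shapes here are illustrative.

```cpp
#include <cstdint>

// Illustrative shapes for the two half-answers of the combining query.
struct historical_answer { bool included = false; uint32_t head_block = 0; };
struct steemd_answer     { bool included_after = false; uint32_t asserted_block = 0; };

// "Not included" is a sound combined conclusion only when neither half saw
// the transaction AND the steemd half asserted exactly the historical head
// block, so no block can fall into a gap between the two queries.
inline bool soundly_not_included( const historical_answer& h, const steemd_answer& s )
{
   return !h.included && !s.included_after && s.asserted_block == h.head_block;
}
```

In the failure mode above, the history node answered at head block 1000 but the steemd node asserted block 1002; the check rejects that combination instead of incorrectly reporting "not included."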

@mvandeberg
Contributor

#2309 can serve as the transaction ID store. That, combined with the transaction IDs tracked for duplicate-transaction checking, should be sufficient to return the needed data.

@theoreticalbts
Contributor Author

The above discussion sheds some light on how many transactions we need to track. We don't need to handle the entire 64k TaPoS window, we only need to handle LIB plus some threshold which is almost always greater than the maximum lag of the historical node. I suggest double irreversibility would make a good threshold: once a transaction was already irreversible as of the LIB block, you can forget about it for the purposes of this query [1].

How would this be implemented?

  • Create an "LIB height record" which records, for each block height, what the LIB was as of that block.
  • When the LIB advances, set the DIB (doubly irreversible block) to the LIB height record value indexed by the new LIB height.
  • Delete all LIB height records at/before the LIB.
  • Delete all transaction records at/before the DIB.

[1] The set of queryable transactions is not a subset of the transactions tracked for duplicate purposes.

Since irreversibility need not advance (for example, one witness producing on its own chain while all the other witnesses are "missing"), it is possible for transactions to fall out of the TaPoS window without becoming unqueryable.
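The LIB-height-record bookkeeping described above might be sketched as follows; dib_tracker and its members are illustrative names, not existing steemd code.

```cpp
#include <cstdint>
#include <map>

// Sketch of the "LIB height record" pruning scheme.  Illustrative names only.
struct dib_tracker
{
   std::map< uint32_t, uint32_t > lib_at_height;  // block height -> LIB as of that height
   uint32_t dib = 0;                              // doubly irreversible block

   // Record, for each produced block height, what the LIB was at that time.
   void record( uint32_t height, uint32_t lib )
   {
      lib_at_height[ height ] = lib;
   }

   // When the LIB advances: the DIB becomes the LIB height record indexed by
   // the new LIB, then records at/before the LIB are dropped.  Transaction
   // records at/before the DIB could then be deleted as well (not shown).
   void on_lib_advance( uint32_t lib )
   {
      auto it = lib_at_height.find( lib );
      if( it != lib_at_height.end() )
         dib = it->second;
      lib_at_height.erase( lib_at_height.begin(), lib_at_height.upper_bound( lib ) );
   }
};
```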

@mvandeberg
Contributor

I agree. A store that tracks double irreversibility combined with the irreversibility store of account history provides complete transaction history for the purposes of this API.

The question we still need to answer is where should this live?

We are trying to keep things out of steemd that don't need to be in steemd, but I see value in having basic functionality with just steemd and a wallet. If we move account history to SBDS and require querying SBDS to confirm a transaction, then an exchange either needs to query api.steemit.com or run its own instance of SBDS.

@theoreticalbts
Contributor Author

theoreticalbts commented May 8, 2018

I'm not averse to adding an option to index transactions by ID since the beginning of time. If you run an application that either doesn't care about older transactions, or uses an external solution for indexing them (e.g. SBDS), you can turn off the option and have smaller state size by limiting yourself to non-doubly-irreversible transactions.

An alternative approach is to have a Docker container with pre-configured steemd and SBDS. And/or maybe a tutorial on how to manually set up SBDS, for those who won't / can't use Docker.

@mvandeberg
Contributor

There seems to be a desire for indexing transactions by ID. #2309

@mvandeberg
Contributor

We are going to save transactions locally through double irreversibility plus 2400 blocks (2x the maximum expiration, measured in blocks).

This should provide an ample buffer within which the key query for irreversible inclusion/exclusion can be answered.

The purpose of this data store is to answer this query for current transactions, not for historical transactions. get_transaction by an ID for historical transactions should be handled by a more appropriate data store such as SBDS.

@roadscape
Contributor

Is the last comment implying #2309 will not be fixed? Could the buffer be configurable so that no purging happens for nodes which need it?

@mvandeberg
Contributor

This issue isn't excluding #2309, but I would like to know a compelling reason for get_transaction to live in steemd as opposed to another service?

@youkaicountry
Contributor

Closing, development issue #2458 opened.

On1x pushed a commit to VIZ-Blockchain/viz-cpp-node that referenced this issue May 28, 2018