Light Client transaction queue, initial LightDispatcher #4501
Conversation
    })
);
WithToken::No(signed)
hardware_signature(&*accounts, address, t, network_id).map(WithToken::No)
How is this related to the light client?
Just factoring out common code shared between the FullDispatcher and the LightDispatcher. The dispatcher's role is to fill optional fields of transaction requests, fulfill signer requests, and dispatch transactions onto the network.
(the reason for the split is that the light node must fetch next nonce, expected gas price, estimated gas from the network, while a full node has all this information available locally)
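A rough sketch of that split, using hypothetical, heavily simplified types (u64 stands in for Address and U256; the real trait in rpc/src/v1/helpers/dispatch.rs is richer and asynchronous):

```rust
use std::collections::HashMap;

// Simplified stand-ins for illustration only.
#[derive(Clone, Debug)]
struct TransactionRequest {
    from: u64,
    nonce: Option<u64>,     // fields the dispatcher must fill if absent
    gas_price: Option<u64>,
}

#[derive(Clone, Debug, PartialEq)]
struct FilledRequest {
    from: u64,
    nonce: u64,
    gas_price: u64,
}

// The shared interface both dispatchers implement: fill in whatever
// optional fields the request left blank.
trait Dispatcher {
    fn fill_optional_fields(&self, req: TransactionRequest) -> FilledRequest;
}

// A full node answers from local state...
struct FullDispatcher {
    nonces: HashMap<u64, u64>,
    gas_price: u64,
}

impl Dispatcher for FullDispatcher {
    fn fill_optional_fields(&self, req: TransactionRequest) -> FilledRequest {
        FilledRequest {
            from: req.from,
            nonce: req.nonce
                .unwrap_or_else(|| self.nonces.get(&req.from).copied().unwrap_or(0)),
            gas_price: req.gas_price.unwrap_or(self.gas_price),
        }
    }
}
// ...while a LightDispatcher would implement the same trait but fetch
// the nonce and gas price from the network before answering.
```
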
};

if accounts.is_hardware_address(address) {
    return hardware_signature(&*accounts, address, t, network_id).map(WithToken::No)
@NikVolf here you can see the invocation of the hardware_signature bit from the light dispatcher.
rpc/src/v1/helpers/dispatch.rs
Outdated
{
    let network_id = None; // TODO: fetch from client.
    let address = filled.from;
    let best_header = self.client.block_header(BlockId::Latest)
Seeing a lot of the same expectations over and over again; worth DRYing them up into a common function?
ethcore/light/src/client/mod.rs
Outdated
use ethcore::blockchain_info::BlockChainInfo;
use ethcore::spec::Spec;
use ethcore::service::ClientIoMessage;
use ethcore::encoded;
use io::IoChannel;

use util::hash::{H256, H256FastMap};
use util::hash::H256;
Can be just util::H256, along with the other imports on the next lines, I believe.
@NikVolf grumbles addressed
lgtm
A couple of grumbles regarding the transaction queue.
impl TransactionQueue {
    /// Import a pending transaction to be queued.
    pub fn import(&mut self, tx: PendingTransaction) -> Result<TransactionImportResult, TransactionError> {
        let sender = tx.sender();
Indentation?
Seems right to me. Weirdness in online viewer is just line wrapping.
pub fn import(&mut self, tx: PendingTransaction) -> Result<TransactionImportResult, TransactionError> {
    let sender = tx.sender();
    let hash = tx.hash();
    let nonce = tx.nonce;
Don't you need to check if the transaction is already in the queue (by hash)?
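For illustration, a minimal hedge against duplicate imports might look like this (hypothetical, heavily simplified queue; String stands in for H256):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum ImportError { AlreadyImported }

#[derive(Default)]
struct Queue { by_hash: HashMap<String, u64> } // hash -> nonce

impl Queue {
    fn import(&mut self, hash: String, nonce: u64) -> Result<(), ImportError> {
        // Reject a transaction whose hash is already queued before
        // doing any per-account work.
        if self.by_hash.contains_key(&hash) {
            return Err(ImportError::AlreadyImported);
        }
        self.by_hash.insert(hash, nonce);
        Ok(())
    }
}
```
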
match acct_txs.current.binary_search_by(|x| x.nonce.cmp(&nonce)) {
    Ok(idx) => {
        trace!(target: "txqueue", "Replacing existing transaction from {} with nonce {}",
Probably replacement should happen only if gas_price is higher than the previous one, no?
With the intended use-case (storing transactions only of local origin), it makes sense to replace whenever the same nonce is encountered.
Right, but if you ever propagate a transaction with a lower gas price, it will never be replaced in full nodes' transaction queues (at least the ones using the default implementation), giving the user the false impression that the transaction was replaced. Although this could be left to the UI to display.
If the user wants to try with a lower gas price then they're welcome to; the least we can do is allow our implementation to propagate it to the network. What the peers do with the transaction is beyond our control. The fact that it would probably not work should be left to documentation instead of behavior.
// current is sorted with one tx per nonce,
// so if a tx with given nonce wasn't found that means it is either
// earlier in nonce than all other "current" transactions or later.
debug_assert!(idx == 0 || idx == cur_len);
I would suggest a standard assert here at the beginning; we can switch back to debug_assert before release.
#[derive(Debug, Default, Clone, PartialEq, Eq)]
pub struct TransactionQueue {
    by_account: HashMap<Address, AccountTransactions>,
    by_hash: H256FastMap<PendingTransaction>,
It should be optimized to avoid transaction cloning (currently we store each transaction at least twice); with large transaction data this may result in high memory usage. (Especially important for future transactions, which might never be included in the blockchain at all.)
That's fair. I suppose the transactions can be shared with an internal Rc?
You're putting TransactionQueue into a RwLock, so I'm afraid it needs to be an Arc.
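A minimal sketch of the Arc-sharing idea under those assumptions (simplified, hypothetical types, not the real PendingTransaction or H256FastMap):

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Wrap each transaction in an Arc and share it between the per-account
// list and the by-hash index, so the (possibly large) payload is stored
// once. Arc rather than Rc because the queue sits behind a RwLock and
// must be Sync.
struct PendingTransaction { hash: String, data: Vec<u8> }

#[derive(Default)]
struct Queue {
    by_account: HashMap<u64, Vec<Arc<PendingTransaction>>>,
    by_hash: HashMap<String, Arc<PendingTransaction>>,
}

impl Queue {
    fn insert(&mut self, sender: u64, tx: PendingTransaction) {
        let tx = Arc::new(tx);
        // Cloning the Arc copies a pointer, not the transaction data.
        self.by_hash.insert(tx.hash.clone(), tx.clone());
        self.by_account.entry(sender).or_insert_with(Vec::new).push(tx);
    }
}
```
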
trace!(target: "txqueue", "Replacing existing transaction from {} with nonce {}",
    sender, nonce);

acct_txs.current[idx] = tx.clone();
The old transaction should be removed from by_hash.
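A sketch of what that fix could look like, with simplified hypothetical types (String in place of H256):

```rust
use std::collections::HashMap;

// When a same-nonce transaction replaces an older one in `current`, the
// replaced transaction's hash must be evicted from `by_hash` too, or
// the stale entry lingers forever.
struct Queue {
    current: Vec<(u64, String)>,   // (nonce, hash), sorted by nonce
    by_hash: HashMap<String, u64>, // hash -> nonce
}

impl Queue {
    fn replace_at(&mut self, idx: usize, nonce: u64, hash: String) {
        let (_, old_hash) =
            std::mem::replace(&mut self.current[idx], (nonce, hash.clone()));
        self.by_hash.remove(&old_hash); // drop the stale entry
        self.by_hash.insert(hash, nonce);
    }
}
```
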
let res = match self.by_account.entry(sender) {
    Entry::Vacant(entry) => {
        entry.insert(AccountTransactions {
It seems that we will store a lot of old transactions, because if all transactions from a particular sender are imported, we will remove the entry here. But then if some node propagates one again, we will insert it into the queue with an Assumed nonce, and will never cull it unless the sender actually sends another transaction.
Light nodes don't have the capability to validate transactions, so this queue isn't intended to store anything propagated from the network, only local stuff from RPC. The intended usage is to fetch actual nonces at the best block for all queued_senders and call cull using those.
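Under those assumptions, the fetch-then-cull flow could be sketched like this (u64 senders and nonces, String hashes, all hypothetical simplifications of the real types):

```rust
use std::collections::{BTreeMap, HashMap};

// The light node fetches the real on-chain nonce for every queued
// sender, then drops each queued transaction whose nonce is below it.
#[derive(Default)]
struct Queue { by_account: HashMap<u64, BTreeMap<u64, String>> } // sender -> nonce -> hash

impl Queue {
    fn queued_senders(&self) -> Vec<u64> {
        self.by_account.keys().copied().collect()
    }

    fn cull(&mut self, sender: u64, chain_nonce: u64) {
        let empty = if let Some(txs) = self.by_account.get_mut(&sender) {
            // split_off keeps only keys >= chain_nonce
            *txs = txs.split_off(&chain_nonce);
            txs.is_empty()
        } else {
            false
        };
        // Keep fully-culled accounts out of the map entirely.
        if empty {
            self.by_account.remove(&sender);
        }
    }
}
```
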
// believed current nonce (gotten from initial given TX or `cull` calls).
cur_nonce: CurrentNonce,
current: Vec<PendingTransaction>, // ordered "current" transactions (cur_nonce onwards)
future: BTreeMap<U256, PendingTransaction>, // "future" transactions.
Could be just a HashMap, no? It doesn't seem that we use the ordering anywhere.
Ordering relied on here: https://github.com/ethcore/parity/pull/4501/files#diff-e1ba00eb97ba4534d6c62db683387edbR229
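A toy illustration of that reliance (simplified types, not the real code): promoting the leading run of consecutive future nonces depends on iterating keys in sorted order, which BTreeMap provides and HashMap would not without an extra sort.

```rust
use std::collections::BTreeMap;

// Move the leading run of consecutive future nonces into `current`.
fn promote_future(current: &mut Vec<u64>, future: &mut BTreeMap<u64, &str>) {
    let mut next = match current.last() {
        Some(n) => n + 1,
        None => return,
    };
    // BTreeMap::keys yields nonces in ascending order, so take_while
    // stops exactly at the first gap.
    let promotable: Vec<u64> = future
        .keys()
        .take_while(|&&n| { let ok = n == next; if ok { next += 1; } ok })
        .copied()
        .collect();
    for n in promotable {
        future.remove(&n);
        current.push(n);
    }
}
```
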
/// Get all transactions ready to be propagated.
/// `best_block_number` and `best_block_timestamp` are used to filter out conditionally
/// propagated transactions.
pub fn ready_transactions(&self, best_block_number: u64, best_block_timestamp: u64) -> Vec<PendingTransaction> {
Should indicate that the transactions are not sorted by any priority (actually it will batch transactions from a single sender together). Since we propagate transactions in chunks, some senders will always be a bit more privileged by some nodes.
Haven't implemented the transaction propagation code for the light node just yet. I'll add docs that it's in "arbitrary" order.
} else if idx == cur_len && acct_txs.current.last().map_or(false, |f| f.nonce + 1.into() != nonce) {
    trace!(target: "txqueue", "Queued future transaction for {}, nonce={}", sender, nonce);
    let future_nonce = nonce;
    acct_txs.future.insert(future_nonce, tx.clone());
The future queue should have a hard limit (probably the current queue as well); otherwise you can use an arbitrary amount of a node's memory just by propagating future transactions that will never be included in the blockchain.
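One possible shape for such a guard (a sketch; the limit constant and error name are invented for illustration):

```rust
// Reject imports past a fixed per-sender cap instead of growing the
// queue without bound. 64 is an arbitrary illustrative value.
const MAX_QUEUED_PER_SENDER: usize = 64;

#[derive(Debug, PartialEq)]
enum ImportError { LimitReached }

// Check before inserting into either the current or future set.
fn check_capacity(current_len: usize, future_len: usize) -> Result<(), ImportError> {
    if current_len + future_len >= MAX_QUEUED_PER_SENDER {
        Err(ImportError::LimitReached)
    } else {
        Ok(())
    }
}
```
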
@tomusdrw Grumbles should be addressed. Duplication of transaction data is avoided by having the
Some(Condition::Timestamp(time)) => time <= best_block_timestamp,
}).map(|info| info.hash)
})
.filter_map(|hash| self.by_hash.get(&hash))
Might be good to put .expect here instead to track potential issues (had issues like this in the standard transaction pool).
I chose to use filter_map specifically to avoid panics in this case. I think it's better for something not to be propagated than for the whole node to go down. Our style is to have panickers be provable: I don't think the invariant that by_hash be fully consistent with by_account is strong enough to assert (although it should really be the case). Adding a warning here ("Inconsistency detected between by_account and by_hash") makes sense to me, though.
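A sketch of that tolerant lookup, with simplified hypothetical types (the real code would use warn! with a txqueue target rather than eprintln!):

```rust
use std::collections::HashMap;

// A hash missing from `by_hash` means the two indexes drifted apart;
// log it and skip the entry rather than panicking and taking the node
// down.
fn resolve<'a>(
    hashes: &[String],
    by_hash: &'a HashMap<String, Vec<u8>>,
) -> Vec<&'a Vec<u8>> {
    hashes
        .iter()
        .filter_map(|hash| {
            let tx = by_hash.get(hash);
            if tx.is_none() {
                eprintln!("Inconsistency detected between by_account and by_hash: {}", hash);
            }
            tx
        })
        .collect()
}
```
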
Looks good, minor suggestion left.
Goes after #4485
The TXQ only manages local transactions, but still distinguishes between current and future.
Doesn't do intensive verification, because someone only hurts their own node by triggering pathological behavior. Nonces are initially guessed based on what's provided in the transactions for an account (which will usually be accurate or fetched from the network), but ancient TXs can still be culled.
The dispatcher uses the Dispatcher interface from #4485, but it's sort of incomplete: it still needs things like the gas price corpus and gas amount estimation to be fully complete (which requires the request cache, on_demand request batching, and the very scary Remote Transaction Execution).