
Are we removing good txs to free up space for new txs? #1539

Closed
melekes opened this issue May 7, 2018 · 4 comments
Labels
T:question Type: Question

Comments

@melekes (Contributor) commented May 7, 2018

if cache.list.Len() >= cache.size {
	// cache is full: evict the oldest tx to make room for the new one
	popped := cache.list.Front()
	poppedTx := popped.Value.(types.Tx)
	// NOTE: the tx may have already been removed from the map
	// but deleting a non-existent element is fine
	delete(cache.map_, string(poppedTx))
	cache.list.Remove(popped)
}

Should we silently do that, or is it better to return an error if txCache is full? It's like saying "we may or may not commit a tx depending on the current load" vs. "if we accept a tx, it will eventually be committed or rejected by the app."
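For illustration, a minimal sketch of the "return an error when full" alternative. This is not the actual mempool code: the package layout, the Tx stand-in, newTxCache, and the ErrTxInCache / ErrCacheFull values are invented for this example; only the size, map_ and list fields mirror the snippet above.

package mempool

import (
	"container/list"
	"errors"
	"sync"
)

// Hypothetical errors for the "reject when full" variant.
var (
	ErrTxInCache = errors.New("tx already exists in cache")
	ErrCacheFull = errors.New("tx cache is full")
)

// Tx is a stand-in for types.Tx.
type Tx []byte

// txCache mirrors the fields used in the snippet above.
type txCache struct {
	mtx  sync.Mutex
	size int
	map_ map[string]struct{}
	list *list.List // txs in insertion order, oldest at the front
}

func newTxCache(size int) *txCache {
	return &txCache{
		size: size,
		map_: make(map[string]struct{}, size),
		list: list.New(),
	}
}

// Push rejects the tx instead of silently evicting the oldest one.
func (cache *txCache) Push(tx Tx) error {
	cache.mtx.Lock()
	defer cache.mtx.Unlock()

	if _, ok := cache.map_[string(tx)]; ok {
		return ErrTxInCache
	}
	if cache.list.Len() >= cache.size {
		return ErrCacheFull
	}
	cache.map_[string(tx)] = struct{}{}
	cache.list.PushBack(tx)
	return nil
}

With this shape the cache stays bounded without dropping txs it already accepted; the trade-off would be that senders have to handle the full-cache error and retry.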

melekes changed the title from "Are we removing good txs to free up space for new txs" to "Are we removing good txs to free up space for new txs?" on May 7, 2018
melekes added the T:question Type: Question label on May 7, 2018
melekes closed this as completed on May 7, 2018
@ebuchman (Contributor) commented May 7, 2018

Huh.

Looks like we don't need the extra call to Exists and we can just check the return from Push.

We do need to bound the mempool, so maybe we do that by returning error if the cache is full.
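As a sketch of what "just check the return from Push" could look like on the caller side, assuming Push reports whether the tx was newly added (true) or already cached (false); the txCacher interface, the trimmed Mempool, and the CheckTx body below are illustrative, not the real code:

package mempool

import "errors"

var ErrTxInCache = errors.New("tx already exists in cache")

// txCacher is a stand-in: Push reports whether the tx was newly added
// (true) or already cached / rejected (false), so a separate Exists
// call becomes unnecessary.
type txCacher interface {
	Push(tx []byte) bool
}

// Mempool is trimmed down to the single field this sketch needs.
type Mempool struct {
	cache txCacher
}

// CheckTx dedupes with the single Push call instead of Exists + Push.
func (mem *Mempool) CheckTx(tx []byte) error {
	if !mem.cache.Push(tx) {
		return ErrTxInCache
	}
	// ... forward the tx to the application via ABCI CheckTx ...
	return nil
}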

Why did you decide to close this?

@melekes (Contributor, Author) commented May 8, 2018

We do need to bound the mempool, so maybe we do that by returning error if the cache is full.

#345

Looks like we don't need the extra call to Exists and we can just check the return from Push

yes!

Why did you decide to close this?

Because we're not removing txs (i.e. cache != mempool.txs). I still don't understand why txCache has a map AND a list. Why do we need a list? Can't we remove from the map directly?

Why do we need a cache at all? Is it really the case that users send the same transactions, if they know that Tendermint requires unique txs?

melekes reopened this on May 8, 2018
@ebuchman (Contributor) commented May 12, 2018

I still don't understand why txCache has a map AND a list. Why do we need a list? Can't we remove from the map directly?

If we change the behaviour to just reject txs when the cache is full, then I don't think we need the list.

The list is just so we can remove the oldest transaction from the cache.
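A compressed sketch of that pairing, with illustrative names rather than the real code: the map gives O(1) duplicate checks, while the list preserves insertion order (a Go map cannot say which key is oldest), so the front of the list is always the entry to evict.

package mempool

import "container/list"

// txCache pairs a map (O(1) membership checks) with a list
// (insertion order): a Go map alone cannot tell which of its
// keys was added first, so the list tracks which tx is oldest.
type txCache struct {
	size int
	map_ map[string]struct{}
	list *list.List // oldest tx at the front, newest at the back
}

// push evicts the oldest tx when the cache is full, as in the
// snippet quoted in the issue description (method name is illustrative).
func (cache *txCache) push(tx []byte) {
	if cache.list.Len() >= cache.size {
		oldest := cache.list.Front()
		delete(cache.map_, string(oldest.Value.([]byte)))
		cache.list.Remove(oldest)
	}
	cache.map_[string(tx)] = struct{}{}
	cache.list.PushBack(tx)
}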

Why do we need a cache at all? Is it really the case that users send the same transactions, if they know that Tendermint requires unique txs?

The cache isn't just for users - remember we're gossiping txs to all our peers, so we could receive the same tx from multiple peers. Having the cache means we can quickly ignore a duplicate instead of running it over ABCI again.

ebuchman mentioned this issue on May 12, 2018
@ebuchman (Contributor) commented

It's like saying "we may or may not commit a tx depending on the current load" vs. "if we accept a tx, it will eventually be committed or rejected by the app."

The mempool doesn't provide such strong guarantees yet. It's currently the client's responsibility to monitor for when a tx is committed; the mempool doesn't guarantee it.

For instance, a tx could enter the mempool, but the node crashes before it can be sent to peers.

For enterprise customers, we will need to provide guarantees here by using a WAL and replaying txs, but it's non-trivial to do this all efficiently (#248).

For public blockchains, people are more accustomed to weaker UX and guarantees, and to being responsible for e.g. resubmitting.

They're also more likely to send the tx to more nodes, so the probability of all of them crashing at the same time and losing the msg decreases substantially.


Closing this for #345, but we should make the changes we discussed. Thanks!

melekes added a commit that referenced this issue May 17, 2018
melekes added a commit that referenced this issue May 18, 2018