[FR] - Increase network throughput #3247

Open · 17 of 19 tasks
SebastienGllmt opened this issue Sep 25, 2021 · 23 comments
Labels
enhancement New feature or request

Comments

@SebastienGllmt
Contributor

SebastienGllmt commented Sep 25, 2021

Internal/External: Internal

Area: Other (any other topic: Delegation, Ranking, ...)

Note: this is partially related to cardano-node, partially related to ouroboros-network

Progress update

It's been a few months since this issue was created, so if you're reading this after the fact, you may be interested in what progress has been made since it was raised. Here is the progress so far:

  • add ability for nodes to query mempool state #3413
  • add ability for nodes to query pending txs #3404
  • blockchain load is now tracked so people can monitor when congestion occurs link
  • On epoch 306 (December 1, 2021), the block size was increased by 8kb (64kb->72kb, a 12.5% increase), with more similar changes in later epochs (I haven't kept track)
  • On epoch 306 (December 1, 2021), Plutus script memory units per transaction were increased by 1.4 million (9.85M->11.25M), with more similar changes in later epochs (I haven't kept track)
  • Peer-to-peer was launched on testnet link
  • Peer-to-peer mainnet (partially implemented now)
  • Milkomeda public launch on testnet
  • Milkomeda public launch on mainnet
  • Improve UTXO parallelizability with reference inputs CIP-31 - scheduled for the next hardfork
  • Reduce block space taken by smart contract txs by introducing reference scripts CIP-33 - scheduled for Babbage
  • IOHK announced they are working on "input endorsers" for scalability
  • Hydra testnet launch
  • Hydra mainnet launch
  • Rough plans for tiered network traffic blog post
  • Paper for tiered network traffic paper
  • pipelining (overlapping block diffusion & block validation to speed them up) to improve block propagation times
  • Plutarch, Aiken, etc. now used in production to get faster and smaller scripts on-chain
  • Change block headers to only need a single VRF

Personal Note

Hello! If you're reading this from Twitter / Reddit and you want more technical content, check out my personal Twitter or the Twitter account of my company dcspark. We do more than just Cardano -- we're also involved in Solana and Urbit!

Background

Currently, the Cardano network has hit 100% capacity at least twice. That means that mempools are full, and this is causing tooling-dependent issues.

You can find the latest 1hr & 24hr loads from pool.pm

Is this related to the "Congestion issue" I keep hearing about?

No. The "congestion" issue you may have heard about in the past few weeks is about how Cardano smart contracts are deterministic and so only one transaction can consume a utxo. It can be tackled by re-designing smart contracts to work better in parallel across multiple UTXO entries

This issue is about how Cardano itself has reached the max tps the network can support

How is the mempool handled currently?

The Cardano mempool is designed to be "fair". Transactions are processed in FIFO order regardless of how much they pay in fees (the ledger spec does support a fee market, but cardano-node doesn't take this into account)

Cardano-node uses a fixed-size mempool that is currently only 2 blocks in size (~280 txs). If transactions are submitted when the mempool is full, they may be dropped (this is partially tool-dependent; cardano-node just holds agency with the tx-submission protocol until there is room)
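To make that back-pressure behaviour concrete, here is a minimal Haskell sketch of a bounded FIFO mempool where submission blocks until there is room. This is an illustration only; all names are invented for the sketch, and the real implementation lives in ouroboros-consensus.

import Control.Concurrent.STM

data Mempool tx = Mempool
  { capacityBytes :: Int
  , sizeBytes     :: TVar Int
  , pendingTxs    :: TVar [tx]  -- kept in FIFO arrival order
  }

-- Blocks (via STM retry) while the mempool is full, instead of
-- dropping the transaction; the submitting thread simply waits.
addTx :: (tx -> Int) -> Mempool tx -> tx -> STM ()
addTx txSize mp tx = do
  cur <- readTVar (sizeBytes mp)
  let sz = txSize tx
  if cur + sz > capacityBytes mp
    then retry  -- back-pressure: wait until a block frees space
    else do
      writeTVar (sizeBytes mp) (cur + sz)
      modifyTVar' (pendingTxs mp) (++ [tx])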

What is the max throughput for Cardano currently?

Cardano tps is between 0.2 tps and a max of ~6.5 tps
Cardano block time is ~20 seconds

Cardano is currently hitting the maximum due to NFT drops, so the tps is usually on the higher end of that range (since Plutus script execution is not too common currently)

Can we increase tx size and/or block size?

Unfortunately, not really. Plutus is still being optimized, and increasing the block size could cause Plutus script execution to take too long, causing downstream problems for the network.

That being said, Plutus scripts are currently not only expensive to execute but also large in size (a single script is 5-8kb, so a single transaction in Cardano can usually only use 1-2 scripts before hitting the size limit). This means increasing the tx size limit could allow larger batching of NFT drops while still not allowing much extra Plutus computation (at least until Plutus gets optimized in the future)

Can we increase the mempool size?

I don't know why 2 blocks was picked as the fixed size; I would appreciate any background info on this. Possibly this was just to try to keep Cardano under 8GB of RAM?

Additionally, this would degrade the user experience for light wallets, because light wallets currently have no way of querying the mempool, so they can't show pending transactions. If we keep a fat mempool, we would need to prioritize mempool querying for light wallets as a feature

Can we tackle this issue at the wallet level?

To a certain extent. Wallets can implement tx resubmission so that even if a full mempool caused the tx to be dropped initially, the wallet will just resend it for the user.

Currently there is no way for light wallet tooling to query the node's mempool, which means a wallet can't know whether a submitted tx was dropped, failed, or is still pending. This makes it tricky to implement proper tx resubmission, since the only way to know whether you should resubmit is to see whether the ttl for your tx has expired. Lowering the ttl is not ideal either: if network congestion makes a tx take longer to go through, decreasing the ttl lowers the chance of your tx making it into a block before it times out.
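As a rough illustration of that constraint, the only safe resubmission rule available without a mempool query looks something like the following Haskell sketch (the Slot type and the inputs are assumptions for the example):

-- Without a mempool query, the only reliable signal that a tx can
-- never be adopted is that its ttl slot has passed while the tx is
-- still not on-chain. Only then is resubmission clearly safe.
type Slot = Integer

shouldResubmit
  :: Slot  -- current slot
  -> Slot  -- the tx's ttl (invalid-hereafter slot)
  -> Bool  -- has the tx appeared on-chain?
  -> Bool
shouldResubmit currentSlot ttl onChain =
  not onChain && currentSlot > ttl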

Can we sort the mempool?

One way we can alleviate the problem without increasing the tx and/or block size is to sort transactions in the mempool such as using a fee market.

This is non-trivial to do correctly because

  1. Transactions in the mempool could depend on each other (do you replace 2 txs with 1 tx that pays a higher fee than both combined?)
  2. If mempool sorting takes too long, it can negatively affect the tps of the system
  3. It opens attack vectors related to the mempool (ex: repeatedly submitting txs that are only 1 lovelace better to include, to drive up computation costs; other blockchains have suffered this kind of attack in the past). You would need the fee gained from including the tx to be higher than the computation cost of re-sorting your mempool (but it's hard to calculate the lovelace cost of this)
  4. It makes debugging of mempool issues harder since the mempool is no longer as predictable
  5. The network layer code wasn't written with mempool sorting in mind and would therefore require a fair amount of development work and iteration on the networking models
  6. It slightly complicates babel fees

Sorting methods

The easiest field for sorting the mempool is the ratio of tx fee / cost of tx.

Another option is sorting by the age of the oldest UTXO entry in the transaction. This prevents spam attacks (attackers will eventually run out of old UTXOs), but would have terrible consequences for popular smart contracts, as all their UTXOs will be relatively fresh
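For illustration, here is a minimal Haskell sketch of the first method, sorting a mempool snapshot by fee-per-cost ratio. The Tx type and its fields are hypothetical, not actual ledger types:

import Data.List (sortOn)
import Data.Ord (Down (..))

data Tx = Tx
  { txFee  :: Integer  -- lovelace paid in fees
  , txCost :: Integer  -- abstract cost (size + script budget), assumed > 0
  }

feeRatio :: Tx -> Rational
feeRatio tx = fromInteger (txFee tx) / fromInteger (txCost tx)

-- Highest fee-per-cost first.
sortByFeeRatio :: [Tx] -> [Tx]
sortByFeeRatio = sortOn (Down . feeRatio)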

Can we prioritize transactions without having to make modifications to the mempool?

Yes, to a limited extent: we can implement transaction sorting to choose which transactions to pass to your peers (so that their mempools are better than yours). This requires P2P, which Cardano doesn't have yet, but it can help in the future.

This would also be somewhat flexible, because you could choose to accept transactions from certain peers even if they don't provide you the best tx fee return, if you have some special agreement with them.

Can we rely on Hydra or Milkomeda?

One popular sentiment is that L2 solutions like Hydra or Milkomeda can help us.

Milkomeda isn't meant as a throughput solution -- it's an interoperability solution so it won't help with this.

Hydra head is still in development, so we need a solution sooner rather than later. Additionally, the first version of Hydra deployed will be fairly limited (you need to open a state channel with a fixed set of actors, the actors all have to be online 24/7, and you can't add or remove assets from the channel without closing it entirely). Given this set of limitations, we can't rely on Hydra head v1 to fully alleviate the problem. It's possible to have non-custodial head-as-a-service solutions to bypass some of these limitations (but nobody is working on this as far as I know), and some future iterations of Hydra improve or remove these restrictions.

Even if we have Hydra, it still doesn't solve the problem entirely, because state channels still have to be opened & closed on the main chain eventually.

Is a fee market avoidable?

No. Blockchains have always fundamentally worked on a "pay up or shut up" system. If there is network congestion, the only objective way to decide who gets through is to prioritize the transactions with the most financial value, which is proxied by how much in fees they are willing to pay for the transaction to go through.

Additionally, a fee market is something all Cardano users should want. Cardano has a fixed supply, so stake pool returns will keep going down (staking returns are already below 5%), but the good news is that transaction fees are distributed to all delegators. That means the only way for people to have an incentive to continue delegating is for tx fees to increase until they eventually surpass rewards from the reserve

You can read IOG's thoughts on fees & tiered network traffic here

What does Ethereum look like?

Other blockchains like Ethereum take full advantage of the fee market (you can see live information here) and wallets look at the state of the network to know how much in tx fees they have to pay.

  • Ethereum does around 10~15 tps
  • Ethereum block time is ~13seconds

Currently, transaction fees on Ethereum actually provide more to miners than the base block rewards (despite the fact that in Ethereum, unlike Cardano, block rewards do not go down over time algorithmically)

Related tickets

SebastienGllmt added the enhancement (New feature or request) label Sep 25, 2021
@rdlrt

rdlrt commented Sep 26, 2021

At the moment the spikes in usage have been caused by NFT drops - while I agree there needs to be preferential treatment in the mempool, I am not very keen to see a simply fee-based (i.e. single-dimensional, without catering for asset or tx type) priority.

Having seen some (not all) of the drop mechanisms, they use pre-deposits from users, which are used to cover minimum UTxO + fees + margin. Thus, the impact of sorting on fees will be that users who want to send a simple ADA tx end up having to pay very high fees during a spike, while the drops will simply continue to pay whatever fees the protocol requires for the tx - as those come from user deposits.

Also, while doing so, the change onboards the complications and risks you highlighted. I feel the handling of the queue - if not FIFO - needs to be more mature in differentiating potentially genuine load from drops, given that the latter could cause a short temporary DDoS if funded by "borrowed" fees. Also, the increase in max tx volume will cause an increased impact on resources (bandwidth, processing, and downstream components like dbsync/explorers), so such a change should be taken up with very careful research indeed. I do not see an issue with an increase in fees itself (even if they're dynamic), as long as the above scenarios are catered for by being selectively more expensive than a user simply sending base ADA funds from his wallet. As for assets, I believe the proposed babel fees concept could have addendums to cater for secondary asset needs(?), allowing participation by interested SPOs.

The tx handling on the client side can be more dynamic (there is no reason for transactions to "fail" instead of catering for enough ttl and avoiding interdependencies between UTxOs) - unless there is some criterion I'm missing. For the immediate short-term purpose of natural transaction load, there is also activeSlotCoefficient, which could be increased via a fork to raise block frequency per epoch if needed - but again, there would be slightly higher resource usage and a potential increase in slot/height battles.

@SebastienGllmt
Contributor Author

@rdlrt I agree it's definitely far from ideal. The point of this issue was mostly to try and have a publicly-visible summary of the state of things that can be shared and discussed as opposed to proposing a specific concrete solution.

@dcoutts
Contributor

dcoutts commented Sep 27, 2021

There are a few misunderstandings here.

Firstly, note that the system is not congested as a whole. There are simply periods when some users (e.g. those doing NFT drops) have unbounded demand. That is not in itself a problem. It is fine for an NFT drop to take a while to complete.

What would be a problem is if "ordinary" users (i.e. not those doing the NFT drops) are finding that they face unacceptably long times to get their txs included onto the chain.

If that were the problem, note that increasing the system capacity would not solve it. To see why, consider a hypothetical example: suppose the NFT drop takes 2 hours at the current system capacity, and otherwise the system is normally 30% loaded. Then doubling the system capacity would simply reduce the time for the NFT drop from 2 hours to 1 hour, and make the system normally 15% loaded. So all it would do is shorten the duration in which "ordinary" users face unacceptably long confirmation times.

If this were a problem then the solution is not capacity but priority. The NFT drop can be the lowest priority since it is fine if it takes longer to complete.

Note that the system already has some natural fairness so that we do not expect the NFT drops to crowd out ordinary users. We have not yet seen any evidence that this is the case. If we do see such evidence, there are further measures we can take with priorities so that typical user transactions will get priority over single users with unbounded demand (like large NFT drops).

For users, this is a pain point (txs submitted through a wallet won't go through with no error message given)

This is not really the case at the level of the node. Submitting txs to a relay node can block until there is capacity in the mempool. This means there is notification: the submitter can tell the tx has not yet been accepted, rather than it silently failing.

for people doing NFT drops it is disastrous (a random percentage of their drop randomly doesn't go through and they have to re-submit the txs that didn't make it).

This is conflating two unrelated issues. The first issue is the same as for ordinary wallets. Submitting a transaction can induce back-pressure. This is a feature not a bug. The submitting agent does not need to do anything special to handle this: just keep submitting transactions. As and when there is space, the protocol will make forward progress.

The second issue is transactions that are successfully submitted to a local node or relay but that never make it to the chain. The system as a whole has never guaranteed the reliability of transaction submission. It is impossible to provide such a guarantee in a distributed system like this. As such, submitting agents have always been required to handle re-submission logic in an appropriate way. The TTL mechanism is provided as a way to know for sure when it is safe to re-submit a transaction.

Cardano-node uses a fixed-size mempool that is currently only 2 blocks in size (~280 txs). If transactions are submitted when the mempool is full, they may be dropped (this is partially tool-dependent; cardano-node just holds agency with the tx-submission protocol until there is room)

No, they are not dropped. They are not accepted until there is room. This is the back-pressure.

To a certain extent. Wallets can implement tx resubmission so that even if a full mempool caused the tx to be dropped initially, the wallet will just resend it for the user.

The txs are not dropped. This is a misunderstanding. They are simply not accepted yet. If you keep trying to submit then it will eventually be accepted.

But yes, totally separately to that point: it is possible for txs to fail to make it to the chain, and re-submission is a necessary feature anyway, but it is not specifically related to congestion.

I don't know why 2 blocks was picked as the fixed size; I would appreciate any background info on this. Possibly this was just to try to keep Cardano under 8GB of RAM?

The mempool is as big as needed to ensure throughput, but no larger. Bigger buffers just create more work and bigger latency. The system is designed not to buffer but to put back-pressure on the edges of the system. This is better than using big buffers since it provides better feedback, and uses less resources to manage pending work. See also "buffer bloat" for a discussion on this in networking in general.

Can we prioritize transactions

Yes. As I say above, the issue with NFT drops is that it's a period of essentially infinite demand. Increasing capacity does not change that. The solution is to prioritise. We already do that to some extent. There are ways to do it more systematically and on more criteria.

The way to do it is not to reorder the mempool. The way to do it is to pick which txs from immediate peers to accept into the mempool in the first place. Here are some of the criteria we would probably want to use.

Characteristics/behaviour of the peer

  1. At the level of a node in the graph that is requesting txs from its immediate peers: make fair (weighted) random selections of the txs from the immediate peers when there is space in the mempool to fill. This is a kind of bandwidth sharing.
  2. Another scheduling trick is to prioritise the peer that has recently used least of the resource.
  3. Give lower weighting to peers that have only very recently connected. Equivalently this prioritises peers that have established connections for a long time.
  4. Prioritise/weight based on known peers: we sometimes have peers we know about from local config, which correspond to other nodes we control, or nodes we have peering arrangements with.
  5. Prioritise/weight based on the stake of the remote peer, which prioritises the peers of other SPO relays over anonymous 3rd parties

Characteristics of the transaction

  1. Age of the oldest UTxO entry.
  2. Total value transferred in the transaction.

And finally

  1. Fee multiplier vs min fee.

Note that we already do 1 to some extent. That's the current pseudo-random "fairness" (a minimal sketch of this kind of weighted selection follows the summary below). And also note that there are lots of things to consider prioritising on before we get to a fee market.

If we see evidence that fairness is a problem with these NFT drops then we can do more of 1--5 and that should solve the problem for NFT drops and similar spikes from individual users.

When the system is more generally loaded from continuous demand rather than these spikes then it makes sense to increase capacity incrementally.

Summary

  1. We need to make sure wallets and other submitting agents are dealing properly with back-pressure in tx submission.
  2. We need to make sure wallets are properly handling re-submission for cases where txs are accepted but never make it to the chain.
  3. We need to look for evidence of unfairness in tx submission (i.e. NFT drops actually crowding out normal users) and if so then plan to do some more of the prioritisation features.
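To illustrate criterion 1 from the peer-characteristics list above, here is a minimal Haskell sketch of a weighted random peer selection. The Peer type and its weights are assumptions for the example, not ouroboros-network code:

import System.Random (randomRIO)

data Peer = Peer
  { peerName   :: String
  , peerWeight :: Double  -- e.g. stake, connection age, recent usage
  }

-- Draw one peer with probability proportional to its weight; the
-- chosen peer supplies the next tx when mempool space opens up.
pickPeer :: [Peer] -> IO Peer
pickPeer peers = do
  let total = sum (map peerWeight peers)
  r <- randomRIO (0, total)
  pure (go r peers)
  where
    go _ [p] = p
    go r (p:ps)
      | r <= peerWeight p = p
      | otherwise         = go (r - peerWeight p) ps
    go _ [] = error "pickPeer: empty peer list"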

@DanTup

DanTup commented Sep 30, 2021

We need to make sure wallets and other submitting agents are dealing properly with back-pressure in tx submission

Without triggering this, is there a way I can confirm what cardano-submit-api's behaviour is here? Assuming my HTTP client will wait indefinitely, would cardano-submit-api's web server also wait, or will it time my request out if the node blocks waiting to take the transaction?

I tried reviewing the cardano-submit-api source for clues but couldn't find anything about timeouts. It seems like it may be using a package named Servant, but I also couldn't find anything about timeouts in their docs or repo that seemed to indicate if there is a default timeout.

@dcoutts
Contributor

dcoutts commented Sep 30, 2021

@DanTup the behaviour should be that the HTTP server side will block without any timeout. I say "should" because I've also not verified that the HTTP server doesn't have a timeout. I don't think it should have any, but I've not verified it, and you're right that we should. As you say, one thing to watch out for is timeouts in your HTTP client: you want to avoid that.

So when using the submit api, the behaviour we want (both of the api and the client using it) is indeed simply to block until the node has space and gets back to us and accepts our tx.

My advice for applications doing large NFT drops would be to use the Daedalus wallet backend via its API, rather than the cardano-submit-api. The submit API does what it does ok, but it's not a wallet. It does not track pending txs. It does not resubmit, and all agents have to have some capacity to resubmit (even if that means kicking it back to a human to decide). If you're using the submit API then you must implement your own wallet functionality such as:

  • making txs with appropriate TTLs so you know when txs have definitely failed to make it to the chain and so can be resubmitted
  • actually resubmitting txs (which would involve re-signing if you use the policy of waiting for the TTL)
  • tracking the pending/in-flight set of UTxOs (so we don't re-use UTxOs that are being spent by earlier txs, but make them available again for failed txs)

@dcoutts
Contributor

dcoutts commented Sep 30, 2021

cc @newhoggy @Jimbo4350 we should verify this blocking behaviour in the REST API.

@DanTup

DanTup commented Sep 30, 2021

So when using the submit api, the behaviour we want (both of the api and the client using it) is indeed simply to block until the node has space and gets back to us and accepts our tx.

I think this should also be made very clear in the documentation (the API endpoint docs for the web API, and the description of cardano-cli transaction submit for the CLI) as a prompt/reminder for anyone using them to consider this case. And if the result when your client times out (or you hit Ctrl+C in the CLI) is not deterministic (e.g. you might cancel right as the tx is processed, or it might not be accepted and would probably therefore be dropped), I think it's worth calling that out too. Docs should help guide people into writing robust software, and they might not always be aware of infrequent edge cases like this.

My advice for applications doing large NFT drops would be to use the Daedalus wallet backend via its API, rather than the cardano-submit-api. The submit API does what it does ok, but it's not a wallet.

Assuming you mean cardano-wallet, I tried to use that first. I spent many weekends discovering that it could not mint (it has APIs that just throw "not implemented"?), and its submit-proxy API always rejected transactions I built locally (saying they weren't encoded correctly - there's little documentation about this, and questions about it in the forums were all answered with "don't use that API, it's not for you").

cardano-submit-api was my very last choice, and the only one I was able to make work 🤷‍♂️

making txs with appropriate TTLs so you know when txs have definitely failed to make it to the chain and so can be resubmitted

If you use a time-locked policy, you have to use an exactly matching TTL on the transaction (at least, that seemed to be my experience), so I'm not sure this works. My plan is just to keep the txs around on disk as a backup, so I can periodically re-submit them if required (rebuilding the txs would be complicated because, to increase throughput, I spend utxos from unconfirmed transactions; resubmitting the same txs is therefore much simpler).

@dcoutts
Contributor

dcoutts commented Sep 30, 2021

I agree this ought to be clear in the docs.

And if the result when your client times out (or you hit Ctrl+C in the CLI) is not deterministic (e.g. you might cancel right as the tx is processed, or it might not be accepted and would probably therefore be dropped), I think it's worth calling that out too.

It's worth noting here that application authors who care about this may want to use the tx submission protocol at a lower level, where they can in fact resolve that race condition. At the lower level, if you talk directly to a relay, it works like this:

  1. The relay asks you for up to N TxIds that you might wish to send.
  2. You reply with up to N TxIds
  3. The relay then can ask for the full Tx for any of the TxIds that you told it about.
  4. You reply with the requested Txs.

The relay will only do step 3 and ask for the full Tx once there is space in its mempool. So in the state in which you're blocked waiting to submit, we know there is in fact no danger of losing track of whether a tx was submitted or not. It is only once the relay asks for and we reply with the full Tx that we've really fired the tx off into the system.
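For illustration, here is a simplified model of that exchange as a Haskell data type. This only sketches the shape of steps 1-4; the names are not the actual ouroboros-network types:

import Data.Word (Word16)

-- One request/reply pair per phase; the relay drives the protocol.
data TxSubmissionMsg txid tx
  = MsgRequestTxIds Word16  -- relay: "offer me up to N tx ids"
  | MsgReplyTxIds   [txid]  -- client: the ids it is offering
  | MsgRequestTxs   [txid]  -- sent only once mempool space exists;
                            -- replying to this is the commit point
  | MsgReplyTxs     [tx]    -- client: the full transactions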

Now we don't currently make it very easy to interact with the node-to-node tx submission protocol, but if we all conclude that would be useful then that might be the way to go.

As for your frustration with the wallet backend, I take your point. It has the right basic infrastructure to do well in this use case, but yes it would need to have the right APIs to support it.

If you use a time-locked policy, you have to use an exact matching TTL on the transaction (at least, that seemed to be my experience), so I'm not sure this works.

Ah, fortunately that's not the case. The rule there is simply that the validity interval of the tx has to be within the time required by the script. Suppose you have a time lock script that says "the time must be before the 1st of October and signed with key X or after 1st October and signed with key Y". Then you can submit a tx signed with key X with a validity interval that ends well before the deadline. It does not need to extend to the deadline exactly. And the same goes for the other end of the validity interval, for requiring the time be after some point.

So yes you can still use TTLs for knowing txs didn't make it, and also use time locks. You can verify this for yourself in the Allegra ledger specification and in practice on the testnet.
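In other words, the script only constrains the validity interval to lie inside its window. A tiny Haskell sketch of the check, with types invented for the example:

type Slot = Integer

-- A script clause "must be before the deadline" is satisfied by any
-- tx whose validity interval ends at or before the deadline; the
-- interval does not have to end exactly at the deadline.
satisfiesBeforeDeadline
  :: Slot  -- deadline required by the time-lock script
  -> Slot  -- the tx's upper validity bound (its ttl)
  -> Bool
satisfiesBeforeDeadline deadline ttl = ttl <= deadline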

My plan is just to keep the txs around on disk as a backup, so I can periodically re-submit them if required (rebuilding the txs would be complicated because, to increase throughput, I spend utxos from unconfirmed transactions; resubmitting the same txs is therefore much simpler).

That's a perfectly reasonable policy too.

@benapetr

benapetr commented Oct 2, 2021

Why not just increase the mempool? It's extremely tiny at the moment (126kb), and 99% of the time the Cardano network is almost idle. These spikes would be easily handled if the mempool were bigger; most blocks that follow a spike are near 0 txs, so the load would just redistribute into the following blocks and users wouldn't need to resubmit anything.

@benapetr

benapetr commented Oct 2, 2021

Really, this problem is so easy and trivial to fix that I don't understand why there is such a lengthy, complicated discussion about it :)

@mark-stopka
Contributor

@KtorZ is implementing a mempool query API over here, which is relevant to this discussion...

@NeilBurgess42
Contributor

NeilBurgess42 commented Oct 5, 2021

@benapetr @dcoutts @DanTup
Neil Burgess here. I am working on CAD-3504 to improve the docs. I want to create a best-practices document for API users submitting transactions to the Cardano network.
I gather:
  • cardano-submit-api is the correct endpoint to address.
  • When a transaction is submitted, the submitter should block until it is accepted before submitting another one.
Can you please confirm or correct? What is the recommended method of confirming acceptance?

@PhilippeLeLong

@dcoutts aside from NFT drops, what's the expectation on network activity when the first successful dApps go live? DEXes especially will easily fill up a whole block with swap transactions. Is increasing the block size really as critical as @SebastienGllmt suggests?

@newhoggy
Contributor

I've confirmed that when submitting transactions via the submit-api, the thread blocks when the mempool is full.

It is therefore possible to queue up multiple transactions that get processed as soon as mempool capacity becomes available.

There is one gotcha, however: the number of in-flight transactions is limited by the number of open files allowed by the operating system. If this is exceeded, cardano-submit-api just exits, so further requests are not served.

Increasing the number of open files allowed by the operating system with ulimit will increase the available number of in-flight transactions.

@dcoutts
Contributor

dcoutts commented Oct 13, 2021

I think we should recommend that for large batches of txs, it's still sensible to use cardano-submit-api serially (or at least with very low concurrency). When the mempool is empty, the actual submission should not take long. When it's full, you want it to block, and there's no advantage to having 1000 blocked threads. So we should recommend the simple scheme of just submitting the txs in order: i.e. don't submit the next one until the previous one has been accepted into the local mempool.
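The recommended scheme is then just a serial loop. A trivial Haskell sketch, where submitTx stands in for whatever call your tooling makes to cardano-submit-api:

-- Submit txs strictly in order: each call blocks until the node has
-- accepted the previous tx into its local mempool.
submitAll :: (tx -> IO ()) -> [tx] -> IO ()
submitAll submitTx = mapM_ submitTx  -- mapM_ runs the actions serially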

@NeilBurgess42
Contributor

I have prepared a document that summarizes my understanding of this discussion. Please review and comment. All of Input Output has editor access.

@mark-stopka
Contributor

I have prepared a document that summarizes my understanding of this discussion. Please review and comment. All of Input Output has editor access.

However read-only access is missing for non-IOG accounts.

@DanTup

DanTup commented Oct 14, 2021

I've confirmed that when submitting transactions via the submit-api, the thread blocks when the mempool is full.

There is one gotcha, however: the number of in-flight transactions is limited by the number of open files allowed by the operating system. If this is exceeded, cardano-submit-api just exits, so further requests are not served.

Thanks for testing! This sounds great to me - I just need to ensure my web client won't time out (I'm already processing things serially, so I will also just pause here until each submission completes before continuing with the others) :-)

@benapetr

How hard is it to increase the size of the mempool? Also, I noticed that for some reason it randomly gets full for extended periods and stays full even when there isn't a large volume of transactions. This could be observed on testnet yesterday; look at these graphs I gathered from Prometheus monitoring a testnet cardano-node:

[image: Prometheus graphs of testnet cardano-node mempool usage]

Right now the max size of the mempool is about 126kb. I don't understand why it stayed full for such a long time; shouldn't the next slot in which a block is added to the blockchain clear it, or at least mostly clear it?

@ilap

ilap commented Nov 10, 2021

@dcoutts

I think this can be a part of the solution. If anybody has questions about why, feel free to ask:

--- a/ouroboros-consensus/src/Ouroboros/Consensus/Mempool/Impl/Pure.hs
+++ b/ouroboros-consensus/src/Ouroboros/Consensus/Mempool/Impl/Pure.hs
@@ -130,7 +130,10 @@ pureTryAddTxs
 pureTryAddTxs cfg txSize wti tx is
   | let size    = txSize tx
         curSize = msNumBytes  $ isMempoolSize is
-  , curSize + size > getMempoolCapacityBytes (isCapacity is)
+        capacity = getMempoolCapacityBytes (isCapacity is)
+  , case wti of
+      Intervene -> curSize + size > (3 * capacity `div` 2)
+      DoNotIntervene -> curSize + size > capacity
   = NoSpaceLeft
   | otherwise
   = case eVtx of
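(If I'm reading the patch right, it keys the capacity check on the wti (whether-to-intervene) flag: one class of submissions (Intervene) is allowed to fill the mempool up to 3 * capacity / 2, while the other (DoNotIntervene) keeps the original limit, effectively reserving headroom for one class of traffic.)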

@DanTup

DanTup commented Feb 12, 2022

Submitting txs to a relay node can block until there is capacity in the mempool

Can anyone tell me what the ordering of accepting new txs into the mempool is? If the mempool is full and two peers want to send transactions, are they always accepted in order? If the mempool frees up 10kb and the first peer wants to give a 15kb tx but the second peer has a 5kb tx, will the node wait until it has enough space to take the first tx (15kb), or will it accept the second, smaller one (5kb) to fill the space?

I assumed they were taken in order, but some of the descriptions I've seen recently - like the image shown at https://youtu.be/EGwJCiMy4_E?t=125 seem to suggest that smaller txs can jump the queue while large ones are held back.

@hududed

hududed commented Feb 16, 2022

@dcoutts could you elaborate on how the following can be prioritized ideally:

Characteristics of the transaction

Age of the oldest UTxO entry.
Total value transferred in the transaction.

@lunarisapps

There have been some changes here, right?

I mean, except for peer-to-peer and Hydra, all of the above points should be implemented?
