
[State Sync] DHT causing high resource utilization #5799

Open
peterargue opened this issue Apr 28, 2024 · 2 comments
Assignees: peterargue
Labels: Bug (Something isn't working), Execution (Cadence Execution Team), S-Access

peterargue (Contributor) commented Apr 28, 2024

🐞 Bug Report

Over the last year or so, we've had incidents of high resource utilization (memory, cpu, goroutines) across all node roles and networks (mainnet, testnet, lower environments). Profiles have all pointed to the IPFS DHT as the main culprit.

Context

DHT stands for Distributed Hash Table. The implementation we use is a library provided by the team behind IPFS for decentralized peer discovery and content routing, originally built for the IPFS network.

It's used for two use cases on the Flow network:

  1. Peer discovery on the public network
    On the public network, node identities are not required to be static, so nodes need a peer-to-peer method to discover peers and maintain a map of the network.
  2. Content routing for bitswap on the staked and public networks
    Bitswap is a protocol for sharing arbitrary data among decentralized nodes. On Flow, it's used to share execution data between Execution and Access nodes. The DHT allows peers to announce which blobs of data they have, which makes syncing more efficient.
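
For reference, here is a minimal sketch of how a kad-DHT instance is typically wired into bitswap as its content router using go-libp2p-kad-dht and the go-bitswap packages. The helper name is hypothetical, and import paths/constructor signatures vary across library versions, so treat this as an illustration rather than the exact flow-go wiring:

```go
package example

import (
	"context"

	"github.com/ipfs/go-bitswap"
	bsnet "github.com/ipfs/go-bitswap/network"
	"github.com/ipfs/go-datastore"
	dssync "github.com/ipfs/go-datastore/sync"
	blockstore "github.com/ipfs/go-ipfs-blockstore"
	exchange "github.com/ipfs/go-ipfs-exchange-interface"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	"github.com/libp2p/go-libp2p/core/host"
)

// newBitswapWithDHT is a hypothetical helper showing the general wiring:
// the DHT acts as the ContentRouting implementation bitswap uses to announce
// which blobs this node has and to find providers for blobs it wants.
func newBitswapWithDHT(ctx context.Context, h host.Host) (exchange.Interface, error) {
	// the DHT keeps its provider records in this datastore; by default an
	// in-memory map, which is the store implicated in the memory growth below.
	providerStore := dssync.MutexWrap(datastore.NewMapDatastore())

	kad, err := dht.New(ctx, h, dht.Datastore(providerStore))
	if err != nil {
		return nil, err
	}

	// local block storage for the execution data blobs themselves.
	bstore := blockstore.NewBlockstore(dssync.MutexWrap(datastore.NewMapDatastore()))

	// bitswap uses the DHT (a routing.ContentRouting) through its network layer.
	net := bsnet.NewFromIpfsHost(h, kad)
	return bitswap.New(ctx, net, bstore), nil
}
```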

Originally, the DHT was enabled on both the staked and public networks for all nodes, even though it was only needed by Access and Execution nodes on the staked network.

Past Issues

Originally, the issue manifested as a linear, unbounded goroutine leak observed on all node types. Nodes would eventually run out of memory and crash.

libp2p ResourceManager limits were tuned (#4846) and eventually the DHT was disabled on all nodes that did not require it (#5797). This resolved the issue for those nodes. However, the intermittent leaks persisted for Access and Execution nodes.
[Graph: the resource manager blocking new streams, which limited how high the goroutine leaks got]
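
For context, go-libp2p's resource manager limits are configured roughly as follows. This is a sketch of the general API (the helper and limit choices are assumptions, not the exact tuning applied in #4846):

```go
package example

import (
	"github.com/libp2p/go-libp2p"
	"github.com/libp2p/go-libp2p/core/host"
	rcmgr "github.com/libp2p/go-libp2p/p2p/host/resource-manager"
)

// newHostWithLimits builds a libp2p host whose resource manager enforces
// scaled limits on streams, connections, and memory per scope.
func newHostWithLimits() (host.Host, error) {
	// start from the library defaults and scale them to the machine;
	// individual limits (e.g. system-wide stream counts) can be overridden
	// before building the limiter.
	limits := rcmgr.DefaultLimits.AutoScale()

	rm, err := rcmgr.NewResourceManager(rcmgr.NewFixedLimiter(limits))
	if err != nil {
		return nil, err
	}

	// once a scope hits its stream limit, new streams are refused rather than
	// spawning more goroutines -- the blocking visible in the graph above.
	return libp2p.New(libp2p.ResourceManager(rm))
}
```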

Upgrading libp2p and the DHT library (libp2p/go-libp2p-kad-dht) (#5417) resolved the goroutine leak, but the issue then manifested in different ways. We started to observe spikes in goroutines capped at around 8,500, and memory utilization remained high.


Current Issues

Recently, we've been seeing 2 more issues that seem to be related:

  1. High memory on previewnet ANs ([Access] Frequent previewnet AN OOM #5798)
    Profiles point directly to the DHT as the source of the memory growth. In particular, the in-memory db used to store the list of providers for bitswap data.
  2. Resource contention on public ANs ([Access] Resource contention from local data lookups #5747)
    There isn't a direct link between this issue and the DHT, other than that 20-30% of CPU on these nodes was spent on DHT-related activity, and that the high rate of heap allocations results in more frequent garbage collection.

Disabling the DHT

The main intention of the DHT is to make it efficient to disseminate the routing table of which blocks of data are stored on which nodes. The basic design makes a few assumptions:

  1. Nodes are connected to a subset of peers on the network
  2. Nodes only host a subset of data
  3. Peers are joining and leaving the network regularly, so it's necessary to periodically remind the network which blocks of data you have.
  4. Data is equally relevant over time, so we should remind peers of all of the data we have.

On the Flow staked bitswap network, none of those assumptions are true.

  1. Participating nodes are connected to all other participating peers
  2. All nodes have all recent data
  3. Staked nodes will generally be available throughout an epoch
  4. Only the most recent data is generally needed by peers

Additionally, bitswap already has a built-in mechanism for discovering peers that have the data a client wants. This mechanism is tried first, before falling back to the DHT, so the DHT is rarely used in practice.

Given all of that, there seems to be limited value in running the DHT on the staked network, especially given the amount of overhead it adds.
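
As an illustration of what this could look like, a bitswap instance can be constructed without the DHT by passing a no-op content router. Using go-libp2p-routing-helpers' Null router here is an assumption for the sketch; flow-go may disable content routing differently:

```go
package example

import (
	"context"

	"github.com/ipfs/go-bitswap"
	bsnet "github.com/ipfs/go-bitswap/network"
	blockstore "github.com/ipfs/go-ipfs-blockstore"
	exchange "github.com/ipfs/go-ipfs-exchange-interface"
	routinghelpers "github.com/libp2p/go-libp2p-routing-helpers"
	"github.com/libp2p/go-libp2p/core/host"
)

// newBitswapWithoutDHT relies purely on bitswap's own want-list exchange with
// connected peers. Since staked nodes are connected to all participating peers
// and hold all recent data, skipping provider lookups should cost little.
func newBitswapWithoutDHT(ctx context.Context, h host.Host, bstore blockstore.Blockstore) exchange.Interface {
	// Null implements routing.ContentRouting but never announces or looks up
	// providers, so no provider records are stored, refreshed, or reprovided.
	net := bsnet.NewFromIpfsHost(h, routinghelpers.Null{})
	return bitswap.New(ctx, net, bstore)
}
```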

See these comments for an analysis of disabling the DHT: #5798 (comment), #5798 (comment), #5798 (comment), #5798 (comment).

Next steps

Disabling the DHT seems to be a viable option for nodes on the staked network. It is still needed on the public network for peer discovery, though possibly not for bitswap. More investigation is needed to understand if these issues will also appear there.

We could also try these options for reducing the memory utilization:

  • switch to an on-disk datastore (see the sketch below)
  • update the in-memory datastore (or create a new version) with options for TTL and entry limit

We could also explore options for limiting which blobs are reprovided, e.g. only reproviding blobs from the last N blocks.
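
As a sketch of the on-disk datastore option, the DHT's backing datastore can be swapped for a persistent one. The choice of go-ds-badger and the helper name here are assumptions for illustration:

```go
package example

import (
	"context"

	badger "github.com/ipfs/go-ds-badger"
	dht "github.com/libp2p/go-libp2p-kad-dht"
	"github.com/libp2p/go-libp2p/core/host"
)

// newDHTWithOnDiskProviders keeps the DHT's provider records in a Badger
// datastore on disk instead of the default in-memory map datastore.
func newDHTWithOnDiskProviders(ctx context.Context, h host.Host, path string) (*dht.IpfsDHT, error) {
	store, err := badger.NewDatastore(path, &badger.DefaultOptions)
	if err != nil {
		return nil, err
	}

	// dht.Datastore swaps the backing store used (among other things) by the
	// provider manager, so records no longer accumulate on the heap.
	return dht.New(ctx, h, dht.Datastore(store))
}
```

Limiting which blobs are reprovided would instead amount to feeding the reprovider only the keys for recent heights rather than walking the full datastore.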

guillaumemichel commented

> Profiles point directly to the DHT as the source of the memory growth. In particular, the in-memory db used to store the list of providers for bitswap data.

What is the order of growth of your network and of the content (number of provider records in the DB)?

peterargue (Contributor, Author) commented May 23, 2024

> Profiles point directly to the DHT as the source of the memory growth. In particular, the in-memory db used to store the list of providers for bitswap data.
>
> What is the order of growth of your network and of the content (number of provider records in the DB)?

@guillaumemichel this network only had 4 nodes participating in the DHT. We don't expose any metrics on the provider records, so I'm not sure what the total count is, but we are producing 3-4 new records per second, and each node held ~10M blobs in its datastore.
