
refactor: snapshot and pruning functionality #140

Merged (41 commits) on Mar 24, 2022

Conversation

@p0mvn (Member) commented Mar 12, 2022

Description

Story 1

Since fast storage was implemented, we have had to encourage node operators to use a large pruning-keep-recent value; otherwise, it would be possible to prune a height that is currently being snapshotted and panic. Setting pruning-keep-every equal to snapshot-interval was not a recommended solution because heights retained by pruning-keep-every are never deleted.

As a node operator, I want to have the ability to select any combination of pruning and snapshot settings so that it is applicable to my specific use case.

Story 2

In addition, several node operators reported that their disk size significantly grows over time despite the fact that pruning is enabled. More rigorous testing is required to eliminate the possibility of pruning errors.

As a node operator, I want pruning to function correctly so that the disk size of my machine does not grow beyond what is expected.

Story 3

Currently, pruning-keep-every is used for the same purpose as snapshot-interval.

As a node operator, I want to have unique parameters so that I don't spend time understanding redundant complexity.

Story 4

The various combinations of snapshot and pruning are not well-understood and not rigorously tested.

As an engineer, I want to be confident that snapshot and pruning settings function correctly so that I can offer better support to node operators and debug with ease.

In This Pull Request

Pruning & Snapshot Configurations Update

  • pruning-keep-every is removed. Functionally, it is replaced by snapshot-interval. However, once the snapshot is complete, the snapshotted height is pruned away. There is no use case for pruning-keep-every other than supporting snapshot-interval.

Pruning Eventual Consistency

When snapshots are enabled, pruning has eventual-consistency semantics: some heights (the snapshot heights) are only pruned after the snapshot is complete.

Rigorous Testing and Bug Fixes

This PR adds many unit tests at various levels of abstraction, including the configuration, the pruning and snapshot components, and ABCI.

Notes

  • For every height X that is a multiple of pruning-interval, when height X is being committed, we never prune X itself but only heights less than X. In other words, we never prune the current height being committed. However, we use X (the current height) to determine whether it is time to prune.
    => So we never end up pruning a height during the same commit in which a snapshot of that height is being taken (a fuller sketch follows the example below).
    E.g.
rs.pruningManager.HandleHeight(X - 1) // update the list of heights eligible to be pruned at the next interval
if rs.pruningManager.ShouldPruneAtHeight(X) { // check whether we should prune while committing height X
      ...
}
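
Building on the two calls above, a slightly fuller sketch of the commit-time flow; the heights accessor and the pruneStores call are illustrative assumptions, not the exact PR code:

// Commit of height X in the rootmulti store (illustrative sketch):
rs.pruningManager.HandleHeight(X - 1) // heights <= X-1 become candidates for pruning

if rs.pruningManager.ShouldPruneAtHeight(X) { // X is a multiple of pruning-interval
    // Only candidate heights strictly below X are pruned here; X itself is never
    // pruned during its own commit, so a snapshot of X can start safely.
    heights := rs.pruningManager.GetPruningHeights() // hypothetical accessor
    rs.pruneStores(heights)                          // illustrative call
}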

@github-actions github-actions bot added the C:CLI label Mar 12, 2022
@p0mvn p0mvn force-pushed the roman/snap-prune branch 2 times, most recently from 669c0a4 to 439562f on March 13, 2022 at 20:50
@p0mvn p0mvn force-pushed the roman/snap-prune branch 2 times, most recently from a199c6f to bddfe10 on March 16, 2022 at 22:53
@github-actions github-actions bot removed the C:CLI label Mar 16, 2022
@p0mvn (Member, Author) commented Mar 17, 2022

Tested with prune = "everything", snapshot-interval = 100, snapshot-keep-recent = 2

It functions without issues. There is one caveat: sometimes the snapshot cannot be taken because snapshot-interval = 100 is too small. If we start taking a snapshot at height 100, we may reach height 200 before the snapshot at height 100 is done. Height 200 is then skipped with the log message:

ERR failed to create state snapshot err="a snapshot operation is in progress: conflict" height=200

It is then pruned at the next pruning-interval.

This should not be a problem; it is just something to be aware of.

Example from live node:

  • height 3540400
    • snapshot successfully completes
root@roman-test:~# cat node.log | grep 3540400
4:05AM INF executed block height=3540400 module=state num_invalid_txs=0 num_valid_txs=0
4:05AM INF commit start height=3540400
4:05AM INF commit end height=3540400
4:05AM INF start pruning at height 3540400
4:05AM INF prune end, height - 3540400
4:05AM INF flushing metadata height=3540400
4:05AM INF flushing metadata finished height=3540400
4:05AM INF committed state app_hash=41B4E6D1418DBCB83566B443D87C2E5904A285D5E61C49020431EC774AFC2706 height=3540400 module=state num_txs=0
4:05AM INF creating state snapshot height=3540400
4:05AM INF indexed block height=3540400 module=txindex
4:14AM INF HandleHeightSnapshot height=3540400
4:14AM INF completed state snapshot format=1 height=3540400
4:15AM INF pruning the following heights: [3540790 3540791 3540792 3540793 3540794 3540795 3540796 3540400 3540797 3540798 3540799]
  • height 3540500
    • note "a snapshot operation is in progress: conflict"
    • this happened because the snapshot at height 3540400 had not yet completed
root@roman-test:~# cat node.log | grep 3540500
4:05AM INF executed block height=3540500 module=state num_invalid_txs=0 num_valid_txs=0
4:05AM INF commit start height=3540500
4:05AM INF commit end height=3540500
4:05AM INF start pruning at height 3540500
4:05AM INF prune end, height - 3540500
4:05AM INF flushing metadata height=3540500
4:05AM INF flushing metadata finished height=3540500
4:05AM INF committed state app_hash=10BE9A4ACE038571BE4FBE058C3FD747C40032AA88FF7FA94B2E509E13345AF4 height=3540500 module=state num_txs=0
4:05AM INF creating state snapshot height=3540500
4:05AM INF HandleHeightSnapshot height=3540500
4:05AM ERR failed to create state snapshot err="a snapshot operation is in progress: conflict" height=3540500
4:05AM INF indexed block height=3540500 module=txindex
4:05AM INF pruning the following heights: [3540501 3540500 3540502 3540503 3540504 3540505 3540506 3540507 3540508 3540509]
  • height 3540900
    • the snapshot is taken because it is attempted after the previous snapshot (at height 3540400) finished
    • all snapshot heights between 3540400 and 3540900 were skipped
root@roman-test:~# cat node.log | grep 3540900
4:22AM INF Timed out dur=4988.125269 height=3540900 module=consensus round=0 step=1
4:22AM INF commit is for a block we do not know about; set ProposalBlock=nil commit=9CDC70F87F747A1132DE693479E3E30B182D4E54DD81820032ECD47CEE77C3E6 commit_round=0 height=3540900 module=consensus proposal=
4:22AM INF received complete proposal block hash=9CDC70F87F747A1132DE693479E3E30B182D4E54DD81820032ECD47CEE77C3E6 height=3540900 module=consensus
4:22AM INF finalizing commit of block hash=9CDC70F87F747A1132DE693479E3E30B182D4E54DD81820032ECD47CEE77C3E6 height=3540900 module=consensus num_txs=0 root=FC7AE8E6C42B6EB0F659543E3BAD3DA51327CDE81051323C0907BA73DD962B9D
4:22AM INF executed block height=3540900 module=state num_invalid_txs=0 num_valid_txs=0
4:22AM INF commit start height=3540900
4:22AM INF commit end height=3540900
4:22AM INF start pruning at height 3540900
4:22AM INF prune end, height - 3540900
4:22AM INF flushing metadata height=3540900
4:22AM INF flushing metadata finished height=3540900
4:22AM INF committed state app_hash=0635DD2E11448F8D8B471E60C947190F51F1EE6BB1B9DBC86F69C0D97374221E height=3540900 module=state num_txs=0
4:22AM INF creating state snapshot height=3540900
4:22AM INF indexed block height=3540900 module=txindex
4:32AM INF HandleHeightSnapshot height=3540900
4:32AM INF completed state snapshot format=1 height=3540900
4:32AM INF pruning the following heights: [3541001 3541000 3541002 3541003 3540900 3541004 3541005 3541006 3541007 3541008 3541009]

Conclusion:

  • Note that all snapshot heights that are multiples of the snapshot interval were eventually pruned
  • If snapshot-interval is so small that the previous snapshot is still in progress when the next snapshot height is reached, the new snapshot is skipped (see the sketch below)
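
For context on the conflict error above, here is a minimal sketch of how a snapshot request can be rejected while another snapshot is still running. This is a simplified illustration with a hypothetical busy flag, not the actual snapshots.Manager internals:

import (
    "errors"
    "sync"
)

var errConflict = errors.New("a snapshot operation is in progress: conflict")

type snapshotGate struct {
    mu   sync.Mutex
    busy bool
}

// begin fails if another snapshot is still in progress; the caller then skips
// this snapshot height, and the height is simply pruned at the next interval.
func (g *snapshotGate) begin() error {
    g.mu.Lock()
    defer g.mu.Unlock()
    if g.busy {
        return errConflict
    }
    g.busy = true
    return nil
}

// end marks the snapshot as finished so the next snapshot height can proceed.
func (g *snapshotGate) end() {
    g.mu.Lock()
    defer g.mu.Unlock()
    g.busy = false
}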

@p0mvn (Member, Author) commented Mar 17, 2022

Checking the data folder size at height 3540717:

root@roman-test:~/.osmosisd# du -shc data/*
7.1G    data/application.db
421M    data/blockstore.db
1.0G    data/cs.wal
40K     data/evidence.db
4.0K    data/priv_validator_state.json
803M    data/snapshots
1.5G    data/state.db
1.1G    data/tx_index.db
4.0K    data/upgrade-info.json
12G     total

@p0mvn (Member, Author) commented Mar 17, 2022

Checking the data folder size at height 3547769:

root@roman-test:~/.osmosisd# du -shc data/*
7.1G    data/application.db
435M    data/blockstore.db
1.0G    data/cs.wal
40K     data/evidence.db
4.0K    data/priv_validator_state.json
841M    data/snapshots
1.5G    data/state.db
1.1G    data/tx_index.db
4.0K    data/upgrade-info.json
12G     total

Review thread on a block-retention test case (partial diff excerpt):

maxAgeBlocks: 0,
commitHeight: 499000,
expected:     490000,
expected:     489000,
@p0mvn (Member, Author) commented Mar 17, 2022

This has changed because we now use snapshot-interval in place of pruning-keep-every. Although I added sdk.NewSnapshotOptions(10000, 1), the snapshot interval is used as another strategy for calculating the block retention height, and it takes precedence over the old strategy used by this unit test.
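
For concreteness, my reading of the numbers in this test case (the formulas below are paraphrased, not quoted from the code): the old pruning-keep-every = 10000 strategy snapped the retention height down to the nearest multiple of 10000, while the snapshot strategy from sdk.NewSnapshotOptions(10000, 1) keeps snapshot-interval * snapshot-keep-recent blocks behind the commit height and takes precedence:

commitHeight := int64(499000)

// Old strategy: pruning-keep-every = 10000
keepEvery := int64(10000)
oldRetention := commitHeight - commitHeight%keepEvery // 490000

// New strategy: snapshot-interval = 10000, snapshot-keep-recent = 1
snapshotInterval, snapshotKeepRecent := int64(10000), int64(1)
newRetention := commitHeight - snapshotInterval*snapshotKeepRecent // 489000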

@github-actions github-actions bot added the C:CLI label Mar 17, 2022
@p0mvn p0mvn changed the title WIP: snapshot and pruning refactor refactor: snapshot and pruning functionality Mar 17, 2022
@p0mvn p0mvn requested a review from ValarDragon March 17, 2022 19:28
@ValarDragon (Member) left a comment

Awesome job! I really like this improved abstraction between pruning and snapshot management you've made.

So far I've mostly gone through the boilerplate and some of the logic. Going to keep reviewing later today

Resolved (outdated) review threads: store/types/store.go, client/config/config_test.go, pruning/types/options.go
@tac0turtle commented:

this is awesome!! I'd love to get this upstreamed into the SDK

@p0mvn (Member, Author) commented Mar 21, 2022

> this is awesome!! I'd love to get this upstreamed into the SDK

Thanks. Will push this upstream once approved.

@alexanderbez (Collaborator) commented:

Just want to reference this here: cosmos#11152

I removed pruning-keep-every from the core SDK there, as it was more or less pointless (as you've pointed out as well).

Resolved (outdated) review threads on snapshots/manager.go (2)
@github-actions github-actions bot removed the C:CLI label Mar 21, 2022
Comment on lines 304 to 310
rms, ok := app.cms.(*rootmulti.Store)
if !ok {
    return errors.New("state sync snapshots require a rootmulti store")
}
if err := rms.GetPruning().Validate(); err != nil {
    return err
}
A reviewer (Member) commented:

shouldn't this still be inside of an app.snapshotManager != nil?

@p0mvn (Member, Author) replied:

No because pruning options are set independently from snapshot settings. If we do not explicitly set pruning options, pruning-nothing is set by default.

Pruning can and should work independently from snapshots and vice versa.

However, if snapshots are configured, the snapshot-interval is supplied to the pruning.Manager. As a result, the pruning.Manager knows which heights to skip until after the snapshot is taken.

Let me know if that makes sense

The reviewer (Member) replied:

I guess I'm a bit thrown off that app.cms has to be a root multi store here. Perhaps the logic should be:

Suggested change:

// Original:
rms, ok := app.cms.(*rootmulti.Store)
if !ok {
    return errors.New("state sync snapshots require a rootmulti store")
}
if err := rms.GetPruning().Validate(); err != nil {
    return err
}

// Suggested:
rms, ok := app.cms.(*rootmulti.Store)
if !ok && app.snapshotInterval > 0 {
    return errors.New("state sync snapshots require a rootmulti store")
} else if ok {
    if err := rms.GetPruning().Validate(); err != nil {
        return err
    }
}

@p0mvn (Member, Author) replied Mar 22, 2022:

That's correct, there should have been a check similar to the suggested:

if !ok && app.snapshotManager != nil {
    return errors.New("state sync snapshots require a rootmulti store")
}
if err := rms.GetPruning().Validate(); err != nil {
    return err
}

Updated. Thanks for catching that

type Manager struct {
    logger           log.Logger
    opts             *types.PruningOptions
    snapshotInterval uint64
@ValarDragon (Member) commented Mar 21, 2022:

Do you think it's better for pruning and snapshots to be managed separately, or should one have a reference to the other?

I'm a bit confused about the interaction interface between the two managers.

@p0mvn (Member, Author) replied:

Pruning can work independently from snapshots if those are not enabled. In that case, snapshotInterval is 0 and we do not skip any heights.

However, if snapshot settings are set, then the snapshot manager is also set and the snapshotInterval within the pruningManager is non-zero (it is set here).

In such a case, snapshotInterval functions as the old pruning-keep-every so that there is no race between pruning and snapshots where we get the "active reader" error in iavl. There is one important difference - when the snapshot is complete, snapshot.Manager "tells" pruning.Manager to prune that height away with the following call:
https://github.com/osmosis-labs/cosmos-sdk/blob/8d4d6c7d718804d65330ff24a285392a74434131/snapshots/manager.go#L139

As a result, the snapshotInterval height is eventually pruned.

I agree that the pruning and snapshot managers are coupled to each other. However, pruning and snapshots are functionally intertwined, so there is no way to build the two components without these interactions. From a configuration standpoint, the two components are independent: you can set pruning settings without snapshots and snapshot settings without pruning.

Please let me know if I can explain anything further or if you have any suggestions on what to change.
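
A minimal sketch of that hand-off; apart from HandleHeightSnapshot, which is linked above, the function and field names are assumptions for illustration:

// In the snapshot manager, once the snapshot for `height` has been fully written:
func (m *Manager) finalizeSnapshot(height int64) {
    // ... store and register the snapshot artifacts ...

    // Delegate clean-up to the pruning manager: it records the height and prunes it
    // on its next regular pruning pass, which yields the eventual-consistency behavior.
    m.pruningManager.HandleHeightSnapshot(height)
}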

@p0mvn (Member, Author) added Mar 21, 2022:

To help with understanding this complexity, I can create UML and sequence diagrams if you think that would be helpful

The reviewer (Member) replied:

Oh the pruning manager is being set here: https://github.com/osmosis-labs/cosmos-sdk/pull/140/files#diff-84a22ba89cc7868bbafc59e07c35ba0bffb05115962cb885bf39eef1a1a8a66eR70

So: the pruning manager is always needed, the snapshot manager is optional, and if the snapshot manager is set, it must update a certain param in the pruning manager? It does so on initialization of the snapshot manager.

If this parameter is set, the pruning manager will not delete any snapshot height, and it is up to the snapshot manager to prune it?

I don't think we need a UML diagram; it would be convenient if we could add a note to this effect in the README for the pruning and snapshot folders, though (unless there's a better place for it to live), and on the SetSnapshotInterval function: https://github.com/osmosis-labs/cosmos-sdk/pull/140/files#diff-309b9123bdfc80cc367cc95308580b5489e26406b9f5f0920ac55c5b198a5a33R112-R115

The reviewer (Member) added:

Also, if this is the case, the interaction now 'clicks' for me!

@p0mvn (Member, Author) replied:

> Oh the pruning manager is being set here: https://github.com/osmosis-labs/cosmos-sdk/pull/140/files#diff-84a22ba89cc7868bbafc59e07c35ba0bffb05115962cb885bf39eef1a1a8a66eR70

Yes, that's how the pruning manager gets its snapshotInterval member field updated if the snapshot options/manager are set

> So: Pruning manager always needed, Snapshot manager optional, if snapshot manager set, must update a certain param in the pruning manager? It does so on initialization of snapshot manager.

That's correct

> If this parameter is set, the pruning manager will not delete any snapshot height, and it is up to the snapshot manager to prune it?

That is mostly correct, with one subtle difference: the snapshot manager "tells" the pruning manager to prune the snapshot height once it is done by calling this method. That is, the snapshot manager does not do the pruning itself; it delegates that to the pruning manager.

> would be convenient if we can add a note to this effect in the README for the pruning and snapshot folder though.

Will work on that

@p0mvn (Member, Author) added:

Updated the READMEs

@alexanderbez (Collaborator) commented:

@p0mvn could you summarize the main differences between this and the PR in the SDK? Is this PR mainly covering snapshot manager changes?

@p0mvn (Member, Author) commented Mar 21, 2022

> @p0mvn could you summarize the main differences between this and the PR in the SDK? Is this PR mainly covering snapshot manager changes?

Definitely @alexanderbez. Thanks for linking the SDK issue. I should have looked upstream first because the PR you linked is a large subset of the work done here.

On top of the work upstream, this PR lets snapshot-interval function as pruning-keep-every with one important difference - once a snapshot at height X is complete, the height X is eventually pruned away.

The reason is that Osmosis nodes were attempting to prune a height that was currently being snapshotted and, as a result, would fail with an error. By not pruning the "snapshot heights" until the snapshot completes, that issue never occurs.

This problem became particularly evident after the fast-storage IAVL changes. However, I'd guess that even without those changes it is still possible if you have aggressive pruning with frequent snapshots enabled.

As a temporary mitigation, we were forced to require validators to use an unnecessarily large pruning-keep-recent. With this change, that requirement is no longer needed; snapshots work even with prune-everything.

As a result of the important difference I mentioned above, we are not polluting the disk by keeping the heights forever, as was the case with the old pruning-keep-every. On the contrary, these heights are removed once a snapshot is complete.

Other changes that stem from the main one:

  • created a pruning.Manager to concentrate the pruning logic in that module
  • refactored snapshot.Manager
  • added unit tests for various edge cases to make sure everything functions as we would expect (a sketch of that test style is below)
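
As an example of the style of test added, here is a table-driven sketch for ShouldPruneAtHeight; the constructor, options helper, and import paths are assumptions rather than the PR's exact API:

import (
    "testing"

    "github.com/cosmos/cosmos-sdk/pruning"              // assumed import path
    types "github.com/cosmos/cosmos-sdk/pruning/types"  // assumed import path
    "github.com/stretchr/testify/require"
    "github.com/tendermint/tendermint/libs/log"
)

func TestShouldPruneAtHeight(t *testing.T) {
    // Hypothetical setup: keep-recent = 100, pruning-interval = 10.
    m := pruning.NewManager(log.NewNopLogger())
    m.SetOptions(types.NewPruningOptions(100, 10))

    testcases := map[string]struct {
        height   int64
        expected bool
    }{
        "multiple of pruning-interval":       {height: 1000, expected: true},
        "not a multiple of pruning-interval": {height: 1001, expected: false},
    }

    for name, tc := range testcases {
        tc := tc
        t.Run(name, func(t *testing.T) {
            require.Equal(t, tc.expected, m.ShouldPruneAtHeight(tc.height))
        })
    }
}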

Comment on lines +102 to +110
func (m *Manager) HandleHeightSnapshot(height int64) {
    if m.opts.GetPruningStrategy() == types.PruningNothing {
        return
    }
    m.mx.Lock()
    defer m.mx.Unlock()
    m.logger.Debug("HandleHeightSnapshot", "height", height) // TODO: change log level to Debug
    m.pruneSnapshotHeights.PushBack(height)
}
@ValarDragon (Member) commented Mar 22, 2022:

So if the node halts, is it possible that the snapshot height never gets pruned? Or is this list saved to disk anywhere, and loaded from disk?

The reviewer (Member) added:

Seems like we should be calling flushPruningSnapshotHeights here?

@p0mvn (Member, Author) replied:

These are flushed here:

rs.pruningManager.FlushPruningHeights(batch)

And loaded here:

if err := rs.pruningManager.LoadPruningHeights(rs.db); err != nil {

I followed the same logic as how we flush/load the commit info and the regular pruning heights.
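
A minimal sketch of what such a flush/load pair might look like against the DB; the key name and the plain big-endian encoding are assumptions for illustration, while the two calls above are the actual entry points in the PR:

import (
    "bytes"
    "encoding/binary"

    dbm "github.com/tendermint/tm-db"
)

// Hypothetical metadata key for the snapshot heights that still need to be pruned.
const pruneSnapshotHeightsKey = "s/pruneSnapshotHeights"

func flushPruningSnapshotHeights(batch dbm.Batch, heights []int64) error {
    buf := new(bytes.Buffer)
    for _, h := range heights {
        if err := binary.Write(buf, binary.BigEndian, h); err != nil {
            return err
        }
    }
    return batch.Set([]byte(pruneSnapshotHeightsKey), buf.Bytes())
}

func loadPruningSnapshotHeights(db dbm.DB) ([]int64, error) {
    bz, err := db.Get([]byte(pruneSnapshotHeightsKey))
    if err != nil || len(bz) == 0 {
        return nil, err
    }
    heights := make([]int64, 0, len(bz)/8)
    r := bytes.NewReader(bz)
    for r.Len() > 0 {
        var h int64
        if err := binary.Read(r, binary.BigEndian, &h); err != nil {
            return nil, err
        }
        heights = append(heights, h)
    }
    return heights, nil
}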

The reviewer (Member) replied:

I don't think I get it; why don't we push this metadata to disk ASAP?

The reviewer (Member) added:

Happy to merge the PR though, and talk about this in a follow-up issue

@p0mvn (Member, Author) replied:

Issue created: #149

@p0mvn (Member, Author) commented Mar 22, 2022

Needs osmosis-labs/iavl#36 to be merged to fix the data race test

@ValarDragon (Member) left a comment

LGTM now! Thanks for adding those docs

I think we should make a follow-up issue about flushing the snapshotPruneHeights immediately once one is added, but I agree that what's done now should preserve the existing behavior.

@p0mvn (Member, Author) commented Mar 22, 2022

Starting burn tests with various configurations. Will run these nodes until tomorrow.

Tests:

  1. prune nothing and snapshot at 500 intervals (mainnet)
  2. prune everything and snapshot at 500 intervals (mainnet)
  3. prune default and snapshot at 500 intervals (mainnet)
  4. prune custom 100-10 and snapshot at 500 intervals (testnet)
  5. prune custom 100-10 and no snapshots (mainnet)

@p0mvn (Member, Author) commented Mar 23, 2022

Checking data folder size on disk with various configurations.

Approx height: 3702124

  • Node 1 - prune nothing and snapshot at 500 intervals (mainnet)
root@roman-multinode-pruning:~/.osmosisd/node1# du -shc data/*
12G     data/application.db
25G     data/blockstore.db
1000M   data/cs.wal
36K     data/evidence.db
4.0K    data/priv_validator_state.json
728M    data/snapshots
122G    data/state.db
11G     data/tx_index.db
4.0K    data/upgrade-info.json
170G    total
  • Node 2 - prune everything and snapshot at 500 intervals (mainnet)
root@roman-multinode-pruning:~/.osmosisd/node2# du -shc data/*
6.7G    data/application.db
25G     data/blockstore.db
1012M   data/cs.wal
36K     data/evidence.db
4.0K    data/priv_validator_state.json
718M    data/snapshots
122G    data/state.db
11G     data/tx_index.db
4.0K    data/upgrade-info.json
166G    total
  • Node 3 - prune default and snapshot at 500 intervals (mainnet)
root@roman-multinode-pruning:~/.osmosisd/node3# du -shc data/*
12G     data/application.db
25G     data/blockstore.db
1001M   data/cs.wal
36K     data/evidence.db
4.0K    data/priv_validator_state.json
718M    data/snapshots
122G    data/state.db
11G     data/tx_index.db
4.0K    data/upgrade-info.json
171G    total
  • Node 4 - prune custom 100-10 and snapshot at 500 intervals (testnet)
root@roman-test:~/.osmosisd# du -shc data/*
7.1G    data/application.db
611M    data/blockstore.db
1022M   data/cs.wal
48K     data/evidence.db
4.0K    data/priv_validator_state.json
930M    data/snapshots
1.8G    data/state.db
1.5G    data/tx_index.db
4.0K    data/upgrade-info.json
13G     total

@p0mvn (Member, Author) commented Mar 23, 2022

Approximate height: 3710366

  • Node 1 - prune nothing and snapshot at 500 intervals (mainnet)
16G     data/application.db
26G     data/blockstore.db
1018M   data/cs.wal
36K     data/evidence.db
4.0K    data/priv_validator_state.json
541M    data/snapshots
125G    data/state.db
19G     data/tx_index.db
4.0K    data/upgrade-info.json
186G    total
  • Node 2 - prune everything and snapshot at 500 intervals (mainnet)
root@roman-multinode-pruning:~/.osmosisd/node2# du -shc data/*
6.9G    data/application.db
26G     data/blockstore.db
1.0G    data/cs.wal
36K     data/evidence.db
4.0K    data/priv_validator_state.json
541M    data/snapshots
125G    data/state.db
19G     data/tx_index.db
4.0K    data/upgrade-info.json
177G    total
  • Node 3 - prune default and snapshot at 500 intervals (mainnet)
root@roman-multinode-pruning:~/.osmosisd/node3# du -shc data/*
17G     data/application.db
26G     data/blockstore.db
1.0G    data/cs.wal
36K     data/evidence.db
4.0K    data/priv_validator_state.json
1.1G    data/snapshots
125G    data/state.db
19G     data/tx_index.db
4.0K    data/upgrade-info.json
187G    total
  • Node 4 - prune custom 100-10 and snapshot at 500 intervals (testnet)
root@roman-test:~/.osmosisd# du -shc data/*
7.2G    data/application.db
627M    data/blockstore.db
1018M   data/cs.wal
48K     data/evidence.db
4.0K    data/priv_validator_state.json
806M    data/snapshots
1.9G    data/state.db
1.5G    data/tx_index.db
4.0K    data/upgrade-info.json
13G     total

@p0mvn (Member, Author) commented Mar 23, 2022

This is the diff between height 3702124 (left) and 3710366 (right)

[image: side-by-side disk-usage comparison of heights 3702124 and 3710366]

@ValarDragon @alexanderbez could you help me understand if this is acceptable, please?

For example, I see that with prune-everything on mainnet, application.db has grown by 0.2G. Prune-everything keeps only the 10 latest heights as keep-recent.

Is this a problem? If yes, do you have any suggestions on how to go about it?

@p0mvn (Member, Author) commented Mar 23, 2022

Also, the prune-default db has grown by 1G more than prune-nothing. I think this might be attributed to my taking these disk measurements at slightly different times, so this might not be a problem.

They should be roughly the same because prune-default has 100,000 heights as keep-recent (roughly 1 week). Since the difference between these disk snapshots is only one day, prune-default is, essentially, prune-nothing.

@p0mvn (Member, Author) commented Mar 24, 2022

The prune-everything application.db has grown a bit more. It is now 7.2G, up from 6.5G yesterday.

On further thought, this might still be fine due to how LevelDB works: we keep adding more levels as data is compacted.

I checked manually by querying the prune-everything node; the versions get deleted as we would expect.
