
v2.2.1 compaction on same blocks over and over again #4089

Closed
baozuo opened this Issue Apr 16, 2018 · 4 comments

baozuo commented Apr 16, 2018

What did you do?

  1. Upgrade Prometheus from v2.1.0 to v2.2.1
  2. Remove the custom flag --storage.tsdb.min-block-duration=30m

What did you expect to see?
I expected the storage usage to remain stable, as it was in v2.1.0.

What did you see instead? Under which circumstances?
The storage usage keeps growing because Prometheus compacts the same blocks over and over again.

[screenshot: graph of storage usage growing over time]

Environment

  • System information:

The official Prometheus image running in Kubernetes.
Linux 4.4.102-k8s x86_64

  • Prometheus version:
prometheus, version 2.2.1 (branch: HEAD, revision: 9579abd6d5b7b6a3989d21955399628a00408189)
  build user:       root@runner-7010ab76-project-1551-concurrent-0
  build date:       20180413-17:10:44
  go version:       go1.10.1
  • Alertmanager version:
    N/A

  • Prometheus configuration file:
    N/A

  • Alertmanager configuration file:
    N/A

  • Logs:

level=info ts=2018-04-15T17:44:51.163624645Z caller=compact.go:393 component=tsdb msg="compact blocks" count=1 mint=1523656800000 maxt=1523664000000
level=info ts=2018-04-15T17:45:22.660882942Z caller=compact.go:393 component=tsdb msg="compact blocks" count=1 mint=1523656800000 maxt=1523664000000
level=info ts=2018-04-15T17:45:54.733097351Z caller=compact.go:393 component=tsdb msg="compact blocks" count=1 mint=1523656800000 maxt=1523664000000
level=info ts=2018-04-15T17:46:24.961197242Z caller=compact.go:393 component=tsdb msg="compact blocks" count=1 mint=1523656800000 maxt=1523664000000

The meta.json information of the latest blocks:

--
01CB0PSV2JADHS4K4ADJXKA4Y9/meta.json:	"minTime": 1523642400000,
01CB0PSV2JADHS4K4ADJXKA4Y9/meta.json-	"maxTime": 1523658600000,
--
01CB0PRVAT464AHBRQD40ACQGH/meta.json:	"minTime": 1523658600000,
01CB0PRVAT464AHBRQD40ACQGH/meta.json-	"maxTime": 1523660400000,
--
01CB0PS7CAZP34TQA8EDHQBC25/meta.json:	"minTime": 1523660400000,
01CB0PS7CAZP34TQA8EDHQBC25/meta.json-	"maxTime": 1523662200000,

The target compacted block (mint=1523656800000, maxt=1523664000000) spans the latest 3 blocks, and those 3 blocks are not cleaned up after the compaction completes.
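
For illustration, here is a minimal, self-contained Go sketch of the overlap described above. This is not Prometheus code; the assumption that compaction windows are aligned multiples of the default 2h block range is mine. It checks that the logged target range is exactly one aligned 2h window and that all three blocks overlap it:

  package main

  import "fmt"

  func main() {
  	const rangeMs = int64(2 * 60 * 60 * 1000) // assumed default 2h block range, in milliseconds

  	// mint from the repeated "compact blocks" log lines above.
  	mint := int64(1523656800000)
  	maxt := mint + rangeMs // 1523664000000, matching maxt in the logs
  	fmt.Println("window starts on a 2h boundary:", mint%rangeMs == 0) // prints true

  	// [minTime, maxTime) of the three newest blocks, from the meta.json files above.
  	blocks := [][2]int64{
  		{1523642400000, 1523658600000}, // 01CB0PSV2JADHS4K4ADJXKA4Y9: 4.5h block (9 x 30m)
  		{1523658600000, 1523660400000}, // 01CB0PRVAT464AHBRQD40ACQGH: 30m block
  		{1523660400000, 1523662200000}, // 01CB0PS7CAZP34TQA8EDHQBC25: 30m block
  	}
  	for _, b := range blocks {
  		overlaps := b[0] < maxt && b[1] > mint
  		fmt.Printf("block [%d, %d) overlaps the 2h window: %v\n", b[0], b[1], overlaps)
  	}
  }

All three overlap checks print true. Note that the first block (presumably produced by the old 30m-based compaction ladder, since 4.5h = 30m × 9) starts well before the 2h window, so a compaction limited to that window can never fully replace it; that would be consistent with the planner selecting the same range on every run.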

The issue disappears when we bring back the custom flag --storage.tsdb.min-block-duration=30m.

simonpasquier commented Apr 17, 2018

I seem to recall that changing the block duration for a Prometheus instance with existing tsdb data isn't supported. Ping @gouthamve @fabxc.

brian-brazil commented Jun 22, 2018

Yeah, that's not a setting you should be changing. Does this happen with the default settings?

simonpasquier commented Aug 7, 2018

Many TSDB bugs have been fixed between v2.2.1 and now. I didn't find any other report of a similar issue. I'm closing it for now. Feel free to reopen if you're still having the problem with Prometheus v2.3.2.

lock bot commented Mar 22, 2019

This thread has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

lock bot locked and limited conversation to collaborators Mar 22, 2019
