From aa3513fc89849eb421468bf7b97adef0a9b33706 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Nicol=C3=A1s=20Pazos?= <32206519+npazosmendez@users.noreply.github.com>
Date: Fri, 2 Feb 2024 15:38:50 -0300
Subject: [PATCH] remote write 2.0: sync with `main` branch (#13510)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

* consoles: exclude iowait and steal from CPU Utilisation

'iowait' and 'steal' indicate specific idle/wait states, which shouldn't be counted into CPU Utilisation. Also see https://github.com/prometheus-operator/kube-prometheus/pull/796 and https://github.com/kubernetes-monitoring/kubernetes-mixin/pull/667.

Per the iostat man page:
%idle   Show the percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request.
%iowait Show the percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request.
%steal  Show the percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor.

Signed-off-by: Julian Wiedmann

* tsdb: shrink txRing with smaller integers

4 billion active transactions ought to be enough for anyone.

Signed-off-by: Bryan Boreham

* tsdb: create isolation transaction slice on demand

When Prometheus restarts it creates every series read in from the WAL, but many of those series will be finished, and never receive any more samples. By deferring allocation of the txRing slice to when it is first needed, we save 32 bytes per stale series.

Signed-off-by: Bryan Boreham

* add cluster variable to Overview dashboard

Signed-off-by: Erik Sommer

* promql: simplify Native Histogram arithmetic

Signed-off-by: Linas Medziunas

* Cut 2.49.0-rc.0 (#13270)

* Cut 2.49.0-rc.0

Signed-off-by: bwplotka

* Removed the duplicate.
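The on-demand allocation described in the isolation-transaction commit above can be sketched as follows. This is a minimal illustration, not the actual `tsdb` code: `memSeries` and `openTx` are hypothetical stand-ins for the real series struct and its transaction bookkeeping.

```go
package main

import "fmt"

// memSeries is a trimmed-down, hypothetical stand-in for the TSDB series
// struct: txRing starts out nil and is only allocated once the series
// actually participates in an isolation transaction, so stale series
// restored from the WAL never pay for the slice header.
type memSeries struct {
	txRing []uint32 // nil until first use
}

// openTx records a transaction ID, allocating the ring on first use.
func (s *memSeries) openTx(txID uint32) {
	if s.txRing == nil {
		s.txRing = make([]uint32, 0, 4) // allocate on demand
	}
	s.txRing = append(s.txRing, txID)
}

func main() {
	s := &memSeries{}
	fmt.Println(s.txRing == nil) // true: no allocation for an idle series
	s.openTx(1)
	fmt.Println(len(s.txRing)) // 1
}
```

A nil slice costs nothing beyond its (zeroed) header, which is what makes this saving possible per restored-but-idle series.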
Signed-off-by: bwplotka

---------

Signed-off-by: bwplotka

* Add unit protobuf parser

Signed-off-by: Arianna Vespri

* Go on adding protobuf parsing for unit

Signed-off-by: Arianna Vespri

* ui: create a reproduction for https://github.com/prometheus/prometheus/issues/13292

Signed-off-by: machine424

* Get conditional right

Signed-off-by: Arianna Vespri

* Get VM Scale Set NIC (#13283)

Calling `*armnetwork.InterfacesClient.Get()` doesn't work for Scale Set VM NIC, because these use a different Resource ID format. Use `*armnetwork.InterfacesClient.GetVirtualMachineScaleSetNetworkInterface()` instead. This needs both the scale set name and the instance ID, so add an `InstanceID` field to the `virtualMachine` struct. `InstanceID` is empty for a VM that isn't a ScaleSetVM.

Signed-off-by: Daniel Nicholls

* Cut v2.49.0-rc.1

Signed-off-by: bwplotka

* Delete debugging lines, amend error message for unit

Signed-off-by: Arianna Vespri

* Correct order in error message

Signed-off-by: Arianna Vespri

* Consider storage.ErrTooOldSample as non-retryable

Signed-off-by: Daniel Kerbel

* scrape_test.go: Increase scrape interval in TestScrapeLoopCache to reduce potential flakiness

Signed-off-by: machine424

* Avoid creating string for suffix, consider counters without _total suffix

Signed-off-by: Arianna Vespri

* build(deps): bump github.com/prometheus/client_golang

Bumps [github.com/prometheus/client_golang](https://github.com/prometheus/client_golang) from 1.17.0 to 1.18.0.
- [Release notes](https://github.com/prometheus/client_golang/releases)
- [Changelog](https://github.com/prometheus/client_golang/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/client_golang/compare/v1.17.0...v1.18.0)

---
updated-dependencies:
- dependency-name: github.com/prometheus/client_golang
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot]

* build(deps): bump actions/setup-node from 3.8.1 to 4.0.1

Bumps [actions/setup-node](https://github.com/actions/setup-node) from 3.8.1 to 4.0.1.
- [Release notes](https://github.com/actions/setup-node/releases)
- [Commits](https://github.com/actions/setup-node/compare/5e21ff4d9bc1a8cf6de233a3057d20ec6b3fb69d...b39b52d1213e96004bfcb1c61a8a6fa8ab84f3e8)

---
updated-dependencies:
- dependency-name: actions/setup-node
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot]

* scripts: sort file list in embed directive

Otherwise the resulting string depends on find, which afaict depends on the underlying filesystem. A stable file list makes it easier to detect UI changes in downstreams that need to track UI assets.

Signed-off-by: Jan Fajerski

* Fix DataTableProps['data'] for resultType string

Signed-off-by: Kevin Mingtarja

* Fix handling of scalar and string in isHeatmapData

Signed-off-by: Kevin Mingtarja

* build(deps): bump github.com/influxdata/influxdb

Bumps [github.com/influxdata/influxdb](https://github.com/influxdata/influxdb) from 1.11.2 to 1.11.4.
- [Release notes](https://github.com/influxdata/influxdb/releases)
- [Commits](https://github.com/influxdata/influxdb/compare/v1.11.2...v1.11.4)

---
updated-dependencies:
- dependency-name: github.com/influxdata/influxdb
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot]

* build(deps): bump github.com/prometheus/prometheus

Bumps [github.com/prometheus/prometheus](https://github.com/prometheus/prometheus) from 0.48.0 to 0.48.1.
- [Release notes](https://github.com/prometheus/prometheus/releases)
- [Changelog](https://github.com/prometheus/prometheus/blob/main/CHANGELOG.md)
- [Commits](https://github.com/prometheus/prometheus/compare/v0.48.0...v0.48.1)

---
updated-dependencies:
- dependency-name: github.com/prometheus/prometheus
  dependency-type: direct:production
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot]

* Bump client_golang to v1.18.0 (#13373)

Signed-off-by: Paschalis Tsilias

* Drop old inmemory samples (#13002)

* Drop old inmemory samples

Co-authored-by: Paschalis Tsilias
Signed-off-by: Paschalis Tsilias
Signed-off-by: Marc Tuduri

* Avoid copying timeseries when the feature is disabled

Signed-off-by: Paschalis Tsilias
Signed-off-by: Marc Tuduri

* Run gofmt

Signed-off-by: Paschalis Tsilias
Signed-off-by: Marc Tuduri

* Clarify docs

Signed-off-by: Marc Tuduri

* Add more logging info

Signed-off-by: Marc Tuduri

* Remove loggers

Signed-off-by: Marc Tuduri

* optimize function and add tests

Signed-off-by: Marc Tuduri

* Simplify filter

Signed-off-by: Marc Tuduri

* rename var

Signed-off-by: Marc Tuduri

* Update help info from metrics

Signed-off-by: Marc Tuduri

* use metrics to keep track of drop elements during buildWriteRequest

Signed-off-by: Marc Tuduri

* rename var in tests

Signed-off-by: Marc Tuduri

* pass time.Now as parameter

Signed-off-by: Marc Tuduri

* Change buildwriterequest during retries

Signed-off-by: Marc Tuduri

* Revert "Remove loggers"

This reverts commit 54f91dfcae20488944162335ab4ad8be459df1ab.
Signed-off-by: Marc Tuduri

* use log level debug for loggers

Signed-off-by: Marc Tuduri

* Fix linter

Signed-off-by: Paschalis Tsilias

* Remove noisy debug-level logs; add 'reason' label to drop metrics

Signed-off-by: Paschalis Tsilias

* Remove accidentally committed files

Signed-off-by: Paschalis Tsilias

* Propagate logger to buildWriteRequest to log dropped data

Signed-off-by: Paschalis Tsilias

* Fix docs comment

Signed-off-by: Paschalis Tsilias

* Make drop reason more specific

Signed-off-by: Paschalis Tsilias

* Remove unnecessary pass of logger

Signed-off-by: Paschalis Tsilias

* Use snake_case for reason label

Signed-off-by: Paschalis Tsilias

* Fix dropped samples metric

Signed-off-by: Paschalis Tsilias

---------

Signed-off-by: Paschalis Tsilias
Signed-off-by: Marc Tuduri
Signed-off-by: Paschalis Tsilias
Co-authored-by: Paschalis Tsilias
Co-authored-by: Paschalis Tsilias

* fix(discovery): allow requireUpdate util to timeout in discovery/file/file_test.go

The loop ran indefinitely if the condition wasn't met: each iteration created a new timer channel, which was always outpaced by the other timer channel with the smaller duration.

Minor detail: there was also a memory leak, since the resources of the ~10 previous timers were constantly kept. With the fix, we may keep the resources of one timer around for defaultWait, but this isn't worth the changes to make it right.

Signed-off-by: machine424

* Merge pull request #13371 from kevinmingtarja/fix-isHeatmapData

ui: fix handling of scalar and string in isHeatmapData

* tsdb/{index,compact}: allow using custom postings encoding format (#13242)

* tsdb/{index,compact}: allow using custom postings encoding format

We would like to experiment with a different postings encoding format in Thanos, so in this change I am proposing adding another argument to `NewWriter` which would allow users to change the format if needed. Also, wire the leveled compactor so that it would be possible to change the format there too.
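The pluggable-encoder idea described above (later refined in this same series into "turn the postings encoder type into a function" and "add NewWriterWithEncoder()") can be sketched like this. Names and signatures here are illustrative stand-ins, not the actual `tsdb/index` API.

```go
package main

import "fmt"

// PostingsEncoder, as a plain function type, lets a downstream project
// (e.g. Thanos) swap in its own postings format. This is a hypothetical
// sketch of the concept, not the real tsdb/index definition.
type PostingsEncoder func(refs []uint64) []byte

// rawEncoder is a trivial stand-in "default" format: 4 bytes per ref.
func rawEncoder(refs []uint64) []byte {
	out := make([]byte, 0, 4*len(refs))
	for _, r := range refs {
		out = append(out, byte(r>>24), byte(r>>16), byte(r>>8), byte(r))
	}
	return out
}

// Writer carries the chosen encoder; NewWriterWithEncoder mirrors the
// constructor variant the commit message describes.
type Writer struct{ enc PostingsEncoder }

func NewWriterWithEncoder(enc PostingsEncoder) *Writer {
	return &Writer{enc: enc}
}

func main() {
	w := NewWriterWithEncoder(rawEncoder)
	fmt.Println(len(w.enc([]uint64{1, 2, 3}))) // 12
}
```

Making the encoder a function (rather than an interface) keeps the default path allocation-free and the extension point minimal.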
Signed-off-by: Giedrius Statkevičius

* tsdb/compact: use a struct for leveled compactor options

As discussed on Slack, let's use a struct for the options in leveled compactor.

Signed-off-by: Giedrius Statkevičius

* tsdb: make changes after Bryan's review

- Make changes less intrusive
- Turn the postings encoder type into a function
- Add NewWriterWithEncoder()

Signed-off-by: Giedrius Statkevičius

---------

Signed-off-by: Giedrius Statkevičius

* Cut 2.49.0-rc.2

Signed-off-by: bwplotka

* build(deps): bump actions/setup-go from 3.5.0 to 5.0.0 in /scripts (#13362)

Bumps [actions/setup-go](https://github.com/actions/setup-go) from 3.5.0 to 5.0.0.
- [Release notes](https://github.com/actions/setup-go/releases)
- [Commits](https://github.com/actions/setup-go/compare/6edd4406fa81c3da01a34fa6f6343087c207a568...0c52d547c9bc32b1aa3301fd7a9cb496313a4491)

---
updated-dependencies:
- dependency-name: actions/setup-go
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump github/codeql-action from 2.22.8 to 3.22.12 (#13358)

Bumps [github/codeql-action](https://github.com/github/codeql-action) from 2.22.8 to 3.22.12.
- [Release notes](https://github.com/github/codeql-action/releases)
- [Changelog](https://github.com/github/codeql-action/blob/main/CHANGELOG.md)
- [Commits](https://github.com/github/codeql-action/compare/407ffafae6a767df3e0230c3df91b6443ae8df75...012739e5082ff0c22ca6d6ab32e07c36df03c4a4)

---
updated-dependencies:
- dependency-name: github/codeql-action
  dependency-type: direct:production
  update-type: version-update:semver-major
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* put @nexucis as a release shepherd (#13383)

Signed-off-by: Augustin Husson

* Add analyze histograms command to promtool (#12331)

Add `query analyze` command to promtool. This command analyzes the buckets of classic and native histograms, based on data queried from the Prometheus query API, i.e. it doesn't require direct access to the TSDB files.

Signed-off-by: Jeanette Tan

---------

Signed-off-by: Jeanette Tan

* included instance in all necessary descriptions

Signed-off-by: Erik Sommer

* tsdb/compact: fix passing merge func

Fixing a very small logical problem I've introduced :(.

Signed-off-by: Giedrius Statkevičius

* tsdb: add enable overlapping compaction

This functionality is needed in downstream projects because they have a separate component that does compaction. Upstreaming https://github.com/grafana/mimir-prometheus/blob/7c8e9a2a76fc729e9078889782928b2fdfe240e9/tsdb/compact.go#L323-L325.

Signed-off-by: Giedrius Statkevičius

* Cut 2.49.0

Signed-off-by: bwplotka

* promtool: allow setting multiple matchers to "promtool tsdb dump" command. (#13296)

Conditions are ANDed inside the same matcher, but matchers are ORed. Including unit tests for "promtool tsdb dump". Refactor some matchers scraping utils.

Signed-off-by: machine424

* Fixed changelog

Signed-off-by: bwplotka

* tsdb/main: wire "EnableOverlappingCompaction" to tsdb.Options (#13398)

This added the https://github.com/prometheus/prometheus/pull/13393 "EnableOverlappingCompaction" parameter to the compactor code but not to the tsdb.Options. I forgot about that. Add it to `tsdb.Options` too and set it to `true` in Prometheus. Copy/paste the description from https://github.com/prometheus/prometheus/pull/13393#issuecomment-1891787986

Signed-off-by: Giedrius Statkevičius

* Issue #13268: fix quality value in accept header

Signed-off-by: Kumar Kalpadiptya Roy

* Cut 2.49.1 with scrape q= bugfix.
Signed-off-by: bwplotka

* Cut 2.49.1 web package.

Signed-off-by: bwplotka

* Restore more efficient version of NewPossibleNonCounterInfo annotation (#13022)

Restore more efficient version of NewPossibleNonCounterInfo annotation

Signed-off-by: Jeanette Tan

---------

Signed-off-by: Jeanette Tan

* Fix regressions introduced by #13242

Signed-off-by: Marco Pracucci

* fix slice copy in 1.20 (#13389)

The slices package is added to the standard library in Go 1.21; we need to import from the exp area to maintain compatibility with Go 1.20.

Signed-off-by: tyltr

* Docs: Query Basics: link to rate (#10538)

Co-authored-by: Julien Pivotto

* chore(kubernetes): check preconditions earlier and avoid unnecessary checks or iterations

Signed-off-by: machine424

* Examples: link to `rate` for new users (#10535)

* Examples: link to `rate` for new users

Signed-off-by: Ted Robertson 10043369+tredondo@users.noreply.github.com
Co-authored-by: Bryan Boreham

* promql: use natural sort in sort_by_label and sort_by_label_desc (#13411)

These functions are intended for humans, as robots can already sort the results however they please.
Humans like things sorted "naturally":
* https://blog.codinghorror.com/sorting-for-humans-natural-sort-order/

A similar thing has been done to Grafana, which is also used by humans:
* https://github.com/grafana/grafana/pull/78024
* https://github.com/grafana/grafana/pull/78494

Signed-off-by: Ivan Babrou

* TestLabelValuesWithMatchers: Add test case

Signed-off-by: Arve Knudsen

* remove obsolete build tag

Signed-off-by: tyltr

* Upgrade some golang dependencies for resty 2.11

Signed-off-by: Israel Blancas

* Native Histograms: support `native_histogram_min_bucket_factor` in scrape_config (#13222)

Native Histograms: support native_histogram_min_bucket_factor in scrape_config

---------

Signed-off-by: Ziqi Zhao
Signed-off-by: Björn Rabenstein
Co-authored-by: George Krajcsovits
Co-authored-by: Björn Rabenstein

* Add warnings for histogramRate applied with isCounter not matching counter/gauge histogram (#13392)

Add warnings for histogramRate applied with isCounter not matching counter/gauge histogram

---------

Signed-off-by: Jeanette Tan

* Minor fixes to otlp vendor update script

Signed-off-by: Goutham

* build(deps): bump github.com/hetznercloud/hcloud-go/v2

Bumps [github.com/hetznercloud/hcloud-go/v2](https://github.com/hetznercloud/hcloud-go) from 2.4.0 to 2.6.0.
- [Release notes](https://github.com/hetznercloud/hcloud-go/releases)
- [Changelog](https://github.com/hetznercloud/hcloud-go/blob/main/CHANGELOG.md)
- [Commits](https://github.com/hetznercloud/hcloud-go/compare/v2.4.0...v2.6.0)

---
updated-dependencies:
- dependency-name: github.com/hetznercloud/hcloud-go/v2
  dependency-type: direct:production
  update-type: version-update:semver-minor
...
Signed-off-by: dependabot[bot]

* Enhanced visibility for `promtool test rules` with JSON colored formatting (#13342)

* Added diff flag for unit test to improve readability & debugging

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Removed blank spaces

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Fixed linting error

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Added cli flags to documentation

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Revert unrelated linting fixes

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Fixed review suggestions

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Cleanup

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Updated flag description

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* Updated flag description

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

---------

Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>

* storage: skip merging when no remote storage configured

Prometheus is hard-coded to use a fanout storage between TSDB and a remote storage, which by default is empty. This change detects the empty storage and skips merging between result sets, which would make `Select()` sort results. Bottom line: we skip a sort unless there really is some remote storage configured.

Signed-off-by: Bryan Boreham

* Remove csmarchbanks from remote write owners (#13432)

I have not had the time to keep up with remote write and have no plans to work on it in the near future, so I am withdrawing my maintainership of that part of the codebase. I continue to focus on client_python.
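The fanout shortcut from the "skip merging when no remote storage configured" commit above can be sketched as follows. The types are simplified, hypothetical stand-ins for `storage.Querier` results, not the real fanout code.

```go
package main

import (
	"fmt"
	"sort"
)

// series is a toy stand-in for a set of series returned by Select().
type series []string

// selectSeries illustrates the fast path: with no secondary (remote)
// queriers configured, return the primary's results directly instead of
// going through a merge step, which would have to sort them.
func selectSeries(primary series, secondaries []series) series {
	if len(secondaries) == 0 {
		return primary // fast path: no merge, no sort
	}
	merged := append(series{}, primary...)
	for _, s := range secondaries {
		merged = append(merged, s...)
	}
	sort.Strings(merged) // merging implies sorted output
	return merged
}

func main() {
	fmt.Println(selectSeries(series{"b", "a"}, nil))             // primary order kept
	fmt.Println(selectSeries(series{"b", "a"}, []series{{"c"}})) // sorted merge
}
```

The point of the optimisation is exactly this branch: the expensive path only runs when there really is something to merge.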
Signed-off-by: Chris Marchbanks

* add more context cancellation check at evaluation time

Signed-off-by: Ben Ye

* Optimize label values with matchers by taking shortcuts (#13426)

Don't calculate postings beforehand: we may not need them. If all matchers are for the requested label, we can just filter its values. Also, if there are no values at all, there's no need to run any kind of logic. Also add more labelValuesWithMatchers benchmarks.

Signed-off-by: Oleg Zaytsev

* Add automatic memory limit handling

Enable automatic detection of memory limits and configure GOMEMLIMIT to match.
* Also includes a flag to allow controlling the reserved ratio.

Signed-off-by: SuperQ

* Update OSSF badge link (#13433)

Provide a more user-friendly interface.

Signed-off-by: Matthieu MOREL

* SD Managers taking over responsibility for registration of debug metrics (#13375)

SD Managers take over responsibility for SD metrics registration

---------

Signed-off-by: Paulin Todev
Signed-off-by: Björn Rabenstein
Co-authored-by: Björn Rabenstein

* Optimize histogram iterators (#13340)

Optimize histogram iterators

Histogram iterators allocate new objects in the AtHistogram and AtFloatHistogram methods, which makes calculating rates over long ranges expensive. In #13215 we allowed an existing object to be reused when converting an integer histogram to a float histogram. This commit follows the same idea and allows injecting an existing object in the AtHistogram and AtFloatHistogram methods. When the injected value is nil, iterators allocate new histograms; otherwise they populate and return the injected object.

The commit also adds a CopyTo method to Histogram and FloatHistogram, which is used in the BufferedIterator to overwrite items in the ring instead of making new copies. Note that a specialized HPoint pool is needed for all of this to work (`matrixSelectorHPool`).
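The injection pattern the histogram-iterator commit above describes can be sketched like this. `Histogram` and `iterator` are minimal, hypothetical stand-ins for the real `model/histogram` type and chunk iterator.

```go
package main

import "fmt"

// Histogram is a minimal stand-in for the real histogram model type.
type Histogram struct{ Count uint64 }

type iterator struct{ cur Histogram }

// AtHistogram returns the current value. If h is non-nil, it is populated
// and returned instead of allocating a fresh Histogram, so iterating a
// long range can reuse one object per series instead of one per sample.
func (it *iterator) AtHistogram(h *Histogram) *Histogram {
	if h == nil {
		h = &Histogram{} // caller opted out of reuse: allocate
	}
	*h = it.cur // CopyTo-style overwrite of the caller's object
	return h
}

func main() {
	it := &iterator{cur: Histogram{Count: 42}}
	reuse := &Histogram{}
	got := it.AtHistogram(reuse)
	fmt.Println(got == reuse, got.Count) // true 42: the injected object was reused
}
```

Passing nil preserves the old allocate-per-call behaviour, which keeps the change backwards-compatible for callers that want an independent copy.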
---------

Signed-off-by: Filip Petkovski
Co-authored-by: George Krajcsovits

* doc: Mark `mad_over_time` as experimental (#13440)

We forgot to do that in https://github.com/prometheus/prometheus/pull/13059

Signed-off-by: beorn7

* Change metric label for Puppetdb from 'http' to 'puppetdb'

Signed-off-by: Paulin Todev

* mirror metrics.proto change & generate code

Signed-off-by: Ziqi Zhao

* TestHeadLabelValuesWithMatchers: Add test case (#13414)

Add test case to TestHeadLabelValuesWithMatchers, while fixing a couple of typos in other test cases. Also enclosing some implicit sub-tests in a `t.Run` call to make them explicitly sub-tests.

Signed-off-by: Arve Knudsen

* update all go dependencies (#13438)

Signed-off-by: Augustin Husson

* build(deps): bump the k8s-io group with 2 updates (#13454)

Bumps the k8s-io group with 2 updates: [k8s.io/api](https://github.com/kubernetes/api) and [k8s.io/client-go](https://github.com/kubernetes/client-go).

Updates `k8s.io/api` from 0.28.4 to 0.29.1
- [Commits](https://github.com/kubernetes/api/compare/v0.28.4...v0.29.1)

Updates `k8s.io/client-go` from 0.28.4 to 0.29.1
- [Changelog](https://github.com/kubernetes/client-go/blob/master/CHANGELOG.md)
- [Commits](https://github.com/kubernetes/client-go/compare/v0.28.4...v0.29.1)

---
updated-dependencies:
- dependency-name: k8s.io/api
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: k8s-io
- dependency-name: k8s.io/client-go
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: k8s-io
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump the go-opentelemetry-io group with 1 update (#13453)

Bumps the go-opentelemetry-io group with 1 update: [go.opentelemetry.io/collector/semconv](https://github.com/open-telemetry/opentelemetry-collector).
Updates `go.opentelemetry.io/collector/semconv` from 0.92.0 to 0.93.0
- [Release notes](https://github.com/open-telemetry/opentelemetry-collector/releases)
- [Changelog](https://github.com/open-telemetry/opentelemetry-collector/blob/main/CHANGELOG-API.md)
- [Commits](https://github.com/open-telemetry/opentelemetry-collector/compare/v0.92.0...v0.93.0)

---
updated-dependencies:
- dependency-name: go.opentelemetry.io/collector/semconv
  dependency-type: direct:production
  update-type: version-update:semver-minor
  dependency-group: go-opentelemetry-io
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump actions/upload-artifact from 3.1.3 to 4.0.0 (#13355)

Bumps [actions/upload-artifact](https://github.com/actions/upload-artifact) from 3.1.3 to 4.0.0.
- [Release notes](https://github.com/actions/upload-artifact/releases)
- [Commits](https://github.com/actions/upload-artifact/compare/a8a3f3ad30e3422c9c7b888a15615d19a852ae32...c7d193f32edcb7bfad88892161225aeda64e9392)

---
updated-dependencies:
- dependency-name: actions/upload-artifact
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump bufbuild/buf-push-action (#13357)

Bumps [bufbuild/buf-push-action](https://github.com/bufbuild/buf-push-action) from 342fc4cdcf29115a01cf12a2c6dd6aac68dc51e1 to a654ff18effe4641ebea4a4ce242c49800728459.
- [Release notes](https://github.com/bufbuild/buf-push-action/releases)
- [Commits](https://github.com/bufbuild/buf-push-action/compare/342fc4cdcf29115a01cf12a2c6dd6aac68dc51e1...a654ff18effe4641ebea4a4ce242c49800728459)

---
updated-dependencies:
- dependency-name: bufbuild/buf-push-action
  dependency-type: direct:production
...
Signed-off-by: dependabot[bot]
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* Labels: Add DropMetricName function, used in PromQL (#13446)

This function is called very frequently when executing PromQL functions, and we can do it much more efficiently inside Labels. In the common case that `__name__` comes first in the labels, we simply re-point to start at the next label, which is nearly free. `DropMetricName` is now so cheap I removed the cache - benchmarks show everything still goes faster.

Signed-off-by: Bryan Boreham

* tsdb: simplify internal series delete function (#13261)

Lifting an optimisation from Agent code, `seriesHashmap.del` can use the unique series reference and doesn't need to check Labels. Also streamline the logic for deleting from `unique` and `conflicts` maps, and add some comments to help the next person.

Signed-off-by: Bryan Boreham

* otlptranslator/update-copy.sh: Fix sed command lines

Signed-off-by: Arve Knudsen

* Rollback k8s.io requirements (#13462)

Rollback k8s.io Go modules to v0.28.6 to avoid forcing an upgrade of Go to 1.21. This allows us to keep compatibility with the currently supported upstream Go releases.

Signed-off-by: SuperQ

* Make update-copy.sh work for both OSX and GNU sed

Signed-off-by: Arve Knudsen

* Name @beorn7 and @krajorama as maintainers for native histograms

I have been the de-facto maintainer for native histograms from the beginning. So let's put this into MAINTAINERS.md. In addition, I hereby propose George Krajcsovits AKA Krajo as a co-maintainer. He has contributed a lot of native histogram code, but more importantly, he has contributed substantially to reviewing other contributors' native histogram code, up to a point where I was merely rubberstamping the PRs he had already reviewed.
I'm confident that he is ready to be granted commit rights as outlined in the "Maintainers" section of the governance: https://prometheus.io/governance/#maintainers

According to the same section of the governance, I will announce the proposed change on the developers mailing list and will give some time for lazy consensus before merging this PR.

Signed-off-by: beorn7

* ui/fix: correct url handling for stacked graphs (#13460)

Signed-off-by: Yury Moladau

* tsdb: use cheaper Mutex on series

Mutex is 8 bytes; RWMutex is 24 bytes and much more complicated. Since `RLock` is only used in two places, `UpdateMetadata` and `Delete`, neither of which are hotspots, we should use the cheaper one.

Signed-off-by: Bryan Boreham

* Fix last_over_time for native histograms

The last_over_time retains a histogram sample without making a copy. This sample is now coming from the buffered iterator used for windowing functions, and can be reused for reading subsequent samples as the iterator progresses. I would propose copying the sample in the last_over_time function, similar to how it is done for rate, sum_over_time and others.
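The aliasing bug the last_over_time fix above describes can be sketched as follows. `FloatHistogram` here is a minimal, hypothetical stand-in for the real model type with its Copy method.

```go
package main

import "fmt"

// FloatHistogram is a minimal stand-in for the real model type; Copy
// mirrors the deep-copy the fix relies on.
type FloatHistogram struct{ Sum float64 }

func (h *FloatHistogram) Copy() *FloatHistogram {
	c := *h
	return &c
}

// lastOverTime returns a copy of the last sample, because the buffered
// iterator will reuse the underlying object for subsequent samples;
// retaining it without a copy would let later reads corrupt the result.
func lastOverTime(buf []*FloatHistogram) *FloatHistogram {
	if len(buf) == 0 {
		return nil
	}
	return buf[len(buf)-1].Copy()
}

func main() {
	h := &FloatHistogram{Sum: 1}
	last := lastOverTime([]*FloatHistogram{h})
	h.Sum = 99 // simulate the iterator reusing the buffer slot
	fmt.Println(last.Sum) // 1: the retained sample is unaffected
}
```

This is the same defensive-copy convention the commit cites for rate and sum_over_time.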
Signed-off-by: Filip Petkovski

* Implementation

NOTE: Rebased from main after refactor in #13014

Signed-off-by: Danny Kopping

* Add feature flag

Signed-off-by: Danny Kopping

* Refactor concurrency control

Signed-off-by: Danny Kopping

* Optimising dependencies/dependents funcs to not produce new slices each request

Signed-off-by: Danny Kopping

* Refactoring

Signed-off-by: Danny Kopping

* Rename flag

Signed-off-by: Danny Kopping

* Refactoring for performance, and to allow controller to be overridden

Signed-off-by: Danny Kopping

* Block until all rules, both sync & async, have completed evaluating

Updated & added tests
Review feedback nits
Return empty map if not indeterminate
Use highWatermark to track inflight requests counter
Appease the linter
Clarify feature flag

Signed-off-by: Danny Kopping

* Fix typo in CLI flag description

Signed-off-by: Marco Pracucci

* Fixed auto-generated doc

Signed-off-by: Marco Pracucci

* Improve doc

Signed-off-by: Marco Pracucci

* Simplify the design to update concurrency controller once the rule evaluation is done

Signed-off-by: Marco Pracucci

* Add more test cases to TestDependenciesEdgeCases

Signed-off-by: Marco Pracucci

* Added more test cases to TestDependenciesEdgeCases

Signed-off-by: Marco Pracucci

* Improved RuleConcurrencyController interface doc

Signed-off-by: Marco Pracucci

* Introduced sequentialRuleEvalController

Signed-off-by: Marco Pracucci

* Remove superfluous nil check in Group.metrics

Signed-off-by: Marco Pracucci

* api: Serialize discovered and target labels into JSON directly (#13469)

Converted maps into labels.Labels to avoid a lot of copying of data, which leads to very high memory consumption while opening the /service-discovery endpoint in the Prometheus UI.

Signed-off-by: Leegin <114397475+Leegin-darknight@users.noreply.github.com>

* api: Serialize discovered labels into JSON directly in dropped targets (#13484)

Converted maps into labels.Labels to avoid a lot of copying of data, which leads to very high memory
consumption while opening the /service-discovery endpoint in the Prometheus UI.

Signed-off-by: Leegin <114397475+Leegin-darknight@users.noreply.github.com>

* Add ShardedPostings() support to TSDB (#10421)

This PR is a reference implementation of the proposal described in #10420.

In addition to what is described in #10420, in this PR I've introduced labels.StableHash(). The idea is to offer a hashing function which doesn't change over time, and that's used by query sharding in order to get stable behaviour over time. The implementation of labels.StableHash() is the hashing function used by Prometheus before stringlabels, and what's used by Grafana Mimir for query sharding (because it was built before stringlabels was a thing).

Follow-up work: as mentioned in #10420, if this PR is accepted I'm also open to upload another fundamental piece used by Grafana Mimir query sharding to accelerate the query execution: an optional, configurable and fast in-memory cache for the series hashes.

Signed-off-by: Marco Pracucci

* storage/remote: document why two benchmarks are skipped

One was silently doing nothing; one was doing something, but the work didn't go up linearly with iteration count.

Signed-off-by: Bryan Boreham

* Pod status changes not discovered by Kube Endpoints SD (#13337)

* fix(discovery/kubernetes/endpoints): react to changes on Pods, because some modifications can occur on them without triggering an update on the related Endpoints (the Pod phase changing from Pending to Running, e.g.).

---------

Signed-off-by: machine424
Co-authored-by: Guillermo Sanchez Gavier

* Small improvements, add const, remove copypasta (#8106)

Signed-off-by: Mikhail Fesenko
Signed-off-by: Jesus Vazquez

* Proposal to improve FPointSlice and HPointSlice allocation.
(#13448)

* Reusing points slice from previous series when the slice is under-utilized
* Adding comments on the bench test

Signed-off-by: Alan Protasio

* lint

Signed-off-by: Nicolás Pazos

* go mod tidy

Signed-off-by: Nicolás Pazos

---------

Signed-off-by: Julian Wiedmann
Signed-off-by: Bryan Boreham
Signed-off-by: Erik Sommer
Signed-off-by: Linas Medziunas
Signed-off-by: bwplotka
Signed-off-by: Arianna Vespri
Signed-off-by: machine424
Signed-off-by: Daniel Nicholls
Signed-off-by: Daniel Kerbel
Signed-off-by: dependabot[bot]
Signed-off-by: Jan Fajerski
Signed-off-by: Kevin Mingtarja
Signed-off-by: Paschalis Tsilias
Signed-off-by: Marc Tuduri
Signed-off-by: Paschalis Tsilias
Signed-off-by: Giedrius Statkevičius
Signed-off-by: Augustin Husson
Signed-off-by: Jeanette Tan
Signed-off-by: Bartlomiej Plotka
Signed-off-by: Kumar Kalpadiptya Roy
Signed-off-by: Marco Pracucci
Signed-off-by: tyltr
Signed-off-by: Ted Robertson 10043369+tredondo@users.noreply.github.com
Signed-off-by: Ivan Babrou
Signed-off-by: Arve Knudsen
Signed-off-by: Israel Blancas
Signed-off-by: Ziqi Zhao
Signed-off-by: Björn Rabenstein
Signed-off-by: Goutham
Signed-off-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>
Signed-off-by: Chris Marchbanks
Signed-off-by: Ben Ye
Signed-off-by: Oleg Zaytsev
Signed-off-by: SuperQ
Signed-off-by: Ben Kochie
Signed-off-by: Matthieu MOREL
Signed-off-by: Paulin Todev
Signed-off-by: Filip Petkovski
Signed-off-by: beorn7
Signed-off-by: Augustin Husson
Signed-off-by: Yury Moladau
Signed-off-by: Danny Kopping
Signed-off-by: Leegin <114397475+Leegin-darknight@users.noreply.github.com>
Signed-off-by: Mikhail Fesenko
Signed-off-by: Jesus Vazquez
Signed-off-by: Alan Protasio
Signed-off-by: Nicolás Pazos
Co-authored-by: Julian Wiedmann
Co-authored-by: Bryan Boreham
Co-authored-by: Erik Sommer
Co-authored-by: Linas Medziunas
Co-authored-by: Bartlomiej Plotka
Co-authored-by: Arianna Vespri
Co-authored-by: machine424
Co-authored-by: daniel-resdiary
<109083091+daniel-resdiary@users.noreply.github.com>
Co-authored-by: Daniel Kerbel
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Jan Fajerski
Co-authored-by: Kevin Mingtarja
Co-authored-by: Paschalis Tsilias
Co-authored-by: Marc Tudurí
Co-authored-by: Paschalis Tsilias
Co-authored-by: Giedrius Statkevičius
Co-authored-by: Augustin Husson
Co-authored-by: Björn Rabenstein
Co-authored-by: zenador
Co-authored-by: gotjosh
Co-authored-by: Ben Kochie
Co-authored-by: Kumar Kalpadiptya Roy
Co-authored-by: Marco Pracucci
Co-authored-by: tyltr
Co-authored-by: Ted Robertson <10043369+tredondo@users.noreply.github.com>
Co-authored-by: Julien Pivotto
Co-authored-by: Matthias Loibl
Co-authored-by: Ivan Babrou
Co-authored-by: Arve Knudsen
Co-authored-by: Israel Blancas
Co-authored-by: Ziqi Zhao
Co-authored-by: George Krajcsovits
Co-authored-by: Björn Rabenstein
Co-authored-by: Goutham
Co-authored-by: Rewanth Tammana <22347290+rewanthtammana@users.noreply.github.com>
Co-authored-by: Chris Marchbanks
Co-authored-by: Ben Ye
Co-authored-by: Oleg Zaytsev
Co-authored-by: Matthieu MOREL
Co-authored-by: Paulin Todev
Co-authored-by: Filip Petkovski
Co-authored-by: Yury Molodov
Co-authored-by: Danny Kopping
Co-authored-by: Leegin <114397475+Leegin-darknight@users.noreply.github.com>
Co-authored-by: Guillermo Sanchez Gavier
Co-authored-by: Mikhail Fesenko
Co-authored-by: Alan Protasio
---
 .github/CODEOWNERS | 2 +-
 .github/workflows/buf.yml | 2 +-
 .github/workflows/ci.yml | 2 +-
 .github/workflows/codeql-analysis.yml | 6 +-
 .github/workflows/fuzzing.yml | 2 +-
 .github/workflows/scorecards.yml | 4 +-
 CHANGELOG.md | 46 +-
 MAINTAINERS.md | 5 +-
 README.md | 2 +-
 RELEASE.md | 3 +-
 VERSION | 2 +-
 cmd/prometheus/main.go | 82 +-
 cmd/prometheus/main_unix_test.go | 1 -
 cmd/promtool/analyze.go | 370 +
 cmd/promtool/analyze_test.go | 170 +
 cmd/promtool/main.go | 264 +-
 cmd/promtool/query.go | 251 +
 cmd/promtool/sd.go | 17 +-
cmd/promtool/testdata/dump-test-1.prom | 15 + cmd/promtool/testdata/dump-test-2.prom | 10 + cmd/promtool/testdata/dump-test-3.prom | 2 + cmd/promtool/tsdb.go | 24 +- cmd/promtool/tsdb_test.go | 107 + cmd/promtool/unittest.go | 52 +- cmd/promtool/unittest_test.go | 4 +- config/config.go | 10 +- config/config_default_test.go | 1 - consoles/node-cpu.html | 2 +- consoles/node-overview.html | 2 +- consoles/node.html | 2 +- discovery/aws/ec2.go | 28 +- discovery/aws/lightsail.go | 29 +- discovery/aws/metrics_ec2.go | 32 + discovery/aws/metrics_lightsail.go | 32 + discovery/azure/azure.go | 83 +- discovery/azure/metrics.go | 64 + discovery/consul/consul.go | 91 +- discovery/consul/consul_test.go | 37 +- discovery/consul/metrics.go | 73 + discovery/digitalocean/digitalocean.go | 26 +- discovery/digitalocean/digitalocean_test.go | 12 +- discovery/digitalocean/metrics.go | 32 + discovery/discoverer_metrics_noop.go | 28 + discovery/discovery.go | 47 +- discovery/dns/dns.go | 53 +- discovery/dns/dns_test.go | 11 +- discovery/dns/metrics.go | 66 + discovery/eureka/eureka.go | 27 +- discovery/eureka/eureka_test.go | 13 +- discovery/eureka/metrics.go | 32 + discovery/file/file.go | 66 +- discovery/file/file_test.go | 27 +- discovery/file/metrics.go | 76 + discovery/gce/gce.go | 26 +- discovery/gce/metrics.go | 32 + discovery/hetzner/hetzner.go | 26 +- discovery/hetzner/metrics.go | 32 + discovery/http/http.go | 45 +- discovery/http/http_test.go | 44 +- discovery/http/metrics.go | 57 + discovery/ionos/ionos.go | 30 +- discovery/ionos/metrics.go | 32 + discovery/kubernetes/endpoints.go | 69 +- discovery/kubernetes/endpoints_test.go | 120 + discovery/kubernetes/endpointslice.go | 32 +- discovery/kubernetes/kubernetes.go | 84 +- discovery/kubernetes/kubernetes_test.go | 25 +- discovery/kubernetes/metrics.go | 75 + discovery/kubernetes/pod.go | 7 +- discovery/legacymanager/manager.go | 12 +- discovery/legacymanager/manager_test.go | 58 +- discovery/linode/linode.go | 37 +- 
discovery/linode/linode_test.go | 12 +- discovery/linode/metrics.go | 57 + discovery/manager.go | 26 +- discovery/manager_test.go | 98 +- discovery/marathon/marathon.go | 26 +- discovery/marathon/marathon_test.go | 26 +- discovery/marathon/metrics.go | 32 + discovery/metrics.go | 2 +- discovery/metrics_refresh.go | 75 + discovery/moby/docker.go | 30 +- discovery/moby/docker_test.go | 11 +- discovery/moby/dockerswarm.go | 26 +- discovery/moby/metrics_docker.go | 32 + discovery/moby/metrics_dockerswarm.go | 32 + discovery/moby/nodes_test.go | 11 +- discovery/moby/services_test.go | 20 +- discovery/moby/tasks_test.go | 11 +- discovery/nomad/metrics.go | 57 + discovery/nomad/nomad.go | 37 +- discovery/nomad/nomad_test.go | 22 +- discovery/openstack/metrics.go | 32 + discovery/openstack/openstack.go | 26 +- discovery/ovhcloud/metrics.go | 32 + discovery/ovhcloud/ovhcloud.go | 26 +- discovery/ovhcloud/ovhcloud_test.go | 12 +- discovery/puppetdb/metrics.go | 32 + discovery/puppetdb/puppetdb.go | 26 +- discovery/puppetdb/puppetdb_test.go | 47 +- discovery/refresh/refresh.go | 56 +- discovery/refresh/refresh_test.go | 16 +- discovery/registry.go | 23 + discovery/scaleway/metrics.go | 32 + discovery/scaleway/scaleway.go | 26 +- discovery/triton/metrics.go | 32 + discovery/triton/triton.go | 26 +- discovery/triton/triton_test.go | 43 +- discovery/uyuni/metrics.go | 32 + discovery/uyuni/uyuni.go | 26 +- discovery/uyuni/uyuni_test.go | 22 +- discovery/vultr/metrics.go | 32 + discovery/vultr/vultr.go | 26 +- discovery/vultr/vultr_test.go | 12 +- discovery/xds/kuma.go | 44 +- discovery/xds/kuma_test.go | 16 +- discovery/xds/metrics.go | 73 + discovery/xds/xds.go | 21 +- discovery/xds/xds_test.go | 91 +- discovery/zookeeper/zookeeper.go | 11 + docs/command-line/prometheus.md | 4 +- docs/command-line/promtool.md | 28 +- docs/configuration/configuration.md | 44 + docs/feature_flags.md | 20 + docs/querying/basics.md | 4 +- docs/querying/examples.md | 2 +- docs/querying/functions.md | 
11 +- .../examples/custom-sd/adapter-usage/main.go | 13 +- .../examples/custom-sd/adapter/adapter.go | 4 +- .../custom-sd/adapter/adapter_test.go | 10 +- documentation/examples/remote_storage/go.mod | 21 +- documentation/examples/remote_storage/go.sum | 40 +- .../prometheus-mixin/dashboards.libsonnet | 46 +- go.mod | 135 +- go.sum | 272 +- model/histogram/float_histogram.go | 44 +- model/histogram/float_histogram_test.go | 112 + model/histogram/generic_test.go | 2 +- model/histogram/histogram.go | 32 +- model/histogram/histogram_test.go | 122 + model/labels/labels.go | 13 + model/labels/labels_stringlabels.go | 21 + model/labels/labels_test.go | 6 + model/labels/sharding.go | 47 + model/labels/sharding_stringlabels.go | 54 + model/labels/sharding_test.go | 32 + model/textparse/openmetricsparse.go | 1 - model/textparse/protobufparse.go | 17 +- model/textparse/protobufparse_test.go | 2 + plugins/generate.go | 1 - prompb/io/prometheus/client/metrics.pb.go | 200 +- prompb/io/prometheus/client/metrics.proto | 3 + promql/bench_test.go | 19 + promql/engine.go | 186 +- promql/functions.go | 84 +- promql/fuzz.go | 1 - promql/fuzz_test.go | 1 - promql/parser/parse.go | 14 + promql/testdata/functions.test | 35 + promql/testdata/native_histograms.test | 11 + promql/value.go | 4 +- rules/fixtures/rules_dependencies.yaml | 7 + rules/fixtures/rules_multiple.yaml | 14 + rules/fixtures/rules_multiple_groups.yaml | 28 + .../fixtures/rules_multiple_independent.yaml | 15 + rules/group.go | 184 +- rules/manager.go | 115 +- rules/manager_test.go | 621 +- scrape/scrape.go | 34 +- scrape/scrape_test.go | 103 +- scrape/target.go | 29 + scrape/target_test.go | 61 + scripts/compress_assets.sh | 2 +- scripts/golangci-lint.yml | 2 +- scripts/tools.go | 1 - storage/buffer.go | 35 +- storage/buffer_test.go | 8 +- storage/fanout.go | 4 +- storage/interface.go | 14 + storage/memoized_iterator.go | 4 +- storage/merge.go | 11 +- storage/merge_test.go | 12 +- storage/remote/codec.go | 8 +- 
storage/remote/codec_test.go | 23 +- .../prometheus/normalize_label.go | 2 + .../prometheus/normalize_name.go | 4 +- .../otlptranslator/prometheus/unit_to_ucum.go | 3 +- .../prometheusremotewrite/helper.go | 11 +- .../prometheusremotewrite/histograms.go | 2 +- .../prometheusremotewrite/metrics_to_prw.go | 2 +- .../number_data_points.go | 2 +- storage/remote/otlptranslator/update-copy.sh | 15 +- storage/remote/queue_manager.go | 420 +- storage/remote/queue_manager_test.go | 285 +- storage/remote/write_handler.go | 2 +- storage/remote/write_handler_test.go | 27 +- storage/series.go | 12 +- tracing/tracing.go | 2 +- tsdb/block.go | 9 + tsdb/block_test.go | 21 +- tsdb/chunkenc/chunk.go | 39 +- tsdb/chunkenc/float_histogram.go | 53 +- tsdb/chunkenc/float_histogram_test.go | 12 +- tsdb/chunkenc/histogram.go | 115 +- tsdb/chunkenc/histogram_test.go | 26 +- tsdb/chunkenc/xor.go | 4 +- tsdb/chunks/chunks.go | 4 +- tsdb/chunks/head_chunks_other.go | 1 - tsdb/compact.go | 75 +- tsdb/compact_test.go | 4 +- tsdb/db.go | 48 +- tsdb/db_test.go | 8 +- tsdb/fileutil/dir_unix.go | 1 - tsdb/fileutil/dir_windows.go | 1 - tsdb/fileutil/flock_js.go | 1 - tsdb/fileutil/flock_solaris.go | 1 - tsdb/fileutil/flock_unix.go | 1 - tsdb/fileutil/mmap_386.go | 1 - tsdb/fileutil/mmap_amd64.go | 1 - tsdb/fileutil/mmap_arm64.go | 1 - tsdb/fileutil/mmap_js.go | 1 - tsdb/fileutil/mmap_unix.go | 1 - tsdb/fileutil/preallocate_other.go | 1 - tsdb/fileutil/sync.go | 1 - tsdb/fileutil/sync_darwin.go | 1 - tsdb/fileutil/sync_linux.go | 1 - tsdb/goversion/goversion.go | 1 - tsdb/head.go | 54 +- tsdb/head_append.go | 4 +- tsdb/head_read.go | 34 +- tsdb/head_read_test.go | 2 +- tsdb/head_test.go | 128 +- tsdb/index/index.go | 108 +- tsdb/index/index_test.go | 101 + tsdb/isolation.go | 25 +- tsdb/ooo_head_read.go | 4 + tsdb/ooo_head_read_test.go | 4 +- tsdb/querier.go | 54 +- tsdb/querier_bench_test.go | 13 +- tsdb/querier_test.go | 49 +- tsdb/tsdbblockutil.go | 4 +- tsdb/wal_test.go | 1 - 
util/annotations/annotations.go | 20 + util/runtime/limits_default.go | 1 - util/runtime/limits_windows.go | 1 - util/runtime/statfs.go | 1 - util/runtime/statfs_default.go | 1 - util/runtime/statfs_linux_386.go | 1 - util/runtime/statfs_uint32.go | 1 - util/runtime/uname_default.go | 1 - util/runtime/vmlimits_default.go | 1 - util/runtime/vmlimits_openbsd.go | 1 - web/api/v1/api.go | 22 +- web/api/v1/api_test.go | 84 +- web/federate.go | 14 +- web/ui/assets_embed.go | 1 - web/ui/module/codemirror-promql/package.json | 4 +- web/ui/module/lezer-promql/package.json | 2 +- web/ui/package-lock.json | 20933 ++-------------- web/ui/package.json | 2 +- web/ui/react-app/package.json | 4 +- .../src/pages/graph/DataTable.test.tsx | 4 +- .../react-app/src/pages/graph/DataTable.tsx | 2 +- .../pages/graph/GraphHeatmapHelpers.test.ts | 66 + .../src/pages/graph/GraphHeatmapHelpers.ts | 8 +- web/ui/react-app/src/pages/graph/Panel.tsx | 3 +- web/ui/ui.go | 1 - 267 files changed, 9819 insertions(+), 21047 deletions(-) create mode 100644 cmd/promtool/analyze.go create mode 100644 cmd/promtool/analyze_test.go create mode 100644 cmd/promtool/query.go create mode 100644 cmd/promtool/testdata/dump-test-1.prom create mode 100644 cmd/promtool/testdata/dump-test-2.prom create mode 100644 cmd/promtool/testdata/dump-test-3.prom create mode 100644 discovery/aws/metrics_ec2.go create mode 100644 discovery/aws/metrics_lightsail.go create mode 100644 discovery/azure/metrics.go create mode 100644 discovery/consul/metrics.go create mode 100644 discovery/digitalocean/metrics.go create mode 100644 discovery/discoverer_metrics_noop.go create mode 100644 discovery/dns/metrics.go create mode 100644 discovery/eureka/metrics.go create mode 100644 discovery/file/metrics.go create mode 100644 discovery/gce/metrics.go create mode 100644 discovery/hetzner/metrics.go create mode 100644 discovery/http/metrics.go create mode 100644 discovery/ionos/metrics.go create mode 100644 discovery/kubernetes/metrics.go 
create mode 100644 discovery/linode/metrics.go create mode 100644 discovery/marathon/metrics.go create mode 100644 discovery/metrics_refresh.go create mode 100644 discovery/moby/metrics_docker.go create mode 100644 discovery/moby/metrics_dockerswarm.go create mode 100644 discovery/nomad/metrics.go create mode 100644 discovery/openstack/metrics.go create mode 100644 discovery/ovhcloud/metrics.go create mode 100644 discovery/puppetdb/metrics.go create mode 100644 discovery/scaleway/metrics.go create mode 100644 discovery/triton/metrics.go create mode 100644 discovery/uyuni/metrics.go create mode 100644 discovery/vultr/metrics.go create mode 100644 discovery/xds/metrics.go create mode 100644 model/labels/sharding.go create mode 100644 model/labels/sharding_stringlabels.go create mode 100644 model/labels/sharding_test.go create mode 100644 rules/fixtures/rules_dependencies.yaml create mode 100644 rules/fixtures/rules_multiple.yaml create mode 100644 rules/fixtures/rules_multiple_groups.yaml create mode 100644 rules/fixtures/rules_multiple_independent.yaml create mode 100644 web/ui/react-app/src/pages/graph/GraphHeatmapHelpers.test.ts diff --git a/.github/CODEOWNERS b/.github/CODEOWNERS index 1aae1fff986..41530d46542 100644 --- a/.github/CODEOWNERS +++ b/.github/CODEOWNERS @@ -1,6 +1,6 @@ /web/ui @juliusv /web/ui/module @juliusv @nexucis -/storage/remote @csmarchbanks @cstyan @bwplotka @tomwilkie +/storage/remote @cstyan @bwplotka @tomwilkie /storage/remote/otlptranslator @gouthamve @jesusvazquez /discovery/kubernetes @brancz /tsdb @jesusvazquez diff --git a/.github/workflows/buf.yml b/.github/workflows/buf.yml index f6d5c9191a3..dc0694394bf 100644 --- a/.github/workflows/buf.yml +++ b/.github/workflows/buf.yml @@ -23,7 +23,7 @@ jobs: with: input: 'prompb' against: 'https://github.com/prometheus/prometheus.git#branch=main,ref=HEAD~1,subdir=prompb' - - uses: bufbuild/buf-push-action@342fc4cdcf29115a01cf12a2c6dd6aac68dc51e1 # v1.1.1 + - uses: 
bufbuild/buf-push-action@a654ff18effe4641ebea4a4ce242c49800728459 # v1.1.1 with: input: 'prompb' buf_token: ${{ secrets.BUF_TOKEN }} diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml index 8ba154e2588..f1e2b66bf1c 100644 --- a/.github/workflows/ci.yml +++ b/.github/workflows/ci.yml @@ -197,7 +197,7 @@ jobs: uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1 - uses: prometheus/promci@3cb0c3871f223bd5ce1226995bd52ffb314798b6 # v0.1.0 - name: Install nodejs - uses: actions/setup-node@5e21ff4d9bc1a8cf6de233a3057d20ec6b3fb69d # v3.8.1 + uses: actions/setup-node@b39b52d1213e96004bfcb1c61a8a6fa8ab84f3e8 # v4.0.1 with: node-version-file: "web/ui/.nvmrc" registry-url: "https://registry.npmjs.org" diff --git a/.github/workflows/codeql-analysis.yml b/.github/workflows/codeql-analysis.yml index 5e14936a95c..fd1ef19ef32 100644 --- a/.github/workflows/codeql-analysis.yml +++ b/.github/workflows/codeql-analysis.yml @@ -30,12 +30,12 @@ jobs: go-version: 1.21.x - name: Initialize CodeQL - uses: github/codeql-action/init@407ffafae6a767df3e0230c3df91b6443ae8df75 # v2.22.8 + uses: github/codeql-action/init@012739e5082ff0c22ca6d6ab32e07c36df03c4a4 # v3.22.12 with: languages: ${{ matrix.language }} - name: Autobuild - uses: github/codeql-action/autobuild@407ffafae6a767df3e0230c3df91b6443ae8df75 # v2.22.8 + uses: github/codeql-action/autobuild@012739e5082ff0c22ca6d6ab32e07c36df03c4a4 # v3.22.12 - name: Perform CodeQL Analysis - uses: github/codeql-action/analyze@407ffafae6a767df3e0230c3df91b6443ae8df75 # v2.22.8 + uses: github/codeql-action/analyze@012739e5082ff0c22ca6d6ab32e07c36df03c4a4 # v3.22.12 diff --git a/.github/workflows/fuzzing.yml b/.github/workflows/fuzzing.yml index 13f04f772ed..59975706071 100644 --- a/.github/workflows/fuzzing.yml +++ b/.github/workflows/fuzzing.yml @@ -21,7 +21,7 @@ jobs: fuzz-seconds: 600 dry-run: false - name: Upload Crash - uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3 + uses: 
actions/upload-artifact@c7d193f32edcb7bfad88892161225aeda64e9392 # v4.0.0 if: failure() && steps.build.outcome == 'success' with: name: artifacts diff --git a/.github/workflows/scorecards.yml b/.github/workflows/scorecards.yml index f71e1331b0b..a668a4ceb0c 100644 --- a/.github/workflows/scorecards.yml +++ b/.github/workflows/scorecards.yml @@ -37,7 +37,7 @@ jobs: # Upload the results as artifacts (optional). Commenting out will disable uploads of run results in SARIF # format to the repository Actions tab. - name: "Upload artifact" - uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # tag=v3.1.3 + uses: actions/upload-artifact@c7d193f32edcb7bfad88892161225aeda64e9392 # tag=v4.0.0 with: name: SARIF file path: results.sarif @@ -45,6 +45,6 @@ jobs: # Upload the results to GitHub's code scanning dashboard. - name: "Upload to code-scanning" - uses: github/codeql-action/upload-sarif@407ffafae6a767df3e0230c3df91b6443ae8df75 # tag=v2.22.8 + uses: github/codeql-action/upload-sarif@012739e5082ff0c22ca6d6ab32e07c36df03c4a4 # tag=v3.22.12 with: sarif_file: results.sarif diff --git a/CHANGELOG.md b/CHANGELOG.md index 71b8c97fe47..1f71eb49ba7 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -1,9 +1,47 @@ # Changelog -## unreleased - -* [ENHANCEMENT] TSDB: Make the wlog watcher read segments synchronously when not tailing. #13224 -* [BUGFIX] Agent: Participate in notify calls. #13223 +## 2.49.1 / 2024-01-15 + +* [BUGFIX] TSDB: Fixed a wrong `q=` value in the scrape accept header. #13313 + +## 2.49.0 / 2024-01-15 + +* [FEATURE] Promtool: Add `--run` flag to the promtool test rules command. #12206 +* [FEATURE] SD: Add support for `NS` records to DNS SD. #13219 +* [FEATURE] UI: Add heatmap visualization setting in the Graph tab, useful for histograms. #13096 #13371 +* [FEATURE] Scraping: Add `scrape_config.enable_compression` (default true) to disable gzip compression when scraping the target. 
#13166 +* [FEATURE] PromQL: Add a `promql-experimental-functions` feature flag containing some new experimental PromQL functions. #13103 NOTE: More experimental functions might be added behind the same feature flag in the future. Added functions: + * Experimental `mad_over_time` (median absolute deviation around the median) function. #13059 + * Experimental `sort_by_label` and `sort_by_label_desc` functions allowing sorting returned series by labels. #11299 +* [FEATURE] SD: Add `__meta_linode_gpus` label to Linode SD. #13097 +* [FEATURE] API: Add `exclude_alerts` query parameter to `/api/v1/rules` to only return recording rules. #12999 +* [FEATURE] TSDB: The --storage.tsdb.retention.time flag value is now exposed as a `prometheus_tsdb_retention_limit_seconds` metric. #12986 +* [FEATURE] Scraping: Add ability to specify the priority of scrape protocols to accept during a scrape (e.g. to scrape the Prometheus proto format for certain jobs). This can be changed by setting `global.scrape_protocols` and `scrape_config.scrape_protocols`. #12738 +* [ENHANCEMENT] Scraping: Automated handling of scraped histograms that violate the `scrape_config.native_histogram_bucket_limit` setting. #13129 +* [ENHANCEMENT] Scraping: Optimized memory allocations when scraping. #12992 +* [ENHANCEMENT] SD: Added a cache for Azure SD to avoid rate limits. #12622 +* [ENHANCEMENT] TSDB: Various improvements to OOO exemplar scraping, e.g. allowing ingestion of exemplars with the same timestamp but different labels. #13021 +* [ENHANCEMENT] API: Optimize `/api/v1/labels` and `/api/v1/label/<label_name>/values` when one set of matchers is used. #12888 +* [ENHANCEMENT] TSDB: Various optimizations for the TSDB block index, head mmap chunks and WAL, reducing latency and memory allocations (improving API calls, compaction queries, etc.). #12997 #13058 #13056 #13040 +* [ENHANCEMENT] PromQL: Optimize memory allocations and latency when querying float histograms. 
#12954 +* [ENHANCEMENT] Rules: Instrument TraceID in log lines for rule evaluations. #13034 +* [ENHANCEMENT] PromQL: Optimize memory allocations in query_range calls. #13043 +* [ENHANCEMENT] Promtool: unittest interval now defaults to evaluation_intervals when not set. #12729 +* [BUGFIX] SD: Fixed Azure SD public IP reporting. #13241 +* [BUGFIX] API: Fix inaccuracies in posting cardinality statistics. #12653 +* [BUGFIX] PromQL: Fix inaccuracies of `histogram_quantile` with classic histograms. #13153 +* [BUGFIX] TSDB: Fix rare failures or inaccurate queries with OOO samples. #13115 +* [BUGFIX] TSDB: Fix rare panics on append commit when exemplars are used. #13092 +* [BUGFIX] TSDB: Fix exemplar WAL storage, so remote write can send/receive samples before exemplars. #13113 +* [BUGFIX] Mixins: Fix `url` filter on remote write dashboards. #10721 +* [BUGFIX] PromQL/TSDB: Various fixes to float histogram operations. #12891 #12977 #12609 #13190 #13189 #13191 #13201 #13212 #13208 +* [BUGFIX] Promtool: Fix int32 overflow issues for 32-bit architectures. #12978 +* [BUGFIX] SD: Fix Azure VM Scale Set NIC issue. #13283 + +## 2.48.1 / 2023-12-07 + +* [BUGFIX] TSDB: Make the wlog watcher read segments synchronously when not tailing. #13224 +* [BUGFIX] Agent: Participate in notify calls (fixes a slowdown in remote write handling introduced in 2.45). 
#13223 ## 2.48.0 / 2023-11-16 diff --git a/MAINTAINERS.md b/MAINTAINERS.md index 902e9a6e949..a776eb3594e 100644 --- a/MAINTAINERS.md +++ b/MAINTAINERS.md @@ -8,8 +8,10 @@ Julien Pivotto ( / @roidelapluie) and Levi Harrison * `k8s`: Frederic Branczyk ( / @brancz) * `documentation` * `prometheus-mixin`: Matthias Loibl ( / @metalmatze) +* `model/histogram` and other code related to native histograms: Björn Rabenstein ( / @beorn7), +George Krajcsovits ( / @krajorama) * `storage` - * `remote`: Chris Marchbanks ( / @csmarchbanks), Callum Styan ( / @cstyan), Bartłomiej Płotka ( / @bwplotka), Tom Wilkie ( / @tomwilkie) + * `remote`: Callum Styan ( / @cstyan), Bartłomiej Płotka ( / @bwplotka), Tom Wilkie ( / @tomwilkie) * `tsdb`: Ganesh Vernekar ( / @codesome), Bartłomiej Płotka ( / @bwplotka), Jesús Vázquez ( / @jesusvazquez) * `agent`: Robert Fratto ( / @rfratto) * `web` @@ -17,6 +19,7 @@ Julien Pivotto ( / @roidelapluie) and Levi Harrison * `module`: Augustin Husson ( @nexucis) * `Makefile` and related build configuration: Simon Pasquier ( / @simonpasquier), Ben Kochie ( / @SuperQ) + For the sake of brevity, not all subtrees are explicitly listed. Due to the size of this repository, the natural changes in focus of maintainers over time, and nuances of where particular features live, this list will always be diff --git a/README.md b/README.md index 5fa6cc49e5b..0042793ff64 100644 --- a/README.md +++ b/README.md @@ -14,7 +14,7 @@ examples and guides.

[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/486/badge)](https://bestpractices.coreinfrastructure.org/projects/486) [![Gitpod ready-to-code](https://img.shields.io/badge/Gitpod-ready--to--code-blue?logo=gitpod)](https://gitpod.io/#https://github.com/prometheus/prometheus) [![Fuzzing Status](https://oss-fuzz-build-logs.storage.googleapis.com/badges/prometheus.svg)](https://bugs.chromium.org/p/oss-fuzz/issues/list?sort=-opened&can=1&q=proj:prometheus) -[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/prometheus/prometheus/badge)](https://api.securityscorecards.dev/projects/github.com/prometheus/prometheus) +[![OpenSSF Scorecard](https://api.securityscorecards.dev/projects/github.com/prometheus/prometheus/badge)](https://securityscorecards.dev/viewer/?uri=github.com/prometheus/prometheus) diff --git a/RELEASE.md b/RELEASE.md index 6ab2f638996..6815308f477 100644 --- a/RELEASE.md +++ b/RELEASE.md @@ -54,7 +54,8 @@ Release cadence of first pre-releases being cut is 6 weeks. | v2.47 | 2023-08-23 | Bryan Boreham (GitHub: @bboreham) | | v2.48 | 2023-10-04 | Levi Harrison (GitHub: @LeviHarrison) | | v2.49 | 2023-12-05 | Bartek Plotka (GitHub: @bwplotka) | -| v2.50 | 2024-01-16 | **searching for volunteer** | +| v2.50 | 2024-01-16 | Augustin Husson (GitHub: @nexucis) | +| v2.51 | 2024-02-13 | **searching for volunteer** | If you are interested in volunteering please create a pull request against the [prometheus/prometheus](https://github.com/prometheus/prometheus) repository and propose yourself for the release series of your choice. 
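[Editor's note on the `auto-gomemlimit` feature this patch introduces: the `cmd/prometheus/main.go` hunk below calls `memlimit.SetGoMemLimitWithOpts` from the `github.com/KimMachineGun/automemlimit` library to detect the cgroup or system memory limit and set Go's soft memory limit to `ratio * limit`. The following stdlib-only sketch illustrates the underlying mechanism; `applyMemLimit` and the 8 GiB "detected" limit are hypothetical stand-ins for what the library actually probes, not code from the patch.]

```go
package main

import (
	"fmt"
	"runtime/debug"
)

// applyMemLimit mirrors what the auto-gomemlimit feature does conceptually:
// take a detected memory budget (obtained from the cgroup or system in the
// real implementation) and set the Go soft memory limit to ratio * budget.
// It enforces the same flag constraint as the patch: 0 < ratio <= 1.
func applyMemLimit(detectedBytes int64, ratio float64) (int64, error) {
	if ratio <= 0.0 || ratio > 1.0 {
		return 0, fmt.Errorf("ratio must be greater than 0 and less than or equal to 1, got %g", ratio)
	}
	limit := int64(float64(detectedBytes) * ratio)
	debug.SetMemoryLimit(limit) // the programmatic equivalent of setting GOMEMLIMIT
	return limit, nil
}

func main() {
	// Hypothetical 8 GiB container limit with the flag's default ratio of 0.9.
	limit, err := applyMemLimit(8<<30, 0.9)
	if err != nil {
		panic(err)
	}
	fmt.Println(limit)
}
```

With the default `--auto-gomemlimit.ratio` of 0.9, an 8 GiB container would end up with roughly a 7.2 GiB soft limit, leaving headroom for non-Go-heap memory before the kernel OOM killer intervenes.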
diff --git a/VERSION b/VERSION index 9a9feb08471..f5518081bd1 100644 --- a/VERSION +++ b/VERSION @@ -1 +1 @@ -2.48.0 +2.49.1 diff --git a/cmd/prometheus/main.go b/cmd/prometheus/main.go index ebace6dda00..2eed8e7ce81 100644 --- a/cmd/prometheus/main.go +++ b/cmd/prometheus/main.go @@ -33,6 +33,7 @@ import ( "syscall" "time" + "github.com/KimMachineGun/automemlimit/memlimit" "github.com/alecthomas/kingpin/v2" "github.com/alecthomas/units" "github.com/go-kit/log" @@ -136,6 +137,7 @@ type flagConfig struct { forGracePeriod model.Duration outageTolerance model.Duration resendDelay model.Duration + maxConcurrentEvals int64 web web.Options scrape scrape.Options tsdb tsdbOptions @@ -147,7 +149,8 @@ type flagConfig struct { queryMaxSamples int RemoteFlushDeadline model.Duration - featureList []string + featureList []string + memlimitRatio float64 // These options are extracted from featureList // for ease of use. enableExpandExternalLabels bool @@ -155,7 +158,9 @@ type flagConfig struct { enablePerStepStats bool enableAutoGOMAXPROCS bool // todo: how to use the enable feature flag properly + use the remote format enum type - rwFormat int + rwFormat int + enableAutoGOMEMLIMIT bool + enableConcurrentRuleEval bool prometheusURL string corsRegexString string @@ -202,6 +207,12 @@ func (c *flagConfig) setFeatureListOptions(logger log.Logger) error { case "auto-gomaxprocs": c.enableAutoGOMAXPROCS = true level.Info(logger).Log("msg", "Automatically set GOMAXPROCS to match Linux container CPU quota") + case "auto-gomemlimit": + c.enableAutoGOMEMLIMIT = true + level.Info(logger).Log("msg", "Automatically set GOMEMLIMIT to match Linux container or system memory limit") + case "concurrent-rule-eval": + c.enableConcurrentRuleEval = true + level.Info(logger).Log("msg", "Experimental concurrent rule evaluation enabled.") case "no-default-scrape-port": c.scrape.NoDefaultPort = true level.Info(logger).Log("msg", "No default port will be appended to scrape targets' addresses.") @@ -267,6 
+278,9 @@ func main() { a.Flag("web.listen-address", "Address to listen on for UI, API, and telemetry."). Default("0.0.0.0:9090").StringVar(&cfg.web.ListenAddress) + a.Flag("auto-gomemlimit.ratio", "The ratio of reserved GOMEMLIMIT memory to the detected maximum container or system memory"). + Default("0.9").FloatVar(&cfg.memlimitRatio) + webConfig := a.Flag( "web.config.file", "[EXPERIMENTAL] Path to configuration file that can enable TLS or authentication.", @@ -407,6 +421,9 @@ func main() { serverOnlyFlag(a, "rules.alert.resend-delay", "Minimum amount of time to wait before resending an alert to Alertmanager."). Default("1m").SetValue(&cfg.resendDelay) + serverOnlyFlag(a, "rules.max-concurrent-evals", "Global concurrency limit for independent rules that can run concurrently."). + Default("4").Int64Var(&cfg.maxConcurrentEvals) + a.Flag("scrape.adjust-timestamps", "Adjust scrape timestamps by up to `scrape.timestamp-tolerance` to align them to the intended schedule. See https://github.com/prometheus/prometheus/issues/7846 for more context. Experimental. This flag will be removed in a future release."). Hidden().Default("true").BoolVar(&scrape.AlignScrapeTimestamps) @@ -434,7 +451,7 @@ func main() { a.Flag("scrape.discovery-reload-interval", "Interval used by scrape manager to throttle target groups updates."). Hidden().Default("5s").SetValue(&cfg.scrape.DiscoveryReloadInterval) - a.Flag("enable-feature", "Comma separated feature names to enable. Valid options: agent, exemplar-storage, expand-external-labels, memory-snapshot-on-shutdown, promql-at-modifier, promql-negative-offset, promql-per-step-stats, promql-experimental-functions, remote-write-receiver (DEPRECATED), extra-scrape-metrics, new-service-discovery-manager, auto-gomaxprocs, no-default-scrape-port, native-histograms, otlp-write-receiver, metadata-wal-records. See https://prometheus.io/docs/prometheus/latest/feature_flags/ for more details."). 
+ a.Flag("enable-feature", "Comma separated feature names to enable. Valid options: agent, auto-gomemlimit, exemplar-storage, expand-external-labels, memory-snapshot-on-shutdown, promql-at-modifier, promql-negative-offset, promql-per-step-stats, promql-experimental-functions, remote-write-receiver (DEPRECATED), extra-scrape-metrics, new-service-discovery-manager, auto-gomaxprocs, no-default-scrape-port, native-histograms, otlp-write-receiver, metadata-wal-records. See https://prometheus.io/docs/prometheus/latest/feature_flags/ for more details."). Default("").StringsVar(&cfg.featureList) a.Flag("remote-write-format", "remote write proto format to use, valid options: 0 (1.0), 1 (reduced format), 3 (min64 format)"). @@ -475,6 +492,11 @@ func main() { os.Exit(3) } + if cfg.memlimitRatio <= 0.0 || cfg.memlimitRatio > 1.0 { + fmt.Fprintf(os.Stderr, "--auto-gomemlimit.ratio must be greater than 0 and less than or equal to 1.") + os.Exit(1) + } + localStoragePath := cfg.serverStoragePath if agentMode { localStoragePath = cfg.agentStoragePath @@ -638,9 +660,16 @@ func main() { level.Error(logger).Log("msg", "failed to register Kubernetes client metrics", "err", err) os.Exit(1) } + + sdMetrics, err := discovery.CreateAndRegisterSDMetrics(prometheus.DefaultRegisterer) + if err != nil { + level.Error(logger).Log("msg", "failed to register service discovery metrics", "err", err) + os.Exit(1) + } + if cfg.enableNewSDManager { { - discMgr := discovery.NewManager(ctxScrape, log.With(logger, "component", "discovery manager scrape"), prometheus.DefaultRegisterer, discovery.Name("scrape")) + discMgr := discovery.NewManager(ctxScrape, log.With(logger, "component", "discovery manager scrape"), prometheus.DefaultRegisterer, sdMetrics, discovery.Name("scrape")) if discMgr == nil { level.Error(logger).Log("msg", "failed to create a discovery manager scrape") os.Exit(1) @@ -649,7 +678,7 @@ func main() { } { - discMgr := discovery.NewManager(ctxNotify, log.With(logger, "component", 
"discovery manager notify"), prometheus.DefaultRegisterer, discovery.Name("notify")) + discMgr := discovery.NewManager(ctxNotify, log.With(logger, "component", "discovery manager notify"), prometheus.DefaultRegisterer, sdMetrics, discovery.Name("notify")) if discMgr == nil { level.Error(logger).Log("msg", "failed to create a discovery manager notify") os.Exit(1) @@ -658,7 +687,7 @@ func main() { } } else { { - discMgr := legacymanager.NewManager(ctxScrape, log.With(logger, "component", "discovery manager scrape"), prometheus.DefaultRegisterer, legacymanager.Name("scrape")) + discMgr := legacymanager.NewManager(ctxScrape, log.With(logger, "component", "discovery manager scrape"), prometheus.DefaultRegisterer, sdMetrics, legacymanager.Name("scrape")) if discMgr == nil { level.Error(logger).Log("msg", "failed to create a discovery manager scrape") os.Exit(1) @@ -667,7 +696,7 @@ func main() { } { - discMgr := legacymanager.NewManager(ctxNotify, log.With(logger, "component", "discovery manager notify"), prometheus.DefaultRegisterer, legacymanager.Name("notify")) + discMgr := legacymanager.NewManager(ctxNotify, log.With(logger, "component", "discovery manager notify"), prometheus.DefaultRegisterer, sdMetrics, legacymanager.Name("notify")) if discMgr == nil { level.Error(logger).Log("msg", "failed to create a discovery manager notify") os.Exit(1) @@ -703,6 +732,20 @@ func main() { } } + if cfg.enableAutoGOMEMLIMIT { + if _, err := memlimit.SetGoMemLimitWithOpts( + memlimit.WithRatio(cfg.memlimitRatio), + memlimit.WithProvider( + memlimit.ApplyFallback( + memlimit.FromCgroup, + memlimit.FromSystem, + ), + ), + ); err != nil { + level.Warn(logger).Log("component", "automemlimit", "msg", "Failed to set GOMEMLIMIT automatically", "err", err) + } + } + if !agentMode { opts := promql.EngineOpts{ Logger: log.With(logger, "component", "query engine"), @@ -722,17 +765,19 @@ func main() { queryEngine = promql.NewEngine(opts) ruleManager = rules.NewManager(&rules.ManagerOptions{ - 
Appendable: fanoutStorage, - Queryable: localStorage, - QueryFunc: rules.EngineQueryFunc(queryEngine, fanoutStorage), - NotifyFunc: rules.SendAlerts(notifierManager, cfg.web.ExternalURL.String()), - Context: ctxRule, - ExternalURL: cfg.web.ExternalURL, - Registerer: prometheus.DefaultRegisterer, - Logger: log.With(logger, "component", "rule manager"), - OutageTolerance: time.Duration(cfg.outageTolerance), - ForGracePeriod: time.Duration(cfg.forGracePeriod), - ResendDelay: time.Duration(cfg.resendDelay), + Appendable: fanoutStorage, + Queryable: localStorage, + QueryFunc: rules.EngineQueryFunc(queryEngine, fanoutStorage), + NotifyFunc: rules.SendAlerts(notifierManager, cfg.web.ExternalURL.String()), + Context: ctxRule, + ExternalURL: cfg.web.ExternalURL, + Registerer: prometheus.DefaultRegisterer, + Logger: log.With(logger, "component", "rule manager"), + OutageTolerance: time.Duration(cfg.outageTolerance), + ForGracePeriod: time.Duration(cfg.forGracePeriod), + ResendDelay: time.Duration(cfg.resendDelay), + MaxConcurrentEvals: cfg.maxConcurrentEvals, + ConcurrentEvalsEnabled: cfg.enableConcurrentRuleEval, }) } @@ -1655,6 +1700,7 @@ func (opts tsdbOptions) ToTSDBOptions() tsdb.Options { EnableMemorySnapshotOnShutdown: opts.EnableMemorySnapshotOnShutdown, EnableNativeHistograms: opts.EnableNativeHistograms, OutOfOrderTimeWindow: opts.OutOfOrderTimeWindow, + EnableOverlappingCompaction: true, } } diff --git a/cmd/prometheus/main_unix_test.go b/cmd/prometheus/main_unix_test.go index 7224e25d708..417d062d66a 100644 --- a/cmd/prometheus/main_unix_test.go +++ b/cmd/prometheus/main_unix_test.go @@ -12,7 +12,6 @@ // limitations under the License. 
// //go:build !windows -// +build !windows package main diff --git a/cmd/promtool/analyze.go b/cmd/promtool/analyze.go new file mode 100644 index 00000000000..c1f523de525 --- /dev/null +++ b/cmd/promtool/analyze.go @@ -0,0 +1,370 @@ +// Copyright 2023 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package main + +import ( + "context" + "errors" + "fmt" + "io" + "math" + "net/http" + "net/url" + "os" + "sort" + "strconv" + "strings" + "time" + + v1 "github.com/prometheus/client_golang/api/prometheus/v1" + "github.com/prometheus/common/model" + + "github.com/prometheus/prometheus/model/labels" +) + +var ( + errNotNativeHistogram = fmt.Errorf("not a native histogram") + errNotEnoughData = fmt.Errorf("not enough data") + + outputHeader = `Bucket stats for each histogram series over time +------------------------------------------------ +First the min, avg, and max number of populated buckets, followed by the total +number of buckets (only if different from the max number of populated buckets +which is typical for classic but not native histograms).` + outputFooter = `Aggregated bucket stats +----------------------- +Each line shows min/avg/max over the series above.` +) + +type QueryAnalyzeConfig struct { + metricType string + duration time.Duration + time string + matchers []string +} + +// run retrieves metrics that look like conventional histograms (i.e. have _bucket +// suffixes) or native histograms, depending on metricType flag. 
+func (c *QueryAnalyzeConfig) run(url *url.URL, roundtripper http.RoundTripper) error { + if c.metricType != "histogram" { + return fmt.Errorf("analyze type is %s, must be 'histogram'", c.metricType) + } + + ctx := context.Background() + + api, err := newAPI(url, roundtripper, nil) + if err != nil { + return err + } + + var endTime time.Time + if c.time != "" { + endTime, err = parseTime(c.time) + if err != nil { + return fmt.Errorf("error parsing time '%s': %w", c.time, err) + } + } else { + endTime = time.Now() + } + + return c.getStatsFromMetrics(ctx, api, endTime, os.Stdout, c.matchers) +} + +func (c *QueryAnalyzeConfig) getStatsFromMetrics(ctx context.Context, api v1.API, endTime time.Time, out io.Writer, matchers []string) error { + fmt.Fprintf(out, "%s\n\n", outputHeader) + metastatsNative := newMetaStatistics() + metastatsClassic := newMetaStatistics() + for _, matcher := range matchers { + seriesSel := seriesSelector(matcher, c.duration) + matrix, err := querySamples(ctx, api, seriesSel, endTime) + if err != nil { + return err + } + + matrices := make(map[string]model.Matrix) + for _, series := range matrix { + // We do not handle mixed types. If there are float values, we assume it is a + // classic histogram, otherwise we assume it is a native histogram, and we + // ignore series with errors if they do not match the expected type. 
+ if len(series.Values) == 0 { + stats, err := calcNativeBucketStatistics(series) + if err != nil { + if errors.Is(err, errNotNativeHistogram) || errors.Is(err, errNotEnoughData) { + continue + } + return err + } + fmt.Fprintf(out, "- %s (native): %v\n", series.Metric, *stats) + metastatsNative.update(stats) + } else { + lbs := model.LabelSet(series.Metric).Clone() + if _, ok := lbs["le"]; !ok { + continue + } + metricName := string(lbs[labels.MetricName]) + if !strings.HasSuffix(metricName, "_bucket") { + continue + } + delete(lbs, labels.MetricName) + delete(lbs, "le") + key := formatSeriesName(metricName, lbs) + matrices[key] = append(matrices[key], series) + } + } + + for key, matrix := range matrices { + stats, err := calcClassicBucketStatistics(matrix) + if err != nil { + if errors.Is(err, errNotEnoughData) { + continue + } + return err + } + fmt.Fprintf(out, "- %s (classic): %v\n", key, *stats) + metastatsClassic.update(stats) + } + } + fmt.Fprintf(out, "\n%s\n", outputFooter) + if metastatsNative.Count() > 0 { + fmt.Fprintf(out, "\nNative %s\n", metastatsNative) + } + if metastatsClassic.Count() > 0 { + fmt.Fprintf(out, "\nClassic %s\n", metastatsClassic) + } + return nil +} + +func seriesSelector(metricName string, duration time.Duration) string { + builder := strings.Builder{} + builder.WriteString(metricName) + builder.WriteRune('[') + builder.WriteString(duration.String()) + builder.WriteRune(']') + return builder.String() +} + +func formatSeriesName(metricName string, lbs model.LabelSet) string { + builder := strings.Builder{} + builder.WriteString(metricName) + builder.WriteString(lbs.String()) + return builder.String() +} + +func querySamples(ctx context.Context, api v1.API, query string, end time.Time) (model.Matrix, error) { + values, _, err := api.Query(ctx, query, end) + if err != nil { + return nil, err + } + + matrix, ok := values.(model.Matrix) + if !ok { + return nil, fmt.Errorf("query of buckets resulted in non-Matrix") + } + + return 
matrix, nil +} + +// minPop/avgPop/maxPop is for the number of populated (non-zero) buckets. +// total is the total number of buckets across all samples in the series, +// populated or not. +type statistics struct { + minPop, maxPop, total int + avgPop float64 +} + +func (s statistics) String() string { + if s.maxPop == s.total { + return fmt.Sprintf("%d/%.3f/%d", s.minPop, s.avgPop, s.maxPop) + } + return fmt.Sprintf("%d/%.3f/%d/%d", s.minPop, s.avgPop, s.maxPop, s.total) +} + +func calcClassicBucketStatistics(matrix model.Matrix) (*statistics, error) { + numBuckets := len(matrix) + + stats := &statistics{ + minPop: math.MaxInt, + total: numBuckets, + } + + if numBuckets == 0 || len(matrix[0].Values) < 2 { + return stats, errNotEnoughData + } + + numSamples := len(matrix[0].Values) + + sortMatrix(matrix) + + totalPop := 0 + for timeIdx := 0; timeIdx < numSamples; timeIdx++ { + curr, err := getBucketCountsAtTime(matrix, numBuckets, timeIdx) + if err != nil { + return stats, err + } + countPop := 0 + for _, b := range curr { + if b != 0 { + countPop++ + } + } + + totalPop += countPop + if stats.minPop > countPop { + stats.minPop = countPop + } + if stats.maxPop < countPop { + stats.maxPop = countPop + } + } + stats.avgPop = float64(totalPop) / float64(numSamples) + return stats, nil +} + +func sortMatrix(matrix model.Matrix) { + sort.SliceStable(matrix, func(i, j int) bool { + return getLe(matrix[i]) < getLe(matrix[j]) + }) +} + +func getLe(series *model.SampleStream) float64 { + lbs := model.LabelSet(series.Metric) + le, _ := strconv.ParseFloat(string(lbs["le"]), 64) + return le +} + +func getBucketCountsAtTime(matrix model.Matrix, numBuckets, timeIdx int) ([]int, error) { + counts := make([]int, numBuckets) + if timeIdx >= len(matrix[0].Values) { + // Just return zeroes instead of erroring out so we can get partial results. 
+ return counts, nil + } + counts[0] = int(matrix[0].Values[timeIdx].Value) + for i, bucket := range matrix[1:] { + if timeIdx >= len(bucket.Values) { + // Just return zeroes instead of erroring out so we can get partial results. + return counts, nil + } + curr := bucket.Values[timeIdx] + prev := matrix[i].Values[timeIdx] + // Assume the results are nicely aligned. + if curr.Timestamp != prev.Timestamp { + return counts, fmt.Errorf("matrix result is not time aligned") + } + counts[i+1] = int(curr.Value - prev.Value) + } + return counts, nil +} + +type bucketBounds struct { + boundaries int32 + upper, lower float64 +} + +func makeBucketBounds(b *model.HistogramBucket) bucketBounds { + return bucketBounds{ + boundaries: b.Boundaries, + upper: float64(b.Upper), + lower: float64(b.Lower), + } +} + +func calcNativeBucketStatistics(series *model.SampleStream) (*statistics, error) { + stats := &statistics{ + minPop: math.MaxInt, + } + + overall := make(map[bucketBounds]struct{}) + totalPop := 0 + if len(series.Histograms) == 0 { + return nil, errNotNativeHistogram + } + if len(series.Histograms) == 1 { + return nil, errNotEnoughData + } + for _, histogram := range series.Histograms { + for _, bucket := range histogram.Histogram.Buckets { + bb := makeBucketBounds(bucket) + overall[bb] = struct{}{} + } + countPop := len(histogram.Histogram.Buckets) + + totalPop += countPop + if stats.minPop > countPop { + stats.minPop = countPop + } + if stats.maxPop < countPop { + stats.maxPop = countPop + } + } + stats.avgPop = float64(totalPop) / float64(len(series.Histograms)) + stats.total = len(overall) + return stats, nil +} + +type distribution struct { + min, max, count int + avg float64 +} + +func newDistribution() distribution { + return distribution{ + min: math.MaxInt, + } +} + +func (d *distribution) update(num int) { + if d.min > num { + d.min = num + } + if d.max < num { + d.max = num + } + d.count++ + d.avg += float64(num)/float64(d.count) - d.avg/float64(d.count) +} + 
+func (d distribution) String() string { + return fmt.Sprintf("%d/%.3f/%d", d.min, d.avg, d.max) +} + +type metaStatistics struct { + minPop, avgPop, maxPop, total distribution +} + +func newMetaStatistics() *metaStatistics { + return &metaStatistics{ + minPop: newDistribution(), + avgPop: newDistribution(), + maxPop: newDistribution(), + total: newDistribution(), + } +} + +func (ms metaStatistics) Count() int { + return ms.minPop.count +} + +func (ms metaStatistics) String() string { + if ms.maxPop == ms.total { + return fmt.Sprintf("histogram series (%d in total):\n- min populated: %v\n- avg populated: %v\n- max populated: %v", ms.Count(), ms.minPop, ms.avgPop, ms.maxPop) + } + return fmt.Sprintf("histogram series (%d in total):\n- min populated: %v\n- avg populated: %v\n- max populated: %v\n- total: %v", ms.Count(), ms.minPop, ms.avgPop, ms.maxPop, ms.total) +} + +func (ms *metaStatistics) update(s *statistics) { + ms.minPop.update(s.minPop) + ms.avgPop.update(int(s.avgPop)) + ms.maxPop.update(s.maxPop) + ms.total.update(s.total) +} diff --git a/cmd/promtool/analyze_test.go b/cmd/promtool/analyze_test.go new file mode 100644 index 00000000000..83d2ac4a3db --- /dev/null +++ b/cmd/promtool/analyze_test.go @@ -0,0 +1,170 @@ +// Copyright 2023 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
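The core of `getBucketCountsAtTime` is turning cumulative classic-histogram `le` buckets into per-bucket counts by differencing adjacent cumulative values. As a standalone illustration (a simplified harness, not part of the patch), using the same values the test fixture holds at its first timestamp:

```go
package main

import "fmt"

// cumulativeToCounts converts cumulative classic-histogram bucket
// values (sorted by ascending le) into per-bucket counts, mirroring
// the differencing done in getBucketCountsAtTime: the first bucket
// keeps its value, each later bucket subtracts its predecessor.
func cumulativeToCounts(cumulative []float64) []int {
	counts := make([]int, len(cumulative))
	if len(cumulative) == 0 {
		return counts
	}
	counts[0] = int(cumulative[0])
	for i := 1; i < len(cumulative); i++ {
		counts[i] = int(cumulative[i] - cumulative[i-1])
	}
	return counts
}

func main() {
	// Cumulative values for le="0.5", le="2", le="10", le="+Inf"
	// at a single timestamp, as in the exampleMatrix fixture.
	fmt.Println(cumulativeToCounts([]float64{10, 25, 30, 31})) // → [10 15 5 1]
}
```

This reproduces the first expected slice in `TestGetBucketCountsAtTime`; the real function additionally checks that the samples being differenced share a timestamp.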
+ +package main + +import ( + "fmt" + "testing" + + "github.com/stretchr/testify/require" + + "github.com/prometheus/common/model" +) + +var ( + exampleMatrix = model.Matrix{ + &model.SampleStream{ + Metric: model.Metric{ + "le": "+Inf", + }, + Values: []model.SamplePair{ + { + Value: 31, + Timestamp: 100, + }, + { + Value: 32, + Timestamp: 200, + }, + { + Value: 40, + Timestamp: 300, + }, + }, + }, + &model.SampleStream{ + Metric: model.Metric{ + "le": "0.5", + }, + Values: []model.SamplePair{ + { + Value: 10, + Timestamp: 100, + }, + { + Value: 11, + Timestamp: 200, + }, + { + Value: 11, + Timestamp: 300, + }, + }, + }, + &model.SampleStream{ + Metric: model.Metric{ + "le": "10", + }, + Values: []model.SamplePair{ + { + Value: 30, + Timestamp: 100, + }, + { + Value: 31, + Timestamp: 200, + }, + { + Value: 37, + Timestamp: 300, + }, + }, + }, + &model.SampleStream{ + Metric: model.Metric{ + "le": "2", + }, + Values: []model.SamplePair{ + { + Value: 25, + Timestamp: 100, + }, + { + Value: 26, + Timestamp: 200, + }, + { + Value: 27, + Timestamp: 300, + }, + }, + }, + } + exampleMatrixLength = len(exampleMatrix) +) + +func init() { + sortMatrix(exampleMatrix) +} + +func TestGetBucketCountsAtTime(t *testing.T) { + cases := []struct { + matrix model.Matrix + length int + timeIdx int + expected []int + }{ + { + exampleMatrix, + exampleMatrixLength, + 0, + []int{10, 15, 5, 1}, + }, + { + exampleMatrix, + exampleMatrixLength, + 1, + []int{11, 15, 5, 1}, + }, + { + exampleMatrix, + exampleMatrixLength, + 2, + []int{11, 16, 10, 3}, + }, + } + + for _, c := range cases { + t.Run(fmt.Sprintf("exampleMatrix@%d", c.timeIdx), func(t *testing.T) { + res, err := getBucketCountsAtTime(c.matrix, c.length, c.timeIdx) + require.NoError(t, err) + require.Equal(t, c.expected, res) + }) + } +} + +func TestCalcClassicBucketStatistics(t *testing.T) { + cases := []struct { + matrix model.Matrix + expected *statistics + }{ + { + exampleMatrix, + &statistics{ + minPop: 4, + avgPop: 4, + 
maxPop: 4, + total: 4, + }, + }, + } + + for i, c := range cases { + t.Run(fmt.Sprintf("case %d", i), func(t *testing.T) { + res, err := calcClassicBucketStatistics(c.matrix) + require.NoError(t, err) + require.Equal(t, c.expected, res) + }) + } +} diff --git a/cmd/promtool/main.go b/cmd/promtool/main.go index 508b681b882..0332c33eaa3 100644 --- a/cmd/promtool/main.go +++ b/cmd/promtool/main.go @@ -35,9 +35,7 @@ import ( "github.com/go-kit/log" "github.com/google/pprof/profile" "github.com/prometheus/client_golang/api" - v1 "github.com/prometheus/client_golang/api/prometheus/v1" "github.com/prometheus/client_golang/prometheus" - "github.com/prometheus/client_golang/prometheus/promhttp" "github.com/prometheus/client_golang/prometheus/testutil/promlint" config_util "github.com/prometheus/common/config" "github.com/prometheus/common/model" @@ -185,6 +183,14 @@ func main() { queryLabelsEnd := queryLabelsCmd.Flag("end", "End time (RFC3339 or Unix timestamp).").String() queryLabelsMatch := queryLabelsCmd.Flag("match", "Series selector. Can be specified multiple times.").Strings() + queryAnalyzeCfg := &QueryAnalyzeConfig{} + queryAnalyzeCmd := queryCmd.Command("analyze", "Run queries against your Prometheus to analyze the usage pattern of certain metrics.") + queryAnalyzeCmd.Flag("server", "Prometheus server to query.").Required().URLVar(&serverURL) + queryAnalyzeCmd.Flag("type", "Type of metric: histogram.").Required().StringVar(&queryAnalyzeCfg.metricType) + queryAnalyzeCmd.Flag("duration", "Time frame to analyze.").Default("1h").DurationVar(&queryAnalyzeCfg.duration) + queryAnalyzeCmd.Flag("time", "Query time (RFC3339 or Unix timestamp), defaults to now.").StringVar(&queryAnalyzeCfg.time) + queryAnalyzeCmd.Flag("match", "Series selector. 
Can be specified multiple times.").Required().StringsVar(&queryAnalyzeCfg.matchers) + pushCmd := app.Command("push", "Push to a Prometheus server.") pushCmd.Flag("http.config.file", "HTTP client configuration file for promtool to connect to Prometheus.").PlaceHolder("").ExistingFileVar(&httpConfigFilePath) pushMetricsCmd := pushCmd.Command("metrics", "Push metrics to a prometheus remote write (for testing purpose only).") @@ -204,6 +210,7 @@ func main() { "test-rule-file", "The unit test file.", ).Required().ExistingFiles() + testRulesDiff := testRulesCmd.Flag("diff", "[Experimental] Print colored differential output between expected & received output.").Default("false").Bool() defaultDBPath := "data/" tsdbCmd := app.Command("tsdb", "Run tsdb commands.") @@ -230,7 +237,7 @@ func main() { dumpPath := tsdbDumpCmd.Arg("db path", "Database path (default is "+defaultDBPath+").").Default(defaultDBPath).String() dumpMinTime := tsdbDumpCmd.Flag("min-time", "Minimum timestamp to dump.").Default(strconv.FormatInt(math.MinInt64, 10)).Int64() dumpMaxTime := tsdbDumpCmd.Flag("max-time", "Maximum timestamp to dump.").Default(strconv.FormatInt(math.MaxInt64, 10)).Int64() - dumpMatch := tsdbDumpCmd.Flag("match", "Series selector.").Default("{__name__=~'(?s:.*)'}").String() + dumpMatch := tsdbDumpCmd.Flag("match", "Series selector. Can be specified multiple times.").Default("{__name__=~'(?s:.*)'}").Strings() importCmd := tsdbCmd.Command("create-blocks-from", "[Experimental] Import samples from input and produce TSDB blocks. 
Please refer to the storage docs for more details.") importHumanReadable := importCmd.Flag("human-readable", "Print human readable values.").Short('r').Bool() @@ -369,6 +376,7 @@ func main() { EnableNegativeOffset: true, }, *testRulesRun, + *testRulesDiff, *testRulesFiles...), ) @@ -390,6 +398,9 @@ func main() { case importRulesCmd.FullCommand(): os.Exit(checkErr(importRules(serverURL, httpRoundTripper, *importRulesStart, *importRulesEnd, *importRulesOutputDir, *importRulesEvalInterval, *maxBlockDuration, *importRulesFiles...))) + case queryAnalyzeCmd.FullCommand(): + os.Exit(checkErr(queryAnalyzeCfg.run(serverURL, httpRoundTripper))) + case documentationCmd.FullCommand(): os.Exit(checkErr(documentcli.GenerateMarkdown(app.Model(), os.Stdout))) @@ -997,246 +1008,6 @@ func checkMetricsExtended(r io.Reader) ([]metricStat, int, error) { return stats, total, nil } -// QueryInstant performs an instant query against a Prometheus server. -func QueryInstant(url *url.URL, roundTripper http.RoundTripper, query, evalTime string, p printer) int { - if url.Scheme == "" { - url.Scheme = "http" - } - config := api.Config{ - Address: url.String(), - RoundTripper: roundTripper, - } - - // Create new client. - c, err := api.NewClient(config) - if err != nil { - fmt.Fprintln(os.Stderr, "error creating API client:", err) - return failureExitCode - } - - eTime := time.Now() - if evalTime != "" { - eTime, err = parseTime(evalTime) - if err != nil { - fmt.Fprintln(os.Stderr, "error parsing evaluation time:", err) - return failureExitCode - } - } - - // Run query against client. - api := v1.NewAPI(c) - - ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) - val, _, err := api.Query(ctx, query, eTime) // Ignoring warnings for now. - cancel() - if err != nil { - return handleAPIError(err) - } - - p.printValue(val) - - return successExitCode -} - -// QueryRange performs a range query against a Prometheus server. 
-func QueryRange(url *url.URL, roundTripper http.RoundTripper, headers map[string]string, query, start, end string, step time.Duration, p printer) int { - if url.Scheme == "" { - url.Scheme = "http" - } - config := api.Config{ - Address: url.String(), - RoundTripper: roundTripper, - } - - if len(headers) > 0 { - config.RoundTripper = promhttp.RoundTripperFunc(func(req *http.Request) (*http.Response, error) { - for key, value := range headers { - req.Header.Add(key, value) - } - return roundTripper.RoundTrip(req) - }) - } - - // Create new client. - c, err := api.NewClient(config) - if err != nil { - fmt.Fprintln(os.Stderr, "error creating API client:", err) - return failureExitCode - } - - var stime, etime time.Time - - if end == "" { - etime = time.Now() - } else { - etime, err = parseTime(end) - if err != nil { - fmt.Fprintln(os.Stderr, "error parsing end time:", err) - return failureExitCode - } - } - - if start == "" { - stime = etime.Add(-5 * time.Minute) - } else { - stime, err = parseTime(start) - if err != nil { - fmt.Fprintln(os.Stderr, "error parsing start time:", err) - return failureExitCode - } - } - - if !stime.Before(etime) { - fmt.Fprintln(os.Stderr, "start time is not before end time") - return failureExitCode - } - - if step == 0 { - resolution := math.Max(math.Floor(etime.Sub(stime).Seconds()/250), 1) - // Convert seconds to nanoseconds such that time.Duration parses correctly. - step = time.Duration(resolution) * time.Second - } - - // Run query against client. - api := v1.NewAPI(c) - r := v1.Range{Start: stime, End: etime, Step: step} - ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) - val, _, err := api.QueryRange(ctx, query, r) // Ignoring warnings for now. - cancel() - - if err != nil { - return handleAPIError(err) - } - - p.printValue(val) - return successExitCode -} - -// QuerySeries queries for a series against a Prometheus server. 
-func QuerySeries(url *url.URL, roundTripper http.RoundTripper, matchers []string, start, end string, p printer) int { - if url.Scheme == "" { - url.Scheme = "http" - } - config := api.Config{ - Address: url.String(), - RoundTripper: roundTripper, - } - - // Create new client. - c, err := api.NewClient(config) - if err != nil { - fmt.Fprintln(os.Stderr, "error creating API client:", err) - return failureExitCode - } - - stime, etime, err := parseStartTimeAndEndTime(start, end) - if err != nil { - fmt.Fprintln(os.Stderr, err) - return failureExitCode - } - - // Run query against client. - api := v1.NewAPI(c) - ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) - val, _, err := api.Series(ctx, matchers, stime, etime) // Ignoring warnings for now. - cancel() - - if err != nil { - return handleAPIError(err) - } - - p.printSeries(val) - return successExitCode -} - -// QueryLabels queries for label values against a Prometheus server. -func QueryLabels(url *url.URL, roundTripper http.RoundTripper, matchers []string, name, start, end string, p printer) int { - if url.Scheme == "" { - url.Scheme = "http" - } - config := api.Config{ - Address: url.String(), - RoundTripper: roundTripper, - } - - // Create new client. - c, err := api.NewClient(config) - if err != nil { - fmt.Fprintln(os.Stderr, "error creating API client:", err) - return failureExitCode - } - - stime, etime, err := parseStartTimeAndEndTime(start, end) - if err != nil { - fmt.Fprintln(os.Stderr, err) - return failureExitCode - } - - // Run query against client. 
- api := v1.NewAPI(c) - ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) - val, warn, err := api.LabelValues(ctx, name, matchers, stime, etime) - cancel() - - for _, v := range warn { - fmt.Fprintln(os.Stderr, "query warning:", v) - } - if err != nil { - return handleAPIError(err) - } - - p.printLabelValues(val) - return successExitCode -} - -func handleAPIError(err error) int { - var apiErr *v1.Error - if errors.As(err, &apiErr) && apiErr.Detail != "" { - fmt.Fprintf(os.Stderr, "query error: %v (detail: %s)\n", apiErr, strings.TrimSpace(apiErr.Detail)) - } else { - fmt.Fprintln(os.Stderr, "query error:", err) - } - - return failureExitCode -} - -func parseStartTimeAndEndTime(start, end string) (time.Time, time.Time, error) { - var ( - minTime = time.Now().Add(-9999 * time.Hour) - maxTime = time.Now().Add(9999 * time.Hour) - err error - ) - - stime := minTime - etime := maxTime - - if start != "" { - stime, err = parseTime(start) - if err != nil { - return stime, etime, fmt.Errorf("error parsing start time: %w", err) - } - } - - if end != "" { - etime, err = parseTime(end) - if err != nil { - return stime, etime, fmt.Errorf("error parsing end time: %w", err) - } - } - return stime, etime, nil -} - -func parseTime(s string) (time.Time, error) { - if t, err := strconv.ParseFloat(s, 64); err == nil { - s, ns := math.Modf(t) - return time.Unix(int64(s), int64(ns*float64(time.Second))).UTC(), nil - } - if t, err := time.Parse(time.RFC3339Nano, s); err == nil { - return t, nil - } - return time.Time{}, fmt.Errorf("cannot parse %q to a valid timestamp", s) -} - type endpointsGroup struct { urlToFilename map[string]string postProcess func(b []byte) ([]byte, error) @@ -1390,15 +1161,12 @@ func importRules(url *url.URL, roundTripper http.RoundTripper, start, end, outpu evalInterval: evalInterval, maxBlockDuration: maxBlockDuration, } - client, err := api.NewClient(api.Config{ - Address: url.String(), - RoundTripper: roundTripper, - }) + api, err := 
newAPI(url, roundTripper, nil) if err != nil { return fmt.Errorf("new api client error: %w", err) } - ruleImporter := newRuleImporter(log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr)), cfg, v1.NewAPI(client)) + ruleImporter := newRuleImporter(log.NewLogfmtLogger(log.NewSyncWriter(os.Stderr)), cfg, api) errs := ruleImporter.loadGroups(ctx, files) for _, err := range errs { if err != nil { diff --git a/cmd/promtool/query.go b/cmd/promtool/query.go new file mode 100644 index 00000000000..0d7cb12cf42 --- /dev/null +++ b/cmd/promtool/query.go @@ -0,0 +1,251 @@ +// Copyright 2023 The Prometheus Authors +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package main + +import ( + "context" + "errors" + "fmt" + "math" + "net/http" + "net/url" + "os" + "strconv" + "strings" + "time" + + "github.com/prometheus/client_golang/api" + v1 "github.com/prometheus/client_golang/api/prometheus/v1" + "github.com/prometheus/client_golang/prometheus/promhttp" + + _ "github.com/prometheus/prometheus/plugins" // Register plugins. 
+) + +func newAPI(url *url.URL, roundTripper http.RoundTripper, headers map[string]string) (v1.API, error) { + if url.Scheme == "" { + url.Scheme = "http" + } + config := api.Config{ + Address: url.String(), + RoundTripper: roundTripper, + } + + if len(headers) > 0 { + config.RoundTripper = promhttp.RoundTripperFunc(func(req *http.Request) (*http.Response, error) { + for key, value := range headers { + req.Header.Add(key, value) + } + return roundTripper.RoundTrip(req) + }) + } + + // Create new client. + client, err := api.NewClient(config) + if err != nil { + return nil, err + } + + api := v1.NewAPI(client) + return api, nil +} + +// QueryInstant performs an instant query against a Prometheus server. +func QueryInstant(url *url.URL, roundTripper http.RoundTripper, query, evalTime string, p printer) int { + api, err := newAPI(url, roundTripper, nil) + if err != nil { + fmt.Fprintln(os.Stderr, "error creating API client:", err) + return failureExitCode + } + + eTime := time.Now() + if evalTime != "" { + eTime, err = parseTime(evalTime) + if err != nil { + fmt.Fprintln(os.Stderr, "error parsing evaluation time:", err) + return failureExitCode + } + } + + // Run query against client. + ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) + val, _, err := api.Query(ctx, query, eTime) // Ignoring warnings for now. + cancel() + if err != nil { + return handleAPIError(err) + } + + p.printValue(val) + + return successExitCode +} + +// QueryRange performs a range query against a Prometheus server. 
+func QueryRange(url *url.URL, roundTripper http.RoundTripper, headers map[string]string, query, start, end string, step time.Duration, p printer) int { + api, err := newAPI(url, roundTripper, headers) + if err != nil { + fmt.Fprintln(os.Stderr, "error creating API client:", err) + return failureExitCode + } + + var stime, etime time.Time + + if end == "" { + etime = time.Now() + } else { + etime, err = parseTime(end) + if err != nil { + fmt.Fprintln(os.Stderr, "error parsing end time:", err) + return failureExitCode + } + } + + if start == "" { + stime = etime.Add(-5 * time.Minute) + } else { + stime, err = parseTime(start) + if err != nil { + fmt.Fprintln(os.Stderr, "error parsing start time:", err) + return failureExitCode + } + } + + if !stime.Before(etime) { + fmt.Fprintln(os.Stderr, "start time is not before end time") + return failureExitCode + } + + if step == 0 { + resolution := math.Max(math.Floor(etime.Sub(stime).Seconds()/250), 1) + // Convert seconds to nanoseconds such that time.Duration parses correctly. + step = time.Duration(resolution) * time.Second + } + + // Run query against client. + r := v1.Range{Start: stime, End: etime, Step: step} + ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) + val, _, err := api.QueryRange(ctx, query, r) // Ignoring warnings for now. + cancel() + + if err != nil { + return handleAPIError(err) + } + + p.printValue(val) + return successExitCode +} + +// QuerySeries queries for a series against a Prometheus server. +func QuerySeries(url *url.URL, roundTripper http.RoundTripper, matchers []string, start, end string, p printer) int { + api, err := newAPI(url, roundTripper, nil) + if err != nil { + fmt.Fprintln(os.Stderr, "error creating API client:", err) + return failureExitCode + } + + stime, etime, err := parseStartTimeAndEndTime(start, end) + if err != nil { + fmt.Fprintln(os.Stderr, err) + return failureExitCode + } + + // Run query against client. 
+ ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) + val, _, err := api.Series(ctx, matchers, stime, etime) // Ignoring warnings for now. + cancel() + + if err != nil { + return handleAPIError(err) + } + + p.printSeries(val) + return successExitCode +} + +// QueryLabels queries for label values against a Prometheus server. +func QueryLabels(url *url.URL, roundTripper http.RoundTripper, matchers []string, name, start, end string, p printer) int { + api, err := newAPI(url, roundTripper, nil) + if err != nil { + fmt.Fprintln(os.Stderr, "error creating API client:", err) + return failureExitCode + } + + stime, etime, err := parseStartTimeAndEndTime(start, end) + if err != nil { + fmt.Fprintln(os.Stderr, err) + return failureExitCode + } + + // Run query against client. + ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute) + val, warn, err := api.LabelValues(ctx, name, matchers, stime, etime) + cancel() + + for _, v := range warn { + fmt.Fprintln(os.Stderr, "query warning:", v) + } + if err != nil { + return handleAPIError(err) + } + + p.printLabelValues(val) + return successExitCode +} + +func handleAPIError(err error) int { + var apiErr *v1.Error + if errors.As(err, &apiErr) && apiErr.Detail != "" { + fmt.Fprintf(os.Stderr, "query error: %v (detail: %s)\n", apiErr, strings.TrimSpace(apiErr.Detail)) + } else { + fmt.Fprintln(os.Stderr, "query error:", err) + } + + return failureExitCode +} + +func parseStartTimeAndEndTime(start, end string) (time.Time, time.Time, error) { + var ( + minTime = time.Now().Add(-9999 * time.Hour) + maxTime = time.Now().Add(9999 * time.Hour) + err error + ) + + stime := minTime + etime := maxTime + + if start != "" { + stime, err = parseTime(start) + if err != nil { + return stime, etime, fmt.Errorf("error parsing start time: %w", err) + } + } + + if end != "" { + etime, err = parseTime(end) + if err != nil { + return stime, etime, fmt.Errorf("error parsing end time: %w", err) + } + } + return 
stime, etime, nil
+}
+
+func parseTime(s string) (time.Time, error) {
+	if t, err := strconv.ParseFloat(s, 64); err == nil {
+		s, ns := math.Modf(t)
+		return time.Unix(int64(s), int64(ns*float64(time.Second))).UTC(), nil
+	}
+	if t, err := time.Parse(time.RFC3339Nano, s); err == nil {
+		return t, nil
+	}
+	return time.Time{}, fmt.Errorf("cannot parse %q to a valid timestamp", s)
+}
diff --git a/cmd/promtool/sd.go b/cmd/promtool/sd.go
index 155152e1acf..4892743fc06 100644
--- a/cmd/promtool/sd.go
+++ b/cmd/promtool/sd.go
@@ -78,12 +78,25 @@ func CheckSD(sdConfigFiles, sdJobName string, sdTimeout time.Duration, noDefault
 	defer cancel()
 
 	for _, cfg := range scrapeConfig.ServiceDiscoveryConfigs {
-		d, err := cfg.NewDiscoverer(discovery.DiscovererOptions{Logger: logger, Registerer: registerer})
+		reg := prometheus.NewRegistry()
+		refreshMetrics := discovery.NewRefreshMetrics(reg)
+		metrics := cfg.NewDiscovererMetrics(reg, refreshMetrics)
+		err := metrics.Register()
+		if err != nil {
+			fmt.Fprintln(os.Stderr, "Could not register service discovery metrics", err)
+			return failureExitCode
+		}
+
+		d, err := cfg.NewDiscoverer(discovery.DiscovererOptions{Logger: logger, Metrics: metrics})
 		if err != nil {
 			fmt.Fprintln(os.Stderr, "Could not create new discoverer", err)
 			return failureExitCode
 		}
-		go d.Run(ctx, targetGroupChan)
+		go func() {
+			d.Run(ctx, targetGroupChan)
+			metrics.Unregister()
+			refreshMetrics.Unregister()
+		}()
 	}
 
 	var targetGroups []*targetgroup.Group
diff --git a/cmd/promtool/testdata/dump-test-1.prom b/cmd/promtool/testdata/dump-test-1.prom
new file mode 100644
index 00000000000..878cdecab8a
--- /dev/null
+++ b/cmd/promtool/testdata/dump-test-1.prom
@@ -0,0 +1,15 @@
+{__name__="heavy_metric", foo="bar"} 5 0
+{__name__="heavy_metric", foo="bar"} 4 60000
+{__name__="heavy_metric", foo="bar"} 3 120000
+{__name__="heavy_metric", foo="bar"} 2 180000
+{__name__="heavy_metric", foo="bar"} 1 240000
+{__name__="heavy_metric", foo="foo"} 5 0
+{__name__="heavy_metric", foo="foo"} 4 60000
+{__name__="heavy_metric", foo="foo"} 3 120000
+{__name__="heavy_metric", foo="foo"} 2 180000
+{__name__="heavy_metric", foo="foo"} 1 240000
+{__name__="metric", baz="abc", foo="bar"} 1 0
+{__name__="metric", baz="abc", foo="bar"} 2 60000
+{__name__="metric", baz="abc", foo="bar"} 3 120000
+{__name__="metric", baz="abc", foo="bar"} 4 180000
+{__name__="metric", baz="abc", foo="bar"} 5 240000
diff --git a/cmd/promtool/testdata/dump-test-2.prom b/cmd/promtool/testdata/dump-test-2.prom
new file mode 100644
index 00000000000..4ac2ffa5aec
--- /dev/null
+++ b/cmd/promtool/testdata/dump-test-2.prom
@@ -0,0 +1,10 @@
+{__name__="heavy_metric", foo="foo"} 5 0
+{__name__="heavy_metric", foo="foo"} 4 60000
+{__name__="heavy_metric", foo="foo"} 3 120000
+{__name__="heavy_metric", foo="foo"} 2 180000
+{__name__="heavy_metric", foo="foo"} 1 240000
+{__name__="metric", baz="abc", foo="bar"} 1 0
+{__name__="metric", baz="abc", foo="bar"} 2 60000
+{__name__="metric", baz="abc", foo="bar"} 3 120000
+{__name__="metric", baz="abc", foo="bar"} 4 180000
+{__name__="metric", baz="abc", foo="bar"} 5 240000
diff --git a/cmd/promtool/testdata/dump-test-3.prom b/cmd/promtool/testdata/dump-test-3.prom
new file mode 100644
index 00000000000..faa278101ed
--- /dev/null
+++ b/cmd/promtool/testdata/dump-test-3.prom
@@ -0,0 +1,2 @@
+{__name__="metric", baz="abc", foo="bar"} 2 60000
+{__name__="metric", baz="abc", foo="bar"} 3 120000
diff --git a/cmd/promtool/tsdb.go b/cmd/promtool/tsdb.go
index e6df9b78cf2..4bba8421c2d 100644
--- a/cmd/promtool/tsdb.go
+++ b/cmd/promtool/tsdb.go
@@ -667,7 +667,7 @@ func analyzeCompaction(ctx context.Context, block tsdb.BlockReader, indexr tsdb.
 		it := fhchk.Iterator(nil)
 		bucketCount := 0
 		for it.Next() == chunkenc.ValFloatHistogram {
-			_, f := it.AtFloatHistogram()
+			_, f := it.AtFloatHistogram(nil)
 			bucketCount += len(f.PositiveBuckets)
 			bucketCount += len(f.NegativeBuckets)
 		}
@@ -682,7 +682,7 @@ func analyzeCompaction(ctx context.Context, block tsdb.BlockReader, indexr tsdb.
 		it := hchk.Iterator(nil)
 		bucketCount := 0
 		for it.Next() == chunkenc.ValHistogram {
-			_, f := it.AtHistogram()
+			_, f := it.AtHistogram(nil)
 			bucketCount += len(f.PositiveBuckets)
 			bucketCount += len(f.NegativeBuckets)
 		}
@@ -706,7 +706,7 @@ func analyzeCompaction(ctx context.Context, block tsdb.BlockReader, indexr tsdb.
 	return nil
 }
 
-func dumpSamples(ctx context.Context, path string, mint, maxt int64, match string) (err error) {
+func dumpSamples(ctx context.Context, path string, mint, maxt int64, match []string) (err error) {
 	db, err := tsdb.OpenDBReadOnly(path, nil)
 	if err != nil {
 		return err
@@ -720,11 +720,21 @@ func dumpSamples(ctx context.Context, path string, mint, maxt int64, match strin
 	}
 	defer q.Close()
 
-	matchers, err := parser.ParseMetricSelector(match)
+	matcherSets, err := parser.ParseMetricSelectors(match)
 	if err != nil {
 		return err
 	}
-	ss := q.Select(ctx, false, nil, matchers...)
+
+	var ss storage.SeriesSet
+	if len(matcherSets) > 1 {
+		var sets []storage.SeriesSet
+		for _, mset := range matcherSets {
+			sets = append(sets, q.Select(ctx, true, nil, mset...))
+		}
+		ss = storage.NewMergeSeriesSet(sets, storage.ChainedSeriesMerge)
+	} else {
+		ss = q.Select(ctx, false, nil, matcherSets[0]...)
+	}
 
 	for ss.Next() {
 		series := ss.At()
@@ -735,11 +745,11 @@ func dumpSamples(ctx context.Context, path string, mint, maxt int64, match strin
 			fmt.Printf("%s %g %d\n", lbs, val, ts)
 		}
 		for it.Next() == chunkenc.ValFloatHistogram {
-			ts, fh := it.AtFloatHistogram()
+			ts, fh := it.AtFloatHistogram(nil)
 			fmt.Printf("%s %s %d\n", lbs, fh.String(), ts)
 		}
 		for it.Next() == chunkenc.ValHistogram {
-			ts, h := it.AtHistogram()
+			ts, h := it.AtHistogram(nil)
 			fmt.Printf("%s %s %d\n", lbs, h.String(), ts)
 		}
 		if it.Err() != nil {
diff --git a/cmd/promtool/tsdb_test.go b/cmd/promtool/tsdb_test.go
index 0f0040cd3dc..aeb51a07e03 100644
--- a/cmd/promtool/tsdb_test.go
+++ b/cmd/promtool/tsdb_test.go
@@ -14,9 +14,18 @@
 package main
 
 import (
+	"bytes"
+	"context"
+	"io"
+	"math"
+	"os"
+	"runtime"
+	"strings"
 	"testing"
 
 	"github.com/stretchr/testify/require"
+
+	"github.com/prometheus/prometheus/promql"
 )
 
 func TestGenerateBucket(t *testing.T) {
@@ -41,3 +50,101 @@ func TestGenerateBucket(t *testing.T) {
 		require.Equal(t, tc.step, step)
 	}
 }
+
+// getDumpedSamples dumps samples and returns them.
+func getDumpedSamples(t *testing.T, path string, mint, maxt int64, match []string) string {
+	t.Helper()
+
+	oldStdout := os.Stdout
+	r, w, _ := os.Pipe()
+	os.Stdout = w
+
+	err := dumpSamples(
+		context.Background(),
+		path,
+		mint,
+		maxt,
+		match,
+	)
+	require.NoError(t, err)
+
+	w.Close()
+	os.Stdout = oldStdout
+
+	var buf bytes.Buffer
+	io.Copy(&buf, r)
+	return buf.String()
+}
+
+func TestTSDBDump(t *testing.T) {
+	storage := promql.LoadedStorage(t, `
+		load 1m
+			metric{foo="bar", baz="abc"} 1 2 3 4 5
+			heavy_metric{foo="bar"} 5 4 3 2 1
+			heavy_metric{foo="foo"} 5 4 3 2 1
+	`)
+
+	tests := []struct {
+		name         string
+		mint         int64
+		maxt         int64
+		match        []string
+		expectedDump string
+	}{
+		{
+			name:         "default match",
+			mint:         math.MinInt64,
+			maxt:         math.MaxInt64,
+			match:        []string{"{__name__=~'(?s:.*)'}"},
+			expectedDump: "testdata/dump-test-1.prom",
+		},
+		{
+			name:         "same matcher twice",
+			mint:         math.MinInt64,
+			maxt:         math.MaxInt64,
+			match:        []string{"{foo=~'.+'}", "{foo=~'.+'}"},
+			expectedDump: "testdata/dump-test-1.prom",
+		},
+		{
+			name:         "no duplication",
+			mint:         math.MinInt64,
+			maxt:         math.MaxInt64,
+			match:        []string{"{__name__=~'(?s:.*)'}", "{baz='abc'}"},
+			expectedDump: "testdata/dump-test-1.prom",
+		},
+		{
+			name:         "well merged",
+			mint:         math.MinInt64,
+			maxt:         math.MaxInt64,
+			match:        []string{"{__name__='heavy_metric'}", "{baz='abc'}"},
+			expectedDump: "testdata/dump-test-1.prom",
+		},
+		{
+			name:         "multi matchers",
+			mint:         math.MinInt64,
+			maxt:         math.MaxInt64,
+			match:        []string{"{__name__='heavy_metric',foo='foo'}", "{__name__='metric'}"},
+			expectedDump: "testdata/dump-test-2.prom",
+		},
+		{
+			name:         "with reduced mint and maxt",
+			mint:         int64(60000),
+			maxt:         int64(120000),
+			match:        []string{"{__name__='metric'}"},
+			expectedDump: "testdata/dump-test-3.prom",
+		},
+	}
+	for _, tt := range tests {
+		t.Run(tt.name, func(t *testing.T) {
+			dumpedMetrics := getDumpedSamples(t, storage.Dir(), tt.mint, tt.maxt, tt.match)
+			expectedMetrics, err := os.ReadFile(tt.expectedDump)
+			require.NoError(t, err)
+			if strings.Contains(runtime.GOOS, "windows") {
+				// We use "/n" while dumping on windows as well.
+				expectedMetrics = bytes.ReplaceAll(expectedMetrics, []byte("\r\n"), []byte("\n"))
+			}
+			// even though in case of one matcher samples are not sorted, the order in the cases above should stay the same.
+			require.Equal(t, string(expectedMetrics), dumpedMetrics)
+		})
+	}
+}
diff --git a/cmd/promtool/unittest.go b/cmd/promtool/unittest.go
index a25a8596d42..a89288c44a4 100644
--- a/cmd/promtool/unittest.go
+++ b/cmd/promtool/unittest.go
@@ -15,6 +15,7 @@ package main
 
 import (
 	"context"
+	"encoding/json"
 	"errors"
 	"fmt"
 	"os"
@@ -27,6 +28,7 @@ import (
 
 	"github.com/go-kit/log"
 	"github.com/grafana/regexp"
+	"github.com/nsf/jsondiff"
 	"github.com/prometheus/common/model"
 	"gopkg.in/yaml.v2"
 
@@ -40,7 +42,7 @@ import (
 // RulesUnitTest does unit testing of rules based on the unit testing files provided.
 // More info about the file format can be found in the docs.
-func RulesUnitTest(queryOpts promql.LazyLoaderOpts, runStrings []string, files ...string) int {
+func RulesUnitTest(queryOpts promql.LazyLoaderOpts, runStrings []string, diffFlag bool, files ...string) int {
 	failed := false
 
 	var run *regexp.Regexp
@@ -49,7 +51,7 @@
 	}
 
 	for _, f := range files {
-		if errs := ruleUnitTest(f, queryOpts, run); errs != nil {
+		if errs := ruleUnitTest(f, queryOpts, run, diffFlag); errs != nil {
 			fmt.Fprintln(os.Stderr, " FAILED:")
 			for _, e := range errs {
 				fmt.Fprintln(os.Stderr, e.Error())
@@ -67,7 +69,7 @@
 	return successExitCode
 }
 
-func ruleUnitTest(filename string, queryOpts promql.LazyLoaderOpts, run *regexp.Regexp) []error {
+func ruleUnitTest(filename string, queryOpts promql.LazyLoaderOpts, run *regexp.Regexp, diffFlag bool) []error {
 	fmt.Println("Unit Testing: ", filename)
 
 	b, err := os.ReadFile(filename)
@@ -109,7 +111,7 @@ func ruleUnitTest(filename string, queryOpts promql.LazyLoaderOpts, run *regexp.
 		if t.Interval == 0 {
 			t.Interval = unitTestInp.EvaluationInterval
 		}
-		ers := t.test(evalInterval, groupOrderMap, queryOpts, unitTestInp.RuleFiles...)
+		ers := t.test(evalInterval, groupOrderMap, queryOpts, diffFlag, unitTestInp.RuleFiles...)
 		if ers != nil {
 			errs = append(errs, ers...)
 		}
@@ -173,7 +175,7 @@ type testGroup struct {
 }
 
 // test performs the unit tests.
-func (tg *testGroup) test(evalInterval time.Duration, groupOrderMap map[string]int, queryOpts promql.LazyLoaderOpts, ruleFiles ...string) []error {
+func (tg *testGroup) test(evalInterval time.Duration, groupOrderMap map[string]int, queryOpts promql.LazyLoaderOpts, diffFlag bool, ruleFiles ...string) []error {
 	// Setup testing suite.
 	suite, err := promql.NewLazyLoader(nil, tg.seriesLoadingString(), queryOpts)
 	if err != nil {
@@ -345,8 +347,44 @@ func (tg *testGroup) test(evalInterval time.Duration, groupOrderMap map[string]i
 				}
 				expString := indentLines(expAlerts.String(), " ")
 				gotString := indentLines(gotAlerts.String(), " ")
-				errs = append(errs, fmt.Errorf("%s alertname: %s, time: %s, \n exp:%v, \n got:%v",
-					testName, testcase.Alertname, testcase.EvalTime.String(), expString, gotString))
+				if diffFlag {
+					// If empty, populates an empty value
+					if gotAlerts.Len() == 0 {
+						gotAlerts = append(gotAlerts, labelAndAnnotation{
+							Labels:      labels.Labels{},
+							Annotations: labels.Labels{},
+						})
+					}
+					// If empty, populates an empty value
+					if expAlerts.Len() == 0 {
+						expAlerts = append(expAlerts, labelAndAnnotation{
+							Labels:      labels.Labels{},
+							Annotations: labels.Labels{},
+						})
+					}
+
+					diffOpts := jsondiff.DefaultConsoleOptions()
+					expAlertsJSON, err := json.Marshal(expAlerts)
+					if err != nil {
+						errs = append(errs, fmt.Errorf("error marshaling expected %s alert: [%s]", tg.TestGroupName, err.Error()))
+						continue
+					}
+
+					gotAlertsJSON, err := json.Marshal(gotAlerts)
+					if err != nil {
+						errs = append(errs, fmt.Errorf("error marshaling received %s alert: [%s]", tg.TestGroupName, err.Error()))
+						continue
+					}
+
+					res, diff := jsondiff.Compare(expAlertsJSON, gotAlertsJSON, &diffOpts)
+					if res != jsondiff.FullMatch {
+						errs = append(errs, fmt.Errorf("%s alertname: %s, time: %s, \n diff: %v",
+							testName, testcase.Alertname, testcase.EvalTime.String(), indentLines(diff, " ")))
+					}
+				} else {
+					errs = append(errs, fmt.Errorf("%s alertname: %s, time: %s, \n exp:%v, \n got:%v",
+						testName, testcase.Alertname, testcase.EvalTime.String(), expString, gotString))
+				}
 			}
 		}
diff --git a/cmd/promtool/unittest_test.go b/cmd/promtool/unittest_test.go
index fb4012e3c14..b8170d784e4 100644
--- a/cmd/promtool/unittest_test.go
+++ b/cmd/promtool/unittest_test.go
@@ -125,7 +125,7 @@ func TestRulesUnitTest(t *testing.T) {
 	}
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			if got := RulesUnitTest(tt.queryOpts, nil, tt.args.files...); got != tt.want {
+			if got := RulesUnitTest(tt.queryOpts, nil, false, tt.args.files...); got != tt.want {
 				t.Errorf("RulesUnitTest() = %v, want %v", got, tt.want)
 			}
 		})
@@ -178,7 +178,7 @@ func TestRulesUnitTestRun(t *testing.T) {
 	}
 	for _, tt := range tests {
 		t.Run(tt.name, func(t *testing.T) {
-			if got := RulesUnitTest(tt.queryOpts, tt.args.run, tt.args.files...); got != tt.want {
+			if got := RulesUnitTest(tt.queryOpts, tt.args.run, false, tt.args.files...); got != tt.want {
 				t.Errorf("RulesUnitTest() = %v, want %v", got, tt.want)
 			}
 		})
diff --git a/config/config.go b/config/config.go
index ddcca84dc78..7fa03a14450 100644
--- a/config/config.go
+++ b/config/config.go
@@ -610,9 +610,12 @@ type ScrapeConfig struct {
 	// More than this label value length post metric-relabeling will cause the
 	// scrape to fail. 0 means no limit.
 	LabelValueLengthLimit uint `yaml:"label_value_length_limit,omitempty"`
-	// More than this many buckets in a native histogram will cause the scrape to
-	// fail.
+	// If there are more than this many buckets in a native histogram,
+	// buckets will be merged to stay within the limit.
 	NativeHistogramBucketLimit uint `yaml:"native_histogram_bucket_limit,omitempty"`
+	// If the growth factor of one bucket to the next is smaller than this,
+	// buckets will be merged to increase the factor sufficiently.
+	NativeHistogramMinBucketFactor float64 `yaml:"native_histogram_min_bucket_factor,omitempty"`
 	// Keep no more than this many dropped targets per job.
 	// 0 means no limit.
 	KeepDroppedTargets uint `yaml:"keep_dropped_targets,omitempty"`
@@ -1124,6 +1127,9 @@ type QueueConfig struct {
 	MinBackoff       model.Duration `yaml:"min_backoff,omitempty"`
 	MaxBackoff       model.Duration `yaml:"max_backoff,omitempty"`
 	RetryOnRateLimit bool           `yaml:"retry_on_http_429,omitempty"`
+
+	// Samples older than the limit will be dropped.
+	SampleAgeLimit model.Duration `yaml:"sample_age_limit,omitempty"`
 }
 
 // MetadataConfig is the configuration for sending metadata to remote
diff --git a/config/config_default_test.go b/config/config_default_test.go
index f5333f4c883..26623590d96 100644
--- a/config/config_default_test.go
+++ b/config/config_default_test.go
@@ -12,7 +12,6 @@
 // limitations under the License.
 
 //go:build !windows
-// +build !windows
 
 package config
diff --git a/consoles/node-cpu.html b/consoles/node-cpu.html
index d6c515d2dd0..284ad738f2b 100644
--- a/consoles/node-cpu.html
+++ b/consoles/node-cpu.html
@@ -47,7 +47,7 @@

CPU Usage