Merge pull request #1473 from grafana/prepare-2.0.0-rc.3
Prepare 2.0.0-rc.3
pracucci committed Mar 14, 2022
2 parents c0c349e + ebf4983 commit cc59645
Showing 187 changed files with 3,868 additions and 2,728 deletions.
5 changes: 4 additions & 1 deletion CHANGELOG.md
Original file line number Diff line number Diff line change
@@ -8,7 +8,7 @@

### Mimirtool

## 2.0.0-rc.2
## 2.0.0-rc.3

### Grafana Mimir

@@ -699,6 +699,7 @@ _Changes since `grafana/cortex-jsonnet` `1.9.0`._
* [ENHANCEMENT] Added support to multi-zone store-gateway deployments. #608 #615
* [ENHANCEMENT] Show supplementary alertmanager services in the Rollout Progress dashboard. #738 #855
* [ENHANCEMENT] Added `mimir` to default job names. This makes dashboards and alerts work when Mimir is installed in single-binary mode and the deployment is named `mimir`. #921
* [ENHANCEMENT] Introduced a new alert for the Alertmanager: `MimirAlertmanagerAllocatingTooMuchMemory`. It has two severities based on memory usage against limits: a `warning` level at 80% and a `critical` level at 90%. #1206
* [BUGFIX] Fixed `CortexIngesterHasNotShippedBlocks` alert false positive in case an ingester instance had ingested samples in the past, then no traffic was received for a long period and then it started receiving samples again. [#308](https://github.com/grafana/cortex-jsonnet/pull/308)
* [BUGFIX] Fixed `CortexInconsistentRuntimeConfig` metric. [#335](https://github.com/grafana/cortex-jsonnet/pull/335)
* [BUGFIX] Fixed scaling dashboard to correctly work when a Cortex service deployment spans across multiple zones (a zone is expected to have the `zone-[a-z]` suffix). [#365](https://github.com/grafana/cortex-jsonnet/pull/365)
@@ -820,6 +821,7 @@ _Changes since `grafana/cortex-jsonnet` `1.9.0`._
* [CHANGE] Remove the support for the test-exporter. #1133
* [CHANGE] Removed `$.distributor_deployment_labels`, `$.ingester_deployment_labels` and `$.querier_deployment_labels` fields, which were used by gossip.libsonnet to inject an additional label. Now the label is injected directly into pods of statefulsets and deployments. #1297
* [CHANGE] Disabled `-ingester.readiness-check-ring-health`. #1352
* [CHANGE] Changed Alertmanager CPU request from `100m` to `2` cores, and memory request from `1Gi` to `10Gi`. Set Alertmanager memory limit to `15Gi`. #1206
* [FEATURE] Added query sharding support. It can be enabled setting `cortex_query_sharding_enabled: true` in the `_config` object. #653
* [FEATURE] Added shuffle-sharding support. It can be enabled and configured using the following config: #902
```
@@ -851,6 +853,7 @@ _Changes since `grafana/cortex-jsonnet` `1.9.0`._
* [BUGFIX] Treat `compactor_blocks_retention_period` type as string rather than int. [#395](https://github.com/grafana/cortex-jsonnet/pull/395)
* [BUGFIX] Pass `-ruler-storage.s3.endpoint` to ruler when using S3. [#421](https://github.com/grafana/cortex-jsonnet/pull/421)
* [BUGFIX] Remove service selector on label `gossip_ring_member` from other services than `gossip-ring`. [#1008](https://github.com/grafana/mimir/pull/1008)
* [BUGFIX] Rename `-ingester.readiness-check-ring-health` to `-ingester.ring.readiness-check-ring-health`, to reflect the current name of the flag. #1460

### Mimirtool

9 changes: 5 additions & 4 deletions Makefile
@@ -33,8 +33,8 @@ BINARY_SUFFIX ?= ""
IMAGE_PREFIX ?= grafana/
BUILD_IMAGE ?= $(IMAGE_PREFIX)mimir-build-image

# For a tag push GITHUB_REF will look like refs/tags/<tag_name>,
# If finding refs/tags/ does not equal emptystring then use
# For a tag push, $GITHUB_REF will look like refs/tags/<tag_name>.
# If finding refs/tags/ does not equal empty string, then use
# the tag we are at as the image tag.
ifneq (,$(findstring refs/tags/, $(GITHUB_REF)))
GIT_TAG := $(shell git tag --points-at HEAD)
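The tag-detection logic in the comment above can be sketched in plain shell. This is a hedged illustration only — the sample `GITHUB_REF` value is made up, and the real Makefile derives the tag from `git tag --points-at HEAD` rather than parsing `$GITHUB_REF` directly; the sketch only mirrors the `findstring` condition:

```shell
# Sketch of the Makefile's findstring check: when GITHUB_REF is a tag ref,
# the tag name becomes the image tag. The sample value is hypothetical.
GITHUB_REF="refs/tags/mimir-2.0.0-rc.3"

case "$GITHUB_REF" in
  refs/tags/*) IMAGE_TAG="${GITHUB_REF#refs/tags/}" ;;  # strip the refs/tags/ prefix
  *)           IMAGE_TAG="latest" ;;                    # not a tag push: fall back
esac

echo "$IMAGE_TAG"
```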
@@ -62,7 +62,7 @@ DOC_TEMPLATES := docs/sources/configuring/reference-configuration-parameters.tem

# Documents to run through embedding
DOC_EMBED := docs/sources/architecture/components/query-frontend/using-the-query-frontend-with-prometheus.md \
docs/sources/operating-grafana-mimir/mirror-requests-to-a-second-cluster.md \
docs/sources/operating/mirroring-requests-to-a-second-cluster.md \
docs/sources/architecture/components/overrides-exporter.md \
docs/sources/getting-started/_index.md \
operations/mimir/README.md
@@ -109,6 +109,7 @@ push-multiarch-mimir:
$(SUDO) docker buildx build -o type=registry --platform linux/amd64,linux/arm64 --build-arg=revision=$(GIT_REVISION) --build-arg=goproxyValue=$(GOPROXY_VALUE) --build-arg=USE_BINARY_SUFFIX=true -t $(IMAGE_PREFIX)mimir:$(IMAGE_TAG) cmd/mimir

# This target fetches the current build image and tags it as "latest". It can be used instead of building the image locally.
.PHONY: fetch-build-image
fetch-build-image:
docker pull $(BUILD_IMAGE):$(LATEST_BUILD_IMAGE_TAG)
docker tag $(BUILD_IMAGE):$(LATEST_BUILD_IMAGE_TAG) $(BUILD_IMAGE):latest
@@ -206,7 +207,7 @@ GOVOLUMES= -v $(shell pwd)/.cache:/go/cache:delegated,z \
# Mount local ssh credentials to be able to clone private repos when doing `mod-check`
SSHVOLUME= -v ~/.ssh/:/root/.ssh:delegated,z

exes $(EXES) protos $(PROTO_GOS) lint test test-with-race cover shell mod-check check-protos doc format dist: mimir-build-image/$(UPTODATE)
exes $(EXES) protos $(PROTO_GOS) lint test test-with-race cover shell mod-check check-protos doc format dist: fetch-build-image
@mkdir -p $(shell pwd)/.pkg
@mkdir -p $(shell pwd)/.cache
@echo
2 changes: 1 addition & 1 deletion VERSION
@@ -1 +1 @@
2.0.0-rc.2
2.0.0-rc.3
9 changes: 6 additions & 3 deletions docs/docs.mk
@@ -12,7 +12,8 @@ DOCS_BASE_URL ?= "localhost:$(DOCS_HOST_PORT)"

DOCS_VERSION = next

DOCS_DOCKER_RUN_FLAGS = -ti -v $(CURDIR)/$(DOCS_DIR):/hugo/content/docs/$(DOCS_PROJECT)/$(DOCS_VERSION):ro,z -e HUGO_REFLINKSERRORLEVEL=ERROR -p $(DOCS_HOST_PORT):$(DOCS_LISTEN_PORT) --rm $(DOCS_IMAGE)
HUGO_REFLINKSERRORLEVEL ?= WARNING
DOCS_DOCKER_RUN_FLAGS = -ti -v $(CURDIR)/$(DOCS_DIR):/hugo/content/docs/$(DOCS_PROJECT)/$(DOCS_VERSION):ro,z -e HUGO_REFLINKSERRORLEVEL=$(HUGO_REFLINKSERRORLEVEL) -p $(DOCS_HOST_PORT):$(DOCS_LISTEN_PORT) --rm $(DOCS_IMAGE)
DOCS_DOCKER_CONTAINER = $(DOCS_PROJECT)-docs

# This wrapper will serve documentation on a local webserver.
@@ -23,7 +24,9 @@ define docs_docker_run
@if [[ -z $${NON_INTERACTIVE} ]]; then \
read -p "Press a key to continue"; \
fi
@docker run --name $(DOCS_DOCKER_CONTAINER) $(DOCS_DOCKER_RUN_FLAGS) /bin/bash -c 'find content/docs/ -mindepth 1 -maxdepth 1 -type d -a ! -name "$(DOCS_PROJECT)" -exec rm -rf {} \; && touch content/docs/mimir/_index.md && exec $(1)'
# The loki _index.md file is intentionally used until the equivalent file in the grafana/website repository is
# created for Mimir.
@docker run --name $(DOCS_DOCKER_CONTAINER) $(DOCS_DOCKER_RUN_FLAGS) /bin/bash -c 'mv content/docs/loki/_index.md content/docs/$(DOCS_PROJECT)/ && find content/docs/ -mindepth 1 -maxdepth 1 -type d -a ! -name "$(DOCS_PROJECT)" -exec rm -rf {} \; && exec $(1)'
endef

.PHONY: docs-docker-rm
@@ -37,4 +40,4 @@ docs-pull:
.PHONY: docs
docs: ## Serve documentation locally.
docs: docs-pull
$(call docs_docker_run,hugo server --debug --baseUrl=$(DOCS_BASE_URL) -p $(DOCS_LISTEN_PORT) --bind 0.0.0.0)
$(call docs_docker_run,make server HUGO_PORT=$(DOCS_LISTEN_PORT))
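The `?=` assignment introduced above for `HUGO_REFLINKSERRORLEVEL` means that a value set in the environment overrides the Makefile default. A minimal, self-contained sketch of that behavior, using a throwaway makefile rather than the real `docs.mk`:

```shell
# Demonstrate GNU make's `?=` operator: it assigns only if the variable is not
# already defined (for example, by the environment). Throwaway makefile.
printf 'HUGO_REFLINKSERRORLEVEL ?= WARNING\nprint-level:\n\t@echo $(HUGO_REFLINKSERRORLEVEL)\n' > /tmp/demo.mk

default_level=$(make -s -f /tmp/demo.mk print-level)                                  # uses the default
override_level=$(HUGO_REFLINKSERRORLEVEL=ERROR make -s -f /tmp/demo.mk print-level)   # environment wins
echo "$default_level $override_level"
```

This is why `make docs` can be run as-is, while CI or a stricter local run can export `HUGO_REFLINKSERRORLEVEL=ERROR` without editing the file.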
@@ -1,8 +1,6 @@
---
title: Contributing
linkTitle: "Contributing"
weight: 10
menu:
---

Welcome! We're excited that you're interested in contributing. Below are some basic guidelines.
41 changes: 41 additions & 0 deletions docs/internal/contributing/design-patterns-and-conventions.md
@@ -0,0 +1,41 @@
---
title: "Design patterns and code conventions"
description: ""
weight: 10
---

# Design patterns and code conventions

Grafana Mimir adopts some design patterns and code conventions that we ask you to follow when contributing to the project. These conventions have been adopted based on experience gained over time, and they aim to enforce good coding practices and keep a consistent UX (e.g. configuration).

## Go coding style

Grafana Mimir follows the [Go Code Review Comments](https://github.com/golang/go/wiki/CodeReviewComments) styleguide and the [Formatting and style](https://peter.bourgon.org/go-in-production/#formatting-and-style) section of Peter Bourgon's [Go: Best Practices for Production Environments](https://peter.bourgon.org/go-in-production/).

## No global variables

- Do not use global variables

## Prometheus metrics

When registering a metric:

- Do not use a global variable for the metric
- Create and register the metric with `promauto.With(reg)`
- In any internal Grafana Mimir component, do not register the metric to the default Prometheus registerer; instead, take the registerer as input (e.g. `NewComponent(reg prometheus.Registerer)`)

Testing metrics:

- When writing unit tests, test exported metrics using `testutil.GatherAndCompare()`

## Config file and CLI flags conventions

Naming:

- Config file options should be lowercase, with words separated by `_` (underscore), e.g. `memcached_client`
- CLI flags should be lowercase, with words separated by `-` (dash), e.g. `memcached-client`
- When adding a new config option, check whether a similar one already exists in the [config](../configuration/config-file-reference.md) and keep the same naming (e.g. `addresses` for a list of network endpoints)

Documentation:

- A CLI flag mentioned in the documentation or changelog should always be prefixed with a single `-` (dash)
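The naming rules above imply a mechanical mapping between the config-file form and the CLI form of an option. A small shell sketch of that mapping (the option name is just the example used in the list above):

```shell
# Map a config-file option name to its CLI-flag form per the convention above:
# lowercase everywhere, underscores in the config file, dashes on the CLI,
# and a single leading dash when written in docs or changelogs.
config_option="memcached_client"
cli_flag="-$(printf '%s' "$config_option" | tr '_' '-')"
echo "$cli_flag"
```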
@@ -1,8 +1,6 @@
---
title: "How integration tests work"
linkTitle: "How integration tests work"
weight: 5
slug: how-integration-tests-work
---

Mimir integration tests are written in Go and based on a [custom framework](https://github.com/grafana/mimir/tree/main/integration/e2e) that runs Mimir and its dependencies in Docker containers, using the Go [`testing`](https://golang.org/pkg/testing/) package for assertions. Integration tests run in CI for every PR, and can easily be executed locally during development (only Docker is required).
@@ -1,8 +1,6 @@
---
title: "How to upgrade Golang version"
linkTitle: "How to upgrade Golang version"
weight: 4
slug: how-to-upgrade-golang-version
---

To upgrade the Golang version:
5 changes: 1 addition & 4 deletions docs/internal/how-to-update-the-build-image.md
@@ -1,11 +1,8 @@
---
title: "How to update the build image"
linkTitle: "How to update the build image"
weight: 5
slug: how-to-update-the-build-image
---

The build image currently can only be updated by a Cortex maintainer. If you're not a maintainer you can still open a PR with the changes, asking a maintainer to assist you publishing the updated image. The procedure is:
The build image currently can only be updated by a Grafana Mimir maintainer. If you're not a maintainer, you can still open a PR with the changes and ask a maintainer to assist you in publishing the updated image. The procedure is:

1. Update `mimir-build-image/Dockerfile`.
2. Build and publish the image by using `make push-multiarch-build-image`. This will build and push a multi-platform Docker image (for linux/amd64 and linux/arm64). Pushing to the `grafana/mimir-build-image` repository can only be done by a maintainer. Running this step successfully requires [Docker Buildx](https://docs.docker.com/buildx/working-with-buildx/), but does not require a specific platform.
34 changes: 26 additions & 8 deletions docs/internal/tools/trafficdump.md
@@ -1,26 +1,44 @@
---
title: "Tenant injector"
title: "Trafficdump"
description: ""
weight: 100
---

# Trafficdump

Trafficdump tool can read packets from captured tcpdump output, reassemble them into TCP streams
and parse HTTP requests and responses. It then prints requests and responses as json (one request/response per line)
Trafficdump is a tool that can read packets from captured `tcpdump` output, reassemble them into TCP streams
and parse HTTP requests and responses. It then prints requests and responses as JSON (one request/response per line)
for further processing. Trafficdump can only parse "raw" HTTP requests and responses, and not HTTP requests and responses
wrapped in gRPC, as used by Mimir between some components. Best place to capture such traffic is on the entrypoint to Mimir
(eg. authentication gateway/proxy).
wrapped in gRPC, as used by Grafana Mimir between some components. The best place to capture such traffic is on the entrypoint to Grafana Mimir
(e.g. authentication gateway/proxy).

It has some Mimir-specific and generic HTTP features:
It has some Grafana Mimir-specific and generic HTTP features:

- filter requests based on Tenant (in Basic or X-Scope-OrgId header)
- filter requests based on URL path
- filter requests based on status code of the response
- decode Mimir push requests
- decode Grafana Mimir push requests
- filter requests based on matching series in push requests

Trafficdump can be used to inspect both remote-write requests and queries.

Note that trafficdump currently cannot decode `LINUX_SSL2` link type, which is used when doing `tcpdump -i any` on Linux.
## Installation

Trafficdump requires the pcap library to be installed before the tool is compiled. For example:

- `sudo apt install libpcap-dev` (Ubuntu and its derivatives)
- `dnf install libpcap-devel` (Fedora, CentOS, Red Hat)

Once libpcap is installed, build the `trafficdump` binary in the `tools/trafficdump` directory:

```shell
cd mimir/tools/trafficdump
make
```

If the build is successful, the `trafficdump` binary will be in the same directory. You can list the tool's options with
`./trafficdump -h`.

Note that Trafficdump currently cannot decode the `LINUX_SLL2` link type, which is used when doing `tcpdump -i any` on Linux.
Capturing traffic with `tcpdump -i eth0` (and link type ETHERNET / EN10MB) works fine.
40 changes: 15 additions & 25 deletions docs/sources/_index.md
@@ -1,41 +1,31 @@
---
title: "Mimir technical documentation"
linkTitle: "Documentation"
title: "Grafana Mimir technical documentation"
weight: 1
menu:
main:
weight: 1
---

Mimir provides horizontally scalable, highly available, multi-tenant, long-term storage for [Prometheus](https://prometheus.io).
Grafana Mimir provides horizontally scalable, highly available, multi-tenant, long-term storage for [Prometheus](https://prometheus.io).

- **Horizontally scalable:** Mimir can run across multiple machines in a cluster, exceeding the throughput and storage of a single machine. This enables you to send the metrics from multiple Prometheus servers to a single Mimir cluster and run "globally aggregated" queries across all data in a single place.
- **Highly available:** When run in a cluster, Mimir can replicate data between machines. This allows you to survive machine failure without gaps in your graphs.
- **Multi-tenant:** Mimir can isolate data and queries from multiple different independent
- **Horizontally scalable:** Grafana Mimir can run across multiple machines in a cluster, exceeding the throughput and storage of a single machine. This enables you to send the metrics from multiple Prometheus servers to a single Grafana Mimir cluster and run globally aggregated queries across all data in a single place.
- **Highly available:** When run in a cluster, Grafana Mimir replicates data between machines.
This makes Grafana Mimir resilient to machine failure, which ensures that there is no data missing in your graphs.
- **Multi-tenant:** Grafana Mimir can isolate data and queries from multiple independent
Prometheus sources in a single cluster, allowing untrusted parties to share the same cluster.
- **Long term storage:** Mimir supports S3, GCS, Swift and Microsoft Azure for long term storage of metric data. This allows you to durably store data for longer than the lifetime of any single machine, and use this data for long term capacity planning.
- **Long-term storage:** Grafana Mimir supports S3, GCS, Swift, and Microsoft Azure for long-term storage of metric data. This enables you to durably store data for longer than the lifetime of a single machine, and use this data for long-term capacity planning.

## Documentation

If you’re new to Mimir, read the [Getting started guide](getting-started/_index.md).
If you’re new to Grafana Mimir, read [Getting started with Grafana Mimir]({{< relref "./getting-started/_index.md" >}}).

Before deploying Mimir with a permanent storage backend, read:
Before deploying Grafana Mimir, read:

1. [An overview of Mimir’s architecture](architecture.md)
1. [Getting started with Mimir](getting-started/_index.md)
1. [Configuring Mimir](configuring/_index.md)
1. [Grafana Mimir architecture]({{< relref "architecture.md" >}})
1. [Getting started with Grafana Mimir]({{< relref "getting-started/_index.md" >}})
1. [Configuring Grafana Mimir]({{< relref "configuring/_index.md" >}})

There are also individual [guides](guides/_index.md) to many tasks.
Before deploying, review the important [security advice](guides/security.md).
## Hosted Grafana Mimir (Prometheus as a service)

## Contributing

To contribute to Mimir, see the [contributor guidelines](contributing/).

## Hosted Mimir (Prometheus as a service)

Mimir is used in [Grafana Cloud](https://grafana.com/cloud), and is primarily used as a [remote write](https://prometheus.io/docs/operating/configuration/#remote_write) destination for Prometheus via a Prometheus-compatible query API.
Grafana Mimir is used in [Grafana Cloud](https://grafana.com/cloud), and is primarily used as a [remote write](https://prometheus.io/docs/operating/configuration/#remote_write) destination for Prometheus via a Prometheus-compatible query API.

### Grafana Cloud

As the creators of [Grafana](https://grafana.com/oss/grafana/), [Loki](https://grafana.com/oss/loki/), and [Tempo](https://grafana.com/oss/tempo/), Grafana Labs can offer you the most wholistic Observability-as-a-Service stack out there.
As the creators of [Grafana](https://grafana.com/oss/grafana/), [Grafana Loki](https://grafana.com/oss/loki/), and [Grafana Tempo](https://grafana.com/oss/tempo/), Grafana Labs can offer you the most holistic Observability-as-a-Service stack out there.
