Prometheus Remote Write Exporter (5/6) #216

Merged 2 commits on Dec 9, 2020
23 changes: 12 additions & 11 deletions CHANGELOG.md
@@ -12,18 +12,19 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- `opentelemetry-sdk-extension-aws` Add method to return fields injected by propagator
([#226](https://github.com/open-telemetry/opentelemetry-python/pull/226))
- `opentelemetry-exporter-prometheus-remote-write` Prometheus Remote Write Exporter Setup
((#180)[https://github.com/open-telemetry/opentelemetry-python-contrib/pull/180])
- `opentelemetry-exporter-prometheus-remote-write` Add Exporter constructor validation methods
((#206)[https://github.com/open-telemetry/opentelemetry-python-contrib/pull/206])
- `opentelemetry-exporter-prometheus-remote-write` Add conversion to TimeSeries methods
((#207)[https://github.com/open-telemetry/opentelemetry-python-contrib/pull/207])
- `opentelemetry-exporter-prometheus-remote-write` Add request methods
((#212)[https://github.com/open-telemetry/opentelemetry-python-contrib/pull/212])
([#180](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/180))
- `opentelemetry-exporter-prometheus-remote-write` Add Exporter constructor validation methods in Prometheus Remote Write Exporter
([#206](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/206))
- `opentelemetry-exporter-prometheus-remote-write` Add conversion to TimeSeries methods in Prometheus Remote Write Exporter
([#207](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/207))
- `opentelemetry-exporter-prometheus-remote-write` Add request methods to Prometheus Remote Write Exporter
([#212](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/212))
Comment on lines +15 to +21
@AzfaarQureshi (Contributor Author) · Dec 9, 2020

Noticed that the markdown was incorrect for some of the previous Remote Write Exporter links: (text)[url] instead of [text](url). Also took the opportunity to specify that the changes were to the Prometheus RW Exporter.

- `opentelemetry-instrumentation-fastapi` Added support for excluding some routes with env var `OTEL_PYTHON_FASTAPI_EXCLUDED_URLS`
([#237](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/237))
- `opentelemetry-instrumentation-starlette` Added support for excluding some routes with env var `OTEL_PYTHON_STARLETTE_EXCLUDED_URLS`
([#237](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/237))

- Add Prometheus Remote Write Exporter integration tests in opentelemetry-docker-tests
([#216](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/216))

### Changed
- `opentelemetry-instrumentation-asgi`, `opentelemetry-instrumentation-wsgi` Return `None` for `CarrierGetter` if key not found
@@ -92,7 +93,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
([#1239](https://github.com/open-telemetry/opentelemetry-python/pull/1239))
- `opentelemetry-instrumentation-django` Bugfix use request.path replace request.get_full_path(). It will get correct span name
([#1309](https://github.com/open-telemetry/opentelemetry-python/pull/1309#))
- `opentelemetry-instrumentation-django` Record span status and http.status_code attribute on exception
- `opentelemetry-instrumentation-django` Record span status and http.status_code attribute on exception
([#1257](https://github.com/open-telemetry/opentelemetry-python/pull/1257))
- `opentelemetry-instrumentation-grpc` Rewrite gRPC server interceptor
([#1171](https://github.com/open-telemetry/opentelemetry-python/pull/1171))
@@ -120,7 +121,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
- `opentelemetry-instrumentation-pymysql` Bumped version from 0.9.3 to 0.10.1
([#1228](https://github.com/open-telemetry/opentelemetry-python/pull/1228))
- `opentelemetry-instrumentation-django` Changed span name extraction from request to comply semantic convention
([#992](https://github.com/open-telemetry/opentelemetry-python/pull/992))
([#992](https://github.com/open-telemetry/opentelemetry-python/pull/992))

## [0.13b0](https://github.com/open-telemetry/opentelemetry-python/releases/tag/v0.13b0) - 2020-09-17

@@ -264,7 +265,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
## [0.8b0](https://github.com/open-telemetry/opentelemetry-python/releases/tag/v0.8.0) - 2020-05-27

### Added
- `opentelemetry-ext-datadog` Add exporter to Datadog
- `opentelemetry-ext-datadog` Add exporter to Datadog
([#572](https://github.com/open-telemetry/opentelemetry-python/pull/572))
- `opentelemetry-ext-sqlite3` Initial release
- `opentelemetry-ext-psycopg2` Implement instrumentor interface, enabling auto-instrumentation
8 changes: 8 additions & 0 deletions tests/opentelemetry-docker-tests/tests/docker-compose.yml
@@ -39,3 +39,11 @@ services:
- "16686:16686"
- "14268:14268"
- "9411:9411"
cortex:
image: quay.io/cortexproject/cortex:v1.5.0
command:
- -config.file=./config/cortex-config.yml
volumes:
- ./prometheus-remote-write-cortex/cortex-config.yml:/config/cortex-config.yml:ro
ports:
- 9009:9009
100 changes: 100 additions & 0 deletions tests/opentelemetry-docker-tests/tests/prometheus-remote-write-cortex/cortex-config.yml
@@ -0,0 +1,100 @@
# This Cortex Config is copied from the Cortex Project documentation
# Source: https://github.com/cortexproject/cortex/blob/master/docs/configuration/single-process-config.yaml

# Configuration for running Cortex in single-process mode.
# This configuration should not be used in production.
# It is only for getting started and development.

# Disable the requirement that every request to Cortex has a
# X-Scope-OrgID header. `fake` will be substituted in instead.
auth_enabled: false

server:
  http_listen_port: 9009

  # Configure the server to allow messages up to 100MB.
  grpc_server_max_recv_msg_size: 104857600
  grpc_server_max_send_msg_size: 104857600
  grpc_server_max_concurrent_streams: 1000

distributor:
  shard_by_all_labels: true
  pool:
    health_check_ingesters: true

ingester_client:
  grpc_client_config:
    # Configure the client to allow messages up to 100MB.
    max_recv_msg_size: 104857600
    max_send_msg_size: 104857600
    use_gzip_compression: true

ingester:
  # We want our ingesters to flush chunks at the same time to optimise
  # deduplication opportunities.
  spread_flushes: true
  chunk_age_jitter: 0

  walconfig:
    wal_enabled: true
    recover_from_wal: true
    wal_dir: /tmp/cortex/wal

  lifecycler:
    # The address to advertise for this ingester. Will be autodiscovered by
    # looking up address on eth0 or en0; can be specified if this fails.
    # address: 127.0.0.1

    # We want to start immediately and flush on shutdown.
    join_after: 0
    min_ready_duration: 0s
    final_sleep: 0s
    num_tokens: 512
    tokens_file_path: /tmp/cortex/wal/tokens

    # Use an in memory ring store, so we don't need to launch a Consul.
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1

# Use local storage - BoltDB for the index, and the filesystem
# for the chunks.
schema:
  configs:
  - from: 2019-07-29
    store: boltdb
    object_store: filesystem
    schema: v10
    index:
      prefix: index_
      period: 1w

storage:
  boltdb:
    directory: /tmp/cortex/index

  filesystem:
    directory: /tmp/cortex/chunks

  delete_store:
    store: boltdb

purger:
  object_store_type: filesystem

frontend_worker:
  # Configure the frontend worker in the querier to match worker count
  # to max_concurrent on the queriers.
  match_max_concurrent: true

# Configure the ruler to scan the /tmp/cortex/rules directory for prometheus
# rules: https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules
ruler:
  enable_api: true
  enable_sharding: false
  storage:
    type: local
    local:
      directory: /tmp/cortex/rules

@@ -0,0 +1,116 @@
# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from opentelemetry import metrics

Contributor
file should have the copyright at the top

@AzfaarQureshi (Contributor Author) · Dec 8, 2020

oops, just added!

from opentelemetry.exporter.prometheus_remote_write import (
    PrometheusRemoteWriteMetricsExporter,
)
from opentelemetry.test.test_base import TestBase


def observer_callback(observer):
    array = [1.0, 15.0, 25.0, 26.0]
    for (index, usage) in enumerate(array):
        labels = {"test_label": str(index)}
        observer.observe(usage, labels)


class TestPrometheusRemoteWriteExporterCortex(TestBase):
    def setUp(self):
        super().setUp()
        self.exporter = PrometheusRemoteWriteMetricsExporter(
            endpoint="http://localhost:9009/api/prom/push",
            headers={"X-Scope-Org-ID": "5"},
        )
        self.labels = {"environment": "testing"}
        self.meter = self.meter_provider.get_meter(__name__)
        metrics.get_meter_provider().start_pipeline(
            self.meter, self.exporter, 1,
        )

    def test_export_counter(self):
Contributor

What is being tested here? There is no assertion. If what you want to test is that no exceptions were raised, use `fail`. More information here.

Contributor

We originally had the pattern you linked but were recommended internally to let the test error out naturally, since that fails as well. The main issue we found with catching a specific exception type was that it only handled one exception type. Is this wrong?

Contributor

Yes, it is wrong.

Every test case must have at least one assertion, since that indicates the intention of the test case. Even if your intention is to test that this code does not raise an exception, that must be made obvious by using an assertion. A test case that fails because, let's say, an IndexError was raised by a too-short sequence is not the same as a test case that fails with an AssertionError because some counter was less than expected. The first situation tells you that the actual testing was not completed because something else went wrong in the process. The second tells you that you were able to reach the point where you could do the testing you wanted.

For example, imagine you are testing the performance of a server. Like every test case, it begins with a criterion that must be met for the test to pass. Let's say the server must be able to reply to 100 requests in a second. So, your test case basically consists of this:

  1. Start the server
  2. Send the server 100 requests in one second
  3. Assert that you got 100 replies

If your test case fails with an AssertionError whose message is something like "We only got 94 replies", then you know that the server is not performing as expected and your testing process can be considered complete. If your test case fails with an ImportError, you can probably figure out that some dependency of the server code is not installed in your testing environment. But the most important thing you learn from this second scenario is that you did not test what you wanted to test. That means you cannot yet draw any conclusions about the performance of the server, because you got an exception other than an AssertionError. Only that kind of exception should tell you that the test case failed because what you wanted to test did not comply with what was expected of it. A testing process is only complete if there are 0 failures or if every failure is caused by an AssertionError. Every other exception is telling you: we were not able to perform the test; fix the issue and run again.

This makes sense because a tester can't report back to a developer any result other than a pass or a failure with an AssertionError. If a tester reports back an ImportError, the developer will rightly argue that the code under test was not tested, so it can't be considered at fault.

So, to summarize, it is perfectly fine for your testing criterion to be "this code must run without raising any exception". Even so, you must assert that, so the intention of the test case is obvious to anyone who reads it.
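
For instance, here is a minimal, hypothetical unittest sketch of making that criterion explicit; it is not code from this PR, and the test name and the sum() stand-in are illustrative. The PR's tests below adopt the same try/except plus self.fail pattern:

import unittest


class TestNoExceptionPattern(unittest.TestCase):
    def test_runs_without_raising(self):
        try:
            # Stand-in for the code under test.
            result = sum(range(10))
        except Exception as error:
            # Surface any exception as an AssertionError with a clear
            # message, so the run reports a FAILURE, not an unrelated crash.
            self.fail("code under test raised unexpectedly: {}".format(error))
        self.assertEqual(result, 45)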

Something else: Python allows you to catch multiple exception types. Also keep in mind that what `except` does is pretty much the same as `isinstance`, so you can catch multiple exceptions if they share a parent:

class Parent(Exception):
    pass


class Child0(Parent):
    pass


class Child1(Parent):
    pass


try:
    raise Child0()
except Parent:
    print("Caught a Child0")

try:
    raise Child1()
except Parent:
    print("Caught a Child1")
Caught a Child0
Caught a Child1
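
If the exceptions share no convenient parent, a tuple of exception types also works; a small illustrative sketch, not tied to this PR:

try:
    raise KeyError("missing")
except (KeyError, ValueError) as error:
    # A single handler catches any exception type listed in the tuple.
    print("Caught: {!r}".format(error))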

@ocelotl (Contributor) · Nov 27, 2020

By the way, pytest makes this confusing because every exception raised in a test case is reported as a failure. In my opinion, any non-AssertionError exception raised in a test case should be reported as an ERROR instead of a FAILURE. An ERROR is something that went wrong while getting ready to test something. This is what happens when an exception is raised in a fixture, which is exactly that: code that is run to get something ready for testing. Here is an example:

from pytest import fixture


@fixture
def preparation():
    1 / 0


def test_case(preparation):

    assert 1 > 0
pytest test_error.py 
===================================================================================== test session starts ======================================================================================
platform linux -- Python 3.8.3, pytest-5.4.3, py-1.9.0, pluggy-0.13.1
rootdir: /home/ocelotl/lightstep/metric_language_equivalent_tests/python/varman
plugins: cov-2.10.0
collected 1 item                                                                                                                                                                               

test_error.py E                                                                                                                                                                          [100%]

============================================================================================ ERRORS ============================================================================================
_________________________________________________________________________________ ERROR at setup of test_case __________________________________________________________________________________

    @fixture
    def preparation():
>       1 / 0
E       ZeroDivisionError: division by zero

test_error.py:6: ZeroDivisionError
=================================================================================== short test summary info ====================================================================================
ERROR test_error.py::test_case - ZeroDivisionError: division by zero
======================================================================================= 1 error in 0.14s =======================================================================================

As you can see here, the result of the run is ERROR and not FAILURE, because pytest is telling us that it was unable to do the actual testing: something went wrong while getting ready to run the test. ERROR means the tester must fix something; FAILURE means the developer must fix something.

Maybe someday I'll write a pytest plugin that will make every non-AssertionError exception be reported as an ERROR 😛
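
Such a plugin could hook pytest's reporting; the following is a rough, untested sketch, not something from this PR. The two hook names are real pytest hooks, but the flagging attribute and the reclassification logic are assumptions:

# conftest.py
import pytest


@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    outcome = yield
    report = outcome.get_result()
    # Flag call-phase failures whose exception was not an AssertionError.
    report.non_assertion_failure = (
        report.when == "call"
        and report.failed
        and call.excinfo is not None
        and not call.excinfo.errisinstance(AssertionError)
    )


def pytest_report_teststatus(report, config):
    # Re-label the flagged failures as ERROR in the test summary.
    if getattr(report, "non_assertion_failure", False):
        return "error", "E", "ERROR"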

Contributor Author

ahh okay, thanks for the detailed answer! I will fix it rn :D

@shovnik (Contributor) · Nov 27, 2020

Although I am not the one making the fix, just wanted to say thanks for the detailed response. It was very insightful.

Contributor

My pleasure, fantastic job on getting all these PRs done @AzfaarQureshi @shovnik 💪

        try:
            requests_counter = self.meter.create_counter(
                name="counter",
                description="test_export_counter",
                unit="1",
                value_type=int,
            )
            requests_counter.add(25, self.labels)
        except Exception as e:
            self.fail(
                "Export counter failed with unexpected error {}".format(e)
            )

    def test_export_valuerecorder(self):
        try:
            requests_size = self.meter.create_valuerecorder(
                name="valuerecorder",
                description="test_export_valuerecorder",
                unit="1",
                value_type=int,
            )
            requests_size.record(25, self.labels)
        except Exception as e:
            self.fail(
                "Export valuerecorder failed with unexpected error {}".format(
                    e
                )
            )

    def test_export_updowncounter(self):
        try:
            requests_size = self.meter.create_updowncounter(
                name="updowncounter",
                description="test_export_updowncounter",
                unit="1",
                value_type=int,
            )
            requests_size.add(-25, self.labels)
        except Exception as e:
            self.fail(
                "Export updowncounter failed with unexpected error {}".format(
                    e
                )
            )

    def test_export_sumobserver(self):
        try:
            self.meter.register_sumobserver(
                callback=observer_callback,
                name="sumobserver",
                description="test_export_sumobserver",
                unit="1",
                value_type=float,
            )
        except Exception as e:
            self.fail(
                "Export sumobserver failed with unexpected error {}".format(e)
            )

    def test_export_updownsumobserver(self):
        try:
            self.meter.register_updownsumobserver(
                callback=observer_callback,
                name="updownsumobserver",
                description="test_export_updownsumobserver",
                unit="1",
                value_type=float,
            )
        except Exception as e:
            self.fail(
                "Export updownsumobserver failed with unexpected error {}".format(
                    e
                )
            )
10 changes: 7 additions & 3 deletions tox.ini
@@ -198,7 +198,7 @@ commands_pre =
test: pip install {toxinidir}/opentelemetry-python-core/opentelemetry-api {toxinidir}/opentelemetry-python-core/opentelemetry-sdk {toxinidir}/opentelemetry-python-core/tests/util

test: pip install {toxinidir}/opentelemetry-python-core/opentelemetry-instrumentation

celery: pip install {toxinidir}/instrumentation/opentelemetry-instrumentation-celery[test]

grpc: pip install {toxinidir}/instrumentation/opentelemetry-instrumentation-grpc[test]
@@ -341,7 +341,8 @@ deps =
sqlalchemy ~= 1.3.16
redis ~= 3.3.11
celery ~= 4.0, != 4.4.4

protobuf>=3.13.0
requests==2.25.0
changedir =
tests/opentelemetry-docker-tests/tests

@@ -361,7 +362,10 @@ commands_pre =
-e {toxinidir}/instrumentation/opentelemetry-instrumentation-aiopg \
-e {toxinidir}/instrumentation/opentelemetry-instrumentation-redis \
-e {toxinidir}/instrumentation/opentelemetry-instrumentation-system-metrics \
-e {toxinidir}/opentelemetry-python-core/exporter/opentelemetry-exporter-opencensus
-e {toxinidir}/opentelemetry-python-core/exporter/opentelemetry-exporter-opencensus \
-e {toxinidir}/exporter/opentelemetry-exporter-prometheus-remote-write
sudo apt-get install libsnappy-dev
pip install python-snappy
docker-compose up -d
python check_availability.py
commands =