Stackdriver end to end tests #305
Conversation
Commits:
- Modified structure to use start time and end time from view data
- Preparing stackdriver to receive transport class
- Update tests of stackdriver exporter
- Creation of a sample with the new exporter
- Removed returning error pattern
- Update readme with Stackdriver stats info (stats information related to Stackdriver was added in the main readme; two sections were created to accommodate Trace and Stats contents)
- Adjust title for trace and stats sections
We found a Contributor License Agreement for you (the sender of this pull request), but were unable to find agreements for all the commit author(s) or Co-authors. If you authored these, maybe you used a different email address in the git commits than was used to sign the CLA (login here to double check)? If these were authored by someone else, then they will need to sign a CLA as well, and confirm that they're okay with these being contributed to Google.
    stats_recorder = stats.stats_recorder
    client = monitoring_v3.MetricServiceClient()
    exporter = stackdriver.StackdriverStatsExporter(options=stackdriver.Options(project_id="opencenus-node"),
Why use the opencensus-node project id?
@vcasadei Thanks for this PR! I have just configured the credentials to give it permission to write to Stackdriver Monitoring; the CI is running and the system tests for the stackdriver stats exporter passed. I also patched the monitoring client in the stackdriver unit test to prevent it from talking to the real monitoring client. Ping me if you need anything :)
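The patching mentioned above can be sketched with `unittest.mock`. This is a minimal sketch, not the PR's actual test code: `FakeExporter` is a hypothetical stand-in for `StackdriverStatsExporter`, used here only to show how a mocked client keeps the test off the network.

```python
from unittest import mock


class FakeExporter:
    """Hypothetical stand-in for StackdriverStatsExporter."""

    def __init__(self, client):
        self.client = client

    def export(self, view_data):
        # Delegates to the (mocked) client instead of the real API.
        self.client.create_time_series(view_data)


# The mock records calls, so the unit test never talks to Stackdriver.
mock_client = mock.Mock()
exporter = FakeExporter(client=mock_client)
exporter.export(["sample-view-data"])

mock_client.create_time_series.assert_called_once_with(["sample-view-data"])
```

In the real suite the same effect is achieved by patching the monitoring client class before the exporter is constructed.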
    measure_map.measure_int_put(VIDEO_SIZE_MEASURE_ASYNC, 25 * MiB)
    measure_map.record(tag_map)
    self.assertTrue(True)
This isn't testing anything; maybe assert something meaningful here?

In this case there is an issue because we are testing in an async way. Due to our async transport (which waits around 5 seconds before sending data), we cannot immediately read it back from the Stackdriver API to verify that the data was sent correctly.
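One common way around the async delay described above is to poll for the result instead of asserting immediately. A minimal sketch, assuming a polling helper; `wait_for` is a hypothetical name, not part of opencensus:

```python
import time


def wait_for(predicate, timeout=10.0, interval=0.5):
    """Poll `predicate` until it returns a truthy value or `timeout` elapses.

    Returns the predicate's truthy result, or None on timeout.
    """
    deadline = time.monotonic() + timeout
    while True:
        result = predicate()
        if result:
            return result
        if time.monotonic() >= deadline:
            return None
        time.sleep(interval)
```

The test could then call something like `wait_for(lambda: descriptor_exists(view_name), timeout=30)` and assert the result is not None, instead of `assertTrue(True)`.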
    name = exporter.client.project_path(PROJECT)
    list_metrics_descriptors = exporter.client.list_metric_descriptors(name)
    element = [element for element in list_metrics_descriptors if element.description == "SampleViewDescriptionSync"]
    self.assertTrue(any(element))
Printing out the element, it looks like:
[name: "projects/[PROJECT_ID]/metricDescriptors/custom.googleapis.com/opencensus/SampleViewNameSync"
labels {
key: "SampleKeySync"
}
metric_kind: CUMULATIVE
value_type: DISTRIBUTION
unit: "By"
description: "SampleViewDescriptionSync"
display_name: "OpenCensus/SampleViewNameSync"
type: "custom.googleapis.com/opencensus/SampleViewNameSync"
]
Can we assert each of the attributes equals what we expected?
And we will also need to remove the exported metrics after running the test, because next time even if the exporter fails, the test will still pass as it is using the previous data. Or we can randomly generate some unique name for the metric to make sure the one we assert is the one we just exported.
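The unique-name idea above can be sketched as follows. This is a hypothetical helper (name and suffix length assumed), not code from the PR:

```python
import uuid


def unique_view_name(base="SampleViewNameSync"):
    """Append a random suffix so each test run exports a distinct metric.

    A stale descriptor left over from a previous run then cannot make the
    test pass, even if the exporter fails this time.
    """
    return "%s-%s" % (base, uuid.uuid4().hex[:8])
```

The test would register the view under `unique_view_name()` and assert that a descriptor with exactly that name (and the expected `metric_kind`, `value_type`, `unit`, etc.) appears.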
    list_metrics_descriptors = exporter.client.list_metric_descriptors(name)
    element = next((element for element in list_metrics_descriptors if element.description == view_description), None)
    self.assertNotEqual(element, None)
self.assertIsNotNone(element)
    list_metrics_descriptors = exporter.client.list_metric_descriptors(name)
    element = next((element for element in list_metrics_descriptors if element.description == view_description), None)
    self.assertNotEqual(element, None)
    time.sleep(random.randint(1, 10) / 1000.0)
    name = exporter.client.project_path(PROJECT)
    list_metrics_descriptors = exporter.client.list_metric_descriptors(name)
Retry logic should be added here instead of around the whole test: we want to wait for the metric creation, and we don't want to create the metric again on a retry after a failure.
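The targeted retry suggested above might look like this sketch. `find_descriptor_with_retry` is a hypothetical helper; `list_descriptors` stands in for a call like `exporter.client.list_metric_descriptors(name)`, so only the lookup is retried and the export itself runs once:

```python
import time


def find_descriptor_with_retry(list_descriptors, description,
                               attempts=5, delay=1.0):
    """Retry only the descriptor lookup.

    The metric is created once by the exporter before this is called; we
    just poll until the backend reports the descriptor, or give up.
    """
    for attempt in range(attempts):
        match = next((d for d in list_descriptors()
                      if d.description == description), None)
        if match is not None:
            return match
        if attempt < attempts - 1:
            time.sleep(delay)
    return None
```

The test then asserts `find_descriptor_with_retry(...)` is not None, without re-running the export on each attempt.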
@songy23 Could you help us fix the CLA? Thanks!
|
A Googler has manually verified that the CLAs look good. (Googler, please make sure the reason for overriding the CLA status is clearly documented in these comments.) |
Sure, should be good to merge now.