
[Azure] [Metrics] Update group by dimensions logic #36491

Conversation

zmoog
Contributor

@zmoog zmoog commented Sep 3, 2023

Proposed commit message

When users define dimensions in the config, the current implementation groups metrics by timestamp and a single dimension.

Grouping by timestamp + a single dimension can produce multiple documents with the same dimension values. This does not play well with TSDB, which expects all documents with the same timestamp to have a unique combination of dimension values.

I am updating the group-by-dimensions logic to use all dimensions for grouping instead of just one.

It works fine with the test cases I am using, but it needs more testing and a deeper understanding of the edge cases.
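To illustrate the change, here is a minimal Go sketch contrasting the two grouping strategies. The `Metric`, `keySingle`, and `keyAll` names are hypothetical simplifications for this example; the real Metricbeat types and grouping code differ.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Metric is a hypothetical, simplified stand-in for an Azure metric value;
// the real Metricbeat types differ.
type Metric struct {
	Timestamp  string
	Dimensions map[string]string
}

// keySingle mimics the old behavior: group by timestamp plus ONE dimension.
// Metrics that differ only in another dimension collide into the same group.
func keySingle(m Metric, dim string) string {
	return m.Timestamp + "," + dim + "=" + m.Dimensions[dim]
}

// keyAll groups by timestamp plus ALL dimensions, sorted by name so the
// key is deterministic regardless of map iteration order.
func keyAll(m Metric) string {
	names := make([]string, 0, len(m.Dimensions))
	for name := range m.Dimensions {
		names = append(names, name)
	}
	sort.Strings(names)
	parts := []string{m.Timestamp}
	for _, name := range names {
		parts = append(parts, name+"="+m.Dimensions[name])
	}
	return strings.Join(parts, ",")
}

func main() {
	// Two kube_node_status_condition samples for the same node at the
	// same timestamp, differing only in the "condition" dimension.
	a := Metric{"2023-09-03T22:00:00Z", map[string]string{"node": "n1", "condition": "Ready"}}
	b := Metric{"2023-09-03T22:00:00Z", map[string]string{"node": "n1", "condition": "MemoryPressure"}}

	// Grouping by a single dimension ("node") merges them into one group,
	// so TSDB would see two documents with identical timestamp + dimensions.
	fmt.Println(keySingle(a, "node") == keySingle(b, "node")) // true

	// Grouping by all dimensions keeps them apart.
	fmt.Println(keyAll(a) == keyAll(b)) // false
}
```

Sorting the dimension names before building the key matters because Go map iteration order is randomized; without it, the same metric could produce different keys on different runs.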

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Author's Checklist

  • [ ]

How to test this PR locally

Related issues

Use cases

Screenshots

Logs


Refs: elastic/integrations#7160
@botelastic botelastic bot added the needs_team Indicates that the issue/PR needs a Team:* label label Sep 3, 2023
@zmoog zmoog self-assigned this Sep 3, 2023
@zmoog zmoog added the Team:Cloud-Monitoring Label for the Cloud Monitoring team label Sep 3, 2023
@botelastic botelastic bot removed the needs_team Indicates that the issue/PR needs a Team:* label label Sep 3, 2023
@mergify
Contributor

mergify bot commented Sep 3, 2023

This pull request does not have a backport label.
If this is a bug or security fix, could you label this PR @zmoog? 🙏.
For such, you'll need to label your PR with:

  • The upcoming major version of the Elastic Stack
  • The upcoming minor version of the Elastic Stack (if you're not pushing a breaking change)

To fixup this pull request, you need to add the backport labels for the needed
branches, such as:

  • backport-v8./d.0 is the label to automatically backport to the 8./d branch. /d is the digit

@elasticmachine
Collaborator

elasticmachine commented Sep 3, 2023

💚 Build Succeeded



Build stats

  • Start Time: 2023-09-03T22:46:37.316+0000

  • Duration: 54 min 44 sec

Test stats 🧪

Test Results

  • Failed: 0
  • Passed: 1478
  • Skipped: 96
  • Total: 1574

💚 Flaky test report

Tests succeeded.

🤖 GitHub comments


To re-run your PR in the CI, just comment with:

  • /test : Re-trigger the build.

  • /package : Generate the packages and run the E2E tests.

  • /beats-tester : Run the installation tests with beats-tester.

  • run elasticsearch-ci/docs : Re-trigger the docs validation. (use unformatted text in the comment!)

Contributor

mergify bot commented Nov 21, 2023

This pull request is now in conflict. Could you fix it? 🙏
To fix up this pull request, you can check it out locally. See documentation: https://help.github.com/articles/checking-out-pull-requests-locally/

git fetch upstream
git checkout -b zmoog/azure-container-service-duplicated-documents upstream/zmoog/azure-container-service-duplicated-documents
git merge upstream/main
git push upstream zmoog/azure-container-service-duplicated-documents

1 similar comment

mergify bot commented Feb 5, 2024, repeating the same conflict notice.

mergify bot commented Feb 5, 2024, repeating the same backport-label notice from Sep 3, 2023.

@zmoog
Contributor Author

zmoog commented Mar 4, 2024

Superseded by #36823

@zmoog zmoog closed this Mar 4, 2024
This pull request was closed.
Labels
Team:Cloud-Monitoring Label for the Cloud Monitoring team
Projects
None yet
Development

Successfully merging this pull request may close these issues.

[Azure] [container_service] Duplicated documents for the kube_node_status_condition metric
2 participants