Map cloud.instance.id to the vm id in the compute_vm metricset #20754

Closed
narph opened this issue Aug 24, 2020 · 1 comment · Fixed by #20889
Assignees: narph
Labels: Team:Platforms (Label for the Integrations - Platforms team)

Comments

narph (Contributor) commented Aug 24, 2020

Resulted from #19758.

We are currently mapping cloud.instance.id to the azure resource id in all azure metricsets.
This does not necessarily match the definition of this property, nor its implementation in the add_cloud_metadata processor.
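
For illustration, these are the two identifiers being conflated (a minimal sketch with hypothetical values; on Azure, the add_cloud_metadata processor derives cloud.instance.id from the vmId exposed by the instance metadata service):

```go
// Illustration only, with hypothetical values.
package example

const (
	// What the azure metricsets currently emit as cloud.instance.id:
	// the full ARM resource id of the virtual machine.
	resourceID = "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/obs-test/providers/Microsoft.Compute/virtualMachines/obs-test-vm"

	// What add_cloud_metadata reports as cloud.instance.id on Azure:
	// the vmId returned by the instance metadata service.
	vmID = "04ab04c3-63de-4709-a9f9-9ab8c0411d5e"
)
```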

A few steps are needed here to map the vm id to cloud.instance.id:

  • remove the current mapping of cloud.instance.id from all metricsets, since it doesn't comply with the property's definition
  • add the azure.resource.id property back to all metricsets
  • the field mapping implementation is shared by all metricsets, so the best course of action in this case is to:
    • rewrite compute_vm as a light metricset (this will not affect the event format/config)
    • add a processor for mapping the machine size, the instance id, and any future cloud-related fields we add at this level (see the sketch after this list)
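
As a rough sketch of that processor step (not the actual change from #20889; the function signature and the vmID/vmSize inputs are assumptions), using libbeat's beat.Event API:

```go
package compute_vm

import "github.com/elastic/beats/v7/libbeat/beat"

// mapCloudFields is a hypothetical sketch of the shared mapping step
// described above: it keeps the ARM resource id under azure.resource.id
// and fills cloud.instance.id / cloud.machine.type with the vm id and
// machine size, matching what add_cloud_metadata reports on Azure.
func mapCloudFields(event *beat.Event, vmID, vmSize string) error {
	// Preserve the ARM resource id under the azure.* namespace instead
	// of leaving it in cloud.instance.id.
	if resourceID, err := event.GetValue("cloud.instance.id"); err == nil {
		if _, err := event.PutValue("azure.resource.id", resourceID); err != nil {
			return err
		}
	}
	// cloud.instance.id carries the vm id, as add_cloud_metadata
	// defines it on Azure.
	if _, err := event.PutValue("cloud.instance.id", vmID); err != nil {
		return err
	}
	// cloud.machine.type is the ECS field for the machine size.
	_, err := event.PutValue("cloud.machine.type", vmSize)
	return err
}
```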

PR in progress.

narph self-assigned this Aug 24, 2020
narph added the Team:Platforms label Aug 24, 2020
elasticmachine (Collaborator) commented:

Pinging @elastic/integrations-platforms (Team:Platforms)
