
kubeletstatsreceiver: Sync available volume metadata from /pods endpoint #690

Merged
4 commits merged into open-telemetry:master on Aug 18, 2020

Conversation

@asuresh4 asuresh4 (Member) commented Aug 11, 2020

Description: Optionally sync the k8s.volume.type label with volume metrics. This information is collected from the Pod spec exposed via the kubelet's /pods endpoint, and only when the receiver is configured to request it through the extra_metadata_labels option.
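A minimal sketch of a receiver configuration enabling the new label, following the README's conventions; the auth_type and endpoint values below are placeholders, and only the extra_metadata_labels entries are specific to this change:

    receivers:
      kubeletstats:
        collection_interval: 20s
        auth_type: "serviceAccount"          # placeholder; other auth modes exist
        endpoint: "${K8S_NODE_NAME}:10250"   # placeholder kubelet endpoint
        extra_metadata_labels:
          - container.id
          - k8s.volume.type                  # the label added by this PR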

Currently, a Persistent Volume Claim is synced as its own type. This will be updated in a subsequent PR so that metadata from the actual storage backing the claim is synced instead, using the Kubernetes API.
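For illustration only, here is a sketch of how the volume type could be derived from the Pod spec's volume source; the function name and the returned type strings are assumptions, not necessarily the code in this PR:

    import v1 "k8s.io/api/core/v1"

    // volumeType is a hypothetical helper: it inspects which field of the
    // VolumeSource is set and reports the corresponding type string.
    func volumeType(vs v1.VolumeSource) string {
        switch {
        case vs.ConfigMap != nil:
            return "configMap"
        case vs.EmptyDir != nil:
            return "emptyDir"
        case vs.HostPath != nil:
            return "hostPath"
        case vs.PersistentVolumeClaim != nil:
            // Reported as its own type until the follow-up PR resolves
            // the backing storage through the Kubernetes API.
            return "persistentVolumeClaim"
        default:
            return ""
        }
    }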

Testing: Added tests.

Documentation: Updated README.

@asuresh4 asuresh4 requested a review from a team August 11, 2020 19:27
codecov bot commented Aug 11, 2020

Codecov Report

Merging #690 into master will increase coverage by 0.06%.
The diff coverage is 100.00%.


@@            Coverage Diff             @@
##           master     #690      +/-   ##
==========================================
+ Coverage   86.13%   86.20%   +0.06%     
==========================================
  Files         207      207              
  Lines       11317    11365      +48     
==========================================
+ Hits         9748     9797      +49     
- Misses       1238     1239       +1     
+ Partials      331      329       -2     
Flag          Coverage   Δ
#integration  57.03%     <ø> (ø)
#unit         85.99%     <100.00%> (+0.06%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                                         Coverage  Δ
receiver/kubeletstatsreceiver/config.go                100.00%   <ø> (ø)
...ceiver/kubeletstatsreceiver/kubelet/accumulator.go  100.00%   <100.00%> (ø)
receiver/kubeletstatsreceiver/kubelet/metadata.go      100.00%   <100.00%> (ø)
receiver/kubeletstatsreceiver/kubelet/resource.go      100.00%   <100.00%> (ø)
receiver/kubeletstatsreceiver/kubelet/volume.go        100.00%   <100.00%> (ø)
...eiver/awsxrayreceiver/internal/udppoller/poller.go   97.56%   <0.00%> (-2.44%) ⬇️
receiver/carbonreceiver/transport/tcp_server.go         65.71%   <0.00%> (-1.91%) ⬇️
exporter/signalfxexporter/dimensions/requests.go        93.10%   <0.00%> (+8.62%) ⬆️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Last update 73eec0d...aab1d68.

Review thread on the new supportedLabels map (diff excerpt):

    var supportedLabels = map[MetadataLabel]bool{
        MetadataLabelContainerID: true,
        labelVolumeType:          true,
Member commented:
shouldn't it be MetadataLabelVolumeType?

asuresh4 (Member Author) replied:
Fixed.
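With the rename applied, the map reads as follows (a small sketch reflecting the review outcome):

    var supportedLabels = map[MetadataLabel]bool{
        MetadataLabelContainerID: true,
        MetadataLabelVolumeType:  true,
    }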

@dmitryax dmitryax (Member) left a comment:

LGTM

receiver/kubeletstatsreceiver/kubelet/metadata.go (review thread outdated, resolved)
Co-authored-by: Dmitrii Anoshin <anoshindx@gmail.com>
@asuresh4 asuresh4 (Member Author) commented:
@bogdandrutu - this is ready to merge. Please do when you get a chance.

@bogdandrutu bogdandrutu merged commit 14fe86a into open-telemetry:master Aug 18, 2020
@asuresh4 asuresh4 deleted the kubelet-volumes-2 branch January 19, 2021 17:02
ljmsc referenced this pull request in ljmsc/opentelemetry-collector-contrib Feb 21, 2022
Increase instance size for prior-go