
[receiver/vcenter] Add support for caching traversing data #30612

Closed

atoulme opened this issue Jan 16, 2024 · 7 comments

Comments

@atoulme
Contributor

atoulme commented Jan 16, 2024

Component(s)

receiver/vcenter

Is your feature request related to a problem? Please describe.

The vCenter receiver currently traverses and parses the whole inventory tree on each scrape.

This doesn't scale well when the inventory is very large.

Describe the solution you'd like

Offer caching of the tree parsed by the receiver, separately from the scraping action.

Describe alternatives you've considered

No response

Additional context

No response

@atoulme atoulme added enhancement New feature or request needs triage New item requiring triage labels Jan 16, 2024
Contributor

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@djaglowski
Member

This is an interesting idea at a high level but I'm not sure I understand what is being proposed.

First, is parsing the JSON really the performance bottleneck, or is the goal to avoid requesting such a large structure in the first place (if that's even possible)?

If we'd still request the same payload, do we just use portions of the raw payload as keys and their parsed equivalents as values? Is this generalizable, or is the cache "typed", e.g. a "VM cache" or a "disk cache"? Does the cache need to be purged periodically, or does it expire naturally?

@atoulme
Contributor Author

atoulme commented Jan 16, 2024

I have been looking at it and will post a spike for your review.

@atoulme
Contributor Author

atoulme commented Jan 16, 2024

#30624 is open for discussion.

@djaglowski
Member

I looked over the PR but still don't understand the intention behind this. From what I can tell, we would actually scrape data in the background every refresh_ttl interval. Then, each collection_interval, we just return the most recently collected data. Is that right?
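If that reading is correct, the split between the two intervals might be expressed in configuration roughly as follows. This is a hypothetical sketch: `refresh_ttl` is the name used in this discussion, `collection_interval` is the standard scraper setting, and the exact syntax would depend on the final PR.

```yaml
receivers:
  vcenter:
    endpoint: https://vcenter.example.com
    username: otelu
    password: ${env:VCENTER_PASSWORD}
    collection_interval: 1m  # how often cached metrics are emitted
    refresh_ttl: 10m         # hypothetical: how often the inventory tree is re-traversed in the background
```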

@atoulme atoulme removed the needs triage New item requiring triage label Jan 18, 2024
Contributor

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

Pinging code owners:

See Adding Labels via Comments if you do not have permissions to add labels yourself.

@github-actions github-actions bot added the Stale label Mar 19, 2024
Contributor

This issue has been closed as inactive because it has been stale for 120 days with no activity.

@github-actions github-actions bot closed this as not planned (won't fix, can't repro, duplicate, stale) May 18, 2024
2 participants