`RelationDataContent.__setitem__` should dynamically dispatch to a file if it's too long #801
Comments
This seems like a good idea. It's surprising to me that charms have managed to run into this, though. Crazy users. Maybe we could also log a warning: "Why so much relation data?" :P
I would say that it's because some of the "nuts and bolts" get masked a bit. Grafana dashboards are huge, no surprise there; the only surprise is that it didn't already detect the length and spit it out to a file. But outside of that, this is broadly either not intuitive or not exposed: charm authors using OF are guided towards using dicts (well, dict-like objects).

In this case, it's Prometheus scrape targets. Normally there wouldn't be that many on one charm, but the point of intersection here is a proxy/bridge between the "old" reactive/LMA charms and the COS observability charms. So it's structured like this: the proxy/bridge sits in one model and forms a bit of a "funnel". Instead of a single charm providing, let's say, 4 scrape jobs, there are potentially as many as there are charms feeding into the proxy.

That said, as mentioned, we've seen that a single Grafana dashboard can push over this limit, and we may as well do the same sort of detection/splitting there.
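The "detection/splitting" idea mentioned above could, in its simplest form, look like the sketch below. The chunk-key naming scheme and helper names here are purely hypothetical, for illustration only; the real charm libraries handle oversized payloads differently.

```python
def split_value(value: str, max_len: int) -> dict:
    """Split one oversized string value into numbered chunk keys
    (hypothetical scheme, not what any real charm library does)."""
    chunks = [value[i:i + max_len] for i in range(0, len(value), max_len)]
    return {f"chunk_{i}": c for i, c in enumerate(chunks)}

def join_value(bag: dict) -> str:
    """Reassemble a value previously split by split_value, in chunk order."""
    keys = sorted(bag, key=lambda k: int(k.split("_")[1]))
    return "".join(bag[k] for k in keys)
```

Each individual chunk then stays safely under the argument-length limit, at the cost of the reader having to know the reassembly convention.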
Agree, we should make sure to handle this consistently with all the hook tools.
Want me to draft a PR?
I'm not going to stop you ;-)
We've already seen this with Grafana dashboards, which routinely overflow the maximum argument length from `subprocess`, but it was also observed that relating Prometheus to a very large number of targets could overflow and cause a strange-looking `OSError` on a `RelationChangedEvent`.
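For context, the "maximum argument length" here is the kernel's `ARG_MAX` limit on the combined size of the argument list and environment passed to an exec'd process. A small illustrative sketch (not part of OF) of querying it on a POSIX system:

```python
import os

# The kernel's limit on the total size of argv + environment for exec;
# POSIX systems expose it via sysconf.
arg_max = os.sysconf("SC_ARG_MAX")
print(f"ARG_MAX on this system: {arg_max} bytes")

# Note: on Linux a *single* argument string is further capped
# (MAX_ARG_STRLEN, typically 128 KiB), which is usually the limit a
# single huge relation-data value actually hits. Exceeding either
# limit makes the exec fail with OSError (errno E2BIG,
# "Argument list too long").
```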
Ultimately, this is due to `relation_set` calling back to `subprocess` to handle `relation-set ...`. We already split long log messages, and `relation-set` takes a `--file` parameter which reads in YAML, allowing the limit to be bypassed. If OF determines that the length of the relation data is anywhere near the limit, we could defer to something like an optarg added to `relation_set` where, if present, the data is loaded from a file. This seems easy enough to add, avoids requiring charm authors to carefully think about the size/length of their data bags and potentially destructure them to avoid it mapping back to a `map[string]string` on the backend, and yields the desired behavior.
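A rough sketch of what that dispatch could look like. The threshold constant, function signature, and use of JSON (a strict subset of YAML, so acceptable to a YAML reader) are illustrative assumptions here, not the actual ops API:

```python
import json
import subprocess
import tempfile

# Illustrative threshold; a real implementation would derive this from
# ARG_MAX / MAX_ARG_STRLEN with a safety margin.
MAX_INLINE_LEN = 100_000

def relation_set(relation_id: str, data: dict) -> list:
    """Build a relation-set invocation, falling back to `relation-set
    --file` when the serialized data is too large to pass safely on the
    command line. Returns the argv that would be run, for inspection."""
    serialized = json.dumps(data)
    if len(serialized) <= MAX_INLINE_LEN:
        argv = ["relation-set", "-r", relation_id]
        argv += [f"{k}={v}" for k, v in data.items()]
    else:
        # Spill the data bag to a temp file and let the hook tool read it.
        with tempfile.NamedTemporaryFile(
            "w", suffix=".yaml", delete=False
        ) as f:
            f.write(serialized)
        argv = ["relation-set", "-r", relation_id, "--file", f.name]
    # subprocess.run(argv, check=True)  # hook tools only exist on a unit
    return argv
```

Small bags go inline as before; anything past the threshold is written out and handed to `relation-set --file`, sidestepping the exec argument limit entirely without charm authors having to think about it.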