fix: dont drop events in buffer #227

Open · wants to merge 1 commit into main
Conversation

@CrabNejonas (Contributor) commented Feb 8, 2024

This removes the fixed-size EventBuffer and replaces it with a standard Vec.

Testing has shown that even under very heavy load the buffer never grows beyond roughly 1,200 spans, which is fine IMO.

This also adds a periodic timer (40 seconds with the default config) that resizes the internal buffers, so that capacity acquired during load spikes is eventually released.

The two downsides are: if you leave your app running for a long time without connecting the devtools, the internal buffers will grow unbounded; and clients attaching after another client has already been attached will lose all previous events.
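The Vec-based approach described above can be sketched roughly as follows. This is an illustrative sketch, not the actual devtools code: the type and method names are made up, the shrink threshold is arbitrary, and the real implementation would drive `maybe_shrink` from a tokio timer rather than a direct call.

```rust
// Hypothetical sketch of the Vec-based approach: an unbounded buffer that
// never drops events, with spare capacity periodically released.
struct EventBuffer<T> {
    events: Vec<T>,
    // Capacity above which the periodic task shrinks the allocation.
    shrink_threshold: usize,
}

impl<T> EventBuffer<T> {
    fn new(shrink_threshold: usize) -> Self {
        Self { events: Vec::new(), shrink_threshold }
    }

    // Push never drops: the Vec grows as needed under load.
    fn push(&mut self, event: T) {
        self.events.push(event);
    }

    // Drain everything for publishing to attached clients.
    // Vec::drain keeps the allocation, so the capacity is reused.
    fn drain(&mut self) -> Vec<T> {
        self.events.drain(..).collect()
    }

    // Called by a periodic timer (every 40s with the default config)
    // to give back capacity acquired during load spikes.
    fn maybe_shrink(&mut self) {
        if self.events.capacity() > self.shrink_threshold {
            self.events.shrink_to(self.shrink_threshold);
        }
    }
}
```

Without the periodic `maybe_shrink`, a single burst would pin the high-water-mark allocation for the lifetime of the app.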

After more testing and weighing the options, I decided it makes more sense to just increase the limit of the fixed-size EventBuffer structs so they are better fitted to the actual load.
The reasons are:

  • Later clients missing out on all previously emitted data is bad UX because it makes the instrumentation look broken, even though this would be technically correct behaviour.
  • If you leave your app running with the instrumentation enabled but never attach a client, using Vec will consume unbounded memory. I expect this to happen a lot: commenting out the instrumentation code is annoying, so users will probably only open the client when they actually need it.
  • From testing, 2048 should be enough to fit all events even under significant event pressure.
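The chosen fixed-size approach might look something like this sketch. The names are hypothetical, and the drop-oldest eviction policy is an assumption on my part; the actual EventBuffer may handle overflow differently.

```rust
use std::collections::VecDeque;

// Hypothetical sketch of a fixed-capacity event buffer: memory stays
// bounded because the buffer evicts instead of growing. The capacity
// (2048, up from 512 for spans) comes from the load testing above.
struct FixedEventBuffer<T> {
    events: VecDeque<T>,
    capacity: usize,
}

impl<T> FixedEventBuffer<T> {
    fn with_capacity(capacity: usize) -> Self {
        Self { events: VecDeque::with_capacity(capacity), capacity }
    }

    fn push(&mut self, event: T) {
        if self.events.len() == self.capacity {
            // Buffer is full: drop the oldest event (assumed policy)
            // so memory use never exceeds the fixed bound.
            self.events.pop_front();
        }
        self.events.push_back(event);
    }

    fn len(&self) -> usize {
        self.events.len()
    }
}
```

The trade-off is the opposite of the Vec approach: memory is bounded even with no client attached, but events are silently dropped once pressure exceeds the fixed bound.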

Alternatives

Originally I had considered adjusting the publish interval based on some metric, so that it would shrink as the internal buffers fill up. However, due to the way the tokio::mpsc channel works, this wouldn't really change the timing much anyway and would just add extra complexity.
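For reference, the rejected adaptive-interval idea amounts to a function like the one below. All constants and names here are illustrative; this is a sketch of the idea, not anything in the codebase.

```rust
use std::time::Duration;

// Illustrative constants: publish every 200ms when idle, at most
// every 20ms when the buffer is full.
const BASE_INTERVAL_MS: u64 = 200;
const MIN_INTERVAL_MS: u64 = 20;

// Linearly interpolate the publish interval based on buffer fullness:
// the fuller the buffer, the shorter the interval.
fn publish_interval(len: usize, capacity: usize) -> Duration {
    let fullness = (len as f64 / capacity as f64).min(1.0);
    let ms = BASE_INTERVAL_MS as f64
        - (BASE_INTERVAL_MS - MIN_INTERVAL_MS) as f64 * fullness;
    Duration::from_millis(ms as u64)
}
```

Even at the minimum interval, the publish task still has to wake up and drain the channel, which is why tightening the timer alone buys little under sustained pressure.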

Alternatively, we could just increase the bounds of the fixed-size buffers (from my testing, increasing the span buffer size from 512 to 2048 would be enough). That might be a bit fragile, but this is the approach now.


netlify bot commented Feb 8, 2024

Deploy Preview for cn-devtools-app canceled.

🔨 Latest commit: 553bb89
🔍 Latest deploy log: https://app.netlify.com/sites/cn-devtools-app/deploys/65c4eddd3a1d900008e096e3

@johann-crabnebula (Contributor)

@CrabNejonas can you react to the comment, please? Could you maybe also explain how much memory we are talking about here? How does it scale? In my testing, blowing up the buffer really helped with the consistency of the frontend.
