
Performance Audit 2017: Beacon: ResourceTiming Compression #174

Closed
nicjansma opened this issue Dec 22, 2017 · 3 comments

Comments

@nicjansma nicjansma commented Dec 22, 2017

ResourceTiming is currently our most expensive plugin, in terms of both JavaScript CPU time and beacon size, on larger sites.

Depending on the site and device, compressing the ResourceTiming entries could take 20-100 milliseconds or more.

We should investigate ways of reducing this cost. Some ideas:

  • Before compressing the entries, we must iterate over all of the frames in the page to gather their entries. We could instead use a PerformanceObserver in some (or all) frames to be notified of new entries as they arrive, rather than crawling for them
  • optimizeTrie() is the most expensive operation: it iterates over every character of every URL. We could look into building a non-perfect trie that groups runs of characters (e.g. every 10) into a single node
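To illustrate where the per-character cost comes from, here is a simplified sketch of a character-level trie builder (hypothetical code, not boomerang's actual implementation; the real trie stores compressed timing strings at the leaves, while this sketch only marks URL ends):

```javascript
// Simplified sketch: one trie level per character of each URL.
// A leaf is marked with "$" instead of carrying real timing data.
function convertToTrie(urls) {
  var trie = {};
  for (var i = 0; i < urls.length; i++) {
    var node = trie;
    for (var j = 0; j < urls[i].length; j++) {
      var ch = urls[i].charAt(j);   // every character becomes a trie node
      node = node[ch] = node[ch] || {};
    }
    node.$ = true;                  // end of a complete URL
  }
  return trie;
}
```

Optimizing a trie built this way touches one node per character of every URL, which is why grouping characters into larger tokens would shrink the work roughly in proportion to the token length.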

@bluesmoon bluesmoon commented Dec 22, 2017

For optimizeTrie, we could also just split on [./?&=] and use those tokens as groups. Reversing the hostname should also be built in, so related hosts share a common trie prefix.
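The two ideas above can be sketched as follows (hypothetical helpers, not boomerang code; the lookahead split keeps each delimiter attached to its token, so joining the keys along a trie path reconstructs the URL):

```javascript
// Tokenize a URL on [./?&=], keeping each delimiter with its token so
// the original string can be rebuilt by joining the tokens.
function tokenizeUrl(url) {
  return url.split(/(?=[./?&=])/);
}

// Reverse a hostname so sibling hosts (img1.example.com, img2.example.com)
// share a common prefix in the trie.
function reverseHostname(host) {
  return host.split(".").reverse().join(".");
}
```

Each trie level then covers a whole token rather than a single character, while the split remains lossless.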

We should also see if we can move the optimization over to a WebWorker and use postMessage to send the resulting string back to the main thread.


@nicjansma nicjansma commented Feb 11, 2019

We have a fix in review (in the Akamai repo) that changes convertToTrie() to split the URL at the path separator / instead of at every character.

Summary of the benefits:

  • These optimizations matter most on larger sites (e.g. 100+ resource URLs)
  • On sites where the ResourceTiming optimization was taking > 100ms (on desktop CPUs), splitting at / reduced CPU time to ~25ms, at only a 4% growth in (restiming) beacon data size
  • On sites taking 25-35ms, splitting at / reduced CPU time to ~10ms at 2-3% data growth
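A sketch of the path-segment approach described above (hypothetical code, not the actual patch; as before, leaves are marked with "$" rather than carrying real timing data):

```javascript
// Simplified sketch: split each URL at "/" so one trie level is a whole
// path segment. The "/" stays attached to its segment (lookahead split),
// so joining the keys along a path restores the original URL.
function convertToTrieBySegment(urls) {
  var trie = {};
  for (var i = 0; i < urls.length; i++) {
    var segments = urls[i].split(/(?=\/)/);
    var node = trie;
    for (var j = 0; j < segments.length; j++) {
      node = node[segments[j]] = node[segments[j]] || {};
    }
    node.$ = true;   // end of a complete URL
  }
  return trie;
}
```

A long URL now costs a handful of trie levels instead of one per character, which is consistent with the CPU reductions reported above at a small cost in compressed size.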

@nicjansma nicjansma commented Jun 3, 2019

Merged in as 7834819

@nicjansma nicjansma closed this Jun 3, 2019