fix(compute/build): normalise and bucket heap allocations #1130
Problem
We started annotating Wasm binaries with additional metadata, and part of that data was the number of heap memory allocations. Because that number varies from build to build, including this metric means every time you build a package (even if you have changed ZERO lines of code) it'll produce a package with a different hash (e.g. as reported by `fastly compute hash-files`). This behaviour breaks Terraform's ability to reason about whether the package has actually changed.

Solution
We're going to bucket the memory allocation numbers to produce a consistent output. If memory allocations increase significantly, the bucket value will change, as we consider a jump of that size significant enough to indicate an important code or dependency change.
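As a rough sketch of the idea (the bucket boundaries and function name below are illustrative, not the CLI's actual implementation), bucketing maps a noisy raw allocation figure onto a small set of stable labels:

```go
package main

import "fmt"

// bucket maps a raw heap-allocation total (in bytes) to a coarse,
// human-readable range, so small run-to-run fluctuations do not
// change the build metadata (and therefore the package hash).
// These boundaries are hypothetical examples.
func bucket(bytes uint64) string {
	mib := bytes / (1024 * 1024)
	switch {
	case mib < 2:
		return "0-2MB"
	case mib < 5:
		return "2-5MB"
	case mib < 10:
		return "5-10MB"
	default:
		return "10MB+"
	}
}

func main() {
	// Two builds with slightly different allocation totals land in
	// the same bucket, so the annotated metadata stays stable.
	fmt.Println(bucket(3_100_000)) // 2-5MB
	fmt.Println(bucket(3_400_000)) // 2-5MB
}
```

Only a significant change in allocations (e.g. crossing from ~4MB to ~6MB) moves the value into a different bucket and changes the hash.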
Screenshots
Below demonstrates me building a package twice (no code changes); the memory allocation value is now consistent (`2-5MB`):

The following demonstrates me building a package twice (no code changes); the hash produced is thus the same: