Support use with RecordIdsPlugin (and HardSourceWebpackPlugin) #20
Comments
Brilliant report and advice. Thank you for this. I shall definitely look into it. Maximum compatibility with the existing webpack ecosystem must be a priority.
@mzgoddard that second recommendation is quite an amazing idea. Thank you for the gist example!
I have a unit-tested POC working. :) No more manipulation of the module or chunk ids.
It's pretty cool actually; overall this has simplified my library. The Webpack plugin is only needed for the server now.
@swernerx this is ready for some testing. Do you have an example config I could use for the HardSourceWebpackPlugin?
I do nothing special:

// Improve source caching in Webpack v2
// This thing seems to have magical effects on rebuild times. Problem is that it's
// still unusable right now because of a range of issues.
new HardSourceWebpackPlugin({
  // Either an absolute path or relative to output.path.
  cacheDirectory: path.resolve(root, ".hardsource", `${target}-${mode}`),
  // Either an absolute path or relative to output.path. Sets webpack's
  // recordsPath if not already set.
  recordsPath: path.resolve(root, ".hardsource", `${target}-${mode}`, "records.json"),
  // Optional field. This field determines when to throw away the whole
  // cache if for example npm modules were updated.
  environmentHash: {
    root: root,
    directories: [ "node_modules" ],
    files: [ "package.json", "yarn.lock" ]
  }
}),
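The snippet above relies on a few values defined elsewhere in that config. A minimal sketch of the assumed surrounding definitions (hypothetical values, not taken from the original config):

// Assumed surrounding definitions for the snippet above (hypothetical values).
const path = require("path");
const HardSourceWebpackPlugin = require("hard-source-webpack-plugin");

const root = __dirname;      // project root that the cache paths are resolved against
const target = "web";        // e.g. "web" or "node"
const mode = "production";   // e.g. "development" or "production"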
Hmmm, build works the first time (production), but hangs the second.
"Error: offline-plugin: Plugin's runtime wasn't added to one of your bundle entries. See this https://goo.gl/YwewYp for details."
Yep, it doesn't play nicely. @mzgoddard - ping.
Confirming this is an issue between HardSourceWebpackPlugin and offline-plugin even when this plugin is not active.
Closed as working. The fix is staged (not yet pushed or published).
@ctrlplusb Here is a breaking reproduction of the stated issue: https://github.com/mzgoddard/webpack-code-split-record-ids-heap-overflow
CodeSplit sets chunk ids that are too large for webpack 2 (I think that detail was missed in my original report). A heap overflow is the result of running this repro twice. offline-plugin does have an issue with hard-source at the moment, but that isn't related to this issue.
@mzgoddard that repo is empty :-)
@ctrlplusb Heh. Lol. I did everything but push the repo. It is now filled. https://github.com/mzgoddard/webpack-code-split-record-ids-heap-overflow
Awesome man, you are a ridiculous legend! So helpful. Quality. :) I will use this repo to test the latest alpha release and will report back. :)
When used with RecordIdsPlugin, CodeSplit incidentally stores chunk ids in the records that, when reused in follow-up builds, trip up a small part of webpack. Used with a chunk named vendor, CodeSplit currently creates the chunk hash 820075192 for it. When that part of webpack hits that id, it tries to make an array of about 820075188 members, which would need more than 3.2 GB and normally causes Node.js to run out of memory and exit with an error. (This is related to HardSourceWebpackPlugin (mzgoddard/hard-source-webpack-plugin#51 (comment)), which uses RecordIdsPlugin to ensure module and chunk ids are immutable in relation to module and chunk identifiers.)
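To illustrate why such a large id is fatal, here is a simplified sketch of the failure mode. It is not webpack's actual source, only the shape of an id-reuse step that enumerates every free id below the largest recorded one:

// Simplified illustration of the failure mode described above; not webpack's
// actual code, just the shape of the problem.
function collectUnusedIds(usedIds) {
  const maxUsedId = Math.max(...usedIds); // e.g. 820075192 for the hashed "vendor" chunk
  const unusedIds = [];
  // Enumerating every free id below the maximum means building an array with
  // hundreds of millions of entries, well past Node's default heap limit.
  for (let id = 0; id < maxUsedId; id++) {
    if (!usedIds.includes(id)) unusedIds.push(id);
  }
  return unusedIds;
}

// With normal incremental chunk ids this is harmless:
collectUnusedIds([0, 1, 2, 5]); // => [3, 4]
// With a hashed id it exhausts the heap:
// collectUnusedIds([0, 1, 820075192]); // allocates ~820M numbers and crashes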
Right now I can think of two solutions.
The second is definitely a more involved change, but it removes any concern about incompatibility with RecordIds or hash collisions on chunk names.
Another note on RecordIds: it is always on, so this might be encountered even when just using webpack in watch mode, unless the CodeSplit webpack plugin is disabled.
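For reference, a minimal sketch of a config where recorded ids persist between separate builds (hypothetical paths; RecordIdsPlugin is part of webpack core and always applied, and watch mode keeps records in memory between rebuilds even without a recordsPath):

// webpack.config.js (sketch): persisting records means the chunk ids written by
// the first build are read back and reused on the next one.
const path = require("path");

module.exports = {
  entry: "./src/index.js",
  output: {
    path: path.resolve(__dirname, "dist"),
    filename: "[name].js"
  },
  // Setting a recordsPath persists the ids RecordIdsPlugin assigns, so a
  // follow-up build reuses them instead of generating fresh ones.
  recordsPath: path.resolve(__dirname, "records.json")
};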