jsnext:main – should we use it, and what for? #5
Comments
Also, to save @RReverser reiterating himself, I'll link to some of his earlier comments on the subject: acornjs/acorn#305. There's also some previous discussion here: rollup/rollup#106
What about the case of non-main entries? For example, the simple case of `var foo = require('foo');` seems to be solved in a straightforward manner with a `jsnext:main` field, but what about a non-main module like `foo/bar`:

```js
var bar = require('foo/bar');
// or
import bar from 'foo/bar';
```

Is there a way to specify both ES5 and ES6 versions of all the non-main modules?
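To make the question concrete, the hypothetical `foo` package's `package.json` might look something like this (a sketch, with made-up file names) – note that both fields describe only the top-level entry point and say nothing about `foo/bar`:

```json
{
  "name": "foo",
  "main": "dist/index.js",
  "jsnext:main": "dist/index.es6.js"
}
```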
@Rich-Harris Thanks for a thoroughly written summary!
@rtsao Good question. In the past I've argued that requiring individual files in this manner should be considered an anti-pattern – it's brittle (package authors are less free to restructure their code), rarely documented or tested, and makes code less portable. To that we can add a new concern: the difficulty of selecting the right file, if a package author wishes to provide different versions for different feature sets (whether that's a straightforward ES5/ES6 dichotomy or something more granular), is multiplied. So in summary...

...I don't think there is – but I don't think that's a bad thing, either. Requiring individual files is a hack that we've tolerated up till now because it's the only way to achieve granular imports – something that ES6 modules support by their very nature (or at least will, if we can figure out how to distribute them!).
Depends. In the case of Acorn, for example, we provide separate well-documented entry points for the generic and loose parsers. Lodash provides documented separate entry points for each particular function, so that you can import only the ones you need. There are many cases, in fact, where devs make good use of structured import paths.
I said 'rarely', not 'never'! I should explain the 'less portable' comment – if I wrote a module that imported a file from deep inside another package, my code would be coupled to the way that particular package happens to lay out its files.

I think we should focus on the issue of entry points though – on Twitter you mentioned the possibility of different entry points being selected based on feature detection. Can you outline how that might work? Are there any existing proposals?
I referred to experiments like @getify's https://featuretests.io/, with which we can easily determine which features the current runtime supports. Then we can explicitly define the set of features our code needs (or analyse the code and build that set automatically) and compare the two sets – just check whether the set of features used by our code is a subset of the supported set. That way we can determine whether the code can be executed natively or whether we should provide a transpiled version (assume for now that modules are transpiled in any case). Moreover, this could be extended to various entry points by simply providing a list of required features for each one.
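A minimal sketch of that subset check – the feature names and the `requiredFeatures` list are hypothetical metadata, not the actual featuretests.io API:

```js
// Detect support by trying to parse small snippets of new syntax.
function supports(snippet) {
  try {
    new Function(snippet); // throws a SyntaxError if the syntax isn't supported
    return true;
  } catch (err) {
    return false;
  }
}

// What the current runtime supports...
var supportedFeatures = {
  'arrow-functions': supports('return () => {};'),
  'template-literals': supports('return `x`;'),
  'default-parameters': supports('return function (a = 1) {};')
};

// ...versus what the package declares it needs (hypothetical; could be
// generated by static analysis of the package's source).
var requiredFeatures = ['arrow-functions', 'template-literals'];

// The code can run natively only if its required set is a subset of the
// supported set; otherwise fall back to a transpiled build.
var canRunNatively = requiredFeatures.every(function (name) {
  return supportedFeatures[name];
});

console.log(canRunNatively ? 'use native build' : 'use transpiled build');
```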
btw, the library that powers https://featuretests.io is: https://github.com/getify/es-feature-tests

That lib (and npm package of the same name) ships not only the library to do the feature testing, but also the `testify` tool, which scans your code to work out which feature tests it needs. FWIW, the way I kind of do this is: run the feature tests in the browser, then conditionally load different files depending on the results.
Thanks @getify, that's awesome – I wasn't aware of featuretests.io, and I love the idea of being able to easily load untranspiled code in e.g. modern browsers. Do you have any example projects where you're using this? Would love to take a look at the setup.

For the bundling case we're considering here, runtime feature detection is obviously a no-go – have you had any thoughts on what this process might look like from a build-time perspective?

I do worry that we're potentially adding overwhelming complexity for package authors, who already have an extraordinarily difficult job.
I don't understand all the context, but at a glance I don't see why not. Why couldn't the bundle you ship be built like I suggest, with tests on the end system to determine which file version to load? You could even put some caching in there, where it caches the results in the package.json file or whatever (similar to how the results are cached in-browser).
Because if you're creating a browser bundle, all the code has to be present, unless you can assume the presence of a particular module loader that conditionally loads another file asynchronously, with all the extra configuration and complexity that implies. And that code would contain syntax errors for non-modern browsers. Having multiple entry points (i.e. multiple bundles, for different feature sets) gives developers the option to say 'I'm going to make my life easier and just lob the same bundle at everyone', or to do the extra work of serving leaner builds to more capable browsers.
Sorry, I incorrectly assumed this was about use in Node, since we were talking about `package.json`.

FWIW, I do the same sort of stuff (feature test in the browser, conditionally load diff files based on results) and yes, I just dynamically load the script files. I use my LABjs loader for that. It doesn't need module loading, since I just load UMD-style "module" files as normal scripts.
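A rough sketch of that pattern – the file names are hypothetical and the feature test is deliberately simplified; it assumes LABjs is already on the page, providing the global `$LAB`:

```js
// Decide which build to load based on a quick runtime feature test.
var supportsES6 = (function () {
  try {
    new Function('class A {}; return () => new A();');
    return true;
  } catch (err) {
    return false;
  }
})();

// Load whichever UMD-style build matches the runtime, as a normal script.
$LAB.script(supportsES6 ? '/js/app.es6.js' : '/js/app.es5.js');
```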
Good discussion. Here are my issues with `jsnext:main`:

I really would love to find a solution that doesn't involve much overhead for module authors. There are some big problems with transpiling, and it's why I still author modules in ES5 instead of using the ES6/jsnext approach.
Thanks for weighing in @mattdesl:

- Ask @caridy – I learned about it from him 😀 (some background here). My suggestion above is that it should not refer to futuristic language features (i.e. ES6/7 source code), but only to the method of distribution – i.e. `import`/`export` declarations in otherwise ES5 code (there's a sketch of what that looks like below).
- That's exactly the plan – this is just a transitional measure that allows package authors to distribute ES6-tool-friendly code without breaking support for RequireJS/Browserify/etc.
- A thousand times yes. Tool-specific config is a bad idea that has caused endless problems. The point of `jsnext:main` is that it isn't tied to any one tool.
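A minimal sketch of what such an entry file could look like under that interpretation – the file and module names are made up, not taken from the thread:

```js
// index.es6.js – hypothetical jsnext:main entry point: otherwise plain ES5,
// with import/export as the only ES6 syntax.
import defaults from './defaults'; // './defaults' is a made-up sibling module

function createConfig(overrides) {
  var config = {};
  var key;
  for (key in defaults) {
    config[key] = defaults[key];
  }
  for (key in overrides) {
    config[key] = overrides[key];
  }
  return config;
}

export default createConfig;
```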
It should be renamed to
This is not really a method of distribution, as it still always needs to be transpiled to CommonJS/AMD/UMD/SystemJS until further spec additions and changes. Such a field would just mean that the file still needs to be transpiled to one of those targets, and even for ES6 ↔ CommonJS there are too many ways to represent such a transpilation to be sure that tools will be compatible with each other if they use this field.
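To illustrate the interop point, here are two of the CommonJS shapes that the same `export default 42` can be given, depending on the tool and its settings (an illustrative side-by-side comparison, not the exact output of any particular transpiler version):

```js
// build-a.js – Babel-style interop: an __esModule flag plus a `default` property.
Object.defineProperty(exports, '__esModule', { value: true });
exports.default = 42;

// build-b.js – the default export assigned straight to module.exports,
// which is what plain CommonJS consumers typically expect.
module.exports = 42;
```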
I like that 👍
The same is true of
Can you elaborate? We're talking about ES6-aware tools being able to use ES6 modules, and the spec is pretty unambiguous on the syntax and semantics of `import`/`export`.
The spec is unambiguous on syntax, but we don't have any Loader API defined yet. Syntax is not enough to define how modules will load and interact with each other (it's just that we currently transpile to filesystem operations in Node.js and to URLs in the browser, while we don't know what will be allowed in the spec, or in what form), and the only way forward is to use one of the existing module systems which do have loaders – which IMO decreases the value of using separate syntax just for modules.
I believe a common misconception about modules is that, because the syntax is defined, there is only one way module specifier strings can work – through the concepts we're used to. People forget that 1) this is not defined yet, and 2) even now we have different conventions, and filesystem and URL paths don't necessarily match.
I hear what you're saying, but given that this is an opt-in enhancement (for both package authors and consumers), it seems unnecessarily cautious. Do we really need a standards body to tell us that `import foo from 'foo'` can be resolved the same way `require('foo')` already is?
But that's the problem – there is no spec apart from the syntax, and all we can do is guess and build tooling around our own assumptions. Anyway, let's go back to the initial point. What I'm saying is that I don't see any reason for separating the transpilation of ES6 modules from the transpilation of all the other features, because usually if someone already uses one feature, they use others as well, and transpile everything together. If you want to force authors to add a custom flag, it shouldn't be "transpile everything for browsers, and transpile everything except modules for our tooling"; it should be a set of flags/configs which would allow us to take care of the entire process without additional maintenance pain for library developers.
Well, there's a polyfill for the proposed Loader API.
There is a natural divide between module syntax and other language features – other features can be transpiled one file at a time with a tool like Babel, whereas `import`/`export` describe the relationships between files, which is exactly the territory of bundlers (a Babel config illustrating this split is sketched below).
Please. No-one is talking here about 'forcing' anyone to do anything – this is about trying to reach a community consensus so that we can benefit from ES6 modules without introducing yet more tooling-specific config.
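For what it's worth, the split described above is achievable with a one-line Babel option – transpile everything except module syntax and let an ES6-aware bundler handle `import`/`export`. A sketch of such a `.babelrc` (assumes Babel 6's `babel-preset-es2015`; the exact option name may differ in other versions):

```json
{
  "presets": [
    ["es2015", { "modules": false }]
  ]
}
```

The `main` build can still be produced with modules converted (or bundled to UMD), so CommonJS consumers are unaffected.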
Ok, I used the wrong word; it should have been "spread" / "popularize" / whatever. The point is that when it's just for modules, for usage in specific bundlers, then it's still tooling-specific config (yes, they're specific, because as you pointed out modules can be transpiled in many ways, and there are already other bundlers/transpilers/plugins for almost any of those ways). This separation is not a natural divide from the perspective of the developer – it's just another inconvenience that stands between writing code and executing it on any platform. Right now it's done through a single transpilation step into CommonJS/AMD/SystemJS depending on the target or, for optimized browser builds, by dropping in webpack, which takes care of all the hard work. What you propose is adding yet another step in the middle: customizing the transpilation step so that it doesn't transpile modules, then adding tools to the project that do understand ES6 modules and this field, and finally bundling again. This might look like a win from the perspective of sharing some info between those tools, but not from the perspective of the end user (developer), as it's still extra steps for no apparent reason (transpilers already generate CommonJS/AMD, and bundlers already understand those formats).
A voluntary step, which likely comprises a single extra line in your `package.json`.

You personally might not see the benefit, but others do!
Size is something that can easily be decreased with custom tooling for CommonJS as well – it's just not really been the main concern for the major bundlers.

I feel I'm being ignored, or that you're not willing to discuss generic feature flags anymore :/
Far from it – I'm very eager to hear more detail of what this would look like, in terms of how we identify features, how package authors should generate the feature flags, whether they should be encouraged to generate multiple entry points for different feature sets, etc. I did raise some concerns above about practicality, since we share the goal of minimising the additional burden on package authors, but if they're misplaced then I'd be glad to be proven wrong.
Well, I guess @getify and I provided enough implementation details above. As for which feature sets to choose, that can be done by authors, auto-generated from analytics, done on the fly with caching, or just ES6/ES5 – that's up to the consumer. As for which features are used in our code – as was said, that can be determined automatically by scanning it.
Well, we could do both. They're not mutually exclusive, right? What do you propose as the field name in package.json? Does existing tooling support it?
As a result, a
Referencing src

I would like to see something like:

```json
{
  "main": "dist/index.js",
  "main:src": "src/index.js"
}
```

Extending pkg.main

My idealistic approach for extensions to `pkg.main`:

```json
{
  "main": {
    "node": "dist/index.js",
    "esmodule": "dist/index.es6.js",
    "wasm": "dist/index.wasm",
    "src": "src/index.js"
  }
}
```

However, this breaks most of our tooling – Npm, Editors, Node, "everything" 😄 – so perhaps the following is more realistic:

```json
{
  "main": "dist/index.js",
  "entry": {
    "src": "src/index.js",
    "esmodule": "dist/index.es6.js",
    "wasm": "dist/index.wasm"
  }
}
```
How can I change Node to use the `module` field instead of `main` in some packages?
The `/es` modules directory is typically used via `import` statements, and hence shouldn't need to be transpiled for interop with `require` (but still needs transpilation for targeted browserslist support). This PR ensures that `import` and `export` statements remain intact for better treeshaking ability with webpack and other module bundlers, and the Nuxt.js module plugin. In `package.json`, we specify `"module": "es/index.js"` and `"jsnext:main": "es/index.js"` (deprecated in favour of `"module"`, jsforum/jsforum#5), which assumes the modules use standard `import` and `export` statements, rather than `require` and `exports`/`module.exports`. Addresses: #3323
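In `package.json` terms, the fields described above are:

```json
{
  "module": "es/index.js",
  "jsnext:main": "es/index.js"
}
```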
note: `jsnext:main` has been superseded by `pkg.module`, which indicates the location of a file with `import`/`export` declarations. Typically, the `pkg.module` file will not have other ES2015+ features unless the package explicitly states that it doesn't support older environments – in other words, best practice is still to transpile ES2015+ features other than `import`/`export`. More info here.

Original issue follows.
This is a follow-up to this Twitter conversation:
Exposition
In CommonJS packages, it's common to have a `main` field in your package.json that tells Node.js or Browserify (etc) where to find the code. Nowadays, thanks to Babel, it's increasingly common to author in ES6/7 and use a prepublish hook to generate distributable files:
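A sketch of what that setup might look like – the file names and the Babel invocation are illustrative, not taken from the original issue:

```json
{
  "name": "some-package",
  "main": "dist/index.js",
  "scripts": {
    "prepublish": "babel src --out-dir dist"
  }
}
```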
So far so good. But the `dist` file must be CommonJS or UMD, otherwise nothing can use it. That means we're missing an opportunity, because ES6 modules are better in lots of important ways, particularly when it comes to making efficient bundles for use in the browser. (See The Importance of `import` and `export` by @benjamn if you need persuading.)

Unfortunately, we can't just create a better UMD block that incorporates ES6 imports and exports, because in engines that don't support ES6 modules (all of them), that's a syntax error. So package authors can't distribute ES6 modules without making their packages incompatible with everything.
Enter `jsnext:main`
There is a proposed solution to this problem – `jsnext:main` – which, like `main`, indicates the package's entry point, except that it uses `export` instead of `module.exports`. This is great, because it means that CommonJS/UMD distributable files can co-exist with ES6 distributable files. In the future, once everyone supports ES6 imports and exports, we can simply ditch the CommonJS/UMD stuff.

But does `jsnext:main` mean 'entry point written with potentially unsupported ES6/7 features', or 'entry point that runs in existing engines, except for the `import`/`export` statements'? It's unclear. It sounds as though you can use ES6/7 features, the assumption being that it points to your source code, and that a consumer (such as a bundler) takes responsibility for transpiling it (e.g. with Babel):
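Under that reading, the package.json would look something like this (a sketch; the file names are assumptions):

```json
{
  "main": "dist/index.js",
  "jsnext:main": "src/index.js"
}
```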
But that's problematic. What if `src/index.js` uses stage 0 features? `some-package` might have a build process that transpiles those features, but does that mean that all consumers of `some-package` have to be similarly equipped? (Yes, .babelrc makes that possible, but it's still a weird and brittle process, and it may become rather more complex with Babel 6.)

A better solution: `jsnext:main` could instead refer to an ES6-module entry point that is otherwise ready to use:
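That is, something along these lines (again a sketch – both files are built output, only the module format differs):

```json
{
  "main": "dist/index.js",
  "jsnext:main": "dist/index.es6.js"
}
```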
Prepublish hooks make this stuff very straightforward to set up, and it means that everyone can use your package, and no-one needs to worry about your source code – they're only ever dealing with code that is ready to distribute. It's a very simple solution to the problem of serving CommonJS/UMD and ES6 builds simultaneously.
But the 'jsnext' part of `jsnext:main` is confusing. I'd suggested changing it to something less ambiguous, but would that just confuse people even more?

Does it go far enough?
@RReverser thinks not, and argues that engines/bundlers/whatever should be able to select different files based on feature detection. I have no idea exactly how that would work but I'm eager to hear more.
In summary
`jsnext:main` is one possible solution, though it's ambiguous. Personally I think that if we use it, it should be used to point to ready-to-use (other than `import`/`export`) distributable code, not source code that still needs to be run through Babel.

All thoughts welcome. Thanks