Loaders & Dependency Management #378

Closed
mattdesl opened this Issue Jul 26, 2014 · 65 comments

@mattdesl

Hey, nice tool you've got here!

I am running a simple test with the json-loader, with code like this:

var foo = require('./package.json');
module.exports = foo.version;

My config:

module.exports = {
    context: __dirname,
    entry: "./index.js",
    module: {
        loaders: [
            {
                test: /\.json$/,
                loader: "json"
            }
        ]
    }
}

Let's say I were to author this package on NPM as foo. Now I'm writing another module, bar, which depends on foo. The code for that new module:

var test = require('foo');
console.log(test)

Unfortunately, bundling the above gives me Cannot find module "package.json" errors, presumably because webpack doesn't know to use the JSON loader when resolving the foo module.

Browserify handles this by putting transforms in the package.json, so that when it reaches that module, it knows to transform it. This seems like a pretty important feature for code re-use and dependency management.
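For comparison, browserify's approach puts the transform in the module's own package.json, along these lines (the brfs transform name is just an illustrative example, not something foo actually uses):

```json
{
  "name": "foo",
  "version": "1.0.0",
  "main": "index.js",
  "browserify": {
    "transform": ["brfs"]
  }
}
```

When browserify walks into foo, it reads this field and applies the listed transforms to foo's files automatically, without any configuration in the consuming app.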

Or am I simply missing some configuration?

dashed Jul 27, 2014

Contributor

Just saw your post via hacker news: http://mattdesl.svbtle.com/browserify-vs-webpack


To clear up some misconceptions, json isn't a built-in loader within webpack. You'd need to do npm install json-loader --save-dev as per: http://webpack.github.io/docs/using-loaders.html#loaders-in-require

Updated config:

module.exports = {
    context: __dirname,
    entry: "./index.js",
    module: {
        loaders: [
            {
                test: /\.json$/,
                loader: "json-loader" // reference by npm module or path to loader file
            }
        ]
    }
}

To clarify, using webpack with this config means you don't need to write require("json!./somefile.json"); you can just write require("./somefile.json");. This is due to the given test regex.

Without the module.loaders key in the config, you may need to use the loader require(...) convention.

Another option may be loader binding via CLI: http://webpack.github.io/docs/using-loaders.html


mattdesl Jul 27, 2014

Thanks. I mentioned the loader config briefly, but now I've updated the article to detail the additional steps needed for node/browserify compatibility.

I like the idea of --module-bind, but whenever I use it from the CLI, it just prints the usage and help info with no details on success/failure.


sokra Jul 27, 2014

Member

@mattdesl You probably missed the input or output file. (CLI)
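For reference, the shape of a CLI invocation with explicit input and output files plus a loader binding would be something like this (treat the exact --module-bind syntax as an assumption based on the webpack 1 docs):

```shell
# Entry and output file come first; --module-bind maps an extension to a loader.
webpack ./index.js bundle.js --module-bind "json=json-loader"
```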

The guidelines for modules are:

  • Module dev: Don't use inlined loaders. Just require stuff: require("./file.png")
  • Module dev: Follow existing conventions for which extension should be mapped to which loader. .png returns a URL.
  • Module dev: Tell the app dev which loader config you recommend, e.g. "Use the file-loader for .png."
  • App dev: Use the configuration to bind extensions to loaders. Follow the conventions. Bind .png to the url-loader.

Some time ago I decided against loader configuration in package.json for this reason:

The app developer should have complete control over how files are processed. In many cases he knows best how a file should be processed. Two examples:

require("./file.png") You have some choices here:

  • file-loader Just emit a file in the output dir
  • file-loader?prefix=images/ Emit a file in the images dir
  • url-loader?limit=10000 Use a Data URL for files smaller than 10kb, else emit a file.

The module developer cannot know which one is best for your app. (And he shouldn't be opinionated about that; webpack is all about giving the app dev the choice.) He can recommend one in the module's README.

require("./file.css")

  • style!css
  • style!css!autoprefixer?browsers=last 2 versions, Firefox 15
  • style!css + separate css with the extract-text-webpack-plugin

The app dev should decide...

Yes, we should add a list of conventions to the documentation...

Configuration in package.json makes the module dependent on webpack (or browserify). That sounds like vendor lock-in... If you just require stuff as a module author, a future shiny new module bundler could take these conventions and implement similar behavior. So in my opinion, configuration in package.json is bad for module reuse.


mattdesl Jul 27, 2014

The problem is that manually defining config files is not good for code re-use.

Take shader-school, for example, which has 50+ dependencies, many of them browserifiable front-end modules. Now imagine all of those modules had been authored with webpack. The app author would have no way of knowing which loaders are being used by all of them; they would actually have to go to the readmes of all 50+ to see which loaders are in use, and then manually add those to their app config file.

If any of them didn't specify the loaders in the readme (which will happen in practice), you are pretty much stuck. And if a module changes or adds a new loader without bumping the major version (which will also happen, since it may not represent a breaking API change), it will break all the apps that depend on it.

More importantly, many of those browserifiable modules have dependencies that you did not explicitly install. So not only will you have to track down all 50+ direct dependencies, you will also have to walk the rest of your dependency tree to make sure you didn't miss any others that might be using loaders. And again, as your dependencies ebb and flow with versions, your app could break at any time.

And there's also the problem of extensions. Some modules might use .hbs, others .handlebars; who really knows. Arguments to loaders might be another problem down the road, but for now that seems OK since they don't lead to breaking code.

Generally, I would agree that the app author knows best, but the slight benefit of full control is outweighed by the major problems I've mentioned above. Maybe modules could use the default (no-parameter) option for a loader, and the app author could override it somehow if necessary.

EDIT: In terms of vendor lock-in with browserify, I agree that it's not great.

In an ideal world these would just be bundler-transforms rather than browserify-transforms, and different bundlers would all handle them using the same transform modules.


dashed Jul 28, 2014

Contributor

@mattdesl In practice, modules published to npm aren't supposed to be compiled with webpack/browserify, or to rely on the end user compiling them via webpack/browserify. Ideally, if a module can work in both the browser and node.js (or similar), it should be as environment-agnostic and bundler-agnostic as possible.

If a module author does use webpack, it's usually to publish a browser build, which is usually published to appropriate channels such as bower.

An example is the bluebird library, where the browser version is bundled via browserify. If one uses it in an app that will be bundled by webpack/browserify, it's better to use the npm module, since uglifyjs will create a leaner distributable (no require boilerplate added by the bundler).


If a module does rely on the end user compiling it via webpack/browserify (e.g. using the webpack loader convention in require()) for integration into the end user's app, then this is a misuse of the bundler.


mattdesl Jul 28, 2014

Alright, so basically the main and browser fields for NPM/Bower should point to the UMD build, rather than the CommonJS source.

Some issues I'm seeing:

  • cannot npm link -- say you are developing two modules in tandem; after each change you need to bundle one module before the other can use it
  • cannot npm dedupe -- so you might end up with lots of duplicate code in your final bundle
  • if you want it to work in bower and npm, you will need to commit the build file (lots of devs tend to avoid that)
  • source maps, although it looks like source-map-loader solves this :)

It's also worth mentioning: providing the browser bundle as your entry point (a la Bluebird) means that users lose control over loader options -- which seemed like the main reason @sokra opted against package configuration in the first place. 😕


sokra Jul 30, 2014

Member

I also think that publishing a build version is not a good solution for npm (in bower this is more common).

It may be possible in extreme cases. Some of @mattdesl's issues can be avoided:

  • The npm dedupe issue can be avoided if you don't bundle dependencies.
  • I see no reason why you would need to commit the build file. You can gitignore it and have an .npmignore file that doesn't ignore it, then build it in prepublish. See webpack-dev-server for an example: it publishes a built version of the wrapper page but doesn't commit it.
  • The npm link issue: this complicates the workflow. A watch build may work, but hmm...
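The gitignore/.npmignore/prepublish pattern mentioned above might look roughly like this in a module's package.json (names and the script contents are illustrative; dist/ would be listed in .gitignore but not in .npmignore, so npm ships the build without it ever being committed):

```json
{
  "name": "foo",
  "version": "1.0.0",
  "main": "dist/foo.js",
  "scripts": {
    "prepublish": "webpack --config webpack.config.js"
  }
}
```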

This topic really needs more discussion. Would love to see more input about it.

Technically it's possible to add loader configuration to package.json. But I don't like it... You would need to add this configuration per module, per module bundler... Everybody who publishes a module would only add the configuration for the bundler he is using... A mess...

I would like to see a solution for reusable frontend modules that is based on a convention modules follow. @petehunt's commonjs-asset-spec is a step in this direction. Best if it doesn't use a key in package.json (so we don't depend on a package.json and don't have to read it -> performance).

My proposal would be to just require every resource and write down the semantics for each file extension. This is independent of module format (CommonJS, AMD, ES6). The "spec" would say, for example: when I require a .css file, nothing is returned and the style is added to the DOM. When I require a .json file, the content is returned as a JSON object. When I require a .png file, a URL to the image is returned. Etc.; see more examples here.

For each module bundler we would write a plugin that maps file extensions to a fitting configuration for that bundler. The plugin could take parameters to define the exact behavior, but this is implementation-specific (i.e. should .png return a Data URL or a normal URL and emit a file).

The spec could be split into multiple parts/modules so bundlers don't need to implement everything at once.

To get acceptance we would need to implement it at least for webpack and browserify...

I'm sure this is not a perfect solution, so I'm open to more ideas/opinions...


mattdesl Jul 30, 2014

Glad to hear you've also been thinking about the issue.

Good point with gitignore / prepublish. Bower is a bit different: some people commit the build, but others (like Zepto and jQuery) don't want to. This, and the lack of preinstall hooks, is the reason Zepto refuses to support bower. Recently bower did add this hook, although I haven't been able to get it working yet.

One of the tricky things is that transforms are not always require overloads like in webpack. For example: brfs and glslify.

But I do agree that we should all try to find a common solution. Maybe open an issue with browserify and see what kind of insight they provide? Their "browserify-transforms" vendor lock-in causes a bit of a problem for other bundlers.


jhnns Aug 4, 2014

Member

I'm glad to see a discussion on this topic. There are multiple solutions and a lot of potential is lost as this is not addressed by CommonJS.

I guess it is time to create an ad-hoc spec a la Promises/A+ or @petehunt's CommonJS asset spec. This should definitely be done together with the browserify community, because npm modules should be consumable by both bundlers.

One problem I see is that there are so many file types out there. Some use less, others use sass, etc. It's hard to create a spec for all of these possibilities.

Another problem is that it is not safe to assume a specific return value for a dependency. Maybe I'm expecting to get an ArrayBuffer when requiring a .png file?


petehunt Aug 4, 2014

The spec would need to map file extensions (mime types?) to return values and side effect semantics. As long as we can figure out versioning I think we can safely move fast and just stick with what webpack does already for v1 (since it's proven in production).
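Purely as a hypothetical illustration (none of this is the actual spec; the entries just echo the webpack conventions discussed earlier), such a mapping could be written down as data that each bundler plugin interprets:

```javascript
// Hypothetical extension -> semantics table for an asset spec.
// "returns" is what require()-ing such a file evaluates to;
// "sideEffect" is what happens at load time.
var assetSemantics = {
    ".json": { returns: "parsed JSON object", sideEffect: "none" },
    ".css":  { returns: "nothing", sideEffect: "style added to the DOM" },
    ".png":  { returns: "URL string", sideEffect: "file may be emitted" }
};

// A bundler plugin would translate each entry into its own loader/transform config.
function semanticsFor(extension) {
    return assetSemantics[extension] || null;
}
```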

cc @vjeux to make sure we're moving together


jhnns Aug 4, 2014

Member

As long as we can figure out versioning I think we can safely move fast and just stick with what webpack does already for v1 (since it's proven in production).

Yep. But it'd be sad if this would be a webpack-only spec 😉

@substack Are you interested in participating? browserify's transforms are close to webpack's loaders.


petehunt Aug 4, 2014

I'd look at it instead as a general spec a la CommonJS Modules/1.1, with a reference implementation built on webpack (that is, webpack does not depend on the spec, the spec does not depend on webpack, but the tool that implements the spec uses webpack).


mattdesl Aug 4, 2014

The spec would need to map file extensions (mime types?) to return values and side effect semantics.

In many cases a transform is not a simple "import" statement. Examples:

  • brfs
  • glslify
  • envify
  • es6ify
  • sweetify

etc..

Even with file mappings; it's really hard to predict what the module author will need. And a spec of file mappings won't allow for much innovation or flexibility. For example: say I build a transform that turns a local TTF file into a set of vector outlines using fontpath; this may be different from another transform that returns @font-face CSS as a data URI. Another transform might operate on image types, but do some level of pre-processing / unit-testing during the bundle step, and it may return an entirely different value (ArrayBuffer, ndarray, etc).

I also don't think we should be stuffing everything into require() -- it has a very clear and unambiguous meaning in Node and it was purposefully designed to be this way. I prefer the way glslify() works; it leads to cleaner and more expressive code and also gives you a nice way of providing module-specific options.

//dumb idea of a transform
var lipsum = require('lorem-ipsum-transform')

var value = lipsum({
    length: 100
})

mattdesl commented Aug 4, 2014

The spec would need to map file extensions (mime types?) to return values and side effect semantics.

In many cases a transform is not a simple "import" statement. Examples:

  • brfs
  • glslify
  • envify
  • es6ify
  • sweetify

etc..

Even with file mappings, it's really hard to predict what the module author will need. And a spec of file mappings won't allow for much innovation or flexibility. For example: say I build a transform that turns a local TTF file into a set of vector outlines using fontpath; this may be different from another transform that returns @font-face CSS as a data URI. Another transform might operate on image types, but do some level of pre-processing / unit-testing during the bundle step, and it may return an entirely different value (ArrayBuffer, ndarray, etc).

I also don't think we should be stuffing everything into require() -- it has a very clear and unambiguous meaning in Node and it was purposefully designed to be this way. I prefer the way glslify() works; it leads to cleaner and more expressive code and also gives you a nice way of providing module-specific options.

//dumb idea of a transform
var lipsum = require('lorem-ipsum-transform')

var value = lipsum({
    length: 100
})
petehunt commented Aug 4, 2014

With most of the transforms you listed, the best practice is to run those transforms before pushing them into npm. Additionally, like you and @sokra mentioned, it's pretty hard to solve all of these, so I think continuing with this best practice is great.

Static assets don't fit so well into this mold because they may need to be optimized at bundle time with the raw asset rather than the JS-ified version (think file-loader or extract-text or a future image spriting plugin).

I'm with you on avoiding overloading require(). Relative to this spec, I think require() should have the exact same semantics as node does (this is contrary to what webpack does). My spec instead uses a requireStatic() global function to introduce these new semantics.

What do you think?

mattdesl commented Aug 4, 2014

I like the idea of standardization and having a function separate from require(), but I'm not sure how well a spec would be adopted if it only supports a small set of file types.

Bundling a module on pre-publish is not a good solution due to the problems I listed earlier (mainly dedupe and npm link). In the case of es6ify, envify, and brfs, the point of these transforms is to allow your Node code to be compatible with the browser; so that you only have a single entry point on your package.

And in the case of envify (as well as some other transforms, like glslify-optimize), it's often preferable to defer the transform to the application rather than the module.

petehunt commented Aug 4, 2014

I'd argue that envify is part of "shimming for the browser" rather than an optional transform (since process.env is a standard Node global, it must be shimmed).

es6ify and sweetify are src-to-js transforms which should be run before publishing.

brfs is an interesting case, but it's primarily meant for dealing with static resources which requireStatic() would take care of. I would imagine that the fallback behavior for an unknown extension passed to requireStatic() would be to use raw-loader.

Member

jhnns commented Aug 5, 2014

So the concept behind glslify is that, instead of overloading require(), every resource has "its own" require. For instance: If I'd like to have an image as an ArrayBuffer I'd do:

var requireImgAsArrayBuffer = require("require-img-as-array-buffer");
// yes, it's a bit verbose, it's just an example
var imgAsArrayBuffer = requireImgAsArrayBuffer("./img.png");

In this case, the module require-img-as-array-buffer would be divided in two parts:

  • One that is called when the static analysis happens. In this phase the loader is responsible for applying all the necessary transforms to the requested resource.
  • A runtime part when requireImgAsArrayBuffer("./img.png") is actually executed

I think that's pretty close to what webpack-loaders are today. The css-loader works entirely in the analysis phase, the style-loader works in the runtime phase.

The main difference to glslify is that the require()-method isn't overloaded, so that a module author doesn't need to tell the module consumer how to configure webpack, he/she just needs to work with the loader-api (which is a great advantage in writing re-usable web modules).

However, using webpack's !-syntax isn't far from providing an own require()-method. There is no big difference between requireImgAsArrayBuffer("./img.png") and require("img-as-array-buffer!./img.png").

So, after all I think we're just searching for a loader/transform-spec we can all agree on. In order to get there, we need to find an answer for the following issue: How can we achieve predictability for a module author while preserving flexibility for an app author?

For instance: As a module author I want to load a file. So after calling require("file!./img.png") I'd expect to receive an url to that file as string. But as an app author I want to store all images under assets/img/. In the end we have two parties depending on the same loader, which is somehow contradictory to npm's philosophy.

mattdesl commented Aug 5, 2014

@petehunt brfs is an interesting case, but it's primarily meant for dealing with static resources which requireStatic() would take care of. I would imagine that the fallback behavior for an unknown extension passed to requireStatic() would be to use raw-loader.

This means you are no longer writing Node code. You can't simply run node test.js - you now need two entry points for your module and two different engines to run/test it. Diverging from the Node API also gives us less confidence that Node modules will work in the browser.

I would rather write Node code and have it work (mostly) out of the box in the browser.

@jhnns

The ! require syntax would not really work with glslify:

var myShader = glslify({
     vertex: ' ... inline GLSL shader code ... ',
     fragment: ' ... inline GLSL shader code ... ',
     inline: true,
     sourceOnly: true
})

console.log( myShader.fragment ) // => the glslified frag shader source

Also, there is no runtime part of glslify in the above case; it just gets transformed into an object with shader strings and uniform/attribute info.

Not sure if there is a solution that encompasses all these cases. Maybe some browserify folks should chime into the discussion...

petehunt commented Aug 5, 2014

That's true, but I think that anything that solves the difficult problems in this space is going to need to diverge from node for all of the use cases we'd need to support.

A good example is with stylesheets: on the client we'd want to inject it with the DOM API, and on the server we'd want to include it in the HTML document response. Using something like insert-css isn't sufficient because there's no expectation of being statically analyzable.

We can make this solution work in node by providing a requireStatic() global which does the right thing in the node environment (webpack can already do this by bundling for --target node and running the bundle in node).

I would actually argue that this is already a problem today: I doubt you ever see a coffeeify transform listed in the browser package.json field, since it can't be run in the node environment without a prebuild step.

Member

jhnns commented Aug 6, 2014

@petehunt How would you describe the semantics of requireStatic in the browser and in node?

@mattdesl loaders don't need to have a static analysis and runtime part, but they may have. I'm not sure if I understood glslify right, but with webpack it would probably look like this:

var glslify = require("glslify");

var createShader = glslify({
    vertex: require("vertex!./vertex.glsl"),
    fragment: require("fragment!./fragment.glsl")
});
mattdesl commented Aug 6, 2014

@jhnns The glsl parser looks at both fragment and vertex shaders together to determine uniform and attribute information. And these are not just expanded to strings; the resulting function (or object) also contains information about uniforms and attributes. This is why glslify needs an AST transform to rewrite the original JS source. Unless I'm mistaken, webpack loaders only transform the required resources, rather than the source that is requiring those resources.

It gets more murky when you consider the inline and sourceOnly options.

This is obviously an unusual case; but it's already working really well in browserify and creating an ecosystem of GLSL modules with their own dependencies.

substack commented Aug 7, 2014

@petehunt for css that works in the browser and the server, I think it's best to start with making it work nicely in node and then just make that exact same code work browser-side with AST shenanigans. A good example of where that approach has worked well for me in the past is starting with bulk-require and then making that module work as a transform through an entirely separate package bulkify.

Contributor

kurtharriger commented Sep 13, 2014

So while I agree the app developer should ultimately have full control, I think the module developer should be able to specify the recommended configuration that should be used unless the app developer explicitly says otherwise.

The file loader is a good example of something that the app developer might want to change, but there are also a lot of cases where modules conflict over which loaders should be used for a given extension.

I have one module where the .jsx extension is used for React components and contains no ES6 .js files, another where .react.js is used as the extension and loaded with Traceur as well, and another module where all .js files are processed with the ES6 loader. This inconsistency is a bit annoying at the app level, since it's necessary to add exclusions or more specific regex patterns for specific modules.

I would rather that each module define its recommended loaders in package.json, and if the app author wants to override the configuration they should do so explicitly.

Perhaps the module section should have an overrides section that allows one to specify an object with a module name and loaders for the named module only.

This would still give the app developer full control, simplify my configuration a lot, and eliminate almost all of the loader configuration currently defined at the app level.
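
A purely hypothetical sketch of what such a per-module recommendation could look like in package.json. Neither the webpack field nor its overrides shape exists; all names here are invented for illustration:

```json
{
  "name": "my-react-widget",
  "version": "1.0.0",
  "webpack": {
    "loaders": [
      { "test": "\\.jsx$", "loader": "jsx-loader" }
    ],
    "overrides": {
      "some-es6-dependency": [
        { "test": "\\.js$", "loader": "traceur-loader" }
      ]
    }
  }
}
```

The app-level config would win wherever it explicitly defines a rule for the same module.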

Member

jhnns commented Sep 15, 2014

Yep, it would definitely be cool if the module author was able to specify a sane default configuration.

jampy commented Oct 9, 2014

Agree with @kurtharriger

As a side note, I was about to create my own Webpack-like tool (with pretty much the same motivation and very similar approach to solve those problems). Now that I found Webpack there is no need to develop it any further (even though an early alpha version was working well and fast).

I think that modules should be mostly like black boxes in the first place. If I want to use some complex component that uses all sorts of CSS/Stylus/PNG assets, I just want to declare the dependency and forget about it. Likewise, when I replace that module with something else, I'd just change the dependency and all traces of the old module should be lost.

Perhaps each module can define its own loaders/settings. Different, unrelated modules may even use different, incompatible settings, and Webpack would honor those specifically.

The application IMHO should always be able to override settings, but it doesn't have to.

Let's say we have two modules plus the application (which to me is just another module), all of which treat PNG files differently:

  • module A just wants an URL to the PNG file, no matter if base64 encoded or a normal URL
  • module B needs access to the raw PNG data and must get an ArrayBuffer or it won't work
  • the application itself also uses PNG files and inlines all PNG files < 8kb

IMHO a good solution for a dependency management system is when one can easily convert an application into a module - in the best case without having to change a single line of code. Or, in other words, the application is a module like any other.

This also means that when APP requires A requires B requires C, a consuming module (like A is for B) can override some loader settings for a required module, and can itself be overridden by the application.

Just sharing some thoughts...

Anyway, I think it's extremely important that modules can self-configure themselves somehow. Are there any news on this issue?

Member

jhnns commented Oct 10, 2014

Unfortunately not... 😿

It's a complicated issue ... and I don't want to enforce a webpack-only way because that will further scatter the js module environment. There are already existing approaches like:

The only way to start an open discussion is to create some sort of specification and to hope that we will find a least common denominator. Otherwise it will end up like this 😀

Member

jhnns commented Oct 10, 2014

Btw normalize.io is by far the most promising because it embraces new web stuff like ES6 modules, HTML imports and SPDY/HTTP2 push (Further reading). But it also seems far from production ready...

jampy commented Oct 10, 2014

I see.

But isn't webpack already there with its custom/enhanced loader API?

Or is there a (AMD?) loader API standard or even de-facto standard that defines query strings, this.cacheable(), etc... for loaders?

The RequireJS loader plugin API seems incompatible with webpack to me, but I could be wrong...

So, if the above is correct, a module that uses loaders directly in its require() calls (like require("vertex!./vertex.glsl")) is effectively assuming a webpack build - or any comparable system, but in any case it's specific to a single system (more precisely, to a specific loader plugin).

That is true as long as there is no external definition (external to the source code, but part of the module) that tells the build system what "vertex" is exactly. And in that case a module could support a variety of specific systems. For Webpack that could be a webpack.config.js for the module itself, for other systems other files may be used (in parallel).

Member

jhnns commented Oct 10, 2014

There is a loader extension to the CommonJS specification. But as you can tell by the last-modified date, the CommonJS specification has been superseded by the de-facto standard of node's module system (which slightly differs in some points).

That's probably why AMD picked up a similar syntax as webpack. But since the specification is very vague you can't say they're compatible in any way.

But we should separate the problem in two parts:

  • A syntax to define which loaders/transforms/you-name-it should be applied to the source, in which order and with which parameters
  • An API for that loader/transform/you-name-it to tell the build system what to do with the module (output, caching, source maps, side effects?)
Member

jhnns commented Oct 10, 2014

It's ok to have competing module bundlers ... but it'd be awesome if we could find a least common denominator so module authors on npm and bower are able to write true web modules with HTML, JS and CSS.

jampy commented Oct 10, 2014

So, you're saying you want to abandon the current webpack mechanism (which after all already has an answer to both parts of the problem you mentioned) in favor of a different yet-to-be-defined solution? ;-)

Member

jhnns commented Oct 10, 2014

No, I don't want to abandon the webpack mechanism because it works surprisingly well 😀. But I want modules on npm or bower that also come with css and html files which don't require the application to use webpack.

@sokra

sokra Oct 10, 2014

Member

@jampy You summarized the challenge very well. That's exactly the problem we want to solve.

The hard part is making the "thing" work (and be implemented) for both webpack and browserify. And it must not conflict with existing systems.

I have an idea in my mind that could solve it. It would look like this:

Three parts:

  1. a per-module package.json mapping from file to module type
  2. a per-module-system mapping from module type to internal processing (loaders or transforms, respectively)
  3. a database of common module-type-to-internal-processing mappings

1. file to module type

{
  "name": "my-module",
  // ...
  "magic-name": {
    "modules": {
      "*.css": "text/css",
      "*.less": "text/less",
      "*.png": "image/png",
      "*.jsx": "text/jsx+harmony",
      "*.coffee": "text/coffeescript",
      "./lib/views/*/*.html": "text/html+handlebars",
      "*.html": "text/html",
      "*.xyz": "x-my-company/own-type"
    }
  }
}

2. module type to internal processing

// webpack
new MagicNamePlugin({
  "text/css": "style-loader!css-loader?sourceMap",
  "text/less": "[text/css]!less-loader",
  "image/png": "url-loader?limit=20000",
  "image/*": "file-loader"
})

// browserify
{
  "text/less": lessify(),
  // ...
}

3. database

"image/*": "file-loader",
"text/css": "style-loader!css-loader"
// ...
"text/less": lessify(),

The database would be published as a module and the MagicNamePlugin could use it to provide default values.

challenges

Are the loaders/transforms installed at the application level? Since the modules are independent of the build system, the answer should be yes. How are they installed? Is that the job of the user? Or the job of the database? The user should not have to care about the loaders/transforms required by a module. The module should not have to care about the loaders/transforms used by the module system.

What happens when a module type changes over time? E.g. new jsx versions. A new module type (text/jsx+harmony2)? Version ranges?

"*.jsx": "text/jsx+harmony@^2.0.0"
"text/jsx+harmony@2.3.4": "jsx-loader?harmony"
"text/jsx+harmony@1.8.9": "old-jsx-loader?harmony"

When the loader is installed at the application level: how do we ensure that the correct version is installed?

Maybe loader/transform installation should be part of the database. It knows the correct version and can keep a folder of installed loaders/transforms. Or the loaders/transforms can be installed into the module. But this could result in side-effects.

How would multiple preprocessing steps be applied to a file? Should this be supported? Is there a use case? Is this bad style? I think it may be bad style and we should disallow it.

We need a name for this "thing". Replace magic-name with that name. The name should not be webpack related.
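The two-step lookup above (file glob → module type from the module's package.json, then module type → loader chain from the app config) can be sketched in plain JavaScript. This is purely illustrative: `matchGlob`, `resolveLoader`, and the reduced mappings are made-up names for this sketch, not webpack or MagicNamePlugin API.

```javascript
// Step 1: per-module mapping, file glob -> module type (from package.json).
const typeByGlob = {
  "*.css": "text/css",
  "*.less": "text/less",
  "*.png": "image/png"
};

// Step 2: per-module-system mapping, module type -> loader chain.
// "[text/css]" references another type's chain, as in the proposal.
const loaderByType = {
  "text/css": "style-loader!css-loader?sourceMap",
  "text/less": "[text/css]!less-loader",
  "image/png": "url-loader?limit=20000",
  "image/*": "file-loader"
};

// Tiny glob matcher: '*' matches any run of characters.
function escapeRe(s) {
  return s.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
}

function matchGlob(glob, file) {
  const re = new RegExp("^" + glob.split("*").map(escapeRe).join(".*") + "$");
  return re.test(file);
}

// Expand "[text/css]"-style references into the referenced loader chain.
function expand(loader) {
  return loader.replace(/\[([^\]]+)\]/g, (_, t) => loaderByType[t]);
}

function resolveLoader(file) {
  const glob = Object.keys(typeByGlob).find((g) => matchGlob(g, file));
  if (!glob) return null;
  const type = typeByGlob[glob];
  if (loaderByType[type]) return expand(loaderByType[type]);
  // Fall back to a wildcard family like "image/*".
  const family = type.split("/")[0] + "/*";
  return loaderByType[family] ? expand(loaderByType[family]) : null;
}

console.log(resolveLoader("style.less")); // style-loader!css-loader?sourceMap!less-loader
console.log(resolveLoader("logo.png"));   // url-loader?limit=20000
```

The key design point of the proposal survives even in this toy form: the module only declares *what* its files are, and the build system decides *how* to process them.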

@jampy

jampy Oct 10, 2014

Sounds good to me.

Another approach, just to get another idea on the table:

Both Browserify and Webpack are based more or less on NPM. A core property of NPM is that it uses nested dependencies.

So, when installing a (web component) module, NPM would also happily install any loader module that is absolutely required by the module (say, Stylus preprocessor) to result into something a browser could understand (CSS, in that case).

That already solves the loader module version problem, as the necessary version is already embedded in the module's package.json and each module could depend on a different loader version.
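For illustration, a hypothetical web-component package could pin its own preprocessor this way (the package name and versions here are made up):

```json
{
  "name": "fancy-datepicker",
  "version": "1.0.0",
  "main": "index.js",
  "dependencies": {
    "stylus": "^0.49.0"
  }
}
```

With nested dependencies, another module depending on a different Stylus version would simply get its own copy, so the version conflict jampy describes never arises.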

Further post-processing steps (autoprefixer, minify, style-loader, ...) that are more like optimizations should reside in the application config, as the module probably doesn't care about that. This means the application (or, generally, any consuming module) just needs to be able to add some more loaders to the pipeline, which are used for the consuming module and its parents.

Example:

Application => requires A => requires B => requires C
but also...
A => requires Z

  • C is just a Stylus mixin library, that is intended to be used by other Stylus files
  • B is a web component module implementing a fancy date picker with some PNG files and CSS made up as Stylus files, that rely on the mixins provided by C
  • Z is a web component implementing a toolbar, that comes with 50 tiny PNG files that by some awesome loader are combined into a giant PNG that's used for CSS sprites
  • A is a full featured calendar component that needs the date picker and the tool bar

Note that:

  • C comes with Stylus files, but does not want any postprocessing for itself
  • B comes with Stylus files that should be converted to CSS, but the PNG files are nothing special as long as they are available
  • Z needs post-processing for PNG files, so that's (once more) a different loader configuration
  • A cares about loading speed, and thus optimizes all PNG files, minifies CSS+JS etc...

A module that's intended to be ready-for-use in the browser (like A, B and Z but not C) should always result into classic JS + CSS + remaining static assets. Optimization is another step that's up to the application.

Perhaps module B comes with some SVG vector graphic. That could work out-of-the-box for many browsers, but not all. I see different possibilities:

  • the build fails because no loader is defined for SVG graphics. The application developer is forced to decide what to do with them (convert them to PNG, or serve them as-is and let the browser render them)
  • everything is fine and by default the SVG is served as-is
  • the module comes with some sort of pre-configuration that converts the SVG to PNG, but the application may override this behavior (really?)

I'm a bit biased toward the first solution, since I think these situations won't happen too often.

So, any module for which some sort of post-processing is necessary could itself load its dependencies and configure the loader.

But, yes, this would still be tied to one or more specific build systems (webpack or alike). Build systems not supported by a module would either require specific configuration in the depending (webpack-aware) module, or one would create an intermediary module, as is often the case.

BTW, does browserify care about non-JS assets at all? In the end, isn't this just about RequireJS and Webpack?

(whew, difficult topic 😟 )

@jampy

jampy Oct 17, 2014

Practical example that illustrates the "problem":
https://github.com/jampy/kendo-ui-webpack#installing

@mattdesl

mattdesl Oct 17, 2014

BTW, does browserify care about non-JS assets at all? In the end, isn't this just about RequireJS and Webpack?

It seems like assets are currently beyond the scope of browserify.

One approach is to get it working in Node first (so that it can be tested and used server-side), and then make a transform that inlines the asset (so that it works in the browser and across modules).

And then, at the application level, use a tool or bundler plugin that optimizes the bundle specifically for front-end use. This way we get the benefits of webpack's url-loader, without sacrificing Node compatibility or modularity.

https://github.com/mattdesl/urify-example

Something similar could be done for CSS, so that an app-level tool emits a single compressed CSS file, but even without that tool the bundle and Node code "just works." CSS is harder because there are so many approaches to preprocessing, and because it's not really modular to begin with (rule collisions, etc).

I think this is a better approach than overloading a require statement and defining tedious loader configuration per module, and the (statically analyzable) options are also nicer than query strings. But that's just me.. :)

@jampy

jampy Oct 18, 2014

BTW, does browserify care about non-JS assets at all? In the end, isn't this just about RequireJS and Webpack?

It seems like assets are currently beyond the scope of browserify.

That sounds to me like Webpack could really take the lead and offer some advanced asset management.

Packages targeted mainly at Node (or packages without additional assets like CSS / images) are not affected by this discussion IMHO. Packages targeted at the frontend only (i.e. web components) would still benefit from a truly modular ... build system.

jauco added a commit to jauco/jester that referenced this issue Dec 30, 2014

- webpack/webpack#378
   - Loaders & Dependency Management · Issue #378 · webpack/webpack

                                                                Mon Dec 22 14:40
================================================================================

You can't have a dependency on a module that is also used as an entry point.

So I have to do a separate webpack build for the test scenarios.

for example: test: addDependencies?resolve=[path-2]/[name].test.*

arch:

jester-watch
  - starts a webpack dev server
  - serves a web page /status
  - the webpack dev-server builds the standard entry points
  - jester-watch watches the files being built.
  - if you have defined test runs, jester-watch starts webpack-dev-servers for those test runs, with as entry point a small file that, with correct hot-code rel

  karma is superfluous, because: hot code reloading (launchers are nice, but not worth it. could possibly be extracted)
@sompylasar

sompylasar Jan 26, 2015

Related: http://spmjs.io/ (was mentioned in a comment to the npm blog post)

Let's hope concord will make its way in in the near future. Very promising.

jauco added a commit to jauco/jester that referenced this issue Mar 18, 2015

- webpack/webpack#378
   - Loaders & Dependency Management · Issue #378 · webpack/webpack

                                                                Mon Dec 22 14:40
================================================================================

You can't have a dependency on a module that is also used as an entry point.

So I have to do a separate webpack build for the test scenarios.

for example: test: addDependencies?resolve=[path-2]/[name].test.*

arch:

jester-watch
  - starts a webpack dev server
  - serves a web page /status
  - the webpack dev-server builds the standard entry points
  - jester-watch watches the files being built.
  - if you have defined test runs, jester-watch starts webpack-dev-servers for those test runs, with as entry point a small file that, with correct hot-code rel

  karma is superfluous, because: hot code reloading (launchers are nice, but not worth it. could possibly be extracted)

@calvinmetcalf calvinmetcalf referenced this issue in pouchdb/pouchdb Mar 18, 2015

Closed

Fails with Webpack #3647

nolanlawson added a commit to pouchdb/pouchdb that referenced this issue May 4, 2015

(#3319) - set leveldown to false to fix webpack
Webpack does not like that `leveldown` is trying to use
`fs`. We can harmlessly set it to `false` in the package.json.

For `levelup`, the issue is that it's trying to directly load
its own package.json file, which is apparently a big contentious
issue in Webpack, and the standard solution is that app devs add their
own json loader, which is what Webpack devs themselves advocate (see
[here](webpack/webpack#378 (comment)))
and [here](http://mattdesl.svbtle.com/browserify-vs-webpack)).

We can add a note about this Webpack config fix to our own wiki,
or we can just trust that Webpack-using devs will find it themselves
after 5 minutes of googling the JSON issue (I found it in about
as much time). In any case, the bare minimum we need to do is
set `leveldown` to `false`.

@trusktr

trusktr Jul 28, 2015

Suggestion: move concord out of the webpack org and into a new org for module specs, since it should apply to all build systems, e.g. github.com/concord/spec

Otherwise it seems like a Webpack thing.

@bebraw bebraw added the question label Nov 15, 2015

mohsen1 added a commit to contacts-mvc/data that referenced this issue Mar 13, 2016

Move everything to index.js
Due to this bug in webpack: webpack/webpack#378
JSON loaders will not work from outside of a package.
@brion

brion Apr 19, 2016

I'm working on packaging ogv.js and its audio output component audio-feeder in npm, and this static assets problem is definitely biting me.

audio-feeder (and thus ogv.js) needs to ship a Flash .swf file for audio support on IE 11, which must be served from the actual web server -- you can't cheat with a data: or blob: URI. There are also several emscripten-compiled .js files meant to be loaded from web workers, for which blobs might work though I'm less sure about IE support.

As long as I declare 'file-loader' in my dependencies and specify my loader options inline like require("file?name=audio-feeder/[name].[ext]!./dist/dynamicaudio.swf") or require("file?name=ogv.js/[name].[ext]!./dist/ogv-worker-video.js") everything seems to work ok in a test project that does a require('ogv').

However having to repeat the full loader setup on every invocation gets a little verbose if I want my files in a particular directory structure like that... currently I need to have consistent filenames within a base directory, though I can probably rewrite things to pass full hashable paths down to the Flash loader and the worker threads.

Has there been any progress on providing loader configuration defaults per-module in the package.json, or an alternate interface versus overloading require()?

@sompylasar

sompylasar Apr 19, 2016

@brion The core idea of webpack is that the app developer, not module developer, decides how the assets should be packaged and served. In your case, you should make your static resources published into your npm package as raw files, and your module should accept a URL for each of them. The app developer who does the bundling would require these assets and will provide your module with every needed URL. They can be hosted differently, e.g. on a CDN, and you never know what app dev would need. You can make the API of your module so that it demands specifying the URLs.

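A sketch of what such a URL-accepting module API could look like. Everything here is hypothetical: `createAudioFeeder` and its options are illustrative names for this pattern, not the real audio-feeder API.

```javascript
// Hypothetical module API: the consumer supplies every asset URL, so the
// module never decides how its assets are packaged or served.
function createAudioFeeder(options) {
  if (!options || !options.flashUrl) {
    throw new Error("flashUrl is required");
  }
  return {
    flashUrl: options.flashUrl,
    start() {
      // A real implementation would embed the .swf from options.flashUrl.
      return "using " + options.flashUrl;
    }
  };
}

// The app decides how the asset is served; with webpack it might be
//   const swfUrl = require("file?name=[name].[ext]!audio-feeder/dist/dynamicaudio.swf");
// while another app might just hardcode a CDN path:
const feeder = createAudioFeeder({ flashUrl: "/static/dynamicaudio.swf" });
console.log(feeder.start()); // using /static/dynamicaudio.swf
```

Requiring the option (rather than defaulting to a bundled path) is what makes the module bundler-agnostic, at the cost of the per-app wiring brion objects to below.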
@brion

brion Apr 19, 2016

Sure, all the paths are configurable. But it'd be nice if webpack could pack the assets for a web app, both reliably and configurably.

Reliable: default case should Just Work. Assets belonging to dependencies and dependencies of dependencies and dependencies of dependencies of dependencies should be packed/bundled/copied along with the other output.

Configurable: all paths should be customizable in the app's webpack config.

Both: Things like inlining images should be configurable in general, but the module developer still needs to be able to specify "this .swf file cannot ever be inlined as a data URI because Flash won't load it that way" or "this .PNG file needs to be embedded as data, not a URL".

It seems to me that having defaults in package.json that can be overridden in the app config globally and on a per-module basis would go a long way. Webpack is supposed to help the app developer, and I'm all in favor of that!

@brion

brion Apr 19, 2016

Also I'm going to go on record as saying that requiring the app developer to read readmes of every module in the dependency graph to get a list of every static asset they have, and manually require all those resources individually, and pass every URL through into the configuration of every affected module, seems like a very developer-unfriendly configuration system.

Not only does it scale poorly for initial setup, it would also require every package update in the dependency graph to be checked for additional asset files. Maintenance would be ... unpleasant in this model.

This is literally what computers are for: put the list of assets in a manifest file and have the computer manage them but with the human's oversight.


jhnns Apr 19, 2016

Member

Also I'm going to go on record as saying that requiring the app developer to read readmes of every module in the dependency graph to get a list of every static asset they have, and manually require all those resources individually, and pass every URL through into the configuration of every affected module, seems like a very developer-unfriendly configuration system.

Yep, that's true. The situation has not improved – configuring these things at the app level rather than the library level is still the best approach.

@brion I think the best current solution is to separate actual source files that should be added to the webpack bundle from static assets that should just be emitted to the output folder. In your case, I'd place the dynamicaudio.swf and the ogv-worker-video.js into a dedicated assets folder and then configure the file-loader to be applied to all of these assets by default.
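A minimal sketch of that setup, assuming webpack 1.x syntax and an assets/ folder convention (all paths and the test regex are illustrative, not prescriptive):

```javascript
// webpack.config.js (app level) -- illustrative sketch only
module.exports = {
    entry: "./index.js",
    output: {
        path: __dirname + "/dist",
        filename: "bundle.js"
    },
    module: {
        loaders: [
            {
                // Emit anything under an assets/ folder as a separate
                // file and make require() return its public URL.
                test: /assets[\/\\].+\.(swf|js|mem)$/,
                loader: "file-loader?name=assets/[name].[ext]"
            }
        ]
    }
};
```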


brion Apr 19, 2016

@jhnns yeah, the trick with the worker .js assets is that default require('foo.js') behavior is to execute, not to return a URL. So if I just write this:

var worker = new Worker(require('./dist/ogv-worker-video.js'));

it won't actually work unless the webpack configuration includes a suitable binding, maybe:

{ test: /ogv-.*\.js$/, loader: "file?name=/some-dir/ogv.js/[name].[ext]" },

I'm also a bit unclear from reading https://webpack.github.io/docs/using-loaders.html on whether I can specify an inline loader type like these to guarantee I get a URL to an actual file from require():

flashObject.src = require('file!dynamicaudio.swf');
var worker = new Worker(require('file!ogv-worker-video.js'));

while still allowing the loader options to be overridden in the app's config. Do inline loader options override the regex-configured ones, or the other way around?
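For reference, webpack 1's documented prefix conventions: an inline loader is applied in addition to any config-matched loaders, and a leading ! opts out of the configured ones for that single request. A sketch (loader names illustrative):

```javascript
// Inline loader runs in addition to any config-matched loaders:
flashObject.src = require('file!./dynamicaudio.swf');

// A leading "!" disables the configured (normal) loaders for this
// one request, so only the inline chain applies:
var worker = new Worker(require('!file!./ogv-worker-video.js'));

// "!!" disables all other loaders (preLoaders, loaders, postLoaders):
var raw = require('!!raw!./ogv-worker-video.js');
```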

I think I'm going to do an initial package release without the external webpack exposure -- just manual documentation of the dist-ready assets and how to set the asset paths at runtime -- and keep refactoring and testing until I'm happier with the resulting output.

(I am now using webpack to build my pre-packed distribution bundle for use outside the npm universe -- it's pretty awesome at that level! Still getting the hang of it though, and really looking forward to more module composability. :)


sompylasar Apr 21, 2016

whether I can specify an inline loader type

This is technically possible but highly discouraged in the webpack world.


brion Apr 21, 2016

@sompylasar webworkify-webpack looks like it does the blob-loader trick I would need to support cross-domain loading of the modules, but it doesn't seem like it would handle my use case correctly, in that it appears to pack the worker modules into the main source and provides no asynchronous load mechanism.

(Also, I need to confirm that IE 10/11 can load workers from blob URLs, which I have not tested.)

I'm supporting multiple codec modules in a media player widget, and need to avoid loading codecs that are not needed for whatever files get played (or, if playback is never started, at all). I also want to avoid parsing the JavaScript for those modules -- large, dense code bases cross-compiled from C libraries -- on the main thread, which would happen with webworkify-webpack, as it seems to fetch live code modules and turn their source back into strings. So even if I loaded them asynchronously, it would need me to load them into the main thread's __webpack_require__ world as live modules first, wasting CPU time and memory.


brion Apr 21, 2016

Yeah, looks like async loading with code splitting per http://webpack.github.io/docs/code-splitting.html would work for the on-demand loading, but would still parse on main thread and then again on the worker. I can play games with turning source files into modules that eval strings to reduce that perhaps, but that feels a bit ugly when I have to pack into a string anyway for the worker blob load trick.

Back to the original topic...

I think it would be very useful to expose a list of static asset files in the package.json, which would be machine-readable versus just a mention in the readme. At the least, as an app packager I would love to have an automated report generated every time I run webpack.

Better still would be a common runtime registry interface allowing modules packed using webpack to ask for the app-packager-configured URLs.

This sounds a lot like require('foo.png') but with the specificity that it always returns a URL, rather than only returning a URL when file-loader or url-loader is configured and trying to import JavaScript code otherwise.

The app packager could then set all the loader options they want by making a giant mapping of registered asset keys to require() invocations, with inline loaders or with configuration in the webpack config object.

Going one step further, an initial such mapping can be auto generated by a build tool, with overrides specified by the app packager.

This would, I think, pretty much do what I want: it would allow for both automatic and manual configuration of the asset loader details, as long as we're dealing specifically with URLs to static assets. And it remains much less ambitious than the concord ideas linked from the discussion last year -- packages are responsible for prebuilding their static assets if they want to make JS from CoffeeScript or CSS from Sass.

I think this could be done pretty straightforwardly as a plugin to webpack, in a way that could still allow for full manual configuration of everything if you didn't want to use the build tool plugin.

Eg something like this in usage:

var StaticAssetLoader = require('static-asset-loader');
var staticLoader = new StaticAssetLoader(
    'audio-feeder', // package name
    'assets'        // base dir for assets in package
);

function AudioFeeder(options) {
    if (options.base) {
        staticLoader.base = options.base;
    }
    flashObject.src = staticLoader.url('dynamicaudio.swf');
    // ...
}

with added to the package.json:

"assets": ["dynamicaudio.swf"],

and in the app's calling code, choose either:

// paths configured by build tool
var AudioFeeder = require('audio-feeder');
var audioFeeder = new AudioFeeder();

or

// don't forget to copy the assets
var audioFeeder = new AudioFeeder({
    base: 'manually/specified/path/to/copied/assets'
});

The build plugin would scan through the package manifests and build a default map, like:

require('static-asset-loader').override({
    'audio-feeder': {
        'dynamicaudio.swf': require('audio-feeder/assets/dynamicaudio.swf')
    }
});

which would get processed for loaders and included in with the other modules, with a call forced to make sure it runs before app code tries to use the assets. The tool could even suggest mappings for your config file as defaults.

Suitable static analysis could probably avoid the need for adding a list of files to the package.json.
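The runtime half of this idea could be sketched roughly like so (all names are hypothetical, and the build-tool rewriting described above is omitted):

```javascript
// Hypothetical 'static-asset-loader' runtime registry (sketch only).
var overrides = {}; // packageName -> { assetFileName: url }

function StaticAssetLoader(packageName, baseDir) {
    this.packageName = packageName;
    this.base = baseDir; // fallback base path when no override exists
}

// Return the app-configured URL for an asset, or a default path
// relative to the package's asset base directory.
StaticAssetLoader.prototype.url = function (assetName) {
    var map = overrides[this.packageName];
    if (map && map[assetName]) {
        return map[assetName];
    }
    return this.base + '/' + assetName;
};

// Called by the build tool (or manually by the app) to map assets to URLs.
StaticAssetLoader.override = function (map) {
    Object.keys(map).forEach(function (pkg) {
        overrides[pkg] = map[pkg];
    });
};
```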

Thoughts?


sompylasar Apr 21, 2016

@brion

Better still would be a common runtime registry interface allowing modules packed using webpack to ask for the app-packager-configured URLs.

This sounds a lot like require('foo.png') but with the specificity that it always returns a URL, rather than only returning a URL when file-loader or url-loader is configured and trying to import JavaScript code otherwise.

The thing is, you don't always have a URL for an arbitrary module if you don't use file-loader or url-loader. The module can be embedded into a chunk of JS next to other modules.

packages are responsible for prebuilding their static assets

This part looks reasonable, and is almost what I suggested: a package exports its static assets for the app to use (it may bundle them or just manually expose them at a URL). This is how ES6 npm modules are made compatible with the surrounding ecosystem: they are transpiled into ES5 CommonJS modules, which Node.js can consume natively.

with added to the package.json:

This looks exactly what I suggested, but with added automation.

Eg something like this in usage:

This looks even better. But I'd make something like:

// In a module that provides static assets.
var url = require('static-asset-provider')('./assets/dynamicaudio.swf');  // Can be found and resolved statically by webpack
if (!url) {
    throw new Error('Cannot get URL of `dynamicaudio.swf`. Check your bundler configuration.');
}
// In an app that uses webpack to bundle.
plugins: [
    // Captures all `require('static-asset-provider')`,
    // and replaces it with e.g `require('/full/path/to/assets/dynamicaudio.swf')`
    new StaticAssetProviderPlugin()
]

No need to scan through some manifests, every dependency is expressed with the code itself (that's one of the core ideas and benefits of webpack which I extremely like).


brion Apr 22, 2016

@sompylasar

Definitely agree that static analysis beats explicit manifests in this case; I like your plugin config example. :)

I guess what I want is the power of static analysis in the packing system that webpack already provides with require() and readFileSync(), plus more flexible machine-readable annotations about the return type and the sync-ness of the load that will allow working default behavior that is also reconfigurable.

The thing is, you don't always have a URL for an arbitrary module if you don't use file-loader or url-loader. The module can be embedded into a chunk of JS next to other modules.

In many cases it doesn't matter to the library whether you use a local URL, a remote URL, pre-pack the assets into data: URLs, or even runtime-convert a string literal into a blob: URL. What you care about as a library author is that you need to feed a URL into some HTML, CSS, or JS API.

At other times, that packing simply will not work, such as the Flash .swf file example -- you specifically need a URL, but you also specifically need an externally-referenced file. For this reason I would consider them separate data types ('static URL' versus just 'URL').

Similarly, strings and array buffers can be either embedded as modules returning literals in the packed JS, or they can be loaded from static files at runtime via XHR. Supporting both effectively requires using an async interface for the asset loading -- something not needed for URL references because the loading is done in someone else's code in the browser engine.

To extend your version of the example a bit with types and sync-ness:

var assets = require('static-asset-provider');


// These always return URLs, but we don't care if you pre-packed the file data or not.

img.src = assets.url('./assets/loading-spinner.svg');

worker = new Worker(assets.url('./assets/prepacked-worker.js'));


// When you need a URL that *must* be served as a separate asset to work,
// but you still don't care what it's named or what directory it's in etc.

flashObject.src = assets.staticUrl('./assets/dynamicaudio.swf');

We could extend this further to allow string & ArrayBuffer imports to be either separate files loaded via XHR or strings/arrays packed into the source, at the app packager's desire:

// Async loader interface allows the files to be either loaded via XHR or packed in JS.

assets.string('./assets/example.js').then(function(str) {
  textarea.text = str;
});

assets.buffer('./assets/codec-setup.mem').then(function(buf) {
   audioCodecInitializerData = buf;
});


// When you need a string or buffer and you need it NOW
// sync XHR may be unsafe or difficult to use, as browser makers hate it.
// Possibly redundant to readFileSync()

textarea.text = assets.stringSync('./assets/example.js');

audioCodecInitializerData = assets.bufferSync('./assets/codec-setup.mem');

I think these can all fit with the static analysis plugin model you suggest in a way that leaves the packaging location/method up to the app packager, while the library author remains in control of what datatypes they'll get back.

So if the app packager tried to configure the .swf file in a way that couldn't be loaded as a real remote URL, the asset loader plugin would yell and scream at build time that this would not work because it's the wrong data type.

Similarly, the default behavior of static-asset-provider when not rewritten could be configurable with a runtime interface to supply URLs or pre-packed string/buffer data. That helps avoid the 'webpack vendor lock-in' problem described in a few places earlier on this thread -- a library that needs static assets using this system would benefit from webpack + a plugin, but would not require it.


brion Apr 22, 2016

I'll try putting together a plugin based on this model and see how it works out...


jhnns Apr 22, 2016

Member

Haha, you've summoned a monster by reviving this thread @brion 😉

Your quote summarizes the essence of the previous discussion:

I think these can all fit with the static analysis plugin model you suggest in a way that leaves the packaging location/method up to to the app packager, while the library author remains in control of what datatypes they'll get back.

Writing require("./some/img.png") is like requesting a hook from the build system to do something. The build system cannot know what the library author is expecting from this require(): It could return a url, a data url or just the size of the image. So this should be up to the library author. The library author, on the other hand, should not have any assumptions about how these things end up into the final bundle. It should "just work"™.

The proposed concord spec tries to solve this with a declarative solution which maps paths to type definitions and then type definitions to build system transformations.

Example: require("./some/img.png") is referenced by using *.png, then mapped to url/image which is then mapped to url-loader!./some/img.png in a webpack environment.

The concept sounds really nice, but it requires a lot of community effort. And I'm still unsure if a declarative approach is flexible enough. Perhaps it is better to move more responsibility to the library author (like you should compile your ES2015 stuff before pushing it to npm).


@substack 's approach is different and very node-focused: You would not require("./some/img.png"), but write something like this:

var sizeOf = require("image-size");
var dimensions = sizeOf("images/funny-cats.png");

Then, your build system would need a transform/loader which "anticipates" the API calls by reading the AST. Thus, you can do things at build time that you cannot do at runtime (like reading files from disk). This approach is very isomorphic-friendly because you don't have to do anything in node.js to make this code work.

On the other hand, it's not very scalable, because you'd need an image-sizeify for browserify, an image-size-plugin for webpack, etc. It basically pushes responsibility onto the eco-system.


@brion @sompylasar your approach is similar to @substack 's one, but it sounds really interesting. I'm curious about your prototype...


jhnns Apr 22, 2016

Member

One thing I wanted to add to substack's approach:

It looks really nice because you don't have to compile your stuff before feeding it to node. But there are also disadvantages: depending on the use case, it could really slow down your application startup, because every require() is synchronous. What if you required several LESS files? In node, compilation could only be done sequentially. This is basically why enhanced-require is deprecated.

Although it looks like an overhead, in practice it's better to streamline the client and the server environment by always having a build step before execution.


@brion


brion Apr 22, 2016

@jhnns hehe, monster indeed! :)

I want to avoid overriding require() in part because files of the same file type may need to be referenced and used in different ways... But I also don't want to go too crazy on transformations. For instance an image-size build time plugin would be awesome! But... As a module author I could use a webpack loader, or some similar transform system, in my pre-publish build stages to do that.

I'd like to concentrate on an interface for that last step, where my module code gets packed into someone else's project either manually or via webpack or browserify. At that point I already know the image's size from my build, so I just need either its contents or a URL reference to it.

So, hoping to keep it unambitious, which may make it easier to get working...


@jhnns


jhnns Apr 22, 2016

Member

So, hoping to keep it unambitious, which may make it easier to get working...

That's a valid point. In this case, it's certainly better to solve the asset-referencing issue than to try to catch all potential edge cases. That's what I was trying to say about pushing responsibility to the library authors: they should do as much as possible without making any assumptions about the build system.


@sompylasar


sompylasar Apr 22, 2016

@brion

In many cases it doesn't matter to the library whether you use a local URL, remote URL, or pre-pack them into data: URLs, or even runtime-convert a string literal into a blob: URL. What you care about as a library author is you need to feed a URL into some HTML or CSS or JS API.

Yes, it doesn't matter which URL you get, but sometimes you really don't have a URL at all: neither local, nor remote, nor data:, nor blob:. What you have is, e.g., the base64 of the file contents. Yes, you can somehow transform it into a data: or blob: URI.

To extend your version of the example a bit with types and sync-ness:

Yes, I like your approach. It will be harder to statically extract, but nonetheless it's possible.

We could extend this further to allow string & ArrayBuffer imports to be either separate files loaded via XHR or strings/arrays packed into the source, at the app packager's desire:

Yes, great!

Similarly, the default behavior of static-asset-provider when not rewritten could be configurable with a runtime interface to supply URLs or pre-packed string/buffer data. That helps avoid the 'webpack vendor lock-in' problem described in a few places earlier on this thread -- a library that needs static assets using this system would benefit from webpack + a plugin, but would not require it.

Exactly! One thing is that if 'static-asset-provider' is not rewritten by something like webpack or require-hacker, there cannot be any URL, because this library knows nothing about URLs, and how the assets are served, and where this server will be located (the 'L' for locator in 'URL').
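A hypothetical sketch of that runtime fallback (the names configure/getAssetUrl and the CDN URL are illustrative, not an existing API): if no bundler rewrites the provider module, the app supplies a resolver at runtime, so the library itself never has to know where assets are served from.

```javascript
// Module-level state: the app (not the library) decides how names map to URLs.
let resolver = null;

function configure(fn) {
    resolver = fn; // e.g. (name) => "/assets/" + name
}

function getAssetUrl(name) {
    if (!resolver) {
        // Without a bundler rewrite and without a configured resolver,
        // there simply is no URL to hand out.
        throw new Error("no bundler rewrite and no runtime resolver configured");
    }
    return resolver(name);
}

// App-level setup when running without webpack:
configure((name) => "https://cdn.example.com/assets/" + name);
const url = getAssetUrl("funny-cats.png");
// url === "https://cdn.example.com/assets/funny-cats.png"
```

A bundler plugin could statically rewrite getAssetUrl("funny-cats.png") calls instead, making the runtime path purely a fallback.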

I'll try putting together a plugin based on this model and see how it works out...

Wow, that would be very cool! Please link from here as soon as something is pushed.


@wmhilton wmhilton referenced this issue in npm/marky-markdown Jul 14, 2016

Merged

Browserify #211

@TheLarkInn TheLarkInn closed this Feb 18, 2017

@ptbrowne ptbrowne referenced this issue in cozy/cozy-ui Jun 12, 2017

Closed

Need webpack to import cozy-ui ? #139

@jhnns jhnns referenced this issue in JSKongress/JS-Kongress-Munich-In-Deep-Track Nov 2, 2017

Closed

[webpack team proposal] Is it a good idea to overload require()/import? #12
