
More informative load errors #404

Closed
wants to merge 11 commits

Conversation

nyarly

@nyarly commented Jun 23, 2015

Additions:

ENOENT errors are rebuilt as Error()s, so there's a stack trace from within the module loader.

Errors during load report the immediate source of the error and the pre-normalization name that triggered the error.

Meant to address #399 and #397
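
A rough sketch of the intended behavior (the helper name rebuildLoadError and the exact message format below are illustrative, not necessarily the code in this PR):

// Node fs errors (e.g. ENOENT) can bubble up without a useful stack, so wrap
// them in an Error created inside the loader and attach the load context.
function rebuildLoadError(err, address, request, refererName) {
  var msg = 'Error loading ' + address +
    (refererName ? ' as "' + request + '" from ' + refererName : '');
  var newErr = new Error(msg);
  if (err && err.message)
    newErr.message = err.message + '\n\t' + msg;
  return newErr;
}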

nyarly added 3 commits June 22, 2015 17:22
The direct requestor of a module is now reported, along with the name it used to
refer to the problem module (i.e. its pre-path-normalization name), which should
make correcting issues much easier
…load_errors

Conflicts:
	dist/es6-module-loader-dev.src.js
	dist/es6-module-loader.src.js
	src/wrapper-start.js
@guybedford
Member

Thanks for looking into this. In my testing, err.message was not showing up in NodeJS-thrown errors, only err.stack, hence https://github.com/ModuleLoader/es6-module-loader/blob/master/src/wrapper-start.js#L45. Any ideas on the Travis test failure?

@nyarly
Author

nyarly commented Jun 23, 2015

It... looks like on Travis the node-generated ENOENTs are being produced as proper errors, although I also notice that the file names haven't been normalized to file:///.

It's entirely possible I was too conservative in my filtering (https://github.com/ModuleLoader/es6-module-loader/pull/404/files#diff-1037cb44c9524a2a57ee4bd6dd086837R70), and the thing to do is to look for err.message instead? My impression locally has been that Node ENOENTs come back with a stack that is only the message, which is why I was creating a new error when they bubble into the module loader.

The behavior on Travis seems to be different. I've got node 0.10.36 and npm 2.5.0, vs. Travis's 0.10.39 and 1.4.28. If it weren't JavaScript I'd be surprised that those versions would make a difference in the stack of a basic error, but ¯\_(ツ)_/¯

Let me take a whirl at checking for lack-of-message in the FS error instead of presence-of-errno.
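
Roughly what I have in mind (just a sketch of the check; the helper name normalizeFsError and the rebuild shown are illustrative, not the final diff):

// Rebuild a Node fs error as a proper Error when it carries no usable message,
// instead of keying the rebuild off the presence of err.errno.
function normalizeFsError(err) {
  if (err && !err.message)
    return new Error('' + err);
  return err;
}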

the custom loader spec was checking that "e.stack" matched what "e.message"
should be
@nyarly
Author

nyarly commented Jun 23, 2015

That seems to fix it: the custom loader tests hadn't been updated, so they were comparing against e.stack instead of e.message. I don't know why I didn't catch that before pushing; I did do a merge that clobbered changed files in dist/.

if(!load.requests)
  load.requests = [];

load.requests.unshift({as: request, from: refererName});
Member

Can we not avoid adding more metadata to the load record?

Author

I couldn't see how to do it. I looked into the suggestion to trace after a failure, but the trace itself fails, and the after-trace code (which would report the trace) doesn't run in that case. I couldn't figure out how to get a sensible report out of a partial (i.e. failed) trace, even if the after-run code were used after a failure.

Possibly we could add a build option to enable load request tracking? Since the error report checks for the presence of the requests property, it'd fall back to the old behavior in its absence (though that error message might refer to the option).
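
For illustration, the fallback could be as simple as this (the as/from field names follow the snippet under review; the helper name loadErrorMessage is made up):

// Compose the error message, enriching it only when the (hypothetical)
// tracking option has populated load.requests.
function loadErrorMessage(load) {
  var msg = 'Error loading ' + load.name;
  if (load.requests && load.requests.length) {
    var req = load.requests[0];
    msg += ' (requested as "' + req.as + '" from ' + req.from + ')';
  }
  return msg;
}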

Author

Expanding on this: if there were a request-tracking option, I'd expect systemjs-builder to set it by default, but systemjs wouldn't (or at least wouldn't in the browser). It's very useful during development, but I can see the concern about wasted memory/weedy values in production.

@guybedford
Member

Also, I wonder if it is actually useful to throw non-errors in order to hide the stack as well. The fact that the stack originates from es6-module-loader internals is completely unhelpful. Controversial, I know.

@nyarly
Author

nyarly commented Jun 24, 2015

For a library like es6-module-loader, I'd argue strongly that having an error trace back to its actual source (even within a concatenated file) is extremely useful. Especially with dynamic typing, it's almost impossible to properly enforce the library's interface, and violations show up as errors.

Case in point: I was driven to trace this error reporting back from systemjs-builder because the error messages I got were unhelpful. Even a stack trace into es6-module-loader would have saved me a couple of hours.

Client applications like systemjs can catch errors and present them better, e.g. by raising their own errors, throwing strings, or whatever.

@nyarly
Author

nyarly commented Jul 6, 2015

@guybedford just checking in: what direction should I take with this?

@guybedford
Member

@nyarly thanks for your patience - just got back from some time off.

Just thinking about the loading-trace issue: the point is that in general we can't know who loaded module X, because a given module can be loaded by any number of parents. So which parent chain do we report? And how do we make this work with circular references? Coming up with a well-defined solution to this is surely half the battle.

It would help if you could perhaps clarify how your implementation would deal with these cases?

Thanks so much for your efforts here, it's really appreciated.

@nyarly
Author

nyarly commented Jul 14, 2015

Sorry - I had a comment here and it got lost somehow.

Reviewing and reflecting on this: the answer is "the first one". These errors aren't holistic (although that might be a further goal: to report all load errors); they're immediate. What's reported is the first module that cannot be loaded, the immediate parent that was loading it, and the name that parent was trying to use. The resulting exception crashes the whole load process, which is what happens as-is.

@guybedford
Member

Right, so I'm more than happy with that, and I think we might already have the metadata available (indirectly) to construct such a graph at debugging time. I don't want to add new graph metadata, as mentioned above, so if possible it would be great to use the existing private API to work this out. Just let me know if you need any further help with this; the effort is very much appreciated.

@nyarly
Author

nyarly commented Jul 15, 2015

I'm more than happy to take that on, but I haven't been able to figure out how to do it with any private API. As mentioned above, if the load fails, running a trace fails as well, and you don't get any new information. Is there another interface I'm not seeing that might provide this information?

@guybedford
Member

It should be possible to iterate over the loads in the linkSet, looking for the first load that is the direct parent of the load that has errored. This would be done by checking whether parentLoad.dependencies contains a dependency entry with a value equal to load.name. Let me know if that sounds workable here?
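
Something along these lines, perhaps (a sketch that assumes each load's dependencies are {key, value} pairs with value holding the normalized name; the helper name findParentLoad is made up):

// Find the first load in the linkSet that directly depends on the failed load,
// returning the parent's name and the pre-normalization name it requested.
function findParentLoad(linkSet, load) {
  for (var i = 0; i < linkSet.loads.length; i++) {
    var parentLoad = linkSet.loads[i];
    for (var j = 0; j < parentLoad.dependencies.length; j++) {
      var dep = parentLoad.dependencies[j];
      if (dep.value === load.name)
        return { from: parentLoad.name, as: dep.key };
    }
  }
  return null;
}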

@nyarly
Author

nyarly commented Jul 17, 2015

I'll certainly look into it. I meant to have an answer before responding, but there have been distractions.

@nyarly
Author

nyarly commented Jul 18, 2015

That worked great. I hadn't been able to pick apart the linkSet and load objects until now. The new code doesn't build the request objects until an error is reported.

I think the number of requests for a failed load should always be at most 1, since as soon as a load fails, the load process is interrupted to report the failure. All of the requests should be found by the current code, so if there's a concrete failure case with more than one request, extending it should be easy. I just don't know what the error should look like in that case.

@guybedford
Member

Excellent, this is looking good to go soon! Just let me know about the error number detection?

Do you think you could remove the build files from the PR? Just trying to avoid commit churn.

@nyarly
Author

nyarly commented Jul 22, 2015

Okay: pulled the errno check and tests pass without it, so 🤷. Reverted the dist/ files to master.

@@ -516,13 +516,32 @@ function logloads(loads) {

// 15.2.5.2.4
function linkSetFailed(linkSet, load, exc) {
function requestsForLoad(){
Member

Space after ()

  }
  else {
-   newErr = err + '\n\t' + msg;
+   newErr = msg + '\n\t' + err;
Member

I think this order still needs to be reversed again to match the same semantics?

@guybedford
Member

Thanks, I'm still seeing some diffs for the dist files though? Perhaps it could be worth squashing at this point.

else {
// node errors only look correct with the stack modified
newErr.message = err.message;
newErr.stack = err.stack + '\n\t' + msg;
Member

I think this might have been here for a good reason.

Currently if I throw errors in Node from this branch, they look like:

[Error: ENOENT, open '/Users/guybedford/Projects/es6-module-loader/asdf.js'
    Error loading file:///Users/guybedford/Projects/es6-module-loader/asdf.js]

Which looks like an array, not a stack.

Any ideas?

@guybedford
Member

Thanks again. Sorry, this isn't just any code, so we do need to be very careful about the details.

@nyarly
Author

nyarly commented Jul 24, 2015

I'm working through these changes. I understand the "not just any code" point, which is why I was surprised when I started that there's no linting step (which would save some of this runaround) and that brace-less ifs are tolerated.

I see now that there actually is jshint in the project. Maybe JSCS too?

@nyarly
Author

nyarly commented Jul 24, 2015

I think I figured out the dist file issue (my own idiosyncratic git configs... which I think I've corrected for this case.) Do you still want this squashed to master?

@guybedford
Member

Thanks, yes please do squash the commits. Did you try out the error reporting in NodeJS? Would be interested to hear your opinions on that.

@nyarly
Author

nyarly commented Jul 27, 2015

I did the squash, and wound up rolling back some of the error inspection stuff while I was there. The result is a bunch of failures in test:node; I'm digging in now to see what the details are.

For reference, which versions of node are you testing against?

@guybedford
Member

Sorry, I'm talking rubbish: it was actually just an artifact of the way I was testing it, namely that:

> Promise.reject(new Error('asdf')).catch(console.error.bind(console))

Gives an array-looking wrapper around the error. Wrapping a throw in a setTimeout makes the error look fine again.
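
For reference, the two behaviours side by side (assuming a Node build with a native or polyfilled Promise):

// Logging the rejection directly prints the inspected [Error: ...] form,
// which reads like an array wrapper rather than a stack trace.
Promise.reject(new Error('asdf')).catch(console.error.bind(console));

// Re-throwing asynchronously lets Node report the error with its normal stack.
Promise.reject(new Error('asdf')).catch(function(err) {
  setTimeout(function() { throw err; }, 0);
});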

In short, ignore my comments re Node errors. I can confirm 100% that the tests are all passing for me.

I think that just leaves squashing for merge then!

@guybedford
Member

Hoping to do a release here soon. Let me know if you're able to clear this up, otherwise happy to jump in and merge this through manually as well.

@nyarly
Author

nyarly commented Jul 28, 2015

So, I've re-merged master into this branch, but I can't get a rebase to work without wrecking the code. Part of that is the dist files in the tree, and part is my own inexperience with rebasing. Rather than hold this up, can I turn the rebase over to you?

guybedford added a commit that referenced this pull request Jul 29, 2015
@guybedford
Member

Sure, no problem, thanks for your work here - I've manually merged this in 3201bb4.
