
Bubble-up mechanism for exceptions #3

Closed
ghislainfourny opened this issue Jan 11, 2018 · 0 comments

In the future (beta, RC, release), the clean approach will be to catch exceptions inside the closure: if an exception or error occurs, the Spark task returns a special internal sequence of items, or tuple stream, that encapsulates the exception or error. The caller side can then test for this special value and, if it carries an exception, unwrap it and print it nicely, as if the query had been executed locally (which is how the engine should ideally feel to the user).
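A minimal sketch of the encapsulation idea in Java. The names here (`Item`, `ExceptionItem`, `SafeClosure`) are illustrative assumptions, not the engine's actual API:

```java
import java.util.function.Function;

// Hypothetical item model (illustrative only, not the real class hierarchy).
interface Item {}

final class StringItem implements Item {
    final String value;
    StringItem(String value) { this.value = value; }
}

// Special internal item that carries an exception back to the caller side.
final class ExceptionItem implements Item {
    final Exception wrapped;
    ExceptionItem(Exception wrapped) { this.wrapped = wrapped; }
}

final class SafeClosure {
    // Wrap a per-item computation so that any non-fatal exception is
    // encapsulated as an ExceptionItem instead of being thrown inside
    // the Spark task. Fatal Errors (not Exceptions) still propagate.
    static Function<Item, Item> wrap(Function<Item, Item> f) {
        return item -> {
            try {
                return f.apply(item);
            } catch (Exception e) {
                return new ExceptionItem(e);
            }
        };
    }
}
```

The caller can then check whether a returned item is an `ExceptionItem` and, if so, unwrap and report it as a local failure.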

The detection of these special values can be done lazily, so sequences do not need to be consumed eagerly. As soon as, while materializing a sequence of items, an encapsulated exception is noticed at any depth and stage, it can be bubbled up all the way, even across multiple levels of Spark jobs, to the main part of the program (via this special-item mechanism), where the exception is finally printed.
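The lazy bubble-up step can be sketched as follows; again, the types (`Item`, `ExceptionItem`, `Materializer`) are hypothetical stand-ins for whatever the engine actually uses:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Hypothetical minimal item model (illustrative only).
interface Item {}

final class StringItem implements Item {
    final String value;
    StringItem(String value) { this.value = value; }
}

final class ExceptionItem implements Item {
    final Exception wrapped;
    ExceptionItem(Exception wrapped) { this.wrapped = wrapped; }
}

final class Materializer {
    // While lazily consuming a sequence, detect any encapsulated
    // exception and rethrow it on the caller side, as if the failure
    // had happened locally. Items before the failure are kept in `out`.
    static void materialize(Iterator<Item> items, List<Item> out) throws Exception {
        while (items.hasNext()) {
            Item item = items.next();
            if (item instanceof ExceptionItem) {
                throw ((ExceptionItem) item).wrapped; // bubble up
            }
            out.add(item);
        }
    }
}
```

Because the check happens per item during materialization, nothing forces eager consumption: a sequence with no `ExceptionItem` streams through unchanged, and the first encapsulated exception encountered stops the pipeline at that point.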

Of course, fatal errors will still be fatal errors. But everything non-fatal we should be able to catch, encapsulate, and print nicely.
