In the future (beta, RC, release), the clean approach will be to catch exceptions in the closure and, if there is an exception or error, have the Spark task return a special internal sequence of items or tuple stream that carries (encapsulates) the exception or error. On the caller side you can then test for that special value and, if it is an exception, "unwrap" it and print it nicely, as if the code had been executed locally (which is how the engine should ideally feel to the user).
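A minimal, hypothetical sketch of this pattern in plain Python (no real Spark API is used; names like `ErrItem`, `safe_closure`, and `unwrap` are illustrative, not part of any existing library):

```python
class ErrItem:
    """Sentinel value that encapsulates an exception raised inside a task closure."""
    def __init__(self, exc):
        self.exc = exc

def safe_closure(fn):
    """Wrap a task closure so non-fatal failures become ErrItem values
    carried in the result stream instead of crashing the task."""
    def wrapped(x):
        try:
            return fn(x)
        except Exception as exc:  # non-fatal errors only; fatal errors still die
            return ErrItem(exc)
    return wrapped

def unwrap(results):
    """Caller side: yield normal items, but re-raise the first
    encapsulated exception as if it had happened locally."""
    for item in results:
        if isinstance(item, ErrItem):
            raise item.exc
        yield item

# Simulated "task": map a failing closure over a partition of data.
results = map(safe_closure(lambda x: 10 // x), [5, 2, 0, 1])
try:
    print(list(unwrap(results)))
except ZeroDivisionError as e:
    print("caught on caller side:", e)
```

Here the division by zero inside the closure never kills the simulated task; it travels back as a value and only resurfaces, as an ordinary exception, when the caller consumes the results.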
These special values can be detected lazily, meaning you don't need to consume sequences eagerly. As soon as, while materializing a sequence of items, you notice an encapsulated exception at any depth and stage, you can bubble it up, even across multiple levels of Spark jobs, all the way to the main part of the program (via this "special item" mechanism), where the exception is finally printed.
Of course, fatal errors will still be fatal. But everything non-fatal we should be able to catch, encapsulate, and print nicely.
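The lazy bubbling across stages can be sketched with generators, which evaluate nothing until consumed. This is a hypothetical illustration only; `ErrItem`, `stage`, and `materialize` are made-up names, not an existing API:

```python
class ErrItem:
    """Sentinel carrying an exception in place of a normal item."""
    def __init__(self, exc):
        self.exc = exc

def stage(fn, items):
    """One lazy stage: apply fn to each item, but pass any
    encapsulated error through untouched so it bubbles up."""
    for item in items:
        if isinstance(item, ErrItem):
            yield item  # propagate the error through this stage unchanged
        else:
            try:
                yield fn(item)
            except Exception as exc:  # catch non-fatal errors only
                yield ErrItem(exc)

def materialize(items):
    """Top level: consume the pipeline and re-raise the first
    encapsulated exception as if it had happened locally."""
    out = []
    for item in items:
        if isinstance(item, ErrItem):
            raise item.exc
        out.append(item)
    return out

# Three chained stages; the failure in the innermost stage is not
# noticed until materialize() actually walks the pipeline.
pipeline = stage(str, stage(lambda x: x * 2, stage(lambda x: 1 / x, [4, 0, 2])))
try:
    materialize(pipeline)
except ZeroDivisionError as e:
    print("surfaced at the top level:", e)
```

Because each `stage` is a generator, nothing runs until `materialize` pulls items; the encapsulated exception rides through the intermediate stages as data and only becomes a real exception at the main level, matching the "bubble it up" behavior described above.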