Kill job when killing mapreduce #35

Open
piccolbo opened this Issue · 1 comment

1 participant

@piccolbo
Owner

For maximum integration it would be nice to have a mapreduce call clean up the related hadoop job on exit. This would happen only if mapreduce terminates abnormally, for instance if it gets interrupted, and it should work exactly the same at the prompt and in a script. It's not clear what the implementation approach could be, other than: a) the kill command is included in the job output, so capturing that (capture.output?) and calling system is one way of doing it; b) the on.exit function is the way to register this cleanup action.
This is still not a plan, because the information on how to kill the job only becomes available after the point where on.exit has to be called.
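A minimal sketch of what a) + b) would look like, under assumptions: the wrapper name, the command string handling and the "Running job: job_..." pattern are all made up, and the comments show exactly where the plan breaks down.

```r
# Hypothetical wrapper around a streaming invocation; not working code.
run.with.cleanup = function(streaming.command) {
  job.id = NULL
  # b) register the cleanup before starting the job
  on.exit(
    if (!is.null(job.id))
      system(paste("hadoop job -kill", job.id)),
    add = TRUE)
  # a) capture the console output to parse the job id out of it.
  # The catch: intern = TRUE only returns the output when the command
  # completes, so on an interrupt job.id is still NULL and nothing is killed.
  job.output = system(streaming.command, intern = TRUE)
  id.lines = grep("Running job: job_", job.output, value = TRUE)
  if (length(id.lines) > 0)
    job.id = sub(".*Running job: (job_\\S+).*", "\\1", id.lines[1])
  on.exit() # normal completion: cancel the cleanup
  job.output
}
```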

@piccolbo
Owner

Went through a number of experiments today, using mostly tryCatch, and I can confirm that the output can be captured only if the job completes. If anyone has ideas I am listening.
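For reference, a sketch of the kind of experiment meant here, with a long-running shell command standing in for the streaming job (the command and the message are made up; behavior assumed on a Unix-alike):

```r
out =
  tryCatch(
    system("sleep 60 && echo 'Running job: job_201301010000_0001'",
           intern = TRUE),
    interrupt = function(cond) {
      # reached on a user interrupt: system() returns nothing we can parse,
      # so the job id (and hence the kill command) is lost
      message("interrupted before any output could be captured")
      character(0)
    })
```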
