
no error reported by Hadoop in case of immediate failure #4

klbostee opened this Issue Feb 21, 2010 · 4 comments

3 participants


As originally reported by Elias Pampalk:

The following script demonstrates a failure to fail when executed on a Hadoop cluster (it fails fine when executed locally):

import dumbo

def mapper(k, v):
    yield 1, 1

if __name__ == "__main__":
    dumbo.run(mapper, dumbo.sumsreducer, combiner=dumbo.sumsreducer)

The script uses dumbo.sumsreducer where dumbo.sumreducer should be used. A TypeError should be thrown by dumbo.sumsreducer (in the combiner), but the Hadoop reports show no error and zero output from the mapper.
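A minimal sketch of why the combiner should blow up: dumbo.sumsreducer sums sequence-valued records element-wise, so feeding it the plain ints emitted by the mapper above should raise a TypeError. The reimplementation below mirrors that element-wise behavior for illustration; the exact internals of dumbo.sumsreducer are an assumption.

```python
def sumsreducer(key, values):
    # Mirrors dumbo.sumsreducer's documented behavior: each value is
    # expected to be a tuple, and the tuples are summed element-wise.
    yield key, tuple(map(sum, zip(*values)))

# Tuple values work as intended:
print(list(sumsreducer(1, [(1, 2), (3, 4)])))  # [(1, (4, 6))]

# Plain ints, as produced by the mapper in this issue, are not iterable,
# so zip(*values) raises a TypeError -- the error Hadoop silently swallows:
try:
    list(sumsreducer(1, [1, 1]))
except TypeError as e:
    print("TypeError:", e)
```

Run locally, the TypeError surfaces immediately; on the cluster it is lost, which is the bug being reported.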

cap commented Jun 21, 2010

I encounter this non-failure whenever the script fails before calling dumbo.run(). This happens most frequently when a top-level import statement fails because of unfulfilled dependencies.


@cap: Think that's actually a slightly different problem. When a dumbo script fails very quickly (i.e. basically immediately, instead of after having run for a while) it often happens that Hadoop Streaming's stderr catching mechanism hasn't been set up properly yet to catch the error. In this case you indeed won't see the error, but unfortunately there's not much that can be done about this in Dumbo itself.


On second thought, this actually is the same problem. It does fail fine, but it happens so quickly that Hadoop Streaming's logging doesn't catch it (presumably because it hasn't been initialized properly yet). I can't immediately think of a clean Dumbo-side fix for this problem, but I'll leave the ticket open for future reference...

dangra commented Jun 24, 2011

I was bitten by this bug. A possible workaround is to delay the propagation of the exception for a short time, so Hadoop has a chance to set up streaming logging before the Python interpreter terminates. At least the problem would occur less often, making dumbo more reliable at a low price.
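The workaround above could be sketched as a small wrapper around the script's entry point. This is a hypothetical helper, not part of dumbo's API: on any exception it prints the traceback, sleeps briefly so Hadoop Streaming's stderr capture has time to come up, and then exits non-zero.

```python
import sys
import time
import traceback

def run_with_delayed_failure(main, delay=10.0):
    """Run main(); on failure, print the traceback to stderr, wait
    `delay` seconds so Hadoop Streaming can finish setting up its
    stderr-capturing machinery, then exit with a non-zero status."""
    try:
        main()
    except Exception:
        traceback.print_exc(file=sys.stderr)
        time.sleep(delay)
        sys.exit(1)
```

A script would then call, e.g., run_with_delayed_failure(lambda: dumbo.run(mapper, reducer)) instead of invoking dumbo.run() directly. This only makes the error more likely to be captured; it does not guarantee it.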
