EBADF with streams and pipeline #267

Closed

rijnhard opened this issue Jan 14, 2021 · 7 comments

Comments

@rijnhard

See #260.

To address the comments: no, you cannot catch the error using .catch or try/catch; that was the point of this answer: #260 (comment).

Basically (I realise there is a language barrier), you cannot catch the error; at least, nothing I tried could catch it.

@rijnhard
Author

I also don't see how detachDescriptor will solve this problem, @benjamingr.

The problem is that I get an UNCATCHABLE EBADF error when I close a descriptor that has already been closed.

Especially since it's not possible to tell whether the descriptor is already closed (at least, I could not find a way).

@silkentrance
Collaborator

detachDescriptor will cause tmp to not try to close the already closed file.
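
For anyone landing here, a minimal sketch of that suggestion, assuming the plain node-tmp sync API (fileSync() returning { name, fd, removeCallback }) and its detachDescriptor option; the data written is just an example:

```js
const fs = require('fs');
const tmp = require('tmp');

// With detachDescriptor the caller takes ownership of the descriptor, so tmp's
// cleanup will not attempt to close it again.
const tmpFile = tmp.fileSync({ detachDescriptor: true });

// The write stream now owns the fd and closes it when it finishes.
const out = fs.createWriteStream(tmpFile.name, { fd: tmpFile.fd });

out.end('example data', () => {
  // The stream has already closed the fd; removeCallback only needs to unlink.
  tmpFile.removeCallback();
});
```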

@rijnhard
Author

But it will also mean that I have to close it manually, correct?

That may be a valid workaround for my case, but the issue here is that the error thrown is not catchable at all, which should be rectified.

@benjamingr

> The problem is that I get an UNCATCHABLE EBADF error when I close a descriptor that has already been closed.

Can you create a (short!) repro of the issue?

@benjamingr

There is no such thing as an "uncatchable EBADF" in Node.js as far as I know - the only way that could happen is if someone (either node-tmp or your code) is not propagating the error.
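
As an illustration of that point (not code from this issue): fs.close() delivers EBADF to its callback like any other fs error, so it only looks uncatchable when that callback, or a stream's 'error' event, is ignored. The file path below is a throwaway example:

```js
const fs = require('fs');
const os = require('os');
const path = require('path');

const file = path.join(os.tmpdir(), 'ebadf-demo.txt'); // throwaway example file
const fd = fs.openSync(file, 'w');
fs.closeSync(fd);

// Closing the same descriptor a second time produces EBADF, but it arrives in
// the callback and can be handled (or propagated) like any other fs error.
fs.close(fd, (err) => {
  if (err) {
    console.error(err.code); // 'EBADF'
  }
  fs.unlinkSync(file); // tidy up the demo file
});
```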

@rijnhard
Author

rijnhard commented Jan 20, 2021 via email

@rijnhard
Author

So firstly, I apologise... it turns out the problem was caused by the magic handling of file descriptors with streams.

// Assumed context: fs, a promise-friendly tmp wrapper whose file handle exposes
// { path, fd, cleanup }, a readable `source` created elsewhere, and
// ppipeline = util.promisify(stream.pipeline).
const destination = await tmp.fileSync();
const stream = fs.createWriteStream(destination.path, { fd: destination.fd });

const abort = () => {
    destination.cleanup(); // closes the fd and removes the temp file
};

// later on, in another function
await ppipeline( // just a promisified pipeline function
    source,
    async function* (source) {
        yield source.read(); // just a random transform function that can break at any time.
    },
    stream // the write stream created above, backed by destination.fd
);

So let me just put this down in case anyone else has an issue with node-tmp and streams:

  1. File stream behaviour: if you use the stream.pipeline function (here promisified) with a file stream, it automatically closes that stream, and with it the underlying file descriptor, on completion or failure.
  2. ppipeline does surface the error, but because of how Node reports it there is no useful stack trace.

So when abort() later calls cleanup(), it tries to close a descriptor that the pipeline has already closed, and that second close is where the EBADF comes from.

I only figured this out while trying to reproduce the issue, because the abort function can be triggered at any point by a number of other functions in my code, and the pipeline is actually started somewhere else entirely.
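
To make the takeaway concrete, here is a minimal sketch (not code from this issue) of letting the pipeline own the descriptor; it assumes plain node-tmp with detachDescriptor and the built-in stream/promises pipeline (Node 15+; util.promisify(stream.pipeline) behaves similarly on older versions):

```js
const fs = require('fs');
const tmp = require('tmp');
const { pipeline } = require('stream/promises');

async function copyToTmp(source) {
  // detachDescriptor: tmp will not close the fd itself; the write stream will.
  const dest = tmp.fileSync({ detachDescriptor: true });
  const out = fs.createWriteStream(dest.name, { fd: dest.fd });

  try {
    // pipeline closes `out` (and its fd) on success and on failure,
    // which is the behaviour described in point 1 above.
    await pipeline(source, out);
    return dest.name;
  } catch (err) {
    // The error is catchable here; the fd is already closed, so the cleanup
    // callback only has to unlink the temp file.
    dest.removeCallback();
    throw err;
  }
}
```

Called as `await copyToTmp(someReadableStream)`, this keeps a single owner for the descriptor, which is what removes the double close.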

rijnhard changed the title from "Follow up: better error handling" to "EBADF with streams and pipeline" on Jan 22, 2021.