When I/O errors occur, the dataset writer handles them when the next batch is written (or at teardown time if it was the last batch).
If the dataset writer has applied backpressure, this doesn't work because there is no "next batch".
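The failure mode can be sketched with a toy writer in Python. This is a hypothetical illustration, not the actual Arrow implementation: the class, method names, and bounded-queue backpressure model are all assumptions made for the sketch.

```python
import queue


class DeferredErrorWriter:
    """Toy writer (hypothetical, not Arrow's) that records async I/O
    errors and only surfaces them when the *next* batch is written."""

    def __init__(self, max_queued=1):
        self._pending_error = None
        # Bounded queue stands in for the writer's backpressure limit.
        self._queue = queue.Queue(maxsize=max_queued)

    def notify_io_error(self, exc):
        # An asynchronous I/O failure is recorded, not raised immediately.
        self._pending_error = exc

    def write_batch(self, batch):
        # Errors surface only when another batch arrives.
        if self._pending_error is not None:
            raise self._pending_error
        try:
            self._queue.put_nowait(batch)
            return True   # batch accepted
        except queue.Full:
            return False  # backpressure: caller should pause


writer = DeferredErrorWriter(max_queued=1)
writer.write_batch("b0")                   # accepted
assert not writer.write_batch("b1")        # queue full -> backpressure
writer.notify_io_error(IOError("disk full"))
# The producer is now paused by backpressure, so no "next batch" ever
# arrives and the recorded error sits unreported until teardown.
```

In this model the error would be raised promptly if the producer kept calling `write_batch`, but under backpressure the producer is waiting on the writer, so the stored error is never observed.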
This was encountered in the Python backpressure test, which is a bit artificial. To recreate it, change the `GatingFs::open_output_stream` method to return `None` after it is unlocked.
This is fairly minor because it rarely happens in practice. It takes many concurrent writes to get into backpressure, and unless all of them fail, some will succeed and eventually bring the writer far enough out of backpressure to discover the failure.
Still, it would be nice to fix this to clean things up.
Reporter: Weston Pace / @westonpace
Note: This issue was originally created as ARROW-16259. Please see the migration documentation for further details.