More helpful warning message than "Closed while Pending/Unready" #2689

Closed
palacsint opened this issue Jul 2, 2018 · 25 comments

@palacsint

I ran into an unhelpful "Closed while Pending/Unready" warning message from Jetty, which took a while to debug and fix (details are here: https://stackoverflow.com/q/51014946/843804). It turned out that (in my case) Jetty closes the output stream, so a message like "Writing after AsyncContext.complete()" (or "Closed while Pending/Unready or writing after AsyncContext.complete()") would be more appropriate here.

Furthermore, I've not found many real-world examples on the web of async servlets that get data from a non-Jetty thread, so an official one would also be great. (See my answer on the linked Stack Overflow question above for our code: https://stackoverflow.com/a/51104868/843804).

@joakime
Contributor

joakime commented Jul 2, 2018

Thanks for filing this.
Do you mind if we copy/paste the details into the issue tracking here?

@palacsint
Author

You're welcome! Feel free to use my Stack Overflow question and answer as you wish.

@joakime
Contributor

joakime commented Jul 2, 2018

From the stackoverflow.com question (copied here with permission from @palacsint):

Question: Understanding Jetty's “Closed while Pending/Unready” warning

We have an asynchronous servlet which produces the following warning log from Jetty:

java.io.IOException: Closed while Pending/Unready

After enabling debug logs I got the following stacktrace:

WARN [jetty-25948] (HttpOutput.java:278)   - java.io.IOException: Closed while Pending/Unready
DEBUG [jetty-25948] (HttpOutput.java:279)   - 
java.io.IOException: Closed while Pending/Unready
        at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:277) ~[jetty-server.jar:9.4.8.v20171121]
        at org.eclipse.jetty.server.Response.closeOutput(Response.java:1044) [jetty-server.jar:9.4.8.v20171121]
        at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:488) [jetty-server.jar:9.4.8.v20171121]
        at org.eclipse.jetty.server.HttpChannel.run(HttpChannel.java:293) [jetty-server.jar:9.4.8.v20171121]
        at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:708) [jetty-util.jar:9.4.8.v20171121]
        at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:626) [jetty-util.jar:9.4.8.v20171121]
        at java.lang.Thread.run(Thread.java:748) [na:1.8.0_151]

It did not help much.

The warning comes only after Jetty calls the onTimeout() method of our AsyncListener. QA could sometimes reproduce it by using kill -9 on the client-side application.

How can I reproduce this warning with sample servlet-client code? I would like to understand this issue in a simpler environment than our production code, so that I can fix the production code afterwards. How should a sample servlet behave? Is it possible to reproduce it with an Apache Commons HttpClient on the client side in the same JUnit test? (That would be great for writing an integration test without complicated network hacking, like kill -9.)

I have tried a few things to implement a sample async servlet and client, without success. I don't think that attaching this code would help much, but I can do that if anyone is interested.

Jetty version: 9.4.8.v20171121

update (2018-06-27):

Following up on @joakime's (Joakim Erdfelt) helpful answer: I have not found any close() call in our code, but I did find a suspicious missing synchronization. Here is the core of our async poll servlet:

public class QueuePollServlet extends HttpServlet {

    public QueuePollServlet() {
    }

    @Override
    protected void doPost(final HttpServletRequest req, final HttpServletResponse resp)
            throws ServletException, IOException {
        resp.setContentType(MediaType.OCTET_STREAM.type());
        resp.setStatus(HttpServletResponse.SC_OK);
        resp.flushBuffer();
        final AsyncContext async = req.startAsync();
        async.setTimeout(30_000);
        final ServletOutputStream output = resp.getOutputStream();
        final QueueWriteListener writeListener = new QueueWriteListener(async, output);
        async.addListener(writeListener);
        output.setWriteListener(writeListener);
    }

    private static class QueueWriteListener implements AsyncListener, WriteListener {

        private final AsyncContext asyncContext;
        private final ServletOutputStream output;

        public QueueWriteListener(final AsyncContext asyncContext, final ServletOutputStream output) {
            this.asyncContext = checkNotNull(asyncContext, "asyncContext cannot be null");
            this.output = checkNotNull(output, "output cannot be null");
        }

        @Override
        public void onWritePossible() throws IOException {
            writeImpl();
        }

        private synchronized void writeImpl() throws IOException {
            while (output.isReady()) {
                final byte[] message = getNextMessage();
                if (message == null) {
                    output.flush();
                    return;
                }
                output.write(message);
            }
        }

        private void completeImpl() {
            asyncContext.complete();
        }

        public void dataArrived() {
            try {
                writeImpl();
            } catch (IOException e) {
                ...
            }
        }

        public void noMoreBuffers() {
            completeImpl();
        }

        @Override
        public void onTimeout(final AsyncEvent event) throws IOException {
            completeImpl();
        }

        @Override
        public void onError(final Throwable t) {
            logger.error("Writer.onError", t);
            completeImpl();
        }


        ...
    }
}

A probable race condition:

  1. DataFeederThread: calls dataArrived() -> writeImpl(), and finds that output.isReady() is true.
  2. Jetty calls onTimeout(), which completes the context.
  3. DataFeederThread: calls output.write() in the while loop, but finds the completed context.

Could this scenario cause the Closed while Pending/Unready warning, or is it another issue?
Am I right that making completeImpl() synchronized solves the problem, or is there something else to take care of?

update (2018-06-28):

We also have an onError implementation in QueueWriteListener, similar to the following snippet:

@Override
public void onError(final Throwable t) {
    logger.error("Writer.onError", t);
    completeImpl();
}

Anyway, there is no onError log around the Closed while Pending/Unready log message (looking at a two-hour timeframe for each), just EOFExceptions like the following ones from our DataFeederThread:

DEBUG [DataFeederThread] (HttpOutput.java:224) - 
org.eclipse.jetty.io.EofException: null
        at org.eclipse.jetty.server.HttpConnection$SendCallback.reset(HttpConnection.java:704) ~[jetty-server.jar:9.4.8.v20171121]
        at org.eclipse.jetty.server.HttpConnection$SendCallback.access$300(HttpConnection.java:668) ~[jetty-server.jar:9.4.8.v20171121]
        at org.eclipse.jetty.server.HttpConnection.send(HttpConnection.java:526) ~[jetty-server.jar:9.4.8.v20171121]
        at org.eclipse.jetty.server.HttpChannel.sendResponse(HttpChannel.java:778) ~[jetty-server.jar:9.4.8.v20171121]
        at org.eclipse.jetty.server.HttpChannel.write(HttpChannel.java:834) ~[jetty-server.jar:9.4.8.v20171121]
        at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:234) [jetty-server.jar:9.4.8.v20171121]
        at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:218) [jetty-server.jar:9.4.8.v20171121]
        at org.eclipse.jetty.server.HttpOutput.flush(HttpOutput.java:392) [jetty-server.jar:9.4.8.v20171121]
        at com.example.QueuePollServlet$QueueWriteListener.writeImpl()
        at com.example.QueuePollServlet$QueueWriteListener.dataArrived()


DEBUG [DataFeederThread] (QueuePollServlet.java:217) - messageArrived exception
org.eclipse.jetty.io.EofException: Closed
        at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:476) ~[jetty-server.jar:9.4.8.v20171121]
        at com.example.QueuePollServlet$QueueWriteListener.writeImpl()
        at com.example.QueuePollServlet$QueueWriteListener.dataArrived()

Troubleshooting / Resolution

It is possible to reproduce the Closed while Pending/Unready warning with some manual debugging and a plain curl client. I have tested it with Jetty 9.4.8.v20171121 and Jetty 9.4.11.v20180605. You need two breakpoints, so it's not easy to reliably reproduce in a test environment.

  1. The first breakpoint is in the HttpOutput.write() method, right after it changes its state from READY to PENDING:
case READY:
    if (!_state.compareAndSet(OutputState.READY, OutputState.PENDING))
        continue;
    // put a breakpoint here
  2. The second one is in Response.closeOutput():
case STREAM:
    // put a breakpoint here
    if (!_out.isClosed())
        getOutputStream().close();
    break;

Steps to reproduce:

  1. [JettyThread1] Calls QueueWriteListener.onWritePossible() which writes a few bytes to the output then returns (as its input buffer is empty).
  2. Wait for the onTimeout event.
  3. [JettyThread2] HttpChannelState.onTimeout() calls QueueWriteListener.onTimeout() which calls asyncContext.complete().
  4. [JettyThread2] HttpChannelState.onTimeout() schedules a dispatch after the async timeout.
  5. [JettyThread2] Pauses at the second breakpoint
  6. [DFT] DataFeederThread calls writeImpl()
  7. [DFT] DataFeederThread calls HttpOutput.write() (it's the output stream)
  8. [DFT] HttpOutput.write() changes its state from READY to PENDING
  9. [DFT] DataFeederThread pauses here, due to the breakpoint above
  10. [JettyThread2] The scheduled dispatch closes the output stream and produces the Closed while Pending/Unready warning.

So it's actually Jetty that closes the output stream, on this (Jetty 9.4.8.v20171121) stack:

Thread [jetty-19] (Suspended)	
    Response.closeOutput() line: 1043	
    HttpChannelOverHttp(HttpChannel).handle() line: 488	
    HttpChannelOverHttp(HttpChannel).run() line: 293	
    QueuedThreadPool.runJob(Runnable) line: 708	
    QueuedThreadPool$2.run() line: 626	
    Thread.run() line: 748	

Making onTimeout() synchronized in the listener (writeImpl() is already synchronized) does not help, since the scheduled close can still overlap with writeImpl() (from DataFeederThread). Consider this case:

  1. [JettyThread1] Calls QueueWriteListener.onWritePossible() which writes a few bytes to the output then returns (as its input buffer is empty).
  2. Wait for the onTimeout event.
  3. [JettyThread2] HttpChannelState.onTimeout() calls QueueWriteListener.onTimeout() which calls asyncContext.complete().
  4. [DFT] DataFeederThread calls writeImpl() (it's blocked since onTimeout has not finished yet)
  5. [JettyThread2] QueueWriteListener.onTimeout() finishes
  6. [DFT] writeImpl() can run
  7. [JettyThread2] HttpChannelState.onTimeout() schedules a dispatch after the async timeout.
  8. [JettyThread2] Pauses at the second breakpoint
  9. [DFT] DataFeederThread calls HttpOutput.write() (it's the output stream)
  10. [DFT] HttpOutput.write() changes its state from READY to PENDING
  11. [DFT] DataFeederThread pauses here, due to the breakpoint above
  12. [JettyThread2] The scheduled dispatch closes the output stream and produces the Closed while Pending/Unready warning.

Unfortunately, after asyncContext.complete() it is not enough to check output.isReady(): it returns true because Jetty reopens the HttpOutput (see the stack below), so you need a separate flag for that in the listener.

Thread [jetty-19] (Suspended)   
    Response.closeOutput() line: 1043   
    HttpChannelOverHttp(HttpChannel).handle() line: 488 
    HttpChannelOverHttp(HttpChannel).run() line: 293    
    QueuedThreadPool.runJob(Runnable) line: 708 
    QueuedThreadPool$2.run() line: 626 
    Thread.run() line: 748  

Furthermore, isReady() also returns true when the output is still closed (before recycle/reopen). Related question: isReady() returns true in closed state - why?

The final implementation looks something like this:

@Override
protected void doPost(final HttpServletRequest req, final HttpServletResponse resp)
        throws ServletException, IOException {
    resp.setContentType(MediaType.OCTET_STREAM.type());
    resp.setStatus(HttpServletResponse.SC_OK);
    resp.setBufferSize(4096);
    resp.flushBuffer();
    final AsyncContext async = req.startAsync();
    async.setTimeout(5_000); // millis
    final ServletOutputStream output = resp.getOutputStream();
    final QueueWriteListener writeListener = new QueueWriteListener(async, output);
    async.addListener(writeListener);
    output.setWriteListener(writeListener);
}

private static class QueueWriteListener implements AsyncListener, WriteListener {

    private static final Logger logger = LoggerFactory.getLogger(QueueWriteListener.class);

    private final AsyncContext asyncContext;
    private final ServletOutputStream output;

    @GuardedBy("this")
    private boolean completed = false;

    public QueueWriteListener(final AsyncContext asyncContext, final ServletOutputStream output) {
        this.asyncContext = checkNotNull(asyncContext, "asyncContext cannot be null");
        this.output = checkNotNull(output, "output cannot be null");
    }

    @Override
    public void onWritePossible() throws IOException {
        writeImpl();
    }

    private synchronized void writeImpl() throws IOException {
        if (completed) {
            return;
        }
        while (output.isReady()) {
            final byte[] message = getNextMessage();
            if (message == null) {
                output.flush();
                return;
            }
            output.write(message);
        }
    }

    private synchronized void completeImpl() {
        // also stops DataFeederThread from writing via dataArrived()
        completed = true;
        asyncContext.complete();
    }

    @Override
    public void onError(final Throwable t) {
        logger.error("Writer.onError", t);
        completeImpl();
    }

    public void dataArrived() {
        try {
            writeImpl();
        } catch (RuntimeException | IOException e) {
            ...
        }
    }

    public void noMoreData() {
        completeImpl();
    }

    @Override
    public synchronized void onComplete(final AsyncEvent event) throws IOException {
        completed = true; // might not be needed, but does not hurt
    }

    @Override
    public synchronized void onTimeout(final AsyncEvent event) throws IOException {
        completeImpl();
    }

    @Override
    public void onError(final AsyncEvent event) throws IOException {
        logger.error("onError", event.getThrowable());
    }

    ...
}

@joakime
Contributor

joakime commented Jul 27, 2018

See the new Stack Overflow question (from the same OP):

https://stackoverflow.com/questions/51560975/closed-while-pending-unready-warnings-from-jetty

@joakime
Contributor

joakime commented Jul 27, 2018

@gregw this issue could use your eyes (when you get back from vacation)

@palacsint
Author

I patched our Jetty lib with the following to make sure I'm debugging the right servlet:

--- jetty-server/src/main/java/org/eclipse/jetty/server/HttpOutput.java	(.../HttpOutput.java)	(revision 41ed9f29f41263c9facede20bddfaafa669b43c5)
+++ jetty-server/src/main/java/org/eclipse/jetty/server/HttpOutput.java	(.../HttpOutput.java)	(revision 81f438ed81b52cfdde91dbed505aeba5bd773493)
@@ -280,7 +280,9 @@
                     // abort the response.
                     if (!_state.compareAndSet(state, OutputState.CLOSED))
                         continue;
-                    IOException ex = new IOException("Closed while Pending/Unready");
+                    Request request = _channel.getRequest();
+                    StringBuffer requestUrl = request.getRequestURL();
+                    IOException ex = new IOException("Closed while Pending/Unready, requestUrl=" + requestUrl);
                     LOG.warn(ex.toString());
                     LOG.debug(ex);
                     _channel.abort(ex);

(Feel free to use it for anything.)

@joakime
Contributor

joakime commented Jul 27, 2018

If you are using the Request object like that, you might want to make sure that the request wasn't recycled (i.e., that you are not looking at an old/invalid request).
If Request.getTimeStamp() == 0 or !Request.hasMetaData(), then it's a recycled request that was returned to the Request pool.
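For example, a hedged tweak to the patch above (assuming the Jetty 9.4 Request API; illustrative only, not the actual suggested patch) that skips the URL when the request looks recycled:

Request request = _channel.getRequest();
// Guard against a recycled Request (already returned to the pool) before using its URL.
boolean recycled = request.getTimeStamp() == 0 || !request.hasMetaData();
String requestUrl = recycled ? "<recycled request>" : request.getRequestURL().toString();
IOException ex = new IOException("Closed while Pending/Unready, requestUrl=" + requestUrl);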

@joakime
Contributor

joakime commented Jul 27, 2018

Perhaps you just want to dump it like this:

Request request = _channel.getRequest();
Response response = _channel.getResponse();
IOException ex = new IOException("Closed while Pending/Unready, channel=" + _channel + ", request=" + request + ", response=" + response + ", HttpOutput=" + this);

That will show ...

  • The HttpChannel state
  • The HttpChannel committed state
  • The Request header fields
  • The HttpChannel timestamp
  • The Request handled state
  • The Request method
  • The Request URI
  • The Response status code
  • The Response Http version
  • The Response header fields
  • The HttpOutput state

@gregw
Contributor

gregw commented Jul 28, 2018

Having a quick look at this whilst on a train....

Essentially it is a race between a Jetty-managed thread completing the request due to an AsyncContext timeout and a non-Jetty-managed thread calling write on the output stream. This is just a natural race that will exist in asynchronous IO programming; we cannot avoid the race, but just have to handle it as best as possible.

The best handling is to ensure that we are atomic in deciding whether the write or the complete call "wins", which I think we are... but I will triple-check the code just in case when on a real machine... nothing I see in this report or elsewhere suggests that we have a problem.

In this case, the timeout code "won" the race, so the write was correct in failing. But is there anything we can do to help the writer avoid the problem or get a better message? Well, we can't fix this in isReady(), because even if there were a fix that could be done there, the code would still have a test-then-act race, where the complete could happen after the isReady() but before the write.

So could we have a better exception message? Probably... but the issue is that this code can be misused in so many different ways, in many different calling patterns, with many different races, etc. So it is hard for the code to look at the illegal state and, from the trenches, diagnose exactly which of the nasty race conditions it is in.

I think the best practice here is for implementations to be written in a way that acknowledges that there is an onError or onTimeout race. I.e., any thread that wants to write output or otherwise access/mutate the request/response needs to be mutually excluded from those two callbacks, and those callbacks should set some state so that a writer thread will know that they have been called and can correctly abort.
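A compact, hedged sketch of that pattern (illustrative names like GuardedWriteListener and nextMessage(); essentially what the final implementation posted earlier in this thread already does): the writer path and the container callbacks share one lock, and the callbacks flip a flag that the writer checks.

import java.io.IOException;
import javax.servlet.AsyncContext;
import javax.servlet.AsyncEvent;
import javax.servlet.AsyncListener;
import javax.servlet.ServletOutputStream;
import javax.servlet.WriteListener;

class GuardedWriteListener implements WriteListener, AsyncListener {

    private final AsyncContext asyncContext;
    private final ServletOutputStream output;
    private boolean closed; // guarded by "this"; set once the container times out, errors or completes

    GuardedWriteListener(final AsyncContext asyncContext, final ServletOutputStream output) {
        this.asyncContext = asyncContext;
        this.output = output;
    }

    @Override
    public void onWritePossible() throws IOException {
        writeImpl();
    }

    // Also called from the application's feeder thread; the lock is held for the whole
    // isReady()/write() loop, so onTimeout()/onError() cannot interleave with it.
    synchronized void writeImpl() throws IOException {
        if (closed) {
            return; // a callback won the race; abort quietly instead of writing
        }
        while (output.isReady()) {
            final byte[] message = nextMessage();
            if (message == null) {
                output.flush();
                return;
            }
            output.write(message);
        }
    }

    @Override
    public synchronized void onTimeout(final AsyncEvent event) {
        closed = true; // any writer thread aborts once it takes the lock and sees this
        asyncContext.complete();
    }

    @Override
    public synchronized void onError(final AsyncEvent event) {
        closed = true; // same state change as onTimeout(), per the advice above
    }

    @Override
    public void onError(final Throwable t) {
        // WriteListener.onError: log it and set the flag in the same guarded way
    }

    @Override
    public synchronized void onComplete(final AsyncEvent event) {
        closed = true;
    }

    @Override
    public void onStartAsync(final AsyncEvent event) {
    }

    private byte[] nextMessage() {
        return null; // placeholder: pull the next chunk from the application's queue
    }
}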

@palacsint
Author

palacsint commented Jul 28, 2018

@joakime: Thank you, I'll try that!
@gregw: If I understand correctly, I should set the completed flag in the onError method too, for example:

@Override
public synchronized void onError(final AsyncEvent event) throws IOException {
    logger.error("onError", event.getThrowable());
    completed = true;
}

Anyway, I have not found any calls to this onError method in our logs, so that should not be the cause of my second Stack Overflow question.

@gregw
Contributor

gregw commented Jul 28, 2018 via email

@palacsint
Author

onWritePossible(), onComplete() and onTimeout() (called by Jetty) are effectively synchronized methods, as is our writeImpl() (called by our data feeder thread, and by Jetty too). Isn't that enough?

I can think of a case where our DFT (data feeder thread) runs writeImpl() and owns the lock while a Jetty thread handles something and blocks on onWritePossible(), onComplete() or onTimeout(), but before reaching the synchronized block Jetty changes some state which interferes with the running write/flush operations. Is that a possible case?

If yes, how can I fix that? Our DFT waits for events, so it is usual for it not to have any data for a while. Blocking for data in onWritePossible() could be a, um... "solution" - with the obvious drawback that it turns the async servlet into a sync one.

So could we have a better exception message? Probably... but the issue is that this code can be miss-used in so many different ways in many different calling patterns with many different races etc.

I think the current error message, or the more detailed one suggested by @joakime, is fine, but what I really miss is some documentation and code examples on how to do this correctly. (Unfortunately I have not found any on the web.)

This makes me wonder whether our case, with the separate data feeder thread, is a valid use case for async servlets at all.

@palacsint
Author

I've just found the AsyncContext.dispatch() method, which seems relevant here. Is it a good direction if my data feeder thread only calls asyncContext.dispatch()? That way it only schedules a new dispatch and lets Jetty handle it and call the WriteListener.

This way writing will always happen on a Jetty thread, and I can probably get rid of the "Closed while" warnings completely. Or does it have similar race conditions?

@gregw
Contributor

gregw commented Jul 31, 2018

Firstly, it's not really a good idea to put the request URL into any thrown exception. Exception messages often get put into error page responses, and URLs are user-supplied data, so suddenly you have a user able to directly provide content for a response page. Jetty's error pages protect against cross-site scripting, but an application-supplied error page might not be as rigorous... hence we rarely put user-supplied data in the exception message; rather, we put it in logged debugs or warnings instead.

For mutual exclusion: the Jetty-called methods are all mutually excluded from each other, but not from an arbitrary application thread.

AsyncContext.dispatch() can be used to essentially replay the request in a call that is also mutually exclusive, so that would be one way to achieve mutual exclusion... however, re-entering the whole servlet request can be a bit heavyweight... especially if you have a lot of filters.

Instead, you could simply synchronize or have a lock on all the methods, so that you hold that lock while in a while(out.isReady()) { out.write(...); } loop, and thus will exclude any calls to onError or onTimeout. Then when onError or onTimeout do get the lock and execute, they need to change some state so that the writer thread will abort once it has taken the lock and checked the state.

@palacsint
Author

Firstly, it's not really a good idea to put the request URL into any thrown exception.

Good point, thanks!

Instead, you could simply synchronize or have a lock on all the methods, so that you hold that lock while in a while(out.isReady()) { out.write(...); } loop, and thus will exclude any calls to onError or onTimeout. Then when onError or onTimeout do get the lock and execute, they need to change some state so that the writer thread will abort once it has taken the lock and checked the state.

If I'm right, the code above does exactly this. It checks the completed flag in the synchronized writeImpl() method (called from onWritePossible()) and sets it in the onTimeout() and onComplete() methods, which are also synchronized. (onError() does not use this flag, but its warning log has never shown up in our logs.)
Anyway, we still get the "Closed while Pending/Unready" warnings from Jetty.

AsyncContext.dispatch() can be used to essentially replay the request in a call that is also mutually exclusive, so that would be one way to achieve mutual exclusion... however, re-entering the whole servlet request can be a bit heavyweight... especially if you have a lot of filters.

In this setup we only have a small filter, so I'm playing with that but it will take a while to get feedback from production.

@palacsint
Author

Copied from Stack Overflow:

I was able to reproduce the warning with this code:

@Override
protected void doPost(final HttpServletRequest req, final HttpServletResponse resp)
		throws ServletException, IOException {
	resp.setContentType(MediaType.OCTET_STREAM.type());
	resp.setStatus(HttpServletResponse.SC_OK);
	resp.setBufferSize(8192);
	resp.flushBuffer();
	final AsyncContext async = req.startAsync();
	async.setTimeout(10_000); // millis
	final ServletOutputStream output = resp.getOutputStream();
	final QueueWriteListener writeListener = new QueueWriteListener(output);
	async.addListener(writeListener);
	output.setWriteListener(writeListener);
}

private static class QueueWriteListener implements AsyncListener, WriteListener {

	private static final Logger logger = LoggerFactory.getLogger(QueueWriteListener.class);

	private final ServletOutputStream output;

	public QueueWriteListener(final ServletOutputStream output) {
		this.output = checkNotNull(output, "output cannot be null");
	}

	@Override
	public void onWritePossible() throws IOException {
		logger.info("onWritePossible()");
		long written = 0;
		while (output.isReady()) {
			final byte[] data = new byte[8 * 1024 * 1024];
			Arrays.fill(data, (byte) 'W');
			output.write(data);
			written += data.length;
			logger.info("write OK, written: {} KB", written / 1024);
		}
		logger.info("onWritePossible() end");
	}

	@Override
	public void onError(final Throwable t) {
		logger.info("Writer.onError -Error: {}, Message: {}", t.getClass().getName(), t.getMessage());
	}

	@Override
	public void onComplete(final AsyncEvent event) throws IOException {
		logger.warn("onComplete: {}", event);
	}

	@Override
	public void onTimeout(final AsyncEvent event) throws IOException {
		logger.warn("onTimeout()");
	}

	@Override
	public void onError(final AsyncEvent event) throws IOException {
		logger.error("onError: {}", event, event.getThrowable());
	}

	@Override
	public void onStartAsync(final AsyncEvent event) throws IOException {
		logger.info("onStartAsync: {}", event);
	}
}

And with a slow curl client:

curl --limit-rate 16 -XPOST http://localhost:35419/example2

Result:

10:39:29,063  INFO [jetty-16] {Example2Servlet.java:52} - onWritePossible()
10:39:29,084  INFO [jetty-16] {Example2Servlet.java:59} - write OK, written: 8192 KB
10:39:29,084  INFO [jetty-16] {Example2Servlet.java:61} - onWritePossible() end
10:39:39,085  WARN [jetty-17] {Example2Servlet.java:76} - onTimeout()
10:39:39,088  WARN [jetty-17] {HttpOutput.java:286} - java.io.IOException: Closed while Pending/Unready, requestUrl=http://localhost:35419/example2
10:39:39,090  INFO [jetty-17] {HttpOutput.java:287} - 
java.io.IOException: Closed while Pending/Unready, requestUrl=http://localhost:35419/example2
    at org.eclipse.jetty.server.HttpOutput.close(HttpOutput.java:285)
    at org.eclipse.jetty.server.Response.closeOutput(Response.java:1044)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:493)
    at org.eclipse.jetty.server.HttpChannel.run(HttpChannel.java:293)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:762)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:680)
    at java.lang.Thread.run(Thread.java:748)
10:39:39,091  WARN [jetty-17] {Example2Servlet.java:71} - onComplete: org.eclipse.jetty.server.AsyncContextEvent@3b51901

Things to notice:

  • It does not use any third party threads.

  • In the original code there was a complete() call in onTimeout(), but it does not influence the behavior, so I removed it.

  • I also removed the synchronized keywords from method declarations - they do not influence the behavior.

  • I was able to reproduce the same behavior with different buffer and array sizes; the warning came sooner or later. (For example, a buffer size of 8192 bytes and an array size of 1024 bytes also reproduces the warning, but it needs a little more time.) This usually results in additional debug logs, like this one:

     DEBUG [jetty-12] {HttpOutput.java:1271} - EOF of org.eclipse.jetty.server.HttpOutput$AsyncWrite@4b8dff34[PROCESSING]
    

@palacsint
Author

So, the warning is reasonable. Is there a way I can fix this as a Jetty user? At least log a more descriptive warning message for ops people, or increment a monitored JMX counter?

Shouldn't Jetty call AsyncListener.onError() or WriteListener.onError() in this case?

@gregw
Contributor

gregw commented Aug 2, 2018

@palacsint Yes indeed Jetty should be calling the onError callbacks before trying the close. This definitely now looks like a Jetty bug... looking some more...

@gregw gregw added Bug For general bugs on Jetty side and removed Enhancement labels Aug 2, 2018
@gregw
Contributor

gregw commented Aug 2, 2018

@sbordet Need to hangout on this one.
So far, we have kept the timeout lifecycle of AsyncContext and async IO independent. I think this is not a problem if the IO idle timeout (or min data rate limit) fires before the AsyncContext timeout, as it will error the pending write.

But in this case, where the context times out and there is a pending write, I think we should error the write... but it is really unclear when we should do this. Before calling the AsyncContext listener(s), or only if the listeners do nothing and the container has to act?

I think it has to be before the listeners, as they can't really do anything if there are outstanding reads or writes.

@gregw
Contributor

gregw commented Aug 2, 2018

@palacsint for now, you might want to configure a connector idleTimeout to be shorter than your async timeout.
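A minimal sketch of that workaround for embedded Jetty (assumed port and timeout values): the connector's idle timeout is made shorter than the AsyncContext timeout, so the idle timeout fires first and errors the pending write.

import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.ServerConnector;

public class IdleTimeoutSetup {
    public static void main(final String[] args) throws Exception {
        final Server server = new Server();
        final ServerConnector connector = new ServerConnector(server);
        connector.setPort(8080);
        connector.setIdleTimeout(5_000); // connection idle timeout: 5 seconds
        server.addConnector(connector);
        // ... register the async servlet; inside it keep the AsyncContext timeout longer,
        // e.g. async.setTimeout(30_000), so the connector idle timeout wins the race.
        server.start();
        server.join();
    }
}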

gregw added a commit that referenced this issue Aug 2, 2018
Signed-off-by: Greg Wilkins <gregw@webtide.com>
@gregw
Contributor

gregw commented Aug 2, 2018

@palacsint I have created a branch and a PR with a fix for this issue. Any AsyncContext errors (including timeout) are now firstly delivered to any outstanding async IO operation.

It would be good if you could check out and build that branch to confirm it fixes the problem as you see it.

@palacsint
Author

@palacsint for now, you might want to configure a connector idleTimeout to be shorter than your async timeout.

Thanks, that fixed the PoC code above: it no longer logs "Closed while Pending/Unready" and it calls my WriteListener.onError() properly.

@palacsint
Author

@palacsint I have created a branch and a PR with a fix for this issue. Any AsyncContext errors (including timeout) are now firstly delivered to any outstanding async IO operation.

It would be good if you could check out and build that branch to confirm it fixes the problem as you see it.

I have also tested that pull request and it seems good. I got this log at the end of the request:

18:27:15,325  INFO [jetty-14]  - onWritePossible() end
18:27:15,336  INFO [jetty-12]  - onTimeout()
18:27:15,336  INFO [jetty-17]  - Writer.onError -Error: java.util.concurrent.TimeoutException, Message: AsyncContext Timeout
18:27:15,336  INFO [jetty-17]  - onComplete: AsyncContextEvent@6d5b40ec{c=ServletContext@o.e.j.s.ServletContextHandler@4516af24{/,null,AVAILABLE},d=false,t=null,ex=java.util.concurrent.TimeoutException: AsyncContext Timeout}

(I was able to test it only with the PoC code above. Pushing it into production - and testing there - will probably not happen until a new Jetty release that contains the fix.)

@gregw
Contributor

gregw commented Aug 14, 2018

I think our thinking has come full circle and that we are now no longer inclined to link the AsyncContext timeout to failing asyncIO operations.

There is just no meaningful way to link these events in all circumstances. Essentially the problem is that if the AsyncContext does time out with an outstanding async write, then the exception is correct: there is a pending write!

Thus we now think this is really a verbosity problem. If the application attempts an operation, it should see that exception. If the container is doing the close, at the very least it should suppress the bulk of that exception in logging, but it could also error the write during the container close rather than throw and log the exception. New PR coming...

@gregw gregw removed the Bug For general bugs on Jetty side label Sep 11, 2019
@gregw
Contributor

gregw commented Sep 11, 2019

Error handling was significantly reworked in #3912, so closing this. Please reopen if the verbosity is still not right.

@gregw gregw closed this as completed Sep 11, 2019