REPL hangs when closing a JFrame in Seesaw #84

Closed
ska2342 opened this Issue Sep 11, 2012 · 16 comments

ska2342 commented Sep 11, 2012

Hi,

on the seesaw-clj group we described an issue which seems to be related to Leiningen 2. Please see the thread at https://groups.google.com/d/msg/seesaw-clj/ts0gb9yOOeg/illUPsUrm3gJ for details.

In short: if you create a JFrame that exits the JVM on the window-close event, the Leiningen 2 REPL becomes unusable. Dave Ray found that it works in Leiningen 1.
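
For reference, a minimal reproduction along these lines might look like this (just a sketch; it assumes Seesaw is on the project's classpath and the form is evaluated in the lein2 REPL):

(require '[seesaw.core :as s])

;; Closing this frame triggers EXIT_ON_CLOSE, which shuts down the JVM
;; hosting the nREPL server while the REPL client keeps waiting.
(-> (s/frame :title "repro"
             :content "close me"
             :on-close :exit)
    s/pack!
    s/show!)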

This issue was raised at Colin's request in the leiningen group: https://groups.google.com/d/msg/leiningen/QLcZIK2e5C0/zDhO6yV_inYJ

Kind regards,
Stefan

Owner

trptcolin commented Sep 12, 2012

OK, so as expected, it looks like we end up hanging, waiting on responses from an nREPL server that no longer exists.

From nREPL's perspective, the client & server are distinct and can only communicate over a Transport interface (a socket in many cases, but it doesn't have to be).

It would be great if we could be guaranteed that an Exception is thrown on unsuccessful sends to nREPL, but that's not currently the case. I'm actually not sure whether it's even possible, based on things like http://stackoverflow.com/questions/7049558/socket-output-stream-with-dataouputstream-what-are-the-guarantees - my socket-programming know-how is clearly limited.
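
To illustrate the problem (a sketch only; it assumes something was listening on localhost:7888 and has since gone away): a write to a TCP socket whose peer has died often just fills the local send buffer and returns, and the failure only surfaces on a later write or flush, if at all.

(import '(java.net Socket))

(with-open [sock (Socket. "localhost" 7888)   ; hypothetical server port
            out  (.getOutputStream sock)]
  ;; Even if the remote process dies right here, the next write
  ;; frequently succeeds; TCP only reports the reset on a subsequent
  ;; write/flush, so no Exception is guaranteed at send time.
  (.write out (.getBytes "d2:op5:closee"))    ; an example bencoded message
  (.flush out))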

I wonder if @cemerick has any ideas on that, or if there are other paths we might try going down?

Contributor

cemerick commented Sep 14, 2012

Going to be looking at this shortly. I had thought a disconnected socket would bubble up straight away, but others have reported otherwise.

Worst-case scenario: nREPL might add an option to send keepalives at some interval. That would be helpful for certain transports, and would also provide an indication that the connection is down.
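
The rough idea would be something like this (purely a sketch against the 0.2.x client API; the op, interval, and function name are illustrative, not an actual nREPL option):

(require '[clojure.tools.nrepl :as nrepl])

(defn start-keepalive!
  "Periodically send a trivial message; an IOException from the
   transport is the signal that the connection is gone."
  [client interval-ms on-dead]
  (future
    (try
      (loop []
        (doall (nrepl/message client {:op "eval" :code "nil"}))
        (Thread/sleep interval-ms)
        (recur))
      (catch java.io.IOException e
        (on-dead e)))))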

Contributor

cemerick commented Sep 14, 2012

BTW, tracking the nREPL side of this @ http://dev.clojure.org/jira/browse/NREPL-30

Owner

trptcolin commented Sep 14, 2012

Thanks Chas!

Contributor

cemerick commented Oct 3, 2012

A fix for this is available on nREPL master (see the issue linked above for the actual commit, and notes on it). It appears to fix related issues for me, but I'd love to have confirmation of that elsewhere.

Just add this to your project.clj to test:

:dependencies [[org.clojure/tools.nrepl "0.2.0-SNAPSHOT"]]
:repositories {"sonatype-oss-public" "https://oss.sonatype.org/content/groups/public/"}

I'll be making a final 0.2.0-beta10 release soon that will hopefully roll up into reply and leiningen shortly afterwards…

Owner

trptcolin commented Oct 4, 2012

Awesome, thanks Chas. I'll bang on it soon.

Owner

trptcolin commented Oct 5, 2012

It appears to work great for me with a couple of tweaks in REPLy/Leiningen, so release away. The one change I needed to make locally in lein (I'm not sure whether it should live in nREPL instead) was:

-          :handler (nrepl.ack/handle-ack nrepl.server/unknown-op))))
+          :handler #(try ((nrepl.ack/handle-ack nrepl.server/unknown-op) %)
+                      (catch java.io.IOException e)))))

This prevents a noisy (but otherwise innocuous-seeming) stacktrace when lein is deciding what port the repl runs on.

Contributor

cemerick commented Oct 6, 2012

Is this new? I haven't seen it with -preview10 or lein master… odd, since I don't think the ack stuff has changed in ages.

Owner

trptcolin commented Oct 6, 2012

Yeah, I'm pretty sure I had never seen the stacktrace I mentioned before pulling the latest nREPL in locally today. My theory was that it's printing because the future added here causes a bit of a race in receiving the ack, but I didn't look into it that deeply.

FWIW, the local change I pasted was to Leiningen, src/leiningen/repl.clj.

Contributor

cemerick commented Oct 6, 2012

Can you paste the stacktrace you're seeing? I still can't replicate, and would like to track down the issue.

Owner

trptcolin commented Oct 6, 2012

Sure, sorry. Note: this only appears to happen when running outside of a project context.

colin:/tmp/ $ lein repl
nREPL server started on port 56163
ERROR: Unhandled REPL handler exception processing message {:id 00f0882f-6452-4322-b20d-5de2d2395860, :op ack, :port 56163}
java.net.SocketException: Socket closed
    at java.net.SocketOutputStream.socketWrite(SocketOutputStream.java:99)
    at java.net.SocketOutputStream.write(SocketOutputStream.java:136)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
    at clojure.tools.nrepl.transport$bencode$fn__1124.invoke(transport.clj:103)
    at clojure.tools.nrepl.transport.FnTransport.send(transport.clj:28)
    at clojure.tools.nrepl.ack$handle_ack$fn__1264.invoke(ack.clj:42)
    at clojure.tools.nrepl.server$handle_STAR_.invoke(server.clj:18)
    at clojure.tools.nrepl.server$handle$fn__1536.invoke(server.clj:27)
    at clojure.core$binding_conveyor_fn$fn__3989.invoke(core.clj:1819)
    at clojure.lang.AFn.call(AFn.java:18)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
    at java.util.concurrent.FutureTask.run(FutureTask.java:138)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:680)

Contributor

cemerick commented Oct 7, 2012

I see what's happening now; the new server comes up and calls nrepl.ack/send-ack, which promptly closes the transport. The client's ack server's (woeful terminology here :-P) transport then chokes on the remote side having shut down, which tosses the exception.

I want to add some default error handling for SocketExceptions in clojure.tools.nrepl.server/handle* to annotate stack traces where it looks like the remote side just went away, but that seems like an overspecialization for Socket-based transports…
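
For illustration only, that kind of default handling might look roughly like this (a sketch, not the eventual nREPL change; safe-handle is a hypothetical wrapper):

(defn safe-handle
  "Run handler on msg, but treat a SocketException as 'the remote
   side went away' rather than an unhandled REPL handler error."
  [handler msg]
  (try
    (handler msg)
    (catch java.net.SocketException e
      (binding [*out* *err*]
        (println "client disconnected while handling op" (:op msg))))))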

Any opinions/suggestions?

Owner

trptcolin commented Oct 7, 2012

The first thing that came to mind (which doesn't seem great) was to handle SocketException's superclass, java.io.IOException, in the hopes that other remote-side-of-the-transport-went-away problems will surface as subclasses of that as well.

I guess another alternative would be to have each transport take an error-handling function/multimethod and wrap its send/recv calls in whatever try/catch mechanism is applicable, but that seems like it'd be pretty invasive API-wise, and I haven't mulled it over enough to be confident it would solve the problem.
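
As a sketch of that second idea (hypothetical wrapper code, not a proposed API; it assumes the Transport protocol in clojure.tools.nrepl.transport, with send and the two recv arities):

(require '[clojure.tools.nrepl.transport :as transport])

(defn guarded-transport
  "Wrap an existing transport so IOExceptions from send/recv are
   routed to on-error instead of escaping to callers."
  [t on-error]
  (reify clojure.tools.nrepl.transport/Transport
    (send [this msg]
      (try (transport/send t msg)
           (catch java.io.IOException e (on-error e)))
      this)
    (recv [this]
      (try (transport/recv t)
           (catch java.io.IOException e (on-error e) nil)))
    (recv [this timeout]
      (try (transport/recv t timeout)
           (catch java.io.IOException e (on-error e) nil)))))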

Contributor

cemerick commented Oct 8, 2012

After thinking about it a bit, I don't think there's a real solution here. Looking for an explicit disconnect op is just wrong (there's no such thing as gracefully shutting down connections in general), and it's not clear that there's any reason to prevent exceptions prompted by disconnections from percolating up into a log. In the rare case where the server happens to be user-facing (like the ack-only server that REPLy sets up), making sure such exceptions are dropped on the floor makes perfect sense.

My only suggestion might be to check that the root cause of those dropped exceptions is a SocketException, re-throwing anything else.
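
Roughly, e.g. (a sketch; root-cause and ignoring-disconnects are hypothetical names, not existing nREPL functions):

(defn root-cause [^Throwable t]
  (if-let [cause (.getCause t)]
    (recur cause)
    t))

;; Swallow only disconnect-style failures; anything else still surfaces.
(defmacro ignoring-disconnects [& body]
  `(try ~@body
        (catch java.io.IOException e#
          (when-not (instance? java.net.SocketException (root-cause e#))
            (throw e#)))))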

Contributor

cemerick commented Oct 9, 2012

There was actually an easy fix for the benign disconnection exception: clojure/tools.nrepl@aa3eba4

Should be in the next snapshot, available in an hour or so.

trptcolin closed this in bd61a48 on Oct 19, 2012

Owner

trptcolin commented Oct 19, 2012

Looks great on master - thanks @cemerick!

cemerick added a commit to cemerick/nREPL that referenced this issue Oct 9, 2017

cemerick added a commit to cemerick/nREPL that referenced this issue Nov 5, 2017

cemerick added a commit to cemerick/nREPL that referenced this issue Nov 6, 2017
