Streaming values #117
Generally I've never had issues with values being printed in
That won't be an issue for the new rich printer, but it'd be nice if we improve the situation for older clients as well. I don't really have any ideas here, but I hope that @cgrand might have something up his sleeve.
I'm not sure how big of a problem this is in practice; you should still be able to interrupt an infinite print, but you have to notice it before the server runs out of memory.
I think this was the killer feature of cider-nrepl's pretty printing (and in fact my main motivation for developing it). To me this is about network transparency – when I'm using a REPL I just expect values to be printed incrementally, and the evaluation to be cancellable while printing is happening.
I'm working on this branch which repurposes
Please re-read my comment here:
I'm not proposing adding streaming value output to nREPL. I'm not proposing changing the RPC model in any way. I'm proposing adding a maximum string length to the returned payload and enforcing that limit on any/all printers. The implementation is backwards compatible for string-returning print functions and is pretty straightforward. This would solve the DoS problem, which has affected me many times.
```clojure
(with-out-head n (print (str (printer result))))

(defmacro with-out-head [n & body]
  `(binding [*out* (TruncatingBuffer. n)]
     (try ~@body (catch BufferFullError e#))
     (.string *out*)))
```
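The same mechanics can be sketched in Java for readers less familiar with Clojure macros. The names here (`TruncatingWriter`, `withHead`, `BufferFullException`) are illustrative only, not anything nREPL ships: a `Writer` with a fixed character budget that aborts the print by throwing once the budget is spent, returning whatever head of the output fit.

```java
import java.io.Writer;
import java.util.function.Consumer;

// Illustrative sketch only; not nREPL API. The writer keeps at most
// `limit` chars and stops the printer by throwing once the budget is
// exhausted, so a pathological value cannot exhaust memory.
public class TruncatingDemo {
    static class BufferFullException extends RuntimeException {}

    static class TruncatingWriter extends Writer {
        private final StringBuilder buf = new StringBuilder();
        private final int limit;

        TruncatingWriter(int limit) { this.limit = limit; }

        @Override
        public void write(char[] cbuf, int off, int len) {
            int room = limit - buf.length();
            if (room <= 0) throw new BufferFullException();
            buf.append(cbuf, off, Math.min(len, room));
            if (len > room) throw new BufferFullException();
        }

        // Narrowed override: no checked IOException, convenient in lambdas.
        @Override public void write(String s) { write(s.toCharArray(), 0, s.length()); }
        @Override public void flush() {}
        @Override public void close() {}

        String contents() { return buf.toString(); }
    }

    // Analogue of with-out-head: run the printer, swallow the overflow,
    // return the head of the output that fit within the budget.
    static String withHead(int limit, Consumer<TruncatingWriter> printer) {
        TruncatingWriter w = new TruncatingWriter(limit);
        try {
            printer.accept(w);
        } catch (BufferFullException e) {
            // output was truncated at `limit` chars
        }
        return w.contents();
    }

    public static void main(String[] args) {
        // Ten chars written, five survive.
        System.out.println(withHead(5, w -> w.write("xxxxxxxxxx"))); // prints "xxxxx"
    }
}
```

The point of throwing rather than silently discarding is that it unwinds the printer immediately, so the server stops doing work as soon as the limit is hit.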
I understood your suggestion, but this is essentially the same as setting
```clojure
(repeat 100 (repeat 100 (range 100)))
```
Whoops! 1 million items printed.
Or even just:
```clojure
(apply str (repeat 101 \x))
```
I've run into excessive printing and OOM from nREPL in both classes of problems. When this occurs, I tend to fall back to direct evaluation in a streaming REPL so that at least interrupts work correctly.
Ah, I keep forgetting about multi-dimensional data structures. Yeah, for this kind of problem your proposed solution certainly makes sense, although if we switch to a streaming model it would only serve to protect the clients, as the server itself will become "immune" to the OOM problem.
Still, it's not a big change, so it's probably worth adding this as something optional.
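For contrast, a minimal sketch of what the streaming model could look like, again in Java with made-up names (`ChunkedWriter`, with `send` standing in for emitting an nREPL message): the printed representation is forwarded to the client in bounded chunks as it is produced, so the server never holds the whole value in memory and the evaluation stays interruptible between chunks.

```java
import java.io.Writer;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Illustrative sketch only; not nREPL API. Output is flushed to `send`
// (think: one nREPL message per chunk) whenever a fixed-size chunk fills,
// so memory use is bounded regardless of how large the value prints.
public class ChunkedDemo {
    static class ChunkedWriter extends Writer {
        private final Consumer<String> send;
        private final StringBuilder chunk = new StringBuilder();
        private final int chunkSize;

        ChunkedWriter(int chunkSize, Consumer<String> send) {
            this.chunkSize = chunkSize;
            this.send = send;
        }

        @Override
        public void write(char[] cbuf, int off, int len) {
            for (int i = 0; i < len; i++) {
                chunk.append(cbuf[off + i]);
                if (chunk.length() == chunkSize) flush();
            }
        }

        @Override
        public void flush() {
            if (chunk.length() > 0) {
                send.accept(chunk.toString()); // emit one bounded message
                chunk.setLength(0);
            }
        }

        @Override
        public void close() { flush(); }
    }

    public static void main(String[] args) {
        List<String> messages = new ArrayList<>();
        ChunkedWriter w = new ChunkedWriter(4, messages::add);
        w.write("abcdefghij".toCharArray(), 0, 10);
        w.close();
        System.out.println(messages); // prints [abcd, efgh, ij]
    }
}
```

An interrupt arriving between chunks simply stops the printing loop; nothing unbounded has accumulated on the server side.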