
REPL is Slow and Performance Degrades as the Output Grows #942

Open
klauswuestefeld opened this issue Jan 11, 2021 · 7 comments
Labels
output/repl window ux User experience related

Comments

@klauswuestefeld

Evaluating this takes over 30 seconds to finish all output to the REPL:
(doseq [n (range 1000)]
  (println n))

And performance degrades gradually as the output to .calva/output-window/output.calva-repl grows. :/

@bpringe (Member) commented Jan 12, 2021

Thanks for reporting. I can reproduce (though with a cleared output window it takes less than 30 seconds, that's still way too long 😄), while in a terminal repl created with `clj -r` the output is virtually instant. It's true that the output window lags far behind something like a terminal repl in the speed of printing output.

I'm not sure this can be fixed with the current solution of using an editor / file buffer as the output window, because of the way VS Code applies edits to documents asynchronously. We have to queue the edits and not start the next one until the previous one has completed, otherwise issues can occur (I believe we added the queue because not all output was printed when multiple edits ran around the same time). Though I'm not sure whether the queuing itself is the issue, or the way `applyEdit` and related code are implemented in VS Code.

I think #804 is also related. We'll need to apply some hammock time to figure out a good solution.
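A minimal sketch of the serialized edit queue described above, under stated assumptions: the `EditQueue` name and the simulated `applyEdit` are illustrative stand-ins for Calva's actual code and VS Code's `workspace.applyEdit`; the point is only that each edit waits for the previous one to finish.

```typescript
// Sketch of serializing async document edits by chaining promises.
// `applyEdit` simulates an async editor round trip (in the real
// extension this would await vscode.workspace.applyEdit).

class EditQueue {
  private tail: Promise<void> = Promise.resolve();
  public applied: string[] = [];

  private async applyEdit(text: string): Promise<void> {
    // Simulated async editor latency.
    await new Promise((resolve) => setTimeout(resolve, 1));
    this.applied.push(text);
  }

  // Chain each edit onto the previous one so edits never overlap.
  enqueue(text: string): Promise<void> {
    this.tail = this.tail.then(() => this.applyEdit(text));
    return this.tail;
  }
}

async function demo(): Promise<string[]> {
  const queue = new EditQueue();
  for (let n = 0; n < 5; n++) {
    void queue.enqueue(`${n}\n`);
  }
  await queue.enqueue("done\n");
  return queue.applied;
}
```

With `(println n)` over 1000 values this scheme would mean up to a thousand sequential editor round trips, which is consistent with the slowness reported above.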

@bpringe (Member) commented Jan 12, 2021

I also happened to see this related post on Reddit today: https://www.reddit.com/r/Clojure/comments/krdihe/calva_with_reveal_repl/

@bpringe (Member) commented Jan 12, 2021

So, I tried this with Clover as well, which uses a webview for its output window (Calva used to use one too, and it came with a lot of maintenance problems trying to make it do what we wanted, especially with using it for input). It still took maybe 1 or 2 seconds to show all output, and if I increased the number to 10000 it took maybe 5+ seconds. So while that implementation shows output much faster, it's still nowhere near the speed of terminal output.

I wonder if we can get terminal speed in VS Code, at least for stdout like this, aside from results. Maybe @PEZ knows some options. I'm not suggesting a large change to the output window, but perhaps redirecting stdout elsewhere would be something to try, or an option to give the user via a setting (not seeing output there by default might pose more questions than it's worth: "where's my output?").

@klauswuestefeld (Author) commented Jan 12, 2021 via email

@PEZ (Collaborator) commented Jan 12, 2021

> If there is a queue, can't we read all pending output events from the
> queue, concat them and apply them as a single batch to the output file?

PR welcome! But it will be a bit tricky to know when to flush the queue. Printing can be asynchronous and there is really no way of knowing when it is “done”.

We'll look into an option for sending stdout to Calva says again. It wouldn't really have helped you, since you would still need to figure out that the output window behaves the way it does before reaching for the option, but, yeah, it makes sense to have the option.
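The batch-flush idea could look roughly like the sketch below; `OutputBatcher` and the simulated `applyEdit` are illustrative names, not Calva's actual code. It sidesteps the "when is printing done" problem by never waiting for done: a single flush loop runs as long as anything is pending, and everything that arrives while an edit is in flight gets concatenated into the next edit.

```typescript
// Sketch: instead of one applyEdit per output event, drain everything
// pending and write it as a single edit. Only one flush loop runs at
// a time, so edits still never overlap.

class OutputBatcher {
  private pending: string[] = [];
  private flushing = false;
  public writes: string[] = []; // each element = one simulated applyEdit

  private async applyEdit(text: string): Promise<void> {
    await new Promise((resolve) => setTimeout(resolve, 1)); // editor latency
    this.writes.push(text);
  }

  append(chunk: string): void {
    this.pending.push(chunk);
    void this.flush();
  }

  // Each iteration concatenates everything queued while the previous
  // edit was in flight, then applies it as one batch.
  private async flush(): Promise<void> {
    if (this.flushing) return;
    this.flushing = true;
    try {
      while (this.pending.length > 0) {
        const batch = this.pending.join("");
        this.pending = [];
        await this.applyEdit(batch);
      }
    } finally {
      this.flushing = false;
    }
  }
}

async function demo(): Promise<string[]> {
  const batcher = new OutputBatcher();
  for (let n = 0; n < 1000; n++) {
    batcher.append(`${n}\n`);
  }
  // Give the flush loop time to finish.
  await new Promise((resolve) => setTimeout(resolve, 50));
  return batcher.writes;
}
```

In this sketch, 1000 chunks arriving in a burst collapse into a handful of edits rather than a thousand, at the cost of the jumpier output the quoted suggestion anticipates.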

@klauswuestefeld (Author) commented Jan 13, 2021 via email

@bpringe (Member) commented Jan 13, 2021

I think the ideal solution here is a combination of: a setting to send all stdout to a standard VS Code output channel (like the current "Calva says"), documenting the slowness of printing stdout to the output/repl window, and maybe mentioning it in the welcome message of the output window, as you say. You bring up a good point about run-all-tests, which makes this issue even more likely to be experienced than I thought.

@PEZ PEZ removed the enhancement label Feb 10, 2021
brdloush added a commit to brdloush/calva that referenced this issue Jun 16, 2021
…ding output events

Basically does what klauswuestefeld suggested here:

BetterThanTomorrow#942 (comment)

If there is a queue, can't we read all pending output events from the
queue, concat them and apply them as a single batch to the output file?
That would maybe make the console jumpy but at least it would not be
absurdly slow.
@bpringe bpringe added the ux User experience related label Jun 16, 2021
brdloush added a commit to brdloush/calva that referenced this issue Jul 11, 2021
PEZ added a commit that referenced this issue Aug 4, 2021
improve performance of rangeForDefun (cause of repl slowness) #942
julienvincent added a commit to julienvincent/calva that referenced this issue Jan 8, 2023
When an excessive amount of output is produced by the repl (for example
as a result of a rogue loop that is printing to stdout) Calva can
sometimes hang while trying to write all the output to the output
window/file.

The only way to recover is to restart/reload the VS Code window.

This commit introduces a new config entry `replOutputThrottleRate` which, when set to a non-zero number, throttles output from the repl connection. If more output items are received in a 500ms window than the throttle rate allows, the excess items are simply dropped.

Addresses BetterThanTomorrow#942
Fixes BetterThanTomorrow#2010
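The throttling this commit message describes could be sketched as below; `OutputThrottle` and its injected clock are illustrative names under stated assumptions, not the actual Calva implementation.

```typescript
// Sketch of rate-limiting repl output per 500ms window and dropping
// the excess, per the commit message above. A clock function is
// injected so the behavior is deterministic and testable.

class OutputThrottle {
  private windowStart = -Infinity;
  private count = 0;

  constructor(
    private readonly rate: number, // items allowed per window; 0 disables
    private readonly windowMs: number = 500,
    private readonly now: () => number = Date.now
  ) {}

  // True if this output item should be written, false if dropped.
  accept(): boolean {
    if (this.rate === 0) {
      return true; // throttling disabled
    }
    const t = this.now();
    if (t - this.windowStart >= this.windowMs) {
      this.windowStart = t; // start a fresh window
      this.count = 0;
    }
    this.count += 1;
    return this.count <= this.rate;
  }
}

function demo(): boolean[] {
  let fakeTime = 0;
  const throttle = new OutputThrottle(3, 500, () => fakeTime);
  const results: boolean[] = [];
  for (let i = 0; i < 5; i++) {
    results.push(throttle.accept()); // all in the same 500ms window
  }
  fakeTime = 600; // advance past the window; counting resets
  results.push(throttle.accept());
  return results;
}
```

This trades completeness for responsiveness: a rogue print loop can no longer hang the output window, but dropped items are gone for good, which is why it is opt-in via a setting.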
3 participants