
naively limit history size #14

Closed
wants to merge 1 commit

Conversation

pfitzseb
Member

TODO:
 - setting for history size?
 - force the creation of new items after a certain length. No clue how to do this, but it is necessary to keep the console from displaying too many lines and crashing Atom (ref JunoLab/atom-julia-client#40)
@@ -53,6 +55,10 @@ class ConsoleView extends ScrollView
else
@addBeforeInput item, opts

limitHistory: (maxSize = 1000) ->
if @items.childNodes.length > maxSize
Contributor

Maybe make this a while instead of if, so that if something happens that causes multiple messages to get inserted for a single call to limitHistory it will still work to keep the list limited. I don't think that can happen now, but it's the sort of thing that could easily happen due to error or future refactoring.
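The suggestion can be sketched like this (in plain JavaScript rather than the PR's CoffeeScript; `items` stands in for the `@items` DOM node, and only `childNodes` and `removeChild` are assumed):

```javascript
// With `while` instead of `if`, limitHistory still trims the list back
// down to maxSize even when several items were inserted between calls.
function limitHistory(items, maxSize = 1000) {
  while (items.childNodes.length > maxSize) {
    items.removeChild(items.childNodes[0]); // drop the oldest item first
  }
}
```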

Member Author

Good point, will do.

@pfitzseb
Member Author

Alright. So I just tried breaking out of the Timeout with no luck at all... :D
What we need -- I think -- is to limit the buffer size by some number of elements and then break out of the timer to render the element in its own div. The problem I've been trying to solve here is that the buffer can get huge (> 20k lines), which will pretty much kill Atom if there are a few of those. This is due to node.js not guaranteeing the Timeout will happen in time, and you can print a ton of lines in the >>10ms it takes Atom to flush the buffer.
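One way to cap the buffer, sketched here with assumed names (the real view is CoffeeScript; `flush` is a callback that renders the buffered text): flush immediately once the pending output crosses a line limit, instead of relying only on the timeout that node.js may delay.

```javascript
// Sketch: a buffer that flushes on a timer, but also flushes eagerly
// when the pending output exceeds maxLines, so it can't grow unbounded.
class OutputBuffer {
  constructor(flush, { maxLines = 500, delay = 10 } = {}) {
    this.flush = flush;
    this.maxLines = maxLines;
    this.delay = delay;
    this.pending = "";
    this.timer = null;
  }
  write(text) {
    this.pending += text;
    const lines = (this.pending.match(/\n/g) || []).length;
    if (lines >= this.maxLines) {
      this.flushNow();                 // don't wait for the timer
    } else if (this.timer === null) {
      this.timer = setTimeout(() => this.flushNow(), this.delay);
    }
  }
  flushNow() {
    if (this.timer !== null) { clearTimeout(this.timer); this.timer = null; }
    const out = this.pending;
    this.pending = "";
    if (out) this.flush(out);
  }
}
```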

On a side note: I don't understand why the buffering should be implemented in this way. Wouldn't it be more natural, and closer to every REPL ever, to buffer until a \n is encountered and then print? Sure, that'd create a lot of items, but since that number should be limited anyway I don't see all that much of a problem. As an additional upside, any line breaks (that are introduced by splitting the output into a different item) happen where they should.
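The newline-buffering idea reduces to a small helper (a sketch with illustrative names; `emit` would create one console item per complete line):

```javascript
// Emit one item per complete line; keep the trailing partial line buffered
// until the next chunk (or a \n) arrives.
function splitIntoCells(buffer, chunk, emit) {
  const lines = (buffer + chunk).split("\n");
  const rest = lines.pop();   // last element is the (possibly empty) partial line
  for (const line of lines) emit(line);
  return rest;                // new buffer contents
}
```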

So, unless there's something seriously wrong with that idea, I'll have a go at implementing that tomorrow.

Also, cc @one-more-minute since the buffering was your idea and you probably had something in mind when creating the current implementation ;)

@ssfrr
Contributor

ssfrr commented Sep 22, 2015

👍 to newline splitting. That would also fix places where long output can get split in unexpected ways. One issue is that if there's a longish-running process that's outputting without printing a newline (like dots printed during test runs or something), then we wouldn't see any output. Not sure what the best solution to that is, except maybe splitting the output on every newline but also setting a timeout that adds to the current cell instead of starting a new one?
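That hybrid could look roughly like this (a sketch, all names assumed; `newCell` renders a finished line as its own cell, `appendToCurrent` adds partial text to the open cell):

```javascript
// Complete lines each start a new cell; a timeout flushes any partial line
// into the current cell, so output without newlines (dots during tests)
// still shows up.
class CellWriter {
  constructor(newCell, appendToCurrent, delay = 100) {
    this.newCell = newCell;
    this.appendToCurrent = appendToCurrent;
    this.delay = delay;
    this.partial = "";
    this.timer = null;
  }
  write(chunk) {
    const parts = (this.partial + chunk).split("\n");
    this.partial = parts.pop();
    if (parts.length > 0 && this.timer !== null) {
      clearTimeout(this.timer);   // the pending partial line just completed
      this.timer = null;
    }
    for (const line of parts) this.newCell(line);
    if (this.partial && this.timer === null) {
      this.timer = setTimeout(() => {
        this.timer = null;
        this.appendToCurrent(this.partial);
        this.partial = "";
      }, this.delay);
    }
  }
}
```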

TTYs are hard.

@pfitzseb
Member Author

So I got this basically working today. The output would pretty much look like in the REPL, with every new line in a separate .cell. It's hideously slow though...
Also I'm having a couple of problems with scoping (aaand my inexperience with coffeescript shows)...

So if someone has any ideas on how to get fast splitting on new lines (and only there, ideally) here, feel free! :)

@pfitzseb pfitzseb mentioned this pull request Dec 15, 2015
@pfitzseb pfitzseb closed this Mar 6, 2016
@pfitzseb pfitzseb deleted the va-limit-console-history branch April 22, 2016 14:27