Allow limiting stdout/err capture #344

Open
bitprophet opened this Issue Apr 9, 2016 · 1 comment

@bitprophet
Member

bitprophet commented Apr 9, 2016

Description

This is Invoke's version of fabric/fabric#800. tl;dr: allow limiting how much stdout/err is captured, for users who a) experience severe mismatches between the size of their subprocess' output streams and their system's available memory, and b) don't need to examine that output in full afterwards.

Unlike the notes at the bottom of that ticket, where we're using Fabric 1's RingBuffer, here in Invoke land we can just use collections.deque(xxx, maxlen=yyy).
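For illustration, a deque constructed with a maxlen silently discards its oldest entries once full, which is exactly the ring-buffer behavior wanted here (a minimal sketch, nothing Invoke-specific):

```python
# Minimal demo of the bounded-capture behavior described above.
from collections import deque

buf = deque(maxlen=3)
for chunk in ["one", "two", "three", "four"]:
    buf.append(chunk)

print(list(buf))     # ['two', 'three', 'four'] (oldest entry dropped)
print("".join(buf))  # joining into a single string works just like a list
```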

Also, because we have a somewhat richer (and publicly documented) Result class, we can (and should) add a flag to Result noting when the ring buffer is being limited, so users have at least a little warning when shooting themselves in the foot ("hey, I turned on this memory-saving feature I saw on StackOverflow, but now I only see the last 1024 bytes of stdout!").
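A rough illustration of what that flag might look like; the attribute name and constructor shape below are invented for this sketch, not Invoke's actual Result API:

```python
# Hypothetical sketch only, not Invoke's real Result API. It just shows the
# idea: carry a flag so callers know stdout/stderr may be truncated.
class Result(object):
    def __init__(self, stdout="", stderr="", buffer_limited=False):
        self.stdout = stdout
        self.stderr = stderr
        # True when capture used a bounded ring buffer, meaning stdout/stderr
        # only hold the tail end of what the subprocess actually emitted.
        self.buffer_limited = buffer_limited

    def __repr__(self):
        note = " (capture was size-limited)" if self.buffer_limited else ""
        return "<Result: %d bytes of stdout%s>" % (len(self.stdout), note)
```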

Specific TODO

  • Replace the list objects used for capture buffers with deque objects
  • Add the usual config/kwarg structures for a new option, e.g. capture_buffer_size (or similarly named)
  • Make use of that in setting maxlen for the deque (see the sketch after this list)
  • Probably emit a warning of some kind if it's being set to a too-small value (i.e. one insufficient to handle any keys in the prompt configuration).
    • Maybe even just raise an exception - having a non-empty prompt config implies expectation of a specific functionality, which the user has just probably-accidentally broken...
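A rough sketch of how those pieces might hang together; capture_buffer_size is the option name proposed above, but the config access and "responses" lookup are illustrative placeholders, not Invoke's actual internals:

```python
# Illustrative sketch only: the config access pattern and the "responses"
# lookup are placeholders, not Invoke's real internals.
from collections import deque
import warnings


def make_capture_buffer(config):
    size = config.get("capture_buffer_size")  # None => unbounded, as today
    if size is None:
        return deque()
    # Warn (or raise) if the buffer can't even hold the longest configured
    # prompt/response key, since auto-responding would silently break.
    responses = config.get("responses", {})
    longest = max([len(key) for key in responses] or [0])
    if size < longest:
        warnings.warn(
            "capture_buffer_size=%d is smaller than the longest response "
            "key (%d chars); prompt matching may never fire." % (size, longest)
        )
    return deque(maxlen=size)
```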
@bitprophet
Member

bitprophet commented Jun 5, 2017

While rereading this after receiving #451, I had a realization that did not occur to me at the time of posting - namely, are there any downsides to switching to a deque, for the common case of not limiting buffer size? I.e. what do deques sacrifice vs lists?

Going by the Python 2.6 docs, the only downside seems to be that random access in the middle of the structure trends towards O(n), and nearly everything else (appends, pops, etc.) is O(1) and/or otherwise an improvement over list performance.

So, sounds like the answer is "nope, deques are better in every way unless you're indexing into the middle", which is not necessary in this particular case (and by the time users get their paws on the result, we've concatenated it into a string anyways). But wanted to get this on record.
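For the record, a tiny sanity check of that claim, under the assumption that capture only ever appends chunks and joins them into a string at the end:

```python
# Assumption: capture buffers are only appended to, then joined into one
# string, in which case an unbounded deque is a drop-in list replacement.
from collections import deque

chunks = ["$ ls\n", "foo  bar\n", "$ "]

as_list, as_deque = [], deque()
for c in chunks:
    as_list.append(c)
    as_deque.append(c)

assert "".join(as_list) == "".join(as_deque)
```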

@bitprophet bitprophet added this to the 1.2 milestone Jun 28, 2018

lmiphay added a commit to lmiphay/invoke that referenced this issue Jul 13, 2018
