Prompt/etc autoresponder #289

bitprophet opened this Issue Dec 18, 2015 · 30 comments



bitprophet commented Dec 18, 2015 edited

This represents porting Fabric 1.x's limited ability to autorespond to sudo prompts, to Invoke, and making it Better™.

For now, I've gone with a basic "when you see X, always respond with Y" functionality that can be easily configured via a {pattern: response} dict.

This isn't the most powerful implementation; e.g. one might want "only respond to patterns once", "respond to X pattern the first 3 times only", "respond to pattern X but only if you've already responded to pattern Y", etc etc.
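The basic dict-driven scheme, plus the "only respond N times" variants mentioned above, can be sketched roughly like this (class and method names are hypothetical, not Invoke's actual API):

```python
import re

class Responder:
    """Hypothetical sketch: scan only never-before-seen output for a
    pattern, optionally capping how many times we respond."""

    def __init__(self, pattern, response, max_hits=None):
        self.pattern = pattern
        self.response = response
        self.max_hits = max_hits  # None == "always respond"
        self.index = 0  # how much of the stream we've already scanned
        self.hits = 0

    def submit(self, stream):
        """Given the full captured stream so far, return replies to write."""
        new_data = stream[self.index:]
        self.index = len(stream)
        replies = []
        for _ in re.finditer(self.pattern, new_data):
            if self.max_hits is None or self.hits < self.max_hits:
                self.hits += 1
                replies.append(self.response)
        return replies
```

A {pattern: response} dict then simply maps onto a list of always-responding instances.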

However it's arguably more flexible than using e.g. pexpect (which requires you to anticipate exactly what prompts will happen, in what order; Invoke/Fabric use cases tend to have prompts appear arbitrarily) and I expect it will be enhanced in the future, perhaps by ripping out the responder aspects into their own class or something.

The functionality is already pretty much complete after a ~week of hacking, fucking up, getting confused, becoming less confused, and more hacking. I'm making this ticket just so I have something to link in the changelog & a checklist for the random minor tasks left to perform.

Left to do:

  • Deal with recently-discovered funkiness in interactive stdin use case
  • Make sure the basic functionality works for Fabric 2 as well as with local/subprocess
  • Reinstate the Windows support for handle_input, which mirrors real stdin to the subprocess' stdin.
  • Doublecheck whether we actually need Windows getch as per #304 (comment)
  • Figure out whether to retain the sleep call in handle_input; git-blame the Fab 1 line and doublecheck the rationale. We probably do want it back though; as-is the loop probably chews more CPU than it really needs to.
  • Reinstate the "print read stdin data to stdout when pty=False" behavior from fab 1 - it is necessary
  • Ditto the "halt stdin mirroring while the local codebase is explicitly trying to prompt for shit it needs" functionality from Fab 1.
    • ...does this mean Fab 1 literally can't deal with users manually executing things like raw_input and getpass? Cuz if it can then that implies said halting/locking might not have been necessary? Or is a corner case?
    • See notes in desc of #294 - probably not going to go down the same road as in Fab 1 which means we wouldn't have to worry about this wrinkle.
  • Centralize & normalize the encoding related noise, as per comments by @pfmoore in comments below.
    • Quite possibly via work/discussion being done over on #274 and its linked tickets
  • Try to replicate fabric/fabric#1339 (both in Fabric 1, and here, and in Fabric 2)
  • Figure out if it's worth implementing a(n easily disabled and well documented!) convenience "add \n to response values if not present" feature.
    • Ye olde "helpfulness can be evil"...
    • but in MY OWN TESTING I've fucked up and assumed an implicit \n multiple times already, so there's no way tons of users wouldn't make the same assumption.
    • Spun off as #368
  • Determine what's up with swallowed exceptions in the out/err/in threads when the main thread gets e.g. a KeyboardInterrupt. Feels like a thing we need to at least partially port from fab v1.
  • Python 3 test pass so Travis doesn't get mad at me
  • Ascertain why the test suite got slower again and how much of it's due to this branch
  • Double-check the doc situation; may want to make a "feature list!" section on index page?
  • Changelog entry
@bitprophet bitprophet added this to the 1.0 milestone Dec 18, 2015

Looks like I traded autoresponding for interactive stdin; now, stdin from the terminal user a) doesn't actually appear in Invoke's sys.stdin (& thus to the subprocess, after being replayed) until Ctrl-D (EOF), and b) that input is echoed automatically by the local terminal and by the subprocess' stdout, at least in some cases (Python subprocess running raw_input yes, regular call to eg rm -i no).

Reasonably sure I've just neglected to port additional terminal manipulation bits from fabric v1.

Poking more:

  • Noninteractive stdin works correctly, not really surprisingly, e.g. echo 'blah' | python - no double echo, etc (and multiline input shows up with 1st line eaten by e.g. raw_input).
  • Looked around fab v1 and sure 'nuff, we've been wrapping stdin reads with a "please be byte instead of line buffered" wrapper forever (using a combo of termios and tty libs).
    • The initial add of this was in fabric/fabric@b412733
    • No real info there besides "started using select". Meh.
    • It was made into a context manager, yet we only ever use it in the one place; reasonably sure there's no actual need to expose it as its own public method. (But having it as a contextmanager would save us some indent levels, so...yea.)

After a bunch of dumb futzing around, realized that we also need to read from stdin 1 byte at a time, as fab v1 does, for that to work. I'd actually been wondering earlier whether changing the stdin reader to the same 'chunked' size as the output streams was wise -- apparently it isn't!


Diddled around some more; I now have things looking pretty good under Python 2:

  • Can run a vim session reasonably well...
    • I can do editing, mode changes, navigation, etc
    • Can use plugins which redraw lots of stuff on the fly, like CtrlP (though trying to mash Enter to select the file doesn't work - but ^X to split-window on the file selected works).
    • Signals still need love. E.g. ^C, which is what I tend to use in vim instead of Esc, isn't passed through; neither is ^], which is useful for columnar editing.
      • Though whether we transmit signals down the pipe or hoard them for ourselves has always been a nontrivial question to answer. Think there's tickets open for that. Offhand thinking we can afford to always submit downpipe and then also honor if they have meaning for us too...but this can wait. Alpha.
  • Simple raw_input based prompts in subprocesses now work fine both via human interaction and the autoresponder.
  • getpass based prompts also work (when pty=True, as noted in the FAQ now).

Naturally, almost none of this shit works anymore under Python 3, and it's not immediately clear why. Hoping Ye Olde You Gotta Wrap A List Around An Iterator or similar easily fixed bullshit.

EDIT: ah yes, it wants bytes, not str. Duh. Of course. Does remind me that I have an exception-swallowing problem...


Exception issue is pretty funny (and yet again there's echoes from Fabric v1 past & some parts of its present...):

  • Main execution body spins off threads, waits for completion of subprocess, then joins the threads & checks them for stored exceptions.
  • The exception-storing is effected by swallowing BaseException and storing sys.exc_info(). I don't remember exactly why we do this.
    • IIRC it's so one has the ability to see all exceptions that happen, instead of the first one to pop off "winning" and potentially hiding the rest.
    • But that seems mildly specious to me right now; sure, it's nice? But as we've seen, it means they get swallowed forever if anything prevents the stored exception from being dug up later...
  • Normally, a subprocess will end of its own volition, so wait() completes, the threads are joined and the exceptions raise -- all good.
  • But when the subprocess is waiting for input, it sits there -- forever. Our local process sits in wait() until killed, and the exception-raising never happens since it's performed later on.
  • Because the bug(s) causing these exceptions have been happening in the stdin-handler, it means the user's attempt to satisfy the subprocess' prompt cannot succeed. Thus it's literally impossible to exit without just killing the process.

So, it's kind of a perfect-storm, but still annoying / has made hacking on this stdin functionality a PITA. I'm guessing the exception handling is there for a reason. EDIT: yea, a code comment mentions fabric/fabric#208, tl;dr "thread exceptions don't bubble up to the main thread, so they print out but cause the rest of the system to hang/get gummed up/time out". Makes sense.

I also note that this specific comment there is an eerie echo of our current problem.

For now, I think the best solution is to do something that seems sensible anyways - log the exceptions to the debug log when they occur, as well as saving for later raising. This means somebody paying close attention will see that it's going on.

I don't see any other way out of this that isn't reverting to the apparently-problematic "just let exceptions happen" behavior.


With that bug fixed, vim works better under Python 3 but there's still some oddball stuff going on at startup, some terminal chimes & refusal to honor the first few keystrokes; but after that, works as well as Python 2. Might poke, but at some point I need to exit this rabbit hole...

Simpler use cases like input and getpass now work totally fine, too.


Shamelessly borrowing (and tweaking into a contextmanager) the approach from for measuring CPU in an integration test - this way we can actually A) prove that adding an IO sleep has a measurable difference and B) protect against regressions.

Downside, of course, is that the % of a CPU core used for literally anything, will vary system to system, but the difference between sleeping and not sleeping (and just for the stdin mirroring at present) is currently an order of magnitude (0.98% usage vs 98.84% usage) so I'm thinking just setting the test to assume <2%, <5% or similar will suffice.


Checking on just how slow I can get that test to run while yielding measurable results (because while it is the integration suite, I'd still rather not have a test chew cud for 5 seconds when 1 second will do)...

Starting with an arbitrary run of 100 cycles of "print to stdout and stderr and sleep 0.1s" (which means running for about 10 seconds of course):

  • With our stdin sleep enabled, CPU usage is: 0.91%, 0.71%, 0.73%
  • With it disabled: 97.9%, 96.3%, 99.1%

Shrunk to 10 cycles or ~1s:

  • Enabled: 1.17%, 1.26%, 1.33%
  • Disabled: 99.5%, 99.5%, 99.6%

Probably unsurprising is that the longer runtime makes both figures a tiny bit lower (probably as it amortizes startup time or something).

I figure for now, just saying "fail if it took more than 5% CPU" should more than do us, and do that 10-cycle/1-second run.


While poking briefly at how these changes affect #263 I notice that vim is actually semi borked in Python 2 as well. Seems almost intermittent (which is why I didn't catch it originally) - some sessions there's a single terminal bell and then it works "well", others everything seems lagged by one character.


Under Python 2, at least, the initial console alarm seems caused by one byte of read/written stdin, despite zero manual interaction performed after launching Invoke: \x1b, aka the character encoded by hex 1b, aka ascii 27, aka: Escape. I certainly wasn't hitting Escape, but that would yield a terminal bell in vim when it starts out and is in normal mode.

So, first, something is inserting an Escape, somewhere. Fun.

Second, when I hit a single keystroke afterwards - say, i for insert mode - I get another terminal bell and vim does not enter insert mode. On a subsequent i, insert mode occurs and things work pretty normally.

Debugging, I find a bunch more mirrored stdin occurs when I press i: one read/write byte action (due to how our loop is constructed) for each of: [>84;0;0ci (with the i being the character I really typed; e.g. if I repeat the process with a, it's [>84;0;0ca).

This looks distressingly familiar but I can't put my finger on it.

Normal troubleshooting steps to try and narrow down the source/cause of this mystery stdin:

  • Changing vim to less ~/.vimrc doesn't exhibit any of this oddity; only my real keystrokes appear in the debug log.
  • Ditto nano - everything works great, no extra input, things like ^X work, etc.
  • So this behavior is specific to vim, or rather, something vim is doing (that presumably other apps might do, or at least, if I can figure it out it'll give a hint to what we're doing "wrong" here). E.g. perhaps this is its terminal autodetection at work, or something like that.
  • vim /a/file behaves identically to vanilla, no-arguments vim
  • vim -T xterm-256color (aka, try overriding terminal detection) - no dice, same deal
  • :help terminal, under *raw-terminal-mode*, mentions that Vim sets the terminal into raw mode, sending strings t_ti and t_ks to the terminal. So I'm wondering if the data spat back is my actual terminal device or application responding to this automatically.
  • Certainly, on startup, my debugging notices a whole slew of escape code laden gobbledygook coming out of stdout, even when one sees and accounts for the actual stuff being drawn (e.g. the left-hand column of tildes)

While snooping around vim's settings trying to see if I can lessen the amount of data spat out on startup, I notice other mention of terminfo settings, which are all ^[ followed by various characters. Seems to back up my suspicion that this data is terminfo-related junk. Going to educate myself further in this area.
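The logged sequences ([>84;0;0c, [2;4R) look like terminal replies to queries the application emits at startup (device-attribute and cursor-position reports), which land on stdin without the user typing anything. If that's the culprit, a hypothetical filter over mirrored stdin might look like:

```python
import re

# Matches CSI-style replies such as ESC [ > 84;0;0 c (device attributes)
# or ESC [ 2;4 R (cursor position report). Illustrative, not exhaustive.
TERM_RESPONSE = re.compile(r"\x1b\[[>?]?[0-9;]*[A-Za-z]")

def strip_term_responses(data):
    """Drop terminal query replies, keeping actual keystrokes."""
    return TERM_RESPONSE.sub("", data)
```

(Note this would also eat real arrow-key escape sequences; it's a diagnostic sketch, not a fix.)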


Also of note: Fabric v1 doesn't actually have this particular problem; if one runs fab -H localhost --set="output_prefix=" -- vim, vim works right away with no fussing. Also worth investigating why, since at this point I thought we were doing pretty much everything Fabric v1 does wrt stdin mirroring.


Definitely not happening with Fabric, the only stdin received/mirrored is what I type, both on startup and on keypress.

Tried a few other obvious differences found from scanning code & output:

  • Disabled the self.encode on stdout-writing in Invoke - noticed my debugging was showing Fabric writing regular strings and Invoke writing unicode strings. Figured perhaps this was somehow garbling those questions Vim asks of my terminal or something. No diff.
  • Ran my shell env without tmux. No diff.
  • Ran things in a Linux VM in its VMware terminal (i.e.: not over ssh from my local terminal app); behavior was differentish yet similar:
    • No bell on startup (tho probably as no visual OR audible bell enabled for the VM) - but with no keypresses, logging shows the initial \x1b as with my local system.
    • Hitting i doesn't get mad, and does seem to trigger INSERT mode - but then it also types out another i too! And when I type a few more i's they appear "normally", but upon hitting Escape, INSERT mode clears and another bunch of i's appear that I didn't type (guessing as if I'd hit a number beforehand, in Vim parlance).
      • This is also intermittent, sometimes things work totally normally and there's no initial i written out nor any duplication.
    • Log output upon that initial i keypress shows, after the Esc, [2;4Ri. As with my localhost, this is a repeatable cycle, and the final character is whichever key was actually pressed.

Offhand, this seems to continue to implicate something regarding the terminfo dialogue and then vim's attempted behavior afterwards. If I can't find something obvious after actually reading up on terminfo/etc, I'll punt for now - this is not worth rabbit-holing when there's an alpha to be finished.


I found which includes the exact sequence my stdin is auto-sending on that initial keypress ([>84;0;0c). However I have no bloody idea how to interpret the file apparently intended to generate that. Git blame is useless as those and a bunch of others were all added wholesale with a boring commit message (tmux/tmux-tests@491b5cf).

Otherwise, nothing obvious is showing up; I'd need to spend a lot of time deep diving into terminal escape codes, terminfo, etc, to really grok what is going on here. Giving up for now. YES I'M DUMB SHUT UP 😡


Also as per my thoughts above and then having them mirrored in a recent comment on #263, chances are we should just allow disabling all of the 'middleware' (or at least stdin) when users are in this kind of highly interactive, won't care about autoresponding or capturing, use case.


Confirmed this works just fine for sudo prompt purposes via Fabric 2 (i.e. a subclassed, differently-implemented Runner):

prompt = "sudo you love me?"
run(
    "sudo -S -p '{0}' whoami".format(prompt),
    responses={prompt: "mysudopasshere\n"},
)


» fab -H myvm sudotest
sudo you love me?root

And works with pty=True, albeit with a nicer newline after the prompt.

And I can just drop-in use of and poof, it Just Works. Good sign for all the users wanting code that can be run identically locally or remote (granted, things like put need more love, but that should still be as easy as providing another Context subclass that does the needful there...)

What remains for sudo support is: how to actually implement the "store your password both globally (and in Fabric, per-server)" config, and how that ties into the vanilla run.responses configuration.

For example, it feels intuitive to make the config + runtime kwarg reconciliation merging instead of overwriting as it currently is - i.e. configuring run.echo = True serves as a default for run() when no echo kwarg is given. Overwriting for bools makes sense.

But one might expect that if they configure run.responses = {'something-to-always-respond-to': 'the-response'}, that calling run(command, responses={'a-specific-pattern': 'its-response'}) would seek out both something-to-always-respond-to and a-specific-pattern. Right now, that's not the case, only a-specific-pattern would be responded to.

For example it might be useful to implement sudo password support by just stitching e.g. sudo.prompt and sudo.password into run.responses in Context's setup, and rely on such a merging of configs to ensure the sudo prompt is automatically responded to.

However, that relies on each user knowing what their local (or remote) sudo prompt looks like, and configuring it (and in the remote case, each target may well be different). It's probably easier/cleaner to continue the approach of Fabric 1 and explicitly tell sudo what password prompt to use. Likewise, it's easier to have the sudo helper perform the merging of its runtime responses and the prompt/password key/value pair, such that its internal call to run is correct.

We could always implement actual merging later too, it would have no impact on the above.
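The merge-instead-of-overwrite behavior described above is nearly a one-liner (hypothetical helper name):

```python
def merged_responses(configured, per_call):
    """Layer per-call response patterns over configured defaults, so both
    always-on and call-specific patterns get responded to."""
    merged = dict(configured or {})
    merged.update(per_call or {})
    return merged
```

Per-call patterns win on collision, which matches the usual config-vs-kwarg precedence.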


Been poking at cleanup for this so I can merge it & kick 0.12 out the door.

Noting FTR that I shaved ~0.5-0.7s off the test suite run time (this sounds stupid as hell until you realize the runtime was ~2.7s and is now ~2.0s - almost a third savings) by making the IO sleep in stdin handling more configurable, then setting it much lower in the test suite.

Sadly 2.0s is still annoying, but I checked and it was around that prior to the work on this branch, meaning it's Not This Ticket's Fault. And realistically that's still pretty decent, if not Gary Bernhardt levels of speed.


Re: the vim thing: a former colleague noted today that, in fabric v1, run("vim") has roughly the same behavioral quirks as this implementation here, but open_shell("vim && exit") works much better.

I can't actually reproduce this on my end, insofar as I still have terminal bells post vim startup and it takes a few mashings of i before insert mode appears (though afterwards, things do work pretty well). But it may be useful to go look at what Paramiko's Channel.invoke_shell does vs .exec_command, since that is the core difference between the two use cases in Fabric.

E.g. I wonder if invoke_shell does different terminally things.


Added a checklist item re: pfmoore's commentary in #304 (see link just above this comment, thanks github!)

pfmoore commented Jan 23, 2016

I'm reviewing the bytes vs text handling in the code, as prompted by the discussion on #304. One question I have, in Runner.encode() there is a comment "Sometimes gives us bytes under Python 3...and sometimes it doesn't". I've never seen return anything other than bytes under Python 3, and it's documented explicitly as returning a bytestring. @bitprophet can you confirm when or how you saw return something other than a bytestring? If it did, that'd be a bug in Python.

Ideally, I'd like to rely on returning bytes, as it takes some bug-prone guesswork out of the code. I'd like to see the subclassing interface (write_stdin, read_stdout, read_stderr and default_encoding) defined strictly in terms of bytes encoded in the given encoding[1]. We could then centralise all of the conversion code in one place, and avoid the current situation where we have decode/encode pairs scattered through the code (something that in my experience tends to be a major source of encoding bugs).

Is this a practical proposition? I'm happy to provide a PR making those changes if it is.

(By the way, the _Dummy runner subclass defined in the tests violates this, as it returns strings. Those strings are empty, so it's not likely to be an issue in practice unless something does type tests, but it's indicative of how easy it is to get mixed up, particularly when dealing with Python 2/3 compatible code).

I appreciate that this feels like an obsession with "purity" over "practicality", but I've dealt with a number of projects seeing Windows/Python3 encoding issues, and they have always boiled down to not getting the conversion boundaries right, so I do think that this is a case where clear separation of concerns is worth it.

[1] It would also be reasonable to define these interfaces in terms of text, and drop the need for subclasses to provide an encoding at all - but that's a much bigger change to the subclassing API, and not likely to be worth the disruption for users.

pfmoore commented Jan 23, 2016

Suggestion for refactoring Reader.handle_stdin:

def handle_stdin(self, input_):
    for chars in read_chars_unbuffered(input_):
        self.write_stdin_text(chars)  # wrapper round encode + self.write_stdin
        if self.program_finished.is_set() and not chars:
            break

Then read_chars_unbuffered can be defined as a platform specific function that yields (unicode) characters one at a time as soon as they become available. It wants to return Unicode, not bytes, because we can't guarantee that our stdin and the subprocess' stdin are using the same encoding, so we need to transcode. On Unix, this would use the existing select approach:

def read_chars_unbuffered(input_):
    use_select = isatty(input_)
    with character_buffered(input_):
        while True:
            chars = None
            # "real" terminal stdin needs select() to tell us when it's
            # ready for a nonblocking read().
            if use_select:
                reads, _, _ =[input_], [], [], 0.0)
                ready = bool(reads and reads[0] is input_)
            # Otherwise, assume a "safer" file-like object that can be read
            # from in a nonblocking fashion (e.g. a StringIO or regular
            # file).
            else:
                ready = True
            if ready:
                # Read 1 character at a time for interactivity's sake.
                chars =
                # Short-circuit if not using select() and appeared to hit
                # end-of-stream.
                if not use_select and not chars:
                    break
                # We yield the (unicode) characters here. Note that input_ will be
                # opened in text mode, so that in Python 3, we get a string (not bytes).
                # Python 2 text mode returns an 8-bit string (ugh, Python 2 :-() so we
                # decode it so we don't have to deal with the ugliness elsewhere.
                # This is so ugly it should probably go into a Python 2/3 compatibility function.
                if isinstance(chars, six.binary_type):
                    # Seriously, Python 2 - how are we supposed to know the encoding
                    # of these bytes???
                    if hasattr(input_, 'encoding'):
                        if hasattr(input_, 'errors'):
                            chars = chars.decode(input_.encoding, input_.errors)
                        else:
                            chars = chars.decode(input_.encoding)
                    else:
                        chars = chars.decode()  # Sigh. Assume the default encoding
                yield chars
            # Take a nap so we're not chewing CPU.
            time.sleep(0.01)

This is still ugly, but (IMO) it puts the ugliness where it belongs, in the low-level and platform-specific read_chars_unbuffered, while leaving the higher level handle_stdin loop that feeds our stdin onto the subprocess, looking clean so that it's easy to see the logic.



I've never seen return anything other than bytes under Python 3, and it's documented explicitly as returning a bytestring. @bitprophet can you confirm when or how you saw return something other than a bytestring? If it did, that'd be a bug in Python.

Traced it back to 00cc68a but unfortunately I didn't record the specifics and Github doesn't make it easy to ask "hey what ticket comments did I file around this date?". It's possible it was due to something stupid I did (maybe even the test harness). That said...see next.

Ideally, I'd like to rely on returning bytes, as it takes some bug-prone guesswork out of the code. I'd like to see the subclassing interface [...] to be defined strictly in terms of bytes encoded. We could then centralise all of the conversion code in one place

+1 on this; that sort of scattered-about crap is one of the things this framework was created to try and avoid, but it always sneaks in. Here in particular, I suspect it's due to the emphasis on responsibility-defined classes - but we retain a small number of util functions, which read_chars_unbuffered or similar could potentially join.

Re: the existing encoding/no-encoding/re-encoding stuff in general, hopefully a centralized treatment of the IO boundary will improve more than it breaks, and I'm confident any real issues causing my above "sometimes is funny under Py3" change, will re-raise if necessary.

I appreciate that this feels like an obsession with "purity" over "practicality"

Eh, that's the name of the game for this whole project, it's an exercise in balancing the two. I agree that for a topic as sensitive and easy to screw up as IO/encoding, getting it right is important and the sooner it happens / the easier it's made, the better.

I appreciate your detailed input & patience with my "spent a decade privileged to only care about Python 2 on POSIX" ways, as usual :)

I'm going to examine your proposed patch above & wrap my head around the "make all the IO stuff care only about bytes" idea & what else might be needed for it, then push commits re: such. I want to at least take an organizational stab at it now so I can continue mucking with the rest of this ticket; then we can collaborate on additional fixes within the newly centralized encoding bits.

I may also merge the current state of this ticket's branch & release it soon, so it at least gets the getch/etc stuff out the door. Better to bump things up from "totally broken" to "degraded but works sometimes" instead of sitting around longer 'til "perfect".


Merged 289 branch to master, then merged #306 but appveyor is being stinky (see #305). May release as-is, depends how fast I can resolve the other open issues for 0.12.1.

pfmoore commented Jan 30, 2016

I appreciate your detailed input & patience with my "spent a decade privileged to only care about Python 2 on POSIX" ways, as usual :)

Cheers. It always feels more to me like "you've been writing working useful code for years and someone new comes in and tells you you're doing it all wrong", so I'm glad it doesn't come across that way :-)

I'll take a look at those appveyor issues over the weekend, to see if I can work out what's going wrong. It's a bit odd as I did have appveyor tests working on my branch (although the PRs I supplied were only ever parts of what I had, so the final merged result must have not been quite what I thought I'd sent).

pfmoore commented Jan 30, 2016

I'm fairly sure the issue is that you're using msvcrt.getch() in read_byte, which is (a) wrong, and (b) not going to support mocking out stdin like the tests do (as it ignores input_ on Windows).

I'm just testing a quick fix that just uses on all platforms. I'm mostly certain that's the correct thing to do (it's what I did in my PR) but as current master still has the bytes/text confusion, doing this is at best a stopgap to keep things working till we have the bytes/text stuff sorted out properly.

pfmoore commented Jan 30, 2016

OK, you want PR #316. It switches back to for all platforms, but leaves the read_byte wrapper in so we can review later. I've also added an explanatory comment describing why you don't want to touch msvcrt.getch() with a bargepole in this context :-)

Tests worked on my clone, just waiting for them to run on the PR.


Merged #316 as above, thanks again Paul!

Added another checklist item re: Paul's notes about encoding so I don't forget. Blew a bunch of stupid days wrestling with's just too easy to lose context in this business when that sort of small side tangent becomes not-small :(

@bitprophet bitprophet modified the milestone: 0.12.2, 1.0 Feb 4, 2016
@bitprophet bitprophet modified the milestone: 0.12.2, 0.12.3 Feb 8, 2016
@bitprophet bitprophet modified the milestone: 0.13, 0.12.3 Mar 29, 2016

(Will get back to this pretty soon, once I wrap up some paramiko stuff...)

Note to self: another decent "semi-interactive fancy things" test (for this, or generally) is to run a modern pip with a big requirements.txt, its spinner & progress bar act funky on fabric 1 for example.

bitprophet commented May 22, 2016 edited

Ran into #274 independently recently, re: docs.tree (from invocations) causing UnicodeDecodeError normally and hanging when --pty is enabled. Looks like this behavior started in the 0.11 line, it goes away once one gets back to 0.10.

So clearly the encoding related item left open here, and #274, and doubtless others, need solidifying. I've got some reading and catching up to do.

EDIT: remembered why this was even notable: we have a *#$%! test for this exact case and it's obviously not failing. It needs doublechecking :(


Welp, it's because that test hides output, and the decode error only happens when, you know. You decode. Some text. Which only happens when the stored bytes data gets printed (by...not hiding) or otherwise stringified (e.g. I can get it to happen simply by print(run('cat tree.out', hide=True)), then the __str__ of Result - which includes a representation of the stdout - is the part that explodes).

I'm guessing the earlier errors the test captured were surfacing regardless of actual printing, and things changed since; otherwise it's a Very Bad Test.

For now, simply adding a coerce-to-string line (not using print) suffices to make it fail correctly.

@bitprophet bitprophet modified the milestone: 0.13, 0.14 Jun 11, 2016
bitprophet commented Jul 20, 2016 edited

Noted in #374: IPython as of 5.x is now even more broken under Invoke (rather, as of recent Invoke, presumably). Another complex use case to test alongside vim and pip's progress bars.

Ideally, I'll solve the couple open checkboxes here and open a new damn issue for "we have powerful autoresponse now, let's fix the stuff that broke".


Spun off more tickets for the remaining bits here, this is done after all the overhaul work from #294 / #369, yay this.

@bitprophet bitprophet closed this Sep 23, 2016
@bitprophet bitprophet modified the milestone: 0.16 Feb 14, 2017