
feat: redirect output stream to os.devnull via parameters of Popen #203

Merged
BainanXia merged 1 commit into develop from redirect_output_stream
Jun 20, 2020

Conversation

@BainanXia
Collaborator

@BainanXia BainanXia commented Jun 17, 2020

Purpose

This branch is a clean-up of the recent working branch nohup_execute, addressing the disconnection issue we encountered during scenario runs.

We conducted a series of test runs to explore possible solutions for removing the requirement of an active SSH connection between the local client and our remote server (Zeus). The following two approaches, which redirect the output stream to os.devnull, have proved to work:
a)

cmd = [
    'ssh', username+'@'+const.SERVER_ADDRESS,
    'export PYTHONPATH="%s:$PYTHONPATH";' % path_to_package,
    'nohup', 'python3', '-u',
    '%s' % path_to_script,
    self._scenario_info['id'],
    '</dev/null >/dev/null 2>&1 &'
]

b)

from subprocess import Popen, DEVNULL

cmd = [
    'ssh', username+'@'+const.SERVER_ADDRESS,
    'export PYTHONPATH="%s:$PYTHONPATH";' % path_to_package,
    'python3',
    '%s' % path_to_script,
    self._scenario_info['id']
]
process = Popen(cmd, stdin=DEVNULL, stdout=DEVNULL, stderr=DEVNULL)

After reading through the documentation of subprocess.Popen, the stdin, stdout, and stderr parameters of Popen do the following:

stdin, stdout and stderr specify the executed program’s standard input, standard output and standard error file handles, respectively. Valid values are PIPE, DEVNULL, an existing file descriptor (a positive integer), an existing file object, and None. PIPE indicates that a new pipe to the child should be created. DEVNULL indicates that the special file os.devnull will be used. With the default settings of None, no redirection will occur; the child’s file handles will be inherited from the parent. Additionally, stderr can be STDOUT, which indicates that the stderr data from the applications should be captured into the same file handle as for stdout.

Effectively, there is no difference between a) and b). In both cases, we create a child process whose output stream is redirected to os.devnull, and any lower-level process (the Julia script) inherits this property, suppressing communication with the local client that started the process.
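A minimal local illustration of the DEVNULL and PIPE settings described above (a sketch using echo in place of the real ssh command):

```python
from subprocess import Popen, PIPE, DEVNULL

# With DEVNULL, the stream goes to os.devnull and no stream object
# is attached to the Popen handle at all.
quiet = Popen(["echo", "hello"], stdout=DEVNULL, stderr=DEVNULL)
quiet.wait()
print(quiet.stdout)  # None

# With PIPE, a new pipe to the child is created and its output can
# be read back from Python.
piped = Popen(["echo", "hello"], stdout=PIPE)
out, _ = piped.communicate()
print(out)  # b'hello\n'
```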

Closes #278.

What is the code doing

Implement approach b) above.

Validation

This branch has been tested via Scenarios 746 and 747. Both were created locally on a Mac laptop with the VPN on, and then went into the queue on Gurobi Cloud. Right after they were submitted, the local VPN was cut off. Both scenarios finished and were extracted successfully.

Time to review

5 to 30 min, depending on how deeply one wants to dive into the documentation of the subprocess module.

@BainanXia BainanXia requested review from danielolsen and rouille June 17, 2020 08:42
@BainanXia BainanXia self-assigned this Jun 17, 2020
@BainanXia BainanXia added this to the Times A Changin milestone Jun 17, 2020
@rouille
Collaborator

rouille commented Jun 17, 2020

Thanks for reading the documentation. I think b) is the better of the two options since we are using Popen anyway.

@danielolsen
Contributor

I agree that both '</dev/null >/dev/null 2>&1 &' and stdin=DEVNULL, stdout=DEVNULL, stderr=DEVNULL redirect to /dev/null on the local machine running the ssh command, and (b) is the cleaner, more explicit option.

Initially, I had thought that all text after the 'ssh' command would get executed on the remote server, and therefore (a) would be redirecting text on the server, rather than on the local machine. But this is not true, as you can see from running ssh you@zeus.intvenlab.com echo "foo" >~/bar.txt: bar.txt appears on your local machine.

Therefore, I still don't understand how/why this solves the problem of the remote process trying to write text and failing on a broken pipe caused by a disconnection of the ssh between our local machine and Zeus. Is ssh smart enough to detect when its output is going to /dev/null, and to tell the remote processes to direct output to /dev/null there instead of sending text over the internet to get discarded locally?

Don't we also still want/need the nohup? I have been running Eastern scenarios exclusively with (a), so I am hesitant to lose anything from that solution.

@BainanXia when you tested (b) on 746 and 747, how long did you keep the VPN off? Long enough to get beyond the 10-15 minutes of buffering that we've observed?

@rouille
Collaborator

rouille commented Jun 17, 2020

@danielolsen, try running this:

from subprocess import Popen, DEVNULL

cmd = ['ssh', 'dolsen@zeus.intvenlab.com', 'echo "foo" > ~/bar.txt']

process = Popen(cmd, stdin=DEVNULL, stdout=DEVNULL, stderr=DEVNULL)

bar.txt will be created in $HOME on zeus.

@danielolsen
Contributor

@rouille you are right. The same happens if we redirect to PIPE rather than DEVNULL: we still get the file on the server.

In that case, maybe there is a difference between (a) and (b)? Because (a) seems to do the redirection of stdout to /dev/null on the remote server, and (b) seems to only control redirection of whatever comes back to the local machine?

@rouille
Collaborator

rouille commented Jun 17, 2020

With PIPE the standard streams are attached to the process, i.e., you can get them via:

(v1) [~/REM/PostREISE/postreise] (update_init_files) brdo$ python -i ~/Desktop/test_popen.py 
PID: 17737
>>> stdout = process.stdout.read()
>>> print(list(filter(None, stdout.decode("utf-8").splitlines())))
[]

There is nothing to read here because the standard output has been redirected to a file on the server, but the stream object is still accessible. With DEVNULL, it is not:

(v1) [~/REM/PostREISE/postreise] (update_init_files) brdo$ python -i ~/Desktop/test_popen.py 
>>> stdout = process.stdout.read()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'read'

@danielolsen
Contributor

You're right, it's more than just redirection of the text that comes back to the local machine, there are other differences as well in what the Popen call returns.

Solution (a) was still using PIPE.

@BainanXia
Collaborator Author

BainanXia commented Jun 17, 2020

@danielolsen When I tested 746 and 747, I cut off my VPN right after the jobs were submitted and never reconnected until they were finished and extracted via my remote PC in the office.

@rouille
Collaborator

rouille commented Jun 17, 2020

I would say that a) and b) are equivalent since the only thing that seems to matter is redirecting the standard streams to devnull. a) shortcuts b), so setting the keywords to PIPE or DEVNULL doesn't matter, and I believe the nohup in a) has no effect since Popen handles the SIGHUP. My 2 cents.

@danielolsen
Contributor

I would say that a) and b) are equivalent since the only thing that seems to matter is redirecting the standard streams to devnull. a) shortcuts b), so setting the keywords to PIPE or DEVNULL doesn't matter, and I believe the nohup in a) has no effect since Popen handles the SIGHUP. My 2 cents.

What do you mean 'Popen handles the SIGHUP'?

@BainanXia
Collaborator Author

BainanXia commented Jun 17, 2020

@danielolsen Try ssh you@zeus.intvenlab.com 'echo "foo" > ./bar.txt'; this will create the file on the remote server. The quotes "protect" the command and its redirection so that it is not evaluated by the local shell.

If we dive into the source code of subprocess.Popen (https://github.com/python/cpython/blob/3.8/Lib/subprocess.py), we can see that Popen applies the stream settings through os-level file descriptors, equivalent to the way the shell interprets redirections in a command.

@danielolsen
Contributor

@BainanXia I have tried that one before :)

For the user's purposes, (a) and (b) are equivalent, and from your testing it seems that (b) works, but I still don't understand why redirecting the stdout from the local ssh call to the local /dev/null avoids the condition where the disconnection of the ssh pipe causes the remote Julia process to fail because it cannot write to it.

@rouille
Collaborator

rouille commented Jun 17, 2020

I would say that a) and b) are equivalent since the only thing that seems to matter is redirecting the standard streams to devnull. a) shortcuts b), so setting the keywords to PIPE or DEVNULL doesn't matter, and I believe the nohup in a) has no effect since Popen handles the SIGHUP. My 2 cents.

What do you mean 'Popen handles the SIGHUP'?

Popen allows communication between the parent process and its child processes using signals (SIGTERM, SIGKILL, ...). If the ssh connection is broken then the communication is lost and the parent process has no control over the child process. This means that the kill, terminate, poll, and send_signal methods of the Popen class will be ineffective. I believe that if Popen's std(out)(in)(err) are set to PIPE, then if the connection is broken the job on Zeus will automatically be killed because the streams cannot be transported. If those are set to DEVNULL, Popen does not expect anything back.
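For reference, the process-control methods mentioned here look like this in a purely local setting (a minimal sketch; the handle can only control the immediate child, not a grandchild running behind a broken ssh session):

```python
from subprocess import Popen, DEVNULL

# Start a long-running child and drive it through the Popen handle.
child = Popen(["sleep", "60"], stdout=DEVNULL, stderr=DEVNULL)
print(child.poll())      # None -- still running
child.terminate()        # sends SIGTERM to the child
child.wait()
print(child.returncode)  # -15 on POSIX: terminated by SIGTERM
```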

@danielolsen
Contributor

I believe that if Popen's std(out)(in)(err) are set to PIPE, then if the connection is broken the job on Zeus will automatically be killed because the streams cannot be transported. If those are set to DEVNULL, Popen does not expect anything back.

Automatically killed by who? Popen cannot kill the job because the connection is broken. If the job is killed by sshd on the server reporting a broken pipe because it cannot communicate with ssh on the laptop, then that should happen regardless of where the local ssh process is sending the text locally, right? Unless somehow sshd is told to not try to communicate with ssh. The only way I can see that happening under the hood is if ssh somehow detects that it is going to be sending to /dev/null locally and then tells sshd 'don't bother sending me any text back, I won't be storing it anyway'.

@rouille
Collaborator

rouille commented Jun 17, 2020

I believe that if Popen's std(out)(in)(err) are set to PIPE, then if the connection is broken the job on Zeus will automatically be killed because the streams cannot be transported. If those are set to DEVNULL, Popen does not expect anything back.

Automatically killed by who? Popen cannot kill the job because the connection is broken. If the job is killed by sshd on the server reporting a broken pipe because it cannot communicate with ssh on the laptop, then that should happen regardless of where the local ssh process is sending the text locally, right? Unless somehow sshd is told to not try to communicate with ssh. The only way I can see that happening under the hood is if ssh somehow detects that it is going to be sending to /dev/null locally and then tells sshd 'don't bother sending me any text back, I won't be storing it anyway'.

Don't you think that setting the streams to DEVNULL tells the child process (ssh) to not communicate the streams to the parent process?

@BainanXia
Collaborator Author

I believe that if Popen's std(out)(in)(err) are set to PIPE, then if the connection is broken the job on Zeus will automatically be killed because the streams cannot be transported. If those are set to DEVNULL, Popen does not expect anything back.

Automatically killed by who? Popen cannot kill the job because the connection is broken. If the job is killed by sshd on the server reporting a broken pipe because it cannot communicate with ssh on the laptop, then that should happen regardless of where the local ssh process is sending the text locally, right? Unless somehow sshd is told to not try to communicate with ssh. The only way I can see that happening under the hood is if ssh somehow detects that it is going to be sending to /dev/null locally and then tells sshd 'don't bother sending me any text back, I won't be storing it anyway'.

Don't you think that setting the streams to DEVNULL tells the child process (ssh) to not communicate the streams to the parent process?

The remote child process uses the default settings, so its file handles are inherited from its parent process. This is documented in my first post.

@danielolsen
Contributor

Don't you think that setting the streams to DEVNULL tells the child process (ssh) to not communicate the streams to the parent process?

In my mind I see five generations of processes. Python/Popen -> ssh (local) -> sshd (server) -> Python (server) -> Julia (server). Note: anything Julia prints (before the stdout is redirected to a file) skips Python and goes straight to sshd.

If sshd is trying to communicate back to ssh, but can't because that connection is broken, then I don't see how it matters whether ssh would send the text to Python or not.

The remote child process uses the default settings, so its file handles are inherited from its parent process. This is documented in my first post.

I can see how Popen would apply to the first parent -> child relationship between Python and ssh, but not how it would automatically apply to the others.

@rouille
Collaborator

rouille commented Jun 17, 2020

@BainanXia

I believe that if Popen's std(out)(in)(err) are set to PIPE, then if the connection is broken the job on Zeus will automatically be killed because the streams cannot be transported. If those are set to DEVNULL, Popen does not expect anything back.

Automatically killed by who? Popen cannot kill the job because the connection is broken. If the job is killed by sshd on the server reporting a broken pipe because it cannot communicate with ssh on the laptop, then that should happen regardless of where the local ssh process is sending the text locally, right? Unless somehow sshd is told to not try to communicate with ssh. The only way I can see that happening under the hood is if ssh somehow detects that it is going to be sending to /dev/null locally and then tells sshd 'don't bother sending me any text back, I won't be storing it anyway'.

Don't you think that setting the streams to DEVNULL tells the child process (ssh) to not communicate the streams to the parent process?

The remote child process uses the default settings, so its file handles are inherited from its parent process. This is documented in my first post.

@BainanXia, does this imply that any streams produced by call.py or its subprocesses are directed to /dev/null on Zeus when you set std(in)(out)(err) to DEVNULL in Popen?

@jenhagg
Collaborator

jenhagg commented Jun 17, 2020

I'm fairly confident that a remote process can't be a child of a local process; this wouldn't make sense given that the OS kernel has to manage process state locally. The remote parent here is just sshd (whose parent is the remote init system), which executes the commands we send it as children. You can see this by running ps auxf on Zeus. The only thing I get out of this, though, is that if local stream redirection has any impact remotely, it would have to be something done by ssh itself (as opposed to Popen or os-level process management).

@rouille
Collaborator

rouille commented Jun 17, 2020

@jon-hagg, do you understand why both options seem to act similarly? I don’t know much about all of this.

@jenhagg
Collaborator

jenhagg commented Jun 17, 2020

@rouille yeah I'm going back and forth in my head a bit. Seems like there are some differences, so just to summarize -

  • first approach: nohup and redirection apply to remote process
  • second approach: nohup is omitted, and redirection applies to the local ssh process

Now that I think about it, I'm not sure why the second approach works, unless coincidentally. I'd still expect it to get a SIGHUP on disconnection, as before. Can't say much else at the moment; will update if I think of something useful.

@rouille
Collaborator

rouille commented Jun 17, 2020

It would be nice to understand what is going on.

@BainanXia
Collaborator Author

I think at this point it is a good opportunity to understand what fundamentally causes the disconnection issue. We start a local process via Popen which executes a remote script on the server as a remote process. While the script is running, it keeps producing text via the output stream and modifying files in a remote directory. The issue occurs when we cut off the SSH connection between the local client and the remote server during the run, which terminates the remote process as well. I presumed this should not happen in general and conducted the following simple tests:

I have the following scripts, helloworld.py and helloworld.jl, remotely on Zeus in my home folder:

helloworld.py:

import time

with open("helloworld_log", "w") as f:
    i = 0
    while i < 30:
        i += 1
        time.sleep(1)
        f.write('helloworld %d' % (i))
        print('helloworld %d' % (i))

helloworld.jl:

f = open("helloworld_log_julia.txt", "w")
for i in 1:30
    sleep(1)
    println(f, "helloworld: ", i)
    println("helloworld: ", i)
end
close(f)

I executed the following scripts locally on my Mac with the VPN on.

For Python:

from subprocess import Popen, PIPE, DEVNULL

cmd = ['ssh', 'bxia@zeus.intvenlab.com', 'python', 'helloworld.py']
process = Popen(cmd, stdout=PIPE)
stdout = process.stdout.read()
print(stdout)

For Julia:

from subprocess import Popen, PIPE, DEVNULL

cmd = ['ssh', 'bxia@zeus.intvenlab.com', 'julia', 'helloworld.jl']
process = Popen(cmd, stdout=PIPE)
stdout = process.stdout.read()
print(stdout)

Indeed, in both tests, whether or not I terminated the local process, the remote logs were complete, i.e., the remote processes finished successfully. The only difference was whether I could get the stdout printed locally on the terminal.

This raised the following question in my mind: during our script runs, do we have any functions that require the connection between the local client and the remote server? In other words, is there any function that tries to write to the pipe and terminates the remote process if it fails to do so?

@danielolsen
Contributor

This raised the following question in my mind: during our script runs, do we have any functions that require the connection between the local client and the remote server? In other words, is there any function that tries to write to the pipe and terminates the remote process if it fails to do so?

Currently, we are only writing to the pipe from the start of the Julia process until the process gets available capacity on the Gurobi cloud, and then for about a minute afterwards until all the input files are loaded/prepared. After that, REISE.jl redirects all further output to local (Zeus) files. See Breakthrough-Energy/REISE.jl#57. This REISE.jl feature was implemented around the same time as we started testing on the modifications to PowerSimData, which makes it a bit harder to draw solid conclusions.

@BainanXia, did Scenarios 746 and 747 have to queue for a while, or did they get cloud capacity right away?
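The REISE.jl behavior described above — redirecting further output to files local to the machine running the solver — can be sketched in Python. This is an illustration of the mechanism only, not code from REISE.jl (which is Julia); the file name run.log is hypothetical:

```python
import sys

# Early output still travels over the ssh pipe; after setup, re-point
# stdout at a file on the machine where the process runs, so later
# prints never touch the pipe at all.
print("setup output, travels over the ssh pipe")
log = open("run.log", "w")
sys.stdout = log
print("solver output, written to run.log on the server")
sys.stdout = sys.__stdout__
log.close()
```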

@BainanXia
Collaborator Author

This raised the following question in my mind: during our script runs, do we have any functions that require the connection between the local client and the remote server? In other words, is there any function that tries to write to the pipe and terminates the remote process if it fails to do so?

Currently, we are only writing to the pipe from the start of the Julia process until the process gets available capacity on the Gurobi cloud, and then for about a minute afterwards until all the input files are loaded/prepared. After that, REISE.jl redirects all further output to local (Zeus) files. See intvenlab/REISE.jl#57. This REISE.jl feature was implemented around the same time as we started testing on the modifications to PowerSimData, which makes it a bit harder to draw solid conclusions.

@BainanXia, did Scenarios 746 and 747 have to queue for a while, or did they get cloud capacity right away?

They had to queue for a while until I spun up another machine.

@danielolsen
Contributor

How long were they in the queue? Before we started redirecting the Julia output, we were seeing writes to the ssh pipe still succeeding for 10-15 minutes after the connection had been killed. Did they queue (and keep printing) longer than that?

@rouille
Collaborator

rouille commented Jun 17, 2020

And we did not have any problem with REISE...

@BainanXia
Collaborator Author

@danielolsen I waited about 10 minutes before I spun up a new machine. Hence, they actually stayed in the queue for about 10 minutes.

@BainanXia
Collaborator Author

@rouille Any suggestions on what else I could test with my helloworld scripts to reproduce the disconnection issue without touching the real framework?

@danielolsen
Contributor

And we did not have any problem with REISE...

If you want to try to expand the capabilities of MATPOWER/MOST, be my guest 😛

Any suggestions on what else I could test with my helloworld scripts to reproduce the disconnection issue without touching the real framework.

Try counting to 3600 instead of 30? If the sshd process will buffer for ~15 minutes, printing for an hour might be enough to trigger a complaint.

@rouille
Collaborator

rouille commented Jun 17, 2020

And we did not have any problem with REISE...

If you want to try to expand the capabilities of MATPOWER/MOST, be my guest 😛

@danielolsen we should not look back, but it is very strange that it does not happen for MATLAB + MATPOWER/MOST + GUROBI. I think that if it breaks over 3600 s it would be worth implementing the helloworld in MATLAB and seeing what happens.

@danielolsen
Contributor

And we did not have any problem with REISE...

If you want to try to expand the capabilities of MATPOWER/MOST, be my guest 😛

@danielolsen we should not look back, but it is very strange that it does not happen for MATLAB + MATPOWER/MOST + GUROBI. I think that if it breaks over 3600 s it would be worth implementing the helloworld in MATLAB and seeing what happens.

If we want to be rigorous, let's try both matlab helloworld.m as well as calling a python script that uses the matlab module to call the script, like we were doing with REISE.

@BainanXia
Collaborator Author

Interesting observation: the 1-hour tests (counting to 3600) failed for both the Python and Julia scripts. Python stopped at 3398 whereas Julia stopped at 3346. Will do a 2-hour test now.

@rouille
Collaborator

rouille commented Jun 18, 2020

Just to make sure, you ran 4 tests, one for Julia for options a and b and one for Python for options a and b. Correct?

@BainanXia
Collaborator Author

Just to make sure, you ran 4 tests, one for Julia for options a and b and one for Python for options a and b. Correct?

Nope, only two tests, for Python and Julia respectively. Neither a) nor b), but our original setup, with which we encountered the disconnection issue. The purpose of the tests is to reproduce the disconnection issue without our scenario framework, to understand what actually causes the problem.

@rouille
Collaborator

rouille commented Jun 18, 2020

Just to make sure, you ran 4 tests, one for Julia for options a and b and one for Python for options a and b. Correct?

Nope, only two tests, for Python and Julia respectively. Neither a) nor b), but our original setup, with which we encountered the disconnection issue. The purpose of the tests is to reproduce the disconnection issue without our scenario framework, to understand what actually causes the problem.

Ok, thanks for refreshing my memory. When did you start the tests? STG did some maintenance from 7pm onward yesterday affecting the VPN.

Now you are doing a 2h test with no connection interruption. What do you plan to do next?

@BainanXia
Collaborator Author

Just to make sure, you ran 4 tests, one for Julia for options a and b and one for Python for options a and b. Correct?

Nope, only two tests, for Python and Julia respectively. Neither a) nor b), but our original setup, with which we encountered the disconnection issue. The purpose of the tests is to reproduce the disconnection issue without our scenario framework, to understand what actually causes the problem.

Ok, thanks for refreshing my memory. When did you start the tests? STG did some maintenance from 7pm onward yesterday affecting the VPN.

Now you are doing a 2h test with no connection interruption. What do you plan to do next?

Yesterday around 5:30. I'm not sure whether the tests were affected by the STG maintenance since I cut off my connection to Zeus (as well as the VPN) right after I started the tests. I presume the maintenance won't shut down the machines in the lab and hence should have had no impact on the tests. Given the results were not as expected, I ran another 2h test to check again.

The whole loop involves five components: the local client that connects via ssh to Zeus, sshd on Zeus, the Python caller that calls either the MATLAB or Julia script, the script that starts the Gurobi client and submits jobs to Gurobi Cloud, and Gurobi Cloud sending results back. We know the issue is caused by something that keeps trying to write streams to the ssh pipe between the local client and Zeus, and that terminates the process if it fails to do so, but we are not sure what it is. I think we need to check those components one by one until we locate the function/script responsible.

@danielolsen
Contributor

We know there is some sort of buffer in writing to the SSH pipe, as evidenced by the jobs not immediately failing once the SSH is disconnected, and we know that the buffer has some sort of limit, because the pipe eventually does fail. We have previously seen this happen after 10-15 minutes, but maybe it is not time-based, but size-based. That might explain the longer time for the helloworld scripts to fail (1 short line per second), compared to the simulation engine (more output).
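The size-based-buffer hypothesis can be probed locally by writing into a pipe whose reader stops consuming (a sketch standing in for the ssh process going away mid-run; the exact byte count past the reader's share depends on the kernel's pipe buffer size):

```python
import subprocess

# The reader consumes exactly 100000 bytes and then exits, like an ssh
# process that disappears mid-run. The writer keeps going until the
# kernel pipe buffer fills and the write fails with a broken pipe.
reader = subprocess.Popen(["head", "-c", "100000"],
                          stdin=subprocess.PIPE, stdout=subprocess.DEVNULL)
written = 0
try:
    while True:
        reader.stdin.write(b"x" * 1024)
        reader.stdin.flush()
        written += 1024
except BrokenPipeError:
    pass
reader.wait()
print(f"wrote {written} bytes before the pipe broke")
```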

@BainanXia
Collaborator Author

Test update: the 2h tests conducted yesterday ended up at 1270 for Python and 998 for Julia, whereas 7200 is the expected number. Given that both the 1h and 2h tests failed at some random point, with or without connection interruption (STG maintenance), can we conclude that an active SSH connection is necessary with the standard stream option PIPE? If so, why could REISE survive?

@danielolsen
Contributor

Test update: the 2h tests conducted yesterday ended up at 1270 for Python and 998 for Julia, whereas 7200 is the expected number. Given that both the 1h and 2h tests failed at some random point, with or without connection interruption (STG maintenance), can we conclude that an active SSH connection is necessary with the standard stream option PIPE? If so, why could REISE survive?

My best theory so far is that there is an undocumented feature within MATLAB (or the python interface to MATLAB) that wraps around the stdout printing: either catching and ignoring the broken pipe error, or with its own internal buffer, or something like that. I bet we could replicate this with a try/except wrapper around the print statements in Python/Julia, and the log file would get to 7200 as expected.

Based on everything I've read so far on running commands over SSH, they should be expected to fail if the SSH tunnel does, and the fact that they did not with REISE is the exception, rather than the rule.
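The try/except wrapper theorized above could be sketched like this (an assumption about the mechanism, not code from any of the repositories; safe_print is a hypothetical helper):

```python
import sys

def safe_print(*args):
    """Print, but survive a broken stdout pipe (e.g. a dead ssh tunnel)."""
    try:
        print(*args)
        sys.stdout.flush()
    except BrokenPipeError:
        # Swallow the error so the computation keeps running even though
        # nobody is listening on the other end anymore.
        pass

for i in range(1, 31):
    safe_print("helloworld", i)
```

With prints wrapped this way, the helloworld loop should reach its final count even after the pipe breaks, which would let the log file get to 7200 as expected.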

@rouille
Collaborator

rouille commented Jun 19, 2020

Should we go with option a and call it a day?

@rouille
Collaborator

rouille commented Jun 19, 2020

I think we should merge this PR with option a. @danielolsen and @jon-hagg have strong arguments in its favor, and I agree that the explicit formulation of the command guarantees that the redirection of stdout and stderr happens on the server.

It does not mean that we give up on b; we can carry out further testing to understand why it works and, if we decide it is the cleaner option, switch to it later. It would be nice to have this PR and #194 merged so @jon-hagg can blacken the entire package and activate the linter.

@BainanXia
Collaborator Author

@rouille Sure, I will switch to a) in my next commit. I will also create an issue to track the further tests we would like to carry out.

Contributor

@danielolsen danielolsen left a comment


Thanks for your patience as we continued to discuss root causes.

@BainanXia BainanXia force-pushed the redirect_output_stream branch from 922cb09 to 54fec3f Compare June 20, 2020 08:16
@BainanXia BainanXia merged commit 1a8e6c6 into develop Jun 20, 2020
@BainanXia BainanXia deleted the redirect_output_stream branch June 20, 2020 08:59
@ahurli ahurli mentioned this pull request Mar 11, 2021
