This updates the stop/kill signal logic to send the signal first to any child processes (recursively), then to the main command process. This allows you to properly stop commands that spawn child processes. This came up when using Scmd to programmatically run `rackup`: stopping the command would stop the main process, but the spawned rackup process would keep running and my script would hang. Note: this uses `which pgrep` and the `pgrep` sys command to look up the child processes. If those don't return success with sane output, the command will act like it has no child processes and behave as before. Note also that those sys commands are run using Scmd itself (so meta). However, the way they are called (executed w/ `run`) prevents any kind of infinite recursion.
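A minimal standalone sketch of the idea, assuming a procps-style `pgrep -P` is available. The helper names (`child_pids`, `send_signal`) are illustrative, not Scmd's actual API, and backticks stand in here for the Scmd-based lookups the real implementation uses:

```ruby
def child_pids(pid)
  output = `pgrep -P #{pid.to_i}`
  # if pgrep isn't available or fails, act like there are no children
  return [] unless $?.success?
  pids = output.split("\n").map(&:to_i)
  # recurse so grandchild processes get signaled too
  pids + pids.flat_map { |p| child_pids(p) }
end

def send_signal(signal, pid)
  # signal children first, then the main command process
  child_pids(pid).each { |p| ::Process.kill(signal, p) rescue nil }
  ::Process.kill(signal, pid)
end
```

Because a failed or missing `pgrep` just yields an empty child list, the signal logic degrades to the previous behavior.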
This test was randomly failing b/c it would time out. This ups the allowed time to minimize the chance of a timeout happening. That's OK here b/c the timeout isn't what's being tested; the streaming of a large amount of output is. We want to ensure there is enough time to stream all that data. /cc @jcredding
This updates the `Command` class to accept a hash of env variables. These are passed to the `popen4` call and are set as environment variables for the child process. This allows passing sensitive env vars like passwords without putting them in the command string.
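The mechanics are the same as Ruby's own process-spawning APIs, which accept a leading hash of env variables set only for the child process. A minimal sketch (the `SECRET_TOKEN` name and value are hypothetical):

```ruby
# env vars passed as a hash are set for the child process only; they
# never appear in the command string and don't leak into the parent ENV
env = { "SECRET_TOKEN" => "s3cr3t" }  # hypothetical var name/value
output = IO.popen(env, ["sh", "-c", "echo $SECRET_TOKEN"]) { |io| io.read }
```

The child sees `SECRET_TOKEN`, but the parent's `ENV` and the command string itself stay clean.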
* some minor cleanups to the gem meta (4451e18)
* some general cleanups and modernizations (#16)
* have `RunError` *optionally* take a custom backtrace (#17)
* read output from stdout and stderr while the cmd is running (#18)
* add a benchmark script to help measure the performance impact of changes (#20)
* use a "self-pipe" to handle cmd timeouts (#21)
* some minor tweaks to the setup (#22)

/cc @jcredding
This switches to defining a stop "self-pipe" for signaling when the child process has exited. Instead of looping in `wait_for_exit` and checking for a timeout every `WAIT_INTERVAL`, you just select on the stop pipe with a timeout (or without one). The thread streaming the child process output also checks whether the child process has exited; if it has, the stream thread writes to the stop pipe, which triggers the wait to end.
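A standalone sketch of the pattern. The names are illustrative, and a simple watcher thread stands in for the output-streaming thread:

```ruby
stop_r, stop_w = IO.pipe

# the streaming thread's role: write to the stop pipe once the child exits
child = ::Process.spawn("sleep 0.2")
watcher = Thread.new do
  ::Process.wait(child)
  stop_w.write(".")
end

# waiting for exit is now a single select on the stop pipe with an
# optional timeout, instead of polling every WAIT_INTERVAL
ready = IO.select([stop_r], nil, nil, 5)
timed_out = ready.nil?
watcher.join
```

The waiter sleeps in `IO.select` until either the pipe becomes readable (child exited) or the timeout elapses, with no polling interval in between.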
This adds a benchmark script that runs cmds multiple times and writes the results so we can know how a change impacts performance.
This updates the cmd to begin reading from stdout and stderr as soon as the cmd has started running. This is accomplished by starting a thread that reads `READ_SIZE` bytes from the io as long as the cmd is running. Data is read using `read_nonblock` and new data is detected using `IO.select`.

As before, calling `wait` if the cmd was started asynchronously is highly recommended (this either waits for the cmd to finish or times out). Either way, this ensures the read output thread is exited and the exitstatus is captured. This is handled automatically if the cmd is started synchronously using `run`.

The goal here is to prevent "deadlock" scenarios where a cmd can't finish b/c it has filled up one of the output streams. It also enables output "streaming" as output is now available for inspecting as soon as it is written. I've added deadlock tests that `cat` both a small and a big file. The big file is larger than 64k (the default buffer size of stdout). Both execute the cmd just fine without deadlocking and timing out on the `.wait(1)`. This test fails without these changes.

To accomplish this, I also remodeled the command's "run data" as a "child process" object. This is a more appropriate name for the data and behavior being encapsulated. It provides the same access to the child process' data and lets us abstract child process handling behavior into this object.

Finally, this switches to _not_ stripping any read output. This is necessary b/c we are now streaming output, but also good b/c we want to preserve the output exactly as the cmd produced it.
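The read loop can be sketched roughly like this. It's a standalone approximation: the `READ_SIZE` value is an assumption, and the real version runs in a thread per stream while the cmd is alive:

```ruby
READ_SIZE = 10240  # illustrative; the actual value may differ

# stream a child's stdout: IO.select waits until data (or EOF) is
# available, read_nonblock drains up to READ_SIZE bytes at a time
out = IO.popen(["sh", "-c", "printf 'hello\\nworld\\n'"])
output = +""
loop do
  IO.select([out])
  begin
    output << out.read_nonblock(READ_SIZE)
  rescue IO::WaitReadable
    next  # select woke us but no data was ready; wait again
  rescue EOFError
    break # stream closed; child is done writing
  end
end
out.close

# note: output is not stripped -- it's preserved exactly as produced
```

Because the pipe is drained continuously, the child can never block on a full stdout/stderr buffer, which is what caused the deadlocks.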
This updates the RunError custom exception so it can optionally be created with a custom backtrace. If none is provided, it falls back to the `caller`. This makes it behave like any other exception class.
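A minimal sketch of the idea (assuming `RunError` subclasses `RuntimeError`; the actual superclass may differ):

```ruby
class RunError < ::RuntimeError
  def initialize(message, backtrace = nil)
    super(message)
    # use the given backtrace, or fall back to `caller` like any
    # other exception class would
    set_backtrace(backtrace || caller)
  end
end
```

This lets callers re-raise a cmd failure with the backtrace of the original call site instead of the internals that detected the failure.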