[RDY] implement system() with pipes #978

Closed · 13 commits into master · 8 participants
aktau commented Jul 21, 2014

So this is attempt 2 with hopefully less code duplication. For attempt 1 and its discussion, see #807.

I would like to ask @tarruda to look over this code while it's WIP, because he knows the job architecture best, and in many cases I'm not at all sure I've made the best decisions. Please ignore the logging statements for now.

Preliminary results (thanks to @Shougo for the benchmark script):

let cnt = 50

let start = reltime()

for i in range(1, cnt)
  " 13MB file
  call system('cat song.flac')
endfor

echomsg 'system()         = ' . reltimestr(reltime(start))

I compile in release mode (otherwise the benchmark doesn't even work; yes, that's strange and it worries me too). Neovim master doesn't show this problem, so it's been introduced somewhere in these commits. I can only think of the ELOGs, but standard neovim also produces logs. (Fixed: it was the logs. Because the read stream, or libuv, only reads 1024 bytes at a time, the log was getting swamped and making my computer do very strange things. I even think I could input keys and exit vim while this was happening, because I'm returning too early from job_wait(), but @tarruda will have to confirm that. In reality I don't see how that can happen, though: system_out_cb shouldn't be called at all after the job exits.)

$ make CFLAGS='-flto -DNDEBUG -march=native' CMAKE_EXTRA_FLAGS='-DCMAKE_BUILD_TYPE=MinSizeRel' DEBUG=0

I run it with:

$ $VIM -u NONE -U NONE -E -c "source bench.vim"
  • Vanilla vim: ~12.5 sec (somehow, it's faster than master here, usually it isn't)
  • Neovim master (optimized): ~14 sec
  • Neovim on pipes (optimized): ~4 sec (quite a bit better, difference not very obvious for small commands)

I have no idea if that's the best way to do it. If someone knows a better way, I'm all ears.

To compare with some sort of baseline, I do:

# can't do "cat song.flac > /dev/null" because it seems cat is being clever and does nothing
# which is why I create a ramdisk...
$ diskutil erasevolume HFS+ 'aramdisk' `hdiutil attach -nomount ram://131072`
$ time (for run in {1..50}; do cat song.flac > /Volumes/aramdisk/song.flac; done)
0,10s user 1,37s system 92% cpu 1,592 total

EDIT: I would love to receive other people's benchmark results too.
EDIT 2: As some will notice, when compiling in DEBUG mode the benchmark doesn't work. I'm not sure why yet.
EDIT 3: it should be a little slower than the approach in #807, as the rstream has an intermediate buffer (which means redundant copying happens, the data gets copied once in #807, but twice in this PR). It's still better than writing and reading from a temporary file though.

aktau commented Jul 21, 2014

@tarruda I'm getting a whole lot of:

JOB READ ... 1024

when reading large files. That seems like a tiny amount of memory. I wonder if libuv is responsible for this, or whether we're choosing a tiny buffer ourselves. Is there any way to influence this from the job API?

justinmk commented Jul 21, 2014

One remaining edge case that requires temp files is purportedly to detect string encoding (see :help shelltemp). I'm not convinced temp files are required for this, but it's an edge case we need to test if we want to do away with temp files for system() and :%! and :r! filtering.

I've been waiting for an actual report of this issue and it just so happens that one appeared 2 days ago: tpope/vim-sensible#73 (comment)

I'm not sure if this PR even affects the "filter" routines of (neo)vim, but I wanted to mention this for future reference.

aktau commented Jul 21, 2014

AFAIK, 'shelltemp' doesn't have any effect on f_system at the moment (I think; I should probably check the old implementation).

And yeah, I believe filtering is another, separate thing.

EDIT: grepping for p_stmp usage shows it's only present in do_filter(), which calls call_shell, which in turn calls os_call_shell.

aktau commented Jul 21, 2014

@EPNGH you said on the vim-sensible issue tracker:

Here are two plugins that do not work with noshelltemp on gVim 7.4.364 and Windows 7 :
https://github.com/mileszs/ack.vim
https://github.com/dyng/ctrlsf.vim
With set noshelltemp, somehow they do not parse the correct line number of the match anymore.

Can you tell me the exact sequence of commands/actions you took to provoke the error? Does it also occur on UNIX (OSX), if you know? Otherwise it'll be difficult for me to test the issue when I get my hands on do_filter().

aktau commented Jul 21, 2014

Alright, everything seems to work properly now. People can start testing and benchmarking even before @tarruda gets to review the code. If you benchmark, please compile with make CFLAGS='-flto -DNDEBUG -march=native' CMAKE_EXTRA_FLAGS='-DCMAKE_BUILD_TYPE=MinSizeRel' DEBUG=0 or something similar (without LTO, it doesn't matter a lot), as the logging does slow things down a bit.

splinterofchaos commented Jul 22, 2014

Am I mistaken, or does this make the fish patch (7.4.276) unnecessary? It must, at least for my use cases, because syntastic still runs without error even after a sudo make install!

This might not have been the intended effect, but since not all shells use the same syntax (...or maybe it's just fish), using plain pipes is actually more generic. Keep up the good work! :-)

hkupty commented Jul 22, 2014

Well.. I did test with a really big file (1.3gb) and the output was:
vim: 102.64
nvim: 103.86
nvim-pipes: 188.84

While on small files (~5kb), it was:
vim: 0.53
nvim 0.25
nvim-pipes: 0.23

I compiled nvim-pipes with: make cmake CFLAGS='-DSTARTUPTIME' CMAKE_EXTRA_FLAGS="-DCMAKE_BUILD_TYPE=Debug -DCMAKE_INSTALL_PREFIX=/usr"

justinmk commented Jul 22, 2014

using just pipes actually is more generic. Keep up the good work!

@splinterofchaos Yes, legacy Vim builds literal shell command strings in make_filter_cmd, which is used by get_cmd_output. @aktau's PR eliminates get_cmd_output in the f_system function, so it benefits any plugin that uses system().

However patch 276 is still needed for components that use:

  • get_cmd_output
    • used by find_locales (we could possibly eliminate this usage)
    • used by expand_backtick (can't eliminate this)
  • make_filter_cmd
    • used by do_filter which is needed for vimscript :range! but not :!.
      • In Neovim, this is the only thing affected by 'shelltemp' option (p_stmp).
      • In legacy Vim, 'shelltemp' also affects mch_call_shell and mch_system
aktau commented Jul 22, 2014

Well.. I did test with a really big file (1.3gb) and the output was:

@hkupty That's a very unexpected performance regression, and pretty much the opposite of what I'm seeing on medium-sized files (13MB). I'll try with a huge file today on OSX; I'd like to get to the bottom of this. In the meantime, to rule out debug logging messing with the numbers, could you compile and test with:

$ make CFLAGS='-DNDEBUG -march=native' CMAKE_EXTRA_FLAGS='-DCMAKE_BUILD_TYPE=MinSizeRel' DEBUG=0

I ask because your command line probably still has debug logging enabled, which is very slow (it opens and closes the debug file on every log attempt, for example). (By the way, the -DSTARTUPTIME flag is no longer necessary; it was eliminated in an earlier PR that's already merged.)

aktau commented Jul 22, 2014

@hkupty I just tried myself with a 1.1GB file on OSX (I made sure to warm up the file with a throwaway run first, to make sure the OS has things cached and the comparison is fair):

bin time (seconds)
vim 137.94
nvim 186.29
nvim-pipes 12.63

Watching the memory/CPU behaviour in htop gives the expected results: nvim-pipes' CPU usage is high and its memory usage increases rapidly (as soon as cat starts), while the other two have low CPU and memory usage until cat is done, after which memory usage slowly climbs to the file size.

Curiously, nvim master seems consistently about 50% slower than vanilla vim.

Did you apply all the commits from this PR? The last two "SQUASH" commits disable some debugging output that was spamming the nvimlog so much with large files that it was grinding my PC to a halt.

EDIT: The best run I had, with an optimized nvim-pipes, was 11.14 seconds, pretty nice.

Konfekt commented Jul 22, 2014

Hello @aktau,

I described in tpope/vim-sensible#73 (comment)

how to reproduce it. I cannot test it under Linux though. @tpope cannot reproduce it.

aktau commented Jul 22, 2014

I described in tpope/vim-sensible#73 (comment)

Thanks @EPNGH, this will be good to test when we get windows compatibility.

int i;
Job *job = job_start(argv,
output ? &buf : NULL,
output ? system_data_cb : NULL,

oni-link commented Jul 22, 2014

If the job outputs data then read_cb in job.c tries to call this callback (no NULL check). Likewise for the next argument.

aktau commented Jul 22, 2014

Uh-oh. I had assumed that the job code wouldn't even open the pipes if I passed in a NULL handle. Do you think this is an oversight in the job code, or is it meant to be that way, @tarruda?

tarruda commented Jul 22, 2014

We can't inherit the parent's std{out,err} because it won't work for UIs over msgpack-rpc. For now, simply forward to nvim's stdout explicitly (if the "pipe" option is not used); later we will use a specialized log buffer for displaying output from child processes (ref #901).

You might need to create a struct that encapsulates the dynamic buffer and the os_system options, and pass it as the job data to handle the job callbacks correctly.

aktau commented Jul 22, 2014

We can't inherit the parent's std{out,err} because it won't work for UIs over msgpack-rpc.

I didn't mean inheriting the parent's (nvim's) stdout/stderr, I meant not opening the pipes at all. That way there could never be any data to read (and the process could also know that it's not expected to output anything). I'm not sure what the OS does if the process tries to write to such an FD. Perhaps it sends it to /dev/null efficiently.

You might need to create a struct that encapsulates the dynamic buffer and the os_system options, and pass it as the job data to handle the job callbacks correctly.

I'm really not sure why that would be necessary, it works fine as is. Can you give an example?

tarruda commented Jul 22, 2014

For now you could change the job module to redirect to /dev/null when a NULL callback is passed (the UV_IGNORE flag, or the process would hang when the OS buffers get full). But eventually I want to use os_system instead of os_call_shell, which is called by filter commands, and this would require displaying command output when no output argument is passed.

tarruda commented Jul 22, 2014

@aktau Simply assume output will never be NULL (right now it never is, because os_system is only called from f_system). We can use a separate PR to completely remove os_call_shell and make the necessary job.c adjustments.

aktau commented Jul 22, 2014

From the libuv source:

unix/process.c

  for (i = 0; i < stdio_count; i++) {
    pipes[i][0] = -1;
    pipes[i][1] = -1;
  }

  for (i = 0; i < options->stdio_count; i++) {
    err = uv__process_init_stdio(options->stdio + i, pipes[i]);
    if (err)
      goto error;
  }

static int uv__process_init_stdio(uv_stdio_container_t* container, int fds[2]) {
  // ...
  switch (container->flags & mask) {
  case UV_IGNORE:
    return 0;
  // ...
  }
}

  for (fd = 0; fd < stdio_count; fd++) {
    close_fd = pipes[fd][0];
    use_fd = pipes[fd][1];

    if (use_fd < 0) {
      if (fd >= 3)
        continue;
      else {
        /* redirect stdin, stdout and stderr to /dev/null even if UV_IGNORE is
         * set
         */
        use_fd = open("/dev/null", fd == 0 ? O_RDONLY : O_RDWR);
        close_fd = use_fd;

        if (use_fd == -1) {
          uv__write_int(error_fd, -errno);
          _exit(127);
        }
      }
    }

(If I haven't included enough context, it's all in the process.c).

I derive from this that if UV_IGNORE is passed, the default fd value (-1) is kept. In the uv__process_child_init function this means it will be redirected to /dev/null, as expected. At least on UNIX.

Now for windows (win/process-stdio.c):

    switch (fdopt.flags & (UV_IGNORE | UV_CREATE_PIPE | UV_INHERIT_FD |
            UV_INHERIT_STREAM)) {
      case UV_IGNORE:
        /* Starting a process with no stdin/stout/stderr can confuse it. */
        /* So no matter what the user specified, we make sure the first */
        /* three FDs are always open in their typical modes, e.g. stdin */
        /* be readable and stdout/err should be writable. For FDs > 2, don't */
        /* do anything - all handles in the stdio buffer are initialized with */
        /* INVALID_HANDLE_VALUE, which should be okay. */
        if (i <= 2) {
          DWORD access = (i == 0) ? FILE_GENERIC_READ :
                                    FILE_GENERIC_WRITE | FILE_READ_ATTRIBUTES;

          err = uv__create_nul_handle(&CHILD_STDIO_HANDLE(buffer, i),
                                      access);
          if (err)
            goto error;

          CHILD_STDIO_CRT_FLAGS(buffer, i) = FOPEN | FDEV;
        }
        break;

This is a bit murkier, but seems to indicate that it's fine.

UV_IGNORE seems to be just that, ignore. Which is exactly what we want. No?

aktau commented Jul 22, 2014

For now you could change the job module to redirect to /dev/null when a NULL callback is passed(UV_IGNORE flag or the process would hang when the OS buffers get full), but eventually I want to use os_system instead of os_call_shell which is called by filter commands, and this would require displaying command output when no output argument is passed

Perhaps I misunderstood you earlier and you already knew that UV_IGNORE redirects to /dev/null. But why would you pass NULL for output if you want to display the output? I don't fully understand.

tarruda commented Jul 22, 2014

UV_IGNORE seems to be just that, ignore. Which is exactly what we want. No?

I'm not sure what the Windows behavior is regarding full OS buffers.

In any case, we can't simply ignore output if os_system is going to be used in places other than f_system, mainly because the current behavior is to display the child process's output when kShellOptPipe is not set.

To correctly handle the no-kShellOptPipe scenario, some extra work is required: the os_system callback must read output from the child process (it will always open a pipe, even if NULL is passed as the output argument) and forward it to nvim's stdout from the read callback. Right now this can be done transparently by simply inheriting the file descriptors (it's what os_call_shell does), but it wouldn't work when we start using UIs through msgpack-rpc.

I suggest leaving this to another PR because some extra refactoring may be required.

aktau commented Jul 22, 2014

I'm not sure what's the windows behavior regarding full OS buffers.

If anything bad happens, I'd consider it a libuv bug. Ignore is ignore.

The os_system callback must read output from the child process(it always will open a PIPE even if NULL is passed as the output argument) and forward to nvim stdout from the read callback

Technically we could catch all the output and throw it to the screen after being done, like os_system does now, but that's not efficient. To support something like this (as in the current incarnation of vim/nvim), we'd indeed have to alter job_start to be able to override the FD's or something.

Ok I'll leave out the optionality of the output for now.

tarruda commented Jul 22, 2014

Looks good, but the terminal needs to be in cooked mode or ctrl+c will be ignored. The job_wait function seems like a good place for the settmode calls.

@tarruda I'm getting a whole lot of:

JOB READ ... 1024
When reading large files. Seems like a tiny amount of memory. I wonder if libuv is responsible for this, or us choosing a tiny buffer. Any way to influence this from the job api?

Yes, it's very small. I kept changing this to small values during development, but forgot to restore a proper value once the RStream code was properly tested. Please increase JOB_BUFFER_SIZE to 0xffff, which is the same value used by libuv.

The performance issue experienced by @hkupty might be related to the OS swapping pages, since vim loads the whole file into memory (e.g. it will probably be slow for a 1 GB file if the machine/VM has about 1 GB of RAM). I think benchmarking with a 200 MB file would be enough to start seeing different results.

aktau commented Jul 22, 2014

The performance issue experienced by @hkupty might be related to the OS swapping pages, since vim loads the whole file into memory (e.g. it will probably be slow for a 1 GB file if the machine/VM has about 1 GB of RAM). I think benchmarking with a 200 MB file would be enough to start seeing different results.

This should also have horrible performance on nvim master and vanilla vim, since they also read the entire file (the tempfile, that is) into memory. So I don't think that's it.

aktau commented Jul 22, 2014

@tarruda I added the settmode() calls and tried to interrupt a large file read. It crashes most of the time:

:[1]    1326 segmentation fault  ~/github/aktau/neovim/build/bin/nvim -u NONE -U NONE -E -c "source bench.vim"

Sometimes I can't do ^C; it just doesn't let me (^C appears on the terminal, yet it doesn't stop processing). This has actually happened the last 10 times, and now I can't trigger the segmentation fault anymore, which is frustrating. It seems as if cooked mode was disabled, somehow.

tarruda commented Jul 22, 2014

This should also have horrible performance on nvim master and vanilla vim, since they also read the entire file (the tempfile, that is) into memory. So I don't think that's it.

The difference is that for master, it first copies the data to a temporary file and the shell handles redirection (e.g. no swap has to happen while libuv is doing its job), while this branch duplicates the file into memory (so a 1 GB file would take about 2 GB of memory while the write is happening).

@hkupty could you try again with 200mb file?

aktau commented Jul 22, 2014

The difference is that for master, it first copies the data to a temporary file and the shell handles redirection (e.g. no swap has to happen while libuv is doing its job), while this branch duplicates the file into memory (so a 1 GB file would take about 2 GB of memory while the write is happening).

That's not the memory pattern I'm seeing. Where are the two 1 GB buffers? The rstream reads 1024 bytes at a time; I read it every time and append to the buffer, which grows to about 1.1 GB.

tarruda commented Jul 22, 2014

Sometimes I can't do ^C; it just doesn't let me (^C appears on the terminal, yet it doesn't stop processing). This has actually happened the last 10 times, and now I can't trigger the segmentation fault anymore, which is frustrating. It seems as if cooked mode was disabled, somehow.

Just in case, run ulimit -c unlimited (on Linux) to get a core dump on the next segfault.

That's not the memory pattern I'm seeing. Where are the two 1GB buffers? The rstream reads 1024 at a time, I read it every time and append to the buffer, which grows to about 1.1GB.

It seems I misunderstood the benchmark; I thought @hkupty was sending a 1 GB file through a pipe (which would require duplicating the input string).

aktau commented Jul 22, 2014

@tarruda I added the settmode() calls and tried to interrupt a large file read. It crashes most of the time:

To reply to my own question: I think the segmentation fault is caused by job_wait() returning immediately after receiving got_int == 1. Some read operation(s) will usually happen after that, but since os_system has already returned, the stack-allocated dyn_buffer_t doesn't exist anymore. I need to make job_wait() wait until the job actually exits. Somehow. Any suggestions?

EDIT: alternatively, if I could disable read/write callbacks after job->stopped, that would probably be even better. What do you think, @tarruda?

aktau commented Jul 22, 2014

@tarruda the latest commit ("wait for job after job_stop()") seems to stop the segmentation faults. I've also been unable to trigger the stray '^C' appearing anymore. I'm on OSX, by the way.

Increasing JOB_BUFFER_SIZE from 1024 to 0xFFFF (65535) bytes speeds up reading large files a lot. Debug build:

system()         =   4.038310 (used to be 12.63)

This is a consistent speedup, by the way.

The only cost is a 65535-byte (64 KB = 0.0625 MB) buffer for every job now. I think modern systems can deal with that :).

aktau added a commit to aktau/neovim that referenced this pull request Jul 22, 2014

job: increase JOB_BUFFER_SIZE to 0xFFFF
It used to be 1024 bytes, which is very tiny and slows down some operations
(imagine `cat`-ing a large file). Benchmarks show a large speedup for such
cases. ref neovim#978.

For modern systems 0xFFFF bytes (65535 B = 64 KB = 0.0625 MB) per job
shouldn't be a big problem.
while (1) {
  // If I understand it right, this also returns when the rstreams return
  // something from the process. Will its read_cb still be called?
  bool status = event_poll(ms, sources);

tarruda commented Jul 22, 2014

Use a different name here? It's unclear which variable is set when we get to the job->pending_refs == 0 block.

aktau commented Jul 22, 2014

Indeed; after adding int status because of the cooked-mode support, I hadn't thought of that.

aktau commented Jul 22, 2014

fix pushed.

hkupty commented Jul 22, 2014

@aktau @tarruda, I just noticed how much my benchmarks were influenced by the debug build (I was reusing the make cmake command from my shell history). This kind of makes sense on my setup. I'll rerun the tests and post here.

hkupty commented Jul 22, 2014

I changed the benchmark a little to run 25 iterations, but using two files: a small one (124 MB) and a big one (the 1.3 GB file).

Vim:
Small: 7.978944
Big : 62.696032

Nvim:
Small: 7.838033
Big : 50.658148

Nvim-pipes (pre JOB_BUFFER_SIZE increase, debug off now):
Small: 7.702076
Big : ?¹

Nvim-pipes (HEAD):
Small: 1.495673
Big : 13.328827

¹On both tests I did pre JOB_BUFFER_SIZE increase, one core of my CPU was pegged for the duration of the test, but no output would come out for the big file; instead, the screen was blanked/cleared.

Judging from what I got for the small file, I'd say that JOB_BUFFER_SIZE influenced the results more than DEBUG.

aktau commented Jul 22, 2014

¹On both tests I did pre JOB_BUFFER_SIZE increase, one core of my CPU was pegged for the duration of the test, but no output would come out for the big file; instead, the screen was blanked/cleared.

That's frustrating... that CPU-pegging + no-output issue occurred for me too, but it was due to deferred output logging. You took the commit immediately before "increase JOB_BUFFER_SIZE", right? Does it show CPU load even when the cat process has exited, i.e. even when you can :q again?

At least here, without the JOB_BUFFER_SIZE increase, it was already way faster for large outputs. And that would be completely logical, as nvim master and vanilla vim are effectively doing cp largefile /tmp/largefile; cat /tmp/largefile versus nvim-pipes' plain cat largefile. Does memory usage climb steadily while cat is running? I'm wondering what the story is.

Judging from what I got for the small file, I'd say that JOB_BUFFER_SIZE influenced more than DEBUG..

It provides a very nice speedup, but it shouldn't be everything.

(for cases like this, it's better to just do rm -rf build && make, which automatically calls cmake if necessary).

hkupty commented Jul 22, 2014

Yes, it was 17914c7.
As soon as the prompt appeared for me to :q, the CPU immediately dropped back to normal.
While CPU usage was high, memory usage wasn't enough to start swapping, but it still rose by 1-1.3 GB while running the tests. It'd still be around 30% memory usage at peak...

aktau commented Jul 22, 2014

While CPU usage was high, memory usage wasn't enough to start swapping, but it still rose by 1-1.3 GB while running the tests.

So you did get a prompt, but nothing was printed out and CPU dropped to 0... and the memory got released as well? (I.e. the process appears inactive for all you know?) Just to be 100% sure, can you look at tail -f ~/.nvimlog when that's happening?

It'd still be around 30% memory usage at peak...

You mean 30% of your system RAM, right? Meaning you have about 4 GB installed?

aktau commented Jul 22, 2014

Note to self and all: we need to change the system() documentation, which specifically states that pipes are not used (among other things):

system({expr} [, {input}]) system() E677
Get the output of the shell command {expr} as a string. See
systemlist() to get the output as a List.
When {input} is given and is a string this string is written
to a file and passed as stdin to the command. The string is
written as-is, you need to take care of using the correct line
separators yourself.
If {input} is given and is a List it is written to the file
in a way writefile() does with {binary} set to "b" (i.e.
with a newline between each list item with newlines inside
list items converted to NULs).
Pipes are not used.

hkupty commented Jul 22, 2014

I probably expressed myself poorly when I said 'the prompt appeared'... I was able to :q out of the blank screen.

Yes, all the memory got released back to normal usage.
I pasted the log file for a new run of the tests here: http://pastebin.com/0dierrf1

I have 5.7 GB installed, but normal usage is ~500 MB. While running these tests, usage went up to a peak of 1.8 GB, so I roughly calculated those numbers.

aktau added some commits Jul 22, 2014

job: increase JOB_BUFFER_SIZE to 0xFFFF
It used to be 1024 bytes, which is very tiny and slows down some operations
(imagine `cat`-ing a large file). Benchmarks show a large speedup for such
cases. ref #978.

For modern systems 0xFFFF bytes (65535 B = 64 KB = 0.0625 MB) per job
shouldn't be a big problem.
rstream: remove 'reading' struct member
Not necessary, as discussed in #980.

From the libuv mailing list:
https://groups.google.com/forum/#!topic/libuv/OD38PeGeVgQ

E.g. this could happen (read: on Windows):

> > alloc_cb(handle1);
> > alloc_cb(handle2);
> > read_cb(handle1);
> > read_cb(handle2);

But this couldn't:

> > alloc_cb(handle1);
> > alloc_cb(handle1);
> > read_cb(handle1);
> > read_cb(handle1);

Because each stream has a 1-to-1 correspondence with a libuv handle, the
code removed was never executed.

Closes #980.
test/formatc: improve 'inline' function handling
Apple seems to define some functions as `inline` but not `static` in
headers. The ghetto parser wasn't unbelievably happy with this.
test/helpers: allow interning Pascal strings
os_system() returns a Pascal string, for example (it also NUL-terminates the
string, but that's neither here nor there).
test/shell: add tests
- The calls to (partially) initialize logging
  need to go. Blocked on #981.
aktau commented Jul 27, 2014

Squashed some @oni-link fixups and rebased.

tarruda commented Jul 27, 2014

Great job @aktau, merging now

tarruda commented Jul 27, 2014

👍 merged, thanks

@tarruda tarruda closed this Jul 27, 2014

aktau added a commit that referenced this pull request Jul 27, 2014

job: increase JOB_BUFFER_SIZE to 0xFFFF

tarruda added a commit that referenced this pull request Jul 27, 2014

@aktau aktau deleted the aktau:system-on-pipes branch Jul 27, 2014

aktau commented Jul 27, 2014

Awesome, thanks!

goto done;
set_vim_var_nr(VV_SHELL_ERROR, (long) status);

#if defined(USE_CR)

justinmk commented Jul 27, 2014

@aktau This is for Mac OS classic (9) so it can be removed. USE_CR was removed by 01ca460

aktau commented Jul 27, 2014

Too late, already merged. Then again this would fit just as well in a PR that removes all USE_CR occurrences.

justinmk commented Jul 27, 2014

01ca460 did remove all occurrences.

aktau commented Jul 28, 2014

Damn, now I know. I copy-pasted the code from f_system in the first-try PR before I started editing. I'll submit another PR.

aktau commented Jul 28, 2014

Submitted PR #1002 for that.

aktau added a commit to aktau/neovim that referenced this pull request Jul 28, 2014

eval: re-remove USE_CR
It was already removed in 01ca460 and I erroneously introduced it again in
PR neovim#978.

@justinmk justinmk added the terminal label Jul 29, 2014

aktau added a commit that referenced this pull request Jul 30, 2014

eval: re-remove USE_CR #1002

@justinmk justinmk referenced this pull request Jul 31, 2014

Merged

[RFC] August Newsletter #58


fmoralesc added a commit to fmoralesc/neovim that referenced this pull request Aug 19, 2014

job: increase JOB_BUFFER_SIZE to 0xFFFF

fmoralesc added a commit to fmoralesc/neovim that referenced this pull request Aug 19, 2014

eval: re-remove USE_CR neovim#1002

@elmart elmart referenced this pull request Oct 30, 2014

Closed

[RDY] Fix python setup #1358

Grimy pushed a commit to Grimy/neovim that referenced this pull request Jan 7, 2015

dwb pushed a commit to dwb/neovim that referenced this pull request Feb 21, 2017
