Make npm run-script completion faster with `jq` #4241

Merged
merged 3 commits into fish-shell:master on Jul 26, 2017

Conversation

4 participants
@thalesmello
Contributor

thalesmello commented Jul 25, 2017

Description

When jq is available, it's actually faster to invoke jq and parse the package.json than
to invoke the npm command.

Also, prior to this commit, both __fish_complete_npm and __fish_npm_run were being run
whenever completions for the npm run subcommand were requested, which duplicated work
(the npm command was invoked twice). This pull request also makes completion
without jq faster.
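
For illustration, a minimal sketch of the jq approach (the function name below is made up for this example and is not necessarily the exact code in this pull request):

```fish
# Read the "scripts" object from package.json and print one "name<TAB>command"
# line per script; fish shows the part after the tab as the completion's description.
function __npm_run_scripts_with_jq --description 'Illustrative only'
    test -e package.json; or return
    jq -r '(.scripts // {}) | to_entries | map("\(.key)\t\(.value)") | .[]' package.json
end
```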

TODOs:

  • Changes to fish usage are reflected in user documentation/manpages.
  • Tests have been added for regressions fixed
  • User-visible changes noted in CHANGELOG.md
Make npm run-script completion faster with `jq`
@thalesmello
Contributor

thalesmello commented Jul 25, 2017

I'm actually not sure. Should there be tests for completions?

Also, should I check one of the other checkboxes?

@krader1961
Contributor

krader1961 commented Jul 25, 2017

Should there be tests for completions?

Ideally, yes. But at this time we do not have any unit tests specifically for completions. If you think you can implement unit tests for your completions, we would be happy to consider how that could be expanded to all completion scripts.

@thalesmello
Contributor

thalesmello commented Jul 25, 2017

@krader1961 Is there any API that could be used to:

  1. Type something to the terminal
  2. Invoke the completion for the current command
  3. Get the completions in a variable for assertion?
@faho
Member

faho commented Jul 25, 2017

Type something to the terminal

We do that via expect in our tests. It's a bit weird and can have some issues with timing, but in general it works. However, it's not necessary in this case because

Invoke the completion for the current command

complete -C"$commandline", e.g. complete -C"npm " can be used to print the completions. Note that the argument parsing is idiosyncratic here - there can be no space between the "-C" and the commandline (IIRC).

Get the completions in a variable for assertion?

complete -C is pretty much a regular command and can be used in a command substitution or similar.
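
For example, a rough sketch of such an assertion (assuming a package.json in the current directory; the "build" script name is hypothetical):

```fish
# Capture the offered completions for `npm run ` in a variable.
set -l completions (complete -C"npm run ")
# Each entry is "candidate<TAB>description"; check that a (hypothetical) build script is offered.
if string match -q 'build*' -- $completions
    echo "npm run completion offers the build script"
end
```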


The real problem with tests for completions is a different one:

  • We cannot guarantee that every single tool for every single completion is installed, so we need to allow for skipping tests (which we currently don't)

  • Completions often depend on a bunch of data (in fact that's the most interesting ones), so we'd need to ship that (e.g. a fake git repository for the git completions)

  • Tools often change their output based on configuration (which is generally okay), so we'd need to override that

Currently, this hasn't been done. I'd appreciate any good ideas.

Refactor npm.fish for code reuse
Created a function to handle both cases of npm run completion parsing, with or without `jq`.
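
A hedged sketch of the shape of that refactor (the function name and the fallback parsing are illustrative, not the exact code merged here): one function that lists the scripts, preferring jq when it is installed and falling back to the npm command otherwise.

```fish
function __fish_npm_run_scripts --description 'Illustrative helper, not the merged code'
    if command -sq jq; and test -e package.json
        # Fast path: let jq print the script names straight from package.json.
        jq -r '(.scripts // {}) | keys[]' package.json
    else
        # Fallback (simplified): `npm run` with no argument lists the scripts;
        # keep only the two-space-indented script-name lines and strip the indent.
        command npm run 2>/dev/null | string match -r '^  \S+$' | string trim
    end
end
```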
@thalesmello
Contributor

thalesmello commented Jul 26, 2017

@faho I've refactored my original pull request. Would you be so kind as to take a look?

We cannot guarantee that every single tool for every single completion is installed, so we need to allow for skipping tests (which we currently don't)

Regarding completion testing, would it be reasonable to use Docker as a dependency for testing the project?

By doing so, it would be possible to set up many different test environments, which would make it possible to test completions.

Also, I think it would be pretty straightforward to execute only one test script using Docker, or maybe skip completion testing in the main test suite. What do you think?
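
As a speculative illustration of that idea (the image name and test path below are made up, not an existing setup):

```fish
# Run the completion tests inside a pinned container so the environment is reproducible.
docker run --rm -v (pwd):/src -w /src fishshell/test-env fish /src/tests/completions.fish
```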

Completions often depend on a bunch of data (in fact that's the most interesting ones), so we'd need to ship that (e.g. a fake git repository for the git completions)

That could be accomplished with a fixtures directory. It would be complicated, but doable, I think.

Tools often change their output based on configuration (which is generally okay), so we'd need to override that

By using Docker, we could guarantee we're always testing the same environment.

Anyhow, all of this looks like a lot of changes for a simple pull request regarding a small completion improvement. Maybe we could forward this discussion to an issue?

@faho

faho approved these changes Jul 26, 2017

@faho faho merged commit a071dea into fish-shell:master Jul 26, 2017

1 check passed

continuous-integration/travis-ci/pr The Travis CI build passed

@faho faho referenced this pull request Jul 26, 2017

Test completion scripts? #4249 (open)

@faho
Member

faho commented Jul 26, 2017

Anyhow, all of this looks like a lot of changes for a simple pull request regarding a small completion improvement. Maybe we could forward this discussion to an issue?

Yes, please. See #4249.

@zanchey zanchey added this to the fish 2.7.0 milestone Aug 19, 2017
