
Autocompletions randomly stopping completely (or very slow) #566

KareemAlSaudi-RUG opened this issue Mar 23, 2021 · 9 comments

@KareemAlSaudi-RUG

KareemAlSaudi-RUG commented Mar 23, 2021

I'm unsure why, but jupyter-lsp seems to hang at random. I can't get it to run for more than a few seconds in JupyterLab. Often I can get it to work once or twice, then all functionality stops completely: it no longer highlights other occurrences of a variable, continuous autohinting doesn't work, and pressing Tab for suggestions returns nothing at all. The only thing that still works is error highlighting via pyflakes and/or flake8.

I'm not sure exactly how to reproduce this, or even how to get it working before it hangs. If any more information is needed from my end, please do let me know; as it stands I'm extremely confused.

I'm using JupyterLab 3.0.12 in a Conda environment with the latest jupyterlab-lsp version on Windows 10.

@krassowski
Member

krassowski commented Mar 23, 2021

  1. Have you tried the python-language-server fork described in the readme (https://github.com/krassowski/jupyterlab-lsp#installation), or jedi-language-server, instead of the default python-language-server?
  2. Could you provide the contents of the example notebook that you are seeing this issue with?
  3. Could you provide the output of the jupyter lab --debug from a fresh session reproducing this issue?
  4. Could you provide the logs from the browser developer console?

@krassowski
Member

Also, have you disabled kernel-side Jedi as described in the readme?
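For reference, the readme's approach is the IPython magic shown below; putting the equivalent line in `ipython_config.py` makes the change persistent. The profile path shown is the IPython default and may differ on your system:

```python
# In a notebook cell (applies to the current session only):
%config Completer.use_jedi = False

# Or persistently, in ~/.ipython/profile_default/ipython_config.py:
c.Completer.use_jedi = False
```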

@KareemAlSaudi-RUG
Author

KareemAlSaudi-RUG commented Mar 23, 2021

> 1. Have you tried python-language-server fork described in the readme: https://github.com/krassowski/jupyterlab-lsp#installation or jedi-language-server instead of the default python-language-server?
> 2. Could you provide the contents of the example notebook that you are seeing this issue with?
> 3. Could you provide the output of the jupyter lab --debug from a fresh session reproducing this issue?
> 4. Could you provide the logs from the browser developer console?
  1. I haven't but I'll try them both now and see if that remedies the problem.
  2. Sure, but you'll have to give me a second to get back to you on that.
  3. The output of jupyter lab --debug can be found here.
  4. I'm not sure what you mean by logs here, could you elaborate?
  5. I have disabled kernel-side Jedi using %config Completer.use_jedi = False

Edit: I've narrowed down the issue slightly: while using jedi-language-server (and now testing on pyls), the slow-down/hang generally comes from attempting to autocomplete on a pandas dataframe. I'm not sure why, but so far this is what seems to be causing it.

Edit 2: With either pyls or the pyls fork, attempting to autocomplete on a pandas dataframe causes autocomplete to hang completely (it stops working for every other line of code as well) until it retrieves suggestions for the dataframe line, which seems to be taking AGES.
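For anyone trying to reproduce this, a minimal notebook in the spirit of the report might look like the sketch below. The dataframe from the original report was not shared, so its shape and column names here are assumptions:

```python
import numpy as np
import pandas as pd

# A moderately large dataframe, roughly in the spirit of the one described
# in this thread (the actual dataframe was not shared; sizes are assumptions).
df = pd.DataFrame(
    np.random.default_rng(0).standard_normal((100_000, 5)),
    columns=["a", "b", "c", "d", "e"],
)

# In JupyterLab, typing `df.` in the next cell and pressing Tab is what
# triggered the slow/hanging completions reported in this thread.
df.head()
```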

@krassowski
Member

Great!

  • Could you provide a reproducible example?
  • What are your pandas, jedi, and Python versions?
  • Does it also happen with numpy?
  • Are you sure that the pyls fork is picked up (i.e. not the jedi-language-server, and not the old python-language-server)? To make sure, I would uninstall the others, install the fork, and restart JupyterLab.
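The version question above can be answered from a notebook cell with a short stdlib-only snippet like this one (the package list is an assumption; add or remove names to match your environment):

```python
import sys
from importlib.metadata import version, PackageNotFoundError

def report_versions(packages=("pandas", "jedi", "jedi-language-server")):
    """Return one line per requested package, plus the Python version."""
    lines = [f"Python {sys.version.split()[0]}"]
    for pkg in packages:
        try:
            lines.append(f"{pkg} {version(pkg)}")
        except PackageNotFoundError:
            lines.append(f"{pkg} not installed")
    return lines

print("\n".join(report_versions()))
```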

I do not see anything particularly odd in the debug output from the command line (3). For point 4, see:

generating-developer-logs-in-chrome

@KareemAlSaudi-RUG
Author

> Great!
>
> • Could you provide a reproducible example?
> • What are your pandas, jedi, and Python versions?
> • Does it also happen with numpy?
> • Are you sure that the pyls fork is picked up (i.e. not the jedi-language-server, and not the old python-language-server)? To make sure I would uninstall others, install the fork and restart jupyterlab.
>
> I do not see anything particularly odd in the debug output from command line (3). For point 4 see:
>
> generating-developer-logs-in-chrome

  1. I've attached a few screenshots illustrating the issue (note that this is with jedi-language-server, although the issue presents itself with both pyls and the pyls fork):
    a) Numpy works fine. [screenshot 1]
    b) Pandas works fine. [screenshot 2]
    c) Calling a generated dataframe, on the other hand, is painfully slow. [screenshot 3]
    d) The dataframe in question. [screenshot 4]

  2. I've attached a copy of my environment.yml; I'm currently using pandas 1.2.3 on Python 3.8.8. As for Jedi, I generally let Conda manage dependencies, and the version differs depending on whether I've installed jedi-language-server, pyls, or the pyls fork. Currently I've got jedi-language-server installed alongside Jedi 0.18.
    [attachment: environment.txt]

  3. It doesn't happen when calling numpy directly (nor when calling pandas directly), strictly when calling an already-created pandas dataframe. I'm wondering whether it has anything to do with the dataframes I'm working with being quite large, although I don't see why that would matter.

  4. Yep, I'm 100% sure that the fork is being picked up. Every time I switched language servers I completely uninstalled the previous one in favour of the new one I was testing.

  5. Console output when attempting to autocomplete a pandas dataframe: [screenshot 5]
    [attachment: localhost-1616578912559.log]

@droctothorpe

I ran into a similar problem (intermittent failures that would recover after a length of time).

I resolved it by modifying my Dockerfile (I'm using JupyterHub) to install as follows:

```
RUN mamba install 'jupyterlab-lsp=3.5.0' 'jupyter-lsp-python=1.1.4'
```

@sky-cloud

I ran into a similar problem. Finally I disabled completion from LSP, and now the completion speed is much better. My configuration in Settings > Advanced Settings Editor > Code Completion is as follows:

```json
{
    "continuousHinting": true,
    "showDocumentation": true,
    "disableCompletionsFrom": ["LSP"],
    "kernelCompletionsFirst": true,
    "waitForBusyKernel": false,
    "theme": "material"
}
```

By the way, following the instructions on Notebook-optimized Language Servers, I installed both jedi-language-server and pylsp.

My environment:
CentOS 7.6, Python 3.8.7 built from source, JupyterLab 3.2.5, ipython 7.30.1, jupyter-server 1.13.1, jupyterlab-lsp 3.9.3.

Hope that is helpful.

@mdforti

mdforti commented Aug 3, 2022

I came across the same issue under JupyterLab 3.4.4, with jupyterlab-lsp 3.10.1 installed via conda, although I didn't try the workarounds mentioned at the beginning of this issue. Autocompletion also stopped after trying to complete methods of a pandas dataframe (actually a small one), and the problem vanishes after disabling lsp.

@krassowski
Member

In #851 @fperez wrote:

> I also noticed tab-completion being slower than usual, to the point where it interferes with the typing workflow (to save typing, tab completion really needs to keep up with ~70 wpm speeds, which means its latency cap should be very, very tight).

I will expand and update my earlier replies on this topic.

Tab completion is slower with default settings because we wait for both the LSP server and kernel completions. You can get much faster completions by:

  • (IPython only) disabling Jedi in IPython, since it returns a subset of what LSP will return anyway; see point 5 in the readme: https://github.com/jupyter-lsp/jupyterlab-lsp#installation. Did you use this already?
  • entirely disabling the kernel completions in settings
  • modifying the kernel completions timeout in settings
  • switching to a faster LSP server; last time I checked, jedi-language-server and pyright were faster than python language server (pylsp)
  • disabling other features which you do not rely on; this helps with both client-side and server-side lag:
    • client side: rendering lag decreases with completion icons disabled, the documentation panel disabled, etc.
    • server side: pylsp does not run tasks in parallel due to jedi limitations; if I recall correctly, when you type a character with continuous completions enabled, you have:
      • requested diagnostics (you changed the document, which might have introduced an error)
      • requested highlights (you moved your cursor by typing a character)
      • requested completions
      • this is already a queue; if you then type another character before a completion appears, you extend the queue further, and a backlog can quickly build up that the pylsp server cannot catch up with; on the frontend we try not to throttle/debounce requests for that reason (throttling also introduces lag); this is not inherent to LSP, just to that implementation.
        • why not ask for diagnostics first? Well, we never ask for diagnostics; it is the server that publishes diagnostics in response to a document-change notification. If we did not send the document-change notification before the completion request, the completion suggestions would be incorrect (they would not include the typed character!). This could be improved on the LSP server side by having servers wait a bit before publishing diagnostics, on a schedule of say 500 ms rather than immediately after each document-change notification.

There is some good news here though:

I am not (currently) writing a faster Python LSP server, nor making more performance improvements to pylsp (I already made a bunch, and there could have been regressions since), although I find it likely that doing so would yield as much of an improvement as all the other work I mentioned.
