
Kernel providers #112

Closed

Conversation

kevin-bates
Member

@kevin-bates commented Oct 4, 2019

These changes replace the kernel and kernel spec management with the Jupyter kernel management framework originally proposed by Thomas Kluyver (@takluyver). This framework essentially replaces jupyter_client with two libraries, jupyter_kernel_mgmt and jupyter_protocol, refactored to address some of the issues with the current class organization.
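As a rough sketch (based on my reading of the jupyter_kernel_mgmt API at the time of writing; exact names and signatures may differ between versions), the KernelFinder-based flow that replaces the KernelSpecManager/KernelManager pair looks roughly like this:

    # Sketch only - assumes jupyter_kernel_mgmt's discovery API; names may vary by version.
    from jupyter_kernel_mgmt.discovery import KernelFinder

    # Aggregate all registered kernel providers (e.g. 'spec', 'pyimport').
    finder = KernelFinder.from_entrypoints()

    # Each kernel type id is prefixed with its provider id, e.g. 'spec/python3'.
    for type_id, attributes in finder.find_kernels():
        print(type_id, attributes.get('display_name'))

    # Launching returns connection info plus a manager used for lifecycle operations.
    connection_info, manager = finder.launch('spec/python3')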

Upon completion, this pull request will include the following:

  • Cherry-pick Notebook PR "WIP: Use new kernel management APIs in notebook server 6.x" (jupyter/notebook#4837)

    Details

    These are the commits included. Their hashes can be found on this branch:
    https://github.com/takluyver/notebook/tree/jupyter-kernel-mgmt
    Note: Because jupyter server does not include client-side code, some of these commits were not included. However, I believe those were only needed to display the new "provider id" and kernel name alongside the display name. Further investigation is needed to determine whether corresponding changes are required here, but I suspect they aren't.

    0dc8b70fe Initial work towards using jupyter_kernel_mgmt for notebook server
    ae1588888 Provide kernel specs from kernel finder machinery
    ddcc3dd98 Display kernel type ID alongside display name
    27fc52a59 Add dependency on jupyter_kernel_mgmt & jupyter_protocol
    102ae7b82 Switch single kernelspec API handler to kernel_finder
    576585438 Drop kernel_spec_manager
    1dfc5d4c7 Fix interrupting a kernel
    76ab73807 Fix up kernelspec API tests to pass
    f1a79bf5e Default kernel type name pyimport/kernel
    b0e0fecc7 Use pyimport/kernel for session API tests
    fd0b60c3b Use pyimport/kernel for kernels API tests
    c76e6bfb7 Fix dummy objects for SessionManager tests
    2d9e021e4 Use reworked restarter callback system
    a15a1f9dd Kernel restart always means new manager & new connection info
    b42750702 Use updated API for message handlers
    90f9e6803 Restarter callbacks parameters swapped again
    f6e7cfdb7 Clean up some unused imports
    d87f86277 Reworked kernel restart handling
    24d3d0b82 Reorganise code for kernel websocket handler
    cdd45a60a Require jupyter_kernel_mgmt >=0.3
    9ed09aeff Change selectors used by tests to create new notebook
    00ab20c16 Fix race condition making AJAX requests for session
    20e9db4ca Fix selector for new kernel menu entry
    69a79ac6b Try to get more information on test failure
    d63cd9664 Don't try to forward message from kernel when websocket closed
    237a21269 Avoid race condition creating duplicate sessions
    80c278aa3 Fix starting a kernel when modifying a session
    0a18fb974 Fix sessionmanager tests
    c3d63d009 Propagate future out for sending status messages
    65d49a480 Don't crash closing websocket if handler not connected
    6efb93253 Rework how kernel startup works again
    8233daa76 msg_type is part of the message header, not top-level
    e80cafe16 There is no longer a good way to prevent a kernel from restarting
    cab341628 Wait to send starting status until new client is reconnected
    a8dbd489d Expect kernel_starting event in autorestart test
    f6b0674ea Ensure message in frontend always has buffers array
    76b904433 Wait for kernel to be connected to send shutdown_request
    c42fb2180 There is no longer a good way to prevent a kernel auto-restarting
    336a321c6 404 error for websocket opened to nonexistant kernel
    3ab2d12c0 Wait for kernel for cell mode-switching tests
    826ee65ab Message for callback is converted to update_display_data
    bb54f341d Fix deserialisation test
    3bc13fb18 Allow for kernel state changing during session test
    e4eff7a10 Allow execution_state to change in test_modify_kernel_id
    e5bb55c3a Use appropriate attribute in cull debug
    d72106479 Culling should go through MappingKernelManager
    ab84f2aac Use kernel_id from provider's manager
    

    plus a couple of fixups from cherry-pick conflict resolution.

  • Finish removal of jupyter_client

  • Fix tests

  • Replace the use of KernelSpecManager in the Gateway package with KernelFinder and confirm functionality against a remote gateway server

  • Add async support (since async functionality was recently merged into kernel-mgmt)

    • Update the jupyter_kernel_mgmt dependency to >= 0.5.0 once PR 31 is merged and a release is built.
  • Provide a dev build of the server for others to experiment with (see Kernel providers #112 (comment))

  • Add plumbing for conveying launch parameters to kernel providers (parameterized kernel support)

  • Identify the changes necessary to support the provider id in Notebook classic (Lab works fine, although we probably want to convey provider IDs somehow). See this commit. Thank you @echarles!

  • Enumerate known incompatibilities...

    • The KernelManager and KernelSpecManager classes can no longer be overridden. Applications wishing to customize kernelspecs need to author a Kernel Provider.
    • Only MappingKernelManager can apply configurable information to the kernel lifecycle. Configurable traits (traitlets) were intentionally omitted from the framework. Providers can, however, obtain access to the Application's configurable instance, which allows applications to convey configuration information to providers.
    • KernelSpecManager.whitelist will need to be configured differently since the class no longer exists.
    • The --transport alias, which maps to KernelManager.transport and defaults to 'tcp' (the other option is 'ipc'), is no longer supported directly since transport is now a function of the kernel provider, although a launcher implementation is provided.

... other items will be added as things unfold

Closes: #90

@echarles
Member

echarles commented Oct 7, 2019

@kevin-bates cc/ @Zsailer I successfully tested this branch with notebook as a server extension - cells can be executed.

conda create -y -n jupyter-kernel-mgmt python=3.7 nodejs && \
  conda activate jupyter-kernel-mgmt && \
  git clone https://github.com/takluyver/jupyter_kernel_mgmt --branch master --single-branch --depth 1 && \
  cd jupyter_kernel_mgmt && python setup.py develop && cd .. && \
  git clone https://github.com/kevin-bates/jupyter_server.git --branch jupyter-kernel-mgmt --single-branch --depth 1 && \
  cd jupyter_server && python setup.py develop && cd .. && \
  git clone https://github.com/zsailer/notebook.git --branch notebook-ext --single-branch --depth 1 && \
  cd notebook && python setup.py develop && cd .. && \
  jupyter notebook
# open http://localhost:8888/tree

@jasongrout
Contributor

Does this change the notebook rest api in any way, or is this change transparent to clients?

@kevin-bates
Member Author

kevin-bates commented Oct 7, 2019

Hi @jasongrout. This does not change the rest api.

However, the provider id (identifying the kernel provider) is returned as a prefix to the kernel name when retrieving kernelspecs. Thomas, in his Notebook PR, added some client-side updates to display both the kernel display name and its provider-id-prefixed "symbolic-name". So long as this name is used in subsequent apis (like when starting a kernel), all is fine. The server picks off the prefix to determine on which provider launch should be called.
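To illustrate what "picks off the prefix" means (illustration only, not the actual server code):

    # Illustration only - not the actual server implementation.
    def split_kernel_type(kernel_type: str):
        """Split a provider-prefixed name like 'spec/python3' into (provider_id, kernel_name)."""
        provider_id, _, kernel_name = kernel_type.partition('/')
        return provider_id, kernel_name

    assert split_kernel_type('spec/python3') == ('spec', 'python3')
    assert split_kernel_type('pyimport/kernel') == ('pyimport', 'kernel')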

Down the road, I suppose a client may want to interpret the prefix (from an organization standpoint, or something), but that's not required and @echarles' comment above appears to confirm that things work okay in this area.

EDIT: As noted below "work okay" is probably pushing things. We need to figure out why the kernel name isn't being updated and why the resource files (icons) aren't being pulled.

EDIT2: This experiment suggests that the use of '/' is impacting the display of the kernel name and icons. However, @echarles has found that the lab-extension doesn't require changes even when using '/'. I'd really like to find a solution where the "known" clients (e.g., Notebook classic, Lab, and Voila) don't require any changes, though.

@kevin-bates
Member Author

hmm, @echarles - are you seeing the kernel name and its icon in the upper right corner (for notebook) or on the launch page (for lab)?

I see this:
[screenshot: the kernel indicator shows the generic name "Kernel" with no Python logo]

when I expect to see python3 with the python logo. Just wondering if the stuff @takluyver did on the client side is required or my current revs are not quite right (still debugging the gateway stuff).

@echarles
Member

echarles commented Oct 8, 2019

@kevin-bates It also shows Kernel instead of Python3 even without your last 2 changes related to the jupyter_client removal. As you said, Thomas' changes would be needed to interpret the returned name.

I understand there is a consensus not to focus on Notebook (which should live its own life) but rather to onboard JupyterLab on the server. There are 2 actions for this:

  1. In the jupyterlab server components, replace notebook with server
  2. Update the jupyterlab javascript so it interprets the kernel name changes introduced by kernel-mgmt (once this is merged)

@kevin-bates
Member Author

@echarles - thanks for the update - that's helpful info.

Just to clarify, Notebook "complete" (i.e., Notebook as we know it today, server and client combined) will live its own life. However, there are plans to provide a notebook server-extension for use on jupyter-server. I hope to get further clarification in Thursday's Jupyter Server community call.

Regarding the kernel name changes, my initial thought was that the kernel name isn't getting url-encoded to escape the '/' that now separates the provider-id from the kernel name. However, why then am I not seeing 404 GET /api/kernelspecs/spec/python3? So I suspect the issue is somewhere else. @jasongrout - thoughts? Is the kernel name already url-encoded prior to its use as a parameter on REST calls?
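For reference, this is the encoding I'd expect a client to apply before using the name in a REST path (just a quick standard-library illustration, matching the %2F seen in the resource links below):

    # Quick illustration of the url-encoding in question (standard library only).
    from urllib.parse import quote

    # '/' must be escaped so it isn't treated as a path separator in the REST URL,
    # e.g. /api/kernelspecs/spec%2Fpython3 rather than /api/kernelspecs/spec/python3.
    print(quote('spec/python3', safe=''))     # -> spec%2Fpython3
    print(quote('pyimport/kernel', safe=''))  # -> pyimport%2Fkernel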

The other reason Thomas probably included the provider id in the kernel drop-down menu is that it's now possible to have two kernels with the same display_name. For example, it will be common to see two Python 3 kernels - one from the built-in kernel 'spec' provider (provider id = 'spec') and the other from the 'native' ipython kernel (provider id = 'pyimport').

Here's a snippet of these two kernelspecs:

    "pyimport/kernel": {
      "name": "pyimport/kernel",
      "spec": {
        "language_info": {
          "name": "python",
          "version": "3.6.6",
          "mimetype": "text/x-python",
          "codemirror_mode": {
            "name": "ipython",
            "version": 3
          },
          "pygments_lexer": "ipython3",
          "nbconvert_exporter": "python",
          "file_extension": ".py"
        },
        "display_name": "Python 3",
        "argv": [
          "/opt/anaconda3/envs/jupyter-server-dev/bin/python",
          "-m",
          "ipykernel_launcher",
          "-f",
          "{connection_file}"
        ],
        "resource_dir": "/opt/anaconda3/envs/jupyter-server-dev/lib/python3.6/site-packages/ipykernel/resources",
        "language": "python"
      },
      "resources": {
        "logo-64x64": "/kernelspecs/pyimport%2Fkernel/logo-64x64.png",
        "logo-32x32": "/kernelspecs/pyimport%2Fkernel/logo-32x32.png"
      }
    },
    "spec/python3": {
      "name": "spec/python3",
      "spec": {
        "language_info": {
          "name": "python"
        },
        "display_name": "Python 3",
        "argv": [
          "python",
          "-m",
          "ipykernel_launcher",
          "-f",
          "{connection_file}"
        ],
        "resource_dir": "/opt/anaconda3/envs/jupyter-server-dev/share/jupyter/kernels/python3",
        "metadata": {},
        "language": "python"
      },
      "resources": {
        "logo-64x64": "/kernelspecs/spec%2Fpython3/logo-64x64.png",
        "logo-32x32": "/kernelspecs/spec%2Fpython3/logo-32x32.png"
      }
    },

@Zsailer
Member

Zsailer commented Oct 8, 2019

@kevin-bates just wanted to let you know I'm not ignoring this PR. 😄

I've been catching up on all the previous conversation around this work 😅 I realized I didn't fully understand everything going on here. I'll review some more once I get up to speed!

@kevin-bates
Member Author

@Zsailer - no worries. There are still plenty of kinks to work out - so it's still early.

@takluyver
Contributor

it's now possible to have two kernels with the same display_name.

It's always been possible, though these changes will probably make it more common.

@SylvainCorlay
Contributor

cc @jasongrout @martinRenou @maartenbreddels - we need to review this in detail as it has broad implications for voila and the other issues that we are having with jupyter_client and the kernel lifecycle.

@echarles
Member

@SylvainCorlay I am trying this branch with classic notebook, jupyterlab and voila. Indeed, as the voila server interacts directly with the kernel api, there is some impact on the voila side. I have started to implement something, but paused midway to concentrate on jupyterlab first.

@echarles
Member

@kevin-bates the good news is that async support has been merged by @takluyver.

The bad news is that it breaks the kernel launch when used in the notebook with:

...jupyter_server/services/kernels/kernelmanager.py:44: RuntimeWarning: coroutine 'KernelFinder.launch' was never awaited
  self.connection_info, self.manager = kernel_finder.launch(kernel_type)
...
        self.connection_info, self.manager = kernel_finder.launch(kernel_type)
    TypeError: cannot unpack non-iterable coroutine object
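(For context, once the server-side touch points are themselves coroutines, the call would presumably need to be awaited along these lines - the surrounding method and attribute names are just for illustration:)

    # Sketch only - once the calling code is a coroutine, the launch must be awaited.
    async def start_kernel(self, kernel_type):  # hypothetical surrounding method
        self.connection_info, self.manager = await self.kernel_finder.launch(kernel_type)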

@kevin-bates
Member Author

@echarles - yep - I figured that would happen. You'll need to pin to the current released jupyter_kernel_mgmt until async support is in the server. I believe I have a branch for this relative to notebook, so it should "transfer" fairly easily.

@kevin-bates force-pushed the jupyter-kernel-mgmt branch 2 times, most recently from 29a650b to 1ab14b1 on October 10, 2019 22:06
@kevin-bates
Member Author

Regarding the kernel name/icon images in Notebook classic...
While testing the gateway integration I hit a remote Gateway server running Notebook classic (server) and the kernel names and icon images come across just fine. The difference here is that the kernel names are not prefixed with <providerId>/ so, in that sense, that's a good datapoint.

@jasongrout, @takluyver - do you know why the client-side code wouldn't just use the name value from the kernelspec (e.g., spec/python3) when displaying the kernel name and retrieving the resource files (icons)? If I hit the appropriate endpoints with the fully qualified name (and the / encoded), I get the expected results. Since I don't see the server reporting an invalid endpoint or any indication that something is even happening, I suspect those requests aren't leaving the client.

@echarles
Member

@kevin-bates I am testing your last commit for the async fix but still get the error: RuntimeWarning: coroutine 'KernelFinder.launch' was never awaited (at self.connection_info, self.manager = kernel_finder.launch(kernel_type))

@kevin-bates
Member Author

Yes. That's expected because I haven't converted the touch points in server to be asynchronous. Rather than work from the tip of kernel-mgmt, install the released version and try that, although I can't guarantee there weren't changes between the release and async merge.
I'm going to work on the async support next but after I deal with other priorities. Hopefully by tomorrow or Monday.
Thanks for looking at this.

@echarles
Member

I have a branch combination for jupyterlab that uses jupyter_server and kernel providers (it may be shown at this week's jupyter server meeting).

Without any change, the kernel name and the kernel specs are correctly displayed.

[Screenshots from 2019-10-17 showing the kernel name and kernel specs displayed correctly in JupyterLab]

@echarles
Member

echarles commented Apr 8, 2020

@kevin-bates Thx for your answer. I am also in favor of squashing. Less noise in the git history, more chance for the other branches to resolve conflicts. I remember @Zsailer mentioned people like to show their activity (especially if their boss asks for it), but the history is still in the PR if I am not mistaken. I have also seen companies counting the number of lines their staff have contributed...

@kevin-bates
Member Author

Closing this since the corresponding JEP has been closed.

@nreith

nreith commented May 3, 2023

Was this ever merged? And how does one enable automatic discovery of conda environments with ipykernel as Jupyter kernels? Is nb_conda_kernels, which is deprecated, still the only way?

@kevin-bates
Member Author

Hi @nreith. No - this was not merged. Instead, the notion of kernel provisioners was implemented in jupyter_client to abstract the kernel lifecycle management portion of things. So kernel discovery, which is what the Kernel Providers proposal included beyond lifecycle management, still takes place via the KernelSpecManager, which nb_conda_kernels replaces with its own implementation.

I didn't realize that nb_conda_kernels was deprecated. I knew it wasn't very actively maintained, but not deprecated. Is there a change you find necessary to nb_conda_kernels that is preventing its continued use?

@nreith

nreith commented May 5, 2023

To be honest with you, it works fine. But whenever possible, I like to use the official method and stick to projects that are actively developed. My main reason is that I use jhub and related stuff for work, in as close to a "prod" grade situation as we can muster, on k8s, with load balancing and a secondary cluster for failover, etc. In an enterprise setting, we always get hassled about compliance and security vulnerabilities, etc. That's my main reason for staying close to the active repos. This seems to be a newer fork (https://pypi.org/project/nb-conda-store-kernels/), but it's very focused on conda-store, whatever that is.

If you don't know of a way for the standard jupyter_client to discover conda envs in ~/.conda/envs and auto-add kernels (very importantly, WITH all necessary environment variables), then I'll stick to nb_conda_kernels. In our situation, we provide a working (base) environment, mostly docker-stacks plus some stuff, but insist that users manage their own conda environments if they want unchanging versions. The standard advice to run python3 -m ipykernel install --user --name my-env totally fails to include PATH and LD_LIBRARY_PATH pointing to the conda env's /bin and /lib, and a bunch of other important env vars, which causes problems. I can advise users to add --env PATH $PATH --env LD_LIBRARY_PATH $LD_LIBRARY_PATH, but who knows what else can be missing. XLA_FLAGS for gpu-based images with tensorflow?

Anyway, we can use old stuff and fork and patch ourselves if needs be. But I'm open to any suggestions.

The automation + critically the env vars, is what is good about nb_conda_kernels.

@chbrandt

chbrandt commented Jun 29, 2023

Like @nreith, I am also interested in whether there is any recommended ("official") way of discovering kernels and their context/environment settings.

@kevin-bates The actual question I have at this point is: should we (whoever "we" is) try to reactivate nb_conda_kernels, or is there something coming from jupyter_client side, something in your roadmap that would supersede nb_conda_kernels anyway?

@davidbrochart
Contributor

@kevin-bates
Member Author

Hi @chbrandt.

@kevin-bates The actual question I have at this point is: should we (whoever "we" is) try to reactivate nb_conda_kernels, or is there something coming from jupyter_client side, something in your roadmap that would supersede nb_conda_kernels anyway?

I think that's the best approach right now. Ideally, this would be addressed via parameterization, where the user is prompted to choose from a set of environments and a custom kernel provisioner (or perhaps an extension to the LocalProvisioner) would be consulted by the built-in KernelSpecManager, which would be responsible for gathering parameter schemas when it produces the set of available kernelspecs. Then, the kernel provisioner would be responsible for "adjusting" the kernel's runtime environment (which is the responsibility of kernel provisioners) and launching the kernel within that activated env.
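As a very rough sketch of that last piece (the class name, the CONDA_TARGET_PREFIX parameter source, and the env handling are made up for illustration; it assumes jupyter_client's LocalProvisioner/pre_launch API and is not something that exists today):

    # Rough sketch only: a hypothetical provisioner that injects a conda env's
    # paths before launch. Assumes jupyter_client's provisioning API.
    import os
    from jupyter_client.provisioning import LocalProvisioner

    class CondaEnvProvisioner(LocalProvisioner):
        """Launch the kernel with a conda environment's bin/lib on PATH/LD_LIBRARY_PATH."""

        async def pre_launch(self, **kwargs):
            kwargs = await super().pre_launch(**kwargs)
            prefix = os.environ.get('CONDA_TARGET_PREFIX')  # hypothetical parameter source
            if prefix:
                env = kwargs.setdefault('env', {})
                env['PATH'] = f"{prefix}/bin" + os.pathsep + env.get('PATH', '')
                env['LD_LIBRARY_PATH'] = f"{prefix}/lib"
            return kwargs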

@echarles
Member

I think that's the best approach right now. Ideally, this would be addressed via parameterization, where the user is prompted to choose from a set of environments and a custom kernel provisioner (or perhaps an extension to the LocalProvisioner) would be consulted by the built-in KernelSpecManager, which would be responsible for gathering parameter schemas when it produces the set of available kernelspecs. Then, the kernel provisioner would be responsible for "adjusting" the kernel's runtime environment (which is the responsibility of kernel provisioners) and launching the kernel within that activated env.

@kevin-bates So you are saying to leverage the new jupyter-client provisioners machinery instead of trying to update the old-way done by nb_conda_kernels that was extending the KernelManager, right?

@chbrandt

Hi @kevin-bates, @echarles,

To share one thing I noticed: nb_conda_kernels has an entry point at jupyter_client.kernel_providers, whereas jupyter_client expects an entry at jupyter_client.kernel_provisioners. There is no reference to "kernel_providers" in the docs anymore (https://jupyter-client.readthedocs.io/en/latest/search.html?q=kernel_providers). nb_conda_kernels's discovery.py module says "Initial support for the kernel provider mechanism to be introduced in jupyter_client 6.0".
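For comparison, a provisioner is registered under the jupyter_client.kernel_provisioners entry point group, roughly like this (the package, module and class names below are placeholders):

    # setup.py fragment with placeholder names, showing the entry point group
    # jupyter_client scans for provisioners.
    from setuptools import setup

    setup(
        name='my-conda-provisioner',          # hypothetical package
        py_modules=['my_conda_provisioner'],  # hypothetical module
        entry_points={
            'jupyter_client.kernel_provisioners': [
                'conda-env-provisioner = my_conda_provisioner:CondaEnvProvisioner',
            ],
        },
    )

A kernelspec then opts in via its metadata (if I recall the jupyter_client docs correctly, a "kernel_provisioner" stanza naming the provisioner).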

@nreith

nreith commented Jun 29, 2023 via email

@echarles
Member

@chbrandt @nreith Maybe open a new issue on https://github.com/Anaconda-Platform/nb_conda_kernels (ping me/us on that; this issue is closed and this repo is not the one to have that discussion in). Please note also anaconda/nb_conda_kernels#151, a discussion started 4 years ago, but that was for the previous attempt called providers, which was not retained; the chosen implementation is called provisioners. I guess one reason to explore the new way is that you could combine the nb_conda kernels with other kernels for the same notebook.

@kevin-bates
Member Author

@echarles.

@kevin-bates So you are saying to leverage the new jupyter-client provisioners machinery instead of trying to update the old-way done by nb_conda_kernels that was extending the KernelManager, right?

No. I am saying that the current approach should be used. I also added that, "ideally, this would be addressed via parameterization" once parameterization is available. Parameterization is NOT currently available. I then suggested that, when available, a custom provisioner could be used to create the environment (also suggesting we could explore extending or using the LocalProvisioner for that purpose).

Regarding jupyter_client.kernel_providers in nb_conda_kernels: I believe there was work done in preparation for kernel providers. Kernel providers never happened; it was only a JEP that was in a POC stage and was pulled.

Kernel provisioners came about as a means to address the lifecycle management portion of kernels, but they do not include discovery - which is still the purview of the KernelSpecManager (built-in and custom).

Since nb_conda_kernels is essentially a custom KernelSpecManager and since that is still the way discovery takes place, nb_conda_kernels should be used at this time.
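(For anyone landing here later: wiring that up usually amounts to pointing the server at nb_conda_kernels' spec manager, e.g. in jupyter_server_config.py - double-check the nb_conda_kernels README for the currently recommended setup:)

    # Assumed configuration snippet for jupyter_server_config.py; verify against
    # the nb_conda_kernels README before relying on it.
    c.ServerApp.kernel_spec_manager_class = "nb_conda_kernels.CondaKernelSpecManager"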
