
jupyter kernel process doesn't exit after kernel is shutdown #941

Open
digitalsignalperson opened this issue Mar 30, 2023 · 6 comments

@digitalsignalperson

If I do

> jupyter kernel
[KernelApp] Starting kernel 'python3'
[KernelApp] Connection file: /home/asdf/.local/share/jupyter/runtime/kernel-173b54c6-6e15-4bad-a68c-e757e3fd4346.json
[KernelApp] To connect a client: --existing kernel-173b54c6-6e15-4bad-a68c-e757e3fd4346.json

and then e.g.

> jupyter console --existing kernel-173b54c6-6e15-4bad-a68c-e757e3fd4346.json
Jupyter console 6.6.3

Python 3.10.10 (main, Mar  5 2023, 22:26:53) [GCC 12.2.1 20230201]
Type 'copyright', 'credits' or 'license' for more information
IPython 8.11.0 -- An enhanced Interactive Python. Type '?' for help.

In [1]: exit()
Shutting down kernel

the original jupyter kernel process doesn't exit.

The same thing happens if I shut it down like this:

import jupyter_client
cf = jupyter_client.find_connection_file('kernel-173b54c6-6e15-4bad-a68c-e757e3fd4346.json')
km = jupyter_client.BlockingKernelClient(connection_file=cf)
km.load_connection_file()

assert km.is_alive()
km.shutdown()
assert not km.is_alive()

My setup is on Arch Linux with:

jupyter --version
Selected Jupyter core packages...
IPython          : 8.11.0
ipykernel        : 6.21.3
ipywidgets       : 8.0.4
jupyter_client   : 8.0.3
jupyter_core     : 5.3.0
jupyter_server   : 2.5.0
jupyterlab       : 3.6.1
nbclient         : 0.7.2
nbconvert        : 7.2.10
nbformat         : 5.7.3
notebook         : 6.5.3
qtconsole        : 5.4.0
traitlets        : 5.9.0
@kevin-bates
Member

I'm not sure what the original intention was for jupyter kernel, but it's behaving exactly as it's coded. 😄

It only terminates upon receiving either SIGTERM or SIGINT, at which point it shuts down the actual kernel process and then exits. To detect the actual kernel process being shut down by an external process (e.g., jupyter console or the Python script you list above), the jupyter kernel application would need to monitor the actual kernel process using KernelManager.poll(), and I'm not sure that's what we'd want. For example, if monitoring were added and the external process wanted to restart the kernel started by jupyter kernel, the monitor would detect that the actual kernel process had exited and would terminate, yet the second half of the restart would produce another "actual kernel process".
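(For illustration only, this is not KernelApp's actual code: the kind of monitoring loop described above might look roughly like the following hypothetical sketch, using KernelManager.is_alive() as the poll.)

import time
from jupyter_client import KernelManager

# Hypothetical sketch of monitoring the actual kernel process.
km = KernelManager(kernel_name="python3")
km.start_kernel()
try:
    while km.is_alive():        # poll the actual kernel process
        time.sleep(1)
finally:
    km.cleanup_resources()      # kernel exited (or was shut down externally); exit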

If you're worried about ZMQ ports getting leaked: the actual kernel process is shut down. Only its launching application (i.e., jupyter kernel) remains running until it's terminated via either of the two signals.

(Frankly, I don't know what purpose jupyter kernel serves. I suppose it's meant to allow other applications to ONLY submit messages, because it only manages lifecycle via SIGTERM or SIGINT.)

@digitalsignalperson
Author

Hmm, I see. That's what I ended up settling on: storing a .pid file alongside the .json connection file and killing the process later when done with it (although with that approach I didn't bother doing a km.shutdown(), which probably isn't graceful).

The way I'm using jupyter kernel is to manage a persistent Python kernel in my terminal of choice, send commands or stdin to it from my terminal prompt, and print the stdout/stderr back into the terminal. https://github.com/digitalsignalperson/comma-python/blob/main/%2Cpython
I also have a method to either kill or restart the kernel when needed, which is why I wondered whether shutting it down was supposed to terminate the process or not.
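For reference, a rough sketch of the pid-file approach (the file name and the use of jupyter_core's runtime dir are illustrative, not taken from ,python):

import pathlib
import subprocess
from jupyter_core.paths import jupyter_runtime_dir

# Launch the kernel app and remember its pid alongside the connection files.
proc = subprocess.Popen(["jupyter", "kernel"])
pid_file = pathlib.Path(jupyter_runtime_dir()) / f"kernel-app-{proc.pid}.pid"
pid_file.write_text(str(proc.pid))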

If this is an OK place to ask: I also ran into cases where, after creating a new kernel, calling km.execute(to_execute) immediately never results in km.iopub_channel.msg_ready() returning True. I'm not sure what the correct way to deal with that is, or whether there's some method to check and wait for before trying to execute something. Notes here: ,python#L70

@kevin-bates
Member

and killing the process later when done with it (although with that I didn't bother doing a km.shutdown() which probably isn't graceful).

If you "kill" the process using SIGTERM (i.e., kill pid) and not SIGKILL (i.e., kill -9 pid), then the signal handler should shut down the kernel - all good.
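In Python terms (a minimal illustration; the pid-file name is hypothetical):

import os
import signal

pid = int(open("kernel-app.pid").read())  # hypothetical pid file written at launch
os.kill(pid, signal.SIGTERM)    # graceful: the signal handler shuts the kernel down
# os.kill(pid, signal.SIGKILL)  # forceful: bypasses the handler; the actual kernel process is left running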

If this is an OK place to ask: I also ran into cases where, after creating a new kernel, calling km.execute(to_execute) immediately never results in km.iopub_channel.msg_ready() returning True. I'm not sure what the correct way to deal with that is, or whether there's some method to check and wait for before trying to execute something.

I believe the best way to ensure a kernel is ready to receive execution requests is to complete a kernel_info_request (and kernel_info_reply) sequence - at least this is what the server does. Since this may lead to other questions (and my kernel protocol knowledge is limited), I'm going to preemptively add @JohanMabille as he's got this stuff down.
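For example, a sketch using jupyter_client's BlockingKernelClient.wait_for_ready(), which performs that kernel_info handshake (connection_file_path is a placeholder):

import jupyter_client

cf = jupyter_client.find_connection_file(connection_file_path)
kc = jupyter_client.BlockingKernelClient(connection_file=cf)
kc.load_connection_file()
kc.start_channels()
# wait_for_ready() keeps sending kernel_info requests until a reply arrives
# and iopub is delivering messages; it raises RuntimeError on timeout.
kc.wait_for_ready(timeout=30)
kc.execute("import numpy as np")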

@digitalsignalperson
Author

If you "kill" the process using SIGTERM (i.e., kill pid) and not SIGKILL (i.e., kill -9 pid), then the signal handler should shut down the kernel - all good.

Thanks, I've switched to SIGTERM.

I believe the best way to ensure a kernel is ready to receive execution requests is to complete a kernel_info_request (and kernel_info_reply) sequence - at least this is what the server does. Since this may lead to other questions (and my kernel protocol knowledge is limited), I'm going to preemptively add @JohanMabille as he's got this stuff down.

If I remove my 1-second sleep before trying to execute something and add the kernel info request instead, it now similarly sometimes does not get the response to the kernel_info_request.

The code is doing more or less this:

import jupyter_client

cf = jupyter_client.find_connection_file(connection_file_path)
km = jupyter_client.BlockingKernelClient(connection_file=cf)
km.load_connection_file()
km.start_channels()  # channels need to be running before sending requests
km.kernel_info()
while True:
    if km.iopub_channel.msg_ready():
        # Sometimes msg_ready() never returns True; other times I do see
        # the kernel_info_request come through.
        break

https://github.com/digitalsignalperson/comma-python/blob/c28b985bbf2df35afea1554cda8e7de0d94e95ff/%2Cpython#L178-L182

e.g., sometimes this works:

,python --new "import numpy as np"
Killed kernel with pid 376602
Started kernel with pid 401469

sometimes it doesn't:

,python --new "import numpy as np"
Killed kernel with pid 401469
Started kernel with pid 401919
No messages from kernel
Couldn't get kernel info

@kevin-bates
Member

I see that your repo references jupyter_client == 8.0.3. You might also see if jupyter_client < 8 behaves differently.

@JohanMabille
Member

JohanMabille commented Apr 4, 2023

The SUB socket of the client can take time to connect to the IOPub channel, and the client can miss important messages (especially those carrying the kernel status). The current workaround implemented in different clients is to "nudge" the kernel, i.e. send requests until the SUB socket is connected and able to receive the "idle" status message (i.e. until km.iopub_channel.msg_ready() returns True). You can find more detail in this issue.
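A minimal sketch of such a nudge loop, assuming a BlockingKernelClient kc with channels already started (the retry count and delay are made up):

import time

def nudge(kc, attempts=20, delay=0.25):
    # Re-send kernel_info until the SUB socket is connected and something
    # (typically a status message) actually shows up on iopub.
    for _ in range(attempts):
        kc.kernel_info()
        time.sleep(delay)
        if kc.iopub_channel.msg_ready():
            return True
    return False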

The next version of the protocol will fix this issue "by design", using a socket that broadcasts a message when it receives a new connection. Clients can wait for this message on iopub (which is guaranteed to be delivered by ZMQ) before sending requests to the kernel.

You can find the details of this JEP here. The JEP has been accepted, but not implemented yet.
