In #211 we added support for Pod.exec() which behaves like subprocess.run() and runs a command in a Pod and gathers the results.
It would also be useful to add a way of calling exec that behaves more like subprocess.Popen() and asynchronously opens the process. This would allow finer control over communicating with the process via stdin/stdout/stderr.
We could call this command Pod.exec_open() which would return an Exec object.
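One possible shape for the returned `Exec` handle, sketched here with in-memory streams standing in for a real Pod connection (everything beyond the names `exec_open` and `Exec` proposed in this issue is an assumption, not the actual kr8s API):

```python
import io

# Hypothetical sketch of the proposed interface -- mirrors subprocess.Popen,
# not an actual kr8s implementation.
class Exec:
    """Handle for a running command inside a Pod."""

    def __init__(self, stdin, stdout, stderr):
        self.stdin = stdin    # writable stream
        self.stdout = stdout  # readable stream
        self.stderr = stderr  # readable stream

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        # Closing the handle would terminate the remote command and
        # release the underlying connection; here it just closes the streams.
        for stream in (self.stdin, self.stdout, self.stderr):
            stream.close()
        return False

# Stand-in usage with in-memory streams instead of a real Pod
with Exec(io.BytesIO(), io.BytesIO(b"output"), io.BytesIO()) as ex:
    data = ex.stdout.read()
print(data)  # b'output'
```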
jacobtomlinson changed the title from "Support subprocess.POpen() style command execution for advanced uses" to "Support subprocess.Popen() style command execution for advanced uses" on Nov 21, 2023
One use case for this would be to stream files to/from a Pod.
With the current Pod.exec() implementation we wait for the remote command to finish before returning. Therefore, to mimic the behaviour of kubectl cp we have to call a tar command via exec, load whatever files we are copying into memory, and then flush those files back to disk.
```python
# Works today!
import io
import tarfile

import kr8s

pod = kr8s.objects.Pod.get("my-pod")

# Archive the contents of /tmp/foo in the container and pipe it through stdout into a
# bytes buffer, then extract those files from the buffer to /tmp/bar on the local system
with io.BytesIO() as archive_buffer:
    pod.exec(["tar", "cf", "-", "/tmp/foo"], stdout=archive_buffer)
    archive_buffer.seek(0)
    archive = tarfile.TarFile(fileobj=archive_buffer)
    archive.extractall("/tmp/bar")
```
However if the contents of /tmp/foo in the container are large we will run out of memory doing this.
Doubly so, because we store the stdout twice: once in the CompletedExec.stdout attribute and once in the BytesIO buffer.

Update: you can avoid this now with #213.
It would be better if we could start the process with exec_open and use the stdout attribute as a readable stream object, so that tarfile can start reading while the archive is still being written. That would keep the buffer small.
```python
# Potential solution
import tarfile

import kr8s

pod = kr8s.objects.Pod.get("my-pod")

# Archive the contents of /tmp/foo in the container and pipe stdout straight
# through to tarfile, extracting the files as they stream in
with pod.exec_open(["tar", "cf", "-", "/tmp/foo"]) as process:
    # Streaming mode ("r|") lets tarfile read from a non-seekable pipe
    with tarfile.open(fileobj=process.stdout, mode="r|") as archive:
        archive.extractall("/tmp/bar")
```
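For comparison, here is the local subprocess.Popen equivalent of this streaming pattern, reading a tar stream incrementally while the producer is still writing. The paths and file contents are purely illustrative:

```python
import pathlib
import subprocess
import tarfile
import tempfile

# Create a source directory with a file to archive (illustrative paths)
src = pathlib.Path(tempfile.mkdtemp()) / "foo"
src.mkdir()
(src / "hello.txt").write_text("hello")

dest = pathlib.Path(tempfile.mkdtemp())

# Start `tar` asynchronously; its stdout is a readable pipe, so tarfile can
# consume the archive while tar is still producing it -- the same pattern
# exec_open() would enable against a Pod.
with subprocess.Popen(
    ["tar", "cf", "-", "-C", str(src.parent), "foo"],
    stdout=subprocess.PIPE,
) as proc:
    # "r|" streaming mode reads the pipe sequentially without seeking
    with tarfile.open(fileobj=proc.stdout, mode="r|") as archive:
        archive.extractall(dest)

print((dest / "foo" / "hello.txt").read_text())  # hello
```

The key point is that the consumer only ever holds a small read buffer, rather than the whole archive, in memory.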