Forward stdout to some widget #268
Comments
Hi @johngull ! This is something we've been thinking about for a while. We're currently evaluating different APIs and architectures for it, and we'll post here when we have something a little more concrete to talk about. |
Are there any workarounds or progress on this that you are aware of? This would be a powerful feature! |
This would be an extremely useful feature for Streamlit, as it would also enable many exploration and coding workflows for which notebooks are usually used now. I have an imperfect workaround that prints stdout and stderr to the frontend after the command has finished, as opposed to streaming it during execution:

```python
import contextlib
import io

import streamlit as st

stdout = io.StringIO()
stderr = io.StringIO()
try:
    with contextlib.redirect_stdout(stdout):
        with contextlib.redirect_stderr(stderr):
            <code to be run>
except Exception as e:
    st.write(f"Failure while executing: {e}")
finally:
    st.write("Execution output:\n", stdout.getvalue())
    st.write("Execution error output:\n", stderr.getvalue())
``` |
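For reference, here is a runnable instance of the snippet above with a concrete body standing in for `<code to be run>`, and with the `st.write` calls replaced by plain list appends so the sketch runs outside a Streamlit app (everything else is unchanged):

```python
import contextlib
import io

# Runnable stand-in for the workaround above: st.write is replaced by
# appending to a plain list so this works outside Streamlit.
stdout, stderr = io.StringIO(), io.StringIO()
messages = []
try:
    with contextlib.redirect_stdout(stdout):
        with contextlib.redirect_stderr(stderr):
            print("hello from the captured block")  # stands in for <code to be run>
except Exception as e:
    messages.append(f"Failure while executing: {e}")
finally:
    messages.append("Execution output:\n" + stdout.getvalue())
    messages.append("Execution error output:\n" + stderr.getvalue())
```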
I built on zblz's workaround and turned it into a wrapper that might be useful for some people:

```python
import contextlib
from functools import wraps
from io import StringIO

import streamlit as st


def capture_output(func):
    """Capture output from running a function and write it using Streamlit."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        # Redirect output to string buffers
        stdout, stderr = StringIO(), StringIO()
        try:
            with contextlib.redirect_stdout(stdout), contextlib.redirect_stderr(stderr):
                return func(*args, **kwargs)
        except Exception as err:
            st.write(f"Failure while executing: {err}")
        finally:
            if _stdout := stdout.getvalue():
                st.write("Execution stdout:")
                st.code(_stdout)
            if _stderr := stderr.getvalue():
                st.write("Execution stderr:")
                st.code(_stderr)
    return wrapper
```

Then you can do something like:

```python
stprint = capture_output(print)
stprint(some_variables)
```

which makes print debugging easier.

edit: added the relevant import statements for the code snippet |
Hello @tvst, is there any news on this matter? |
+1 on this! Sorely needed for training models that take a long time. |
I could capture stdout by reading a StringIO instance shared between the main thread and a sub-thread:

```python
import contextlib
import time
from concurrent.futures import ThreadPoolExecutor
from io import StringIO

import streamlit as st


def print_func(sec: int):
    for i in range(sec):
        print(f"{i} [sec] printed in sub thread\n")
        time.sleep(1)
    return "print finished"


def thread_func(sio: StringIO):
    with contextlib.redirect_stdout(sio):
        result = print_func(10)
    return result


if __name__ == '__main__':
    st.write("# thread test")
    shared_sio = StringIO()
    with ThreadPoolExecutor() as executor:
        future = executor.submit(thread_func, shared_sio)
        # observe shared_sio while the sub thread is running
        placeholder = st.empty()
        while future.running():
            placeholder.empty()
            placeholder.write(shared_sio.getvalue())
            time.sleep(1)
        # after the end of the sub thread
        placeholder.empty()
        placeholder.write(shared_sio.getvalue())
        result = future.result()
        st.write(result)
``` |
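A polling loop like the one above works, but output can also be pushed as it is produced. Below is a minimal, Streamlit-free sketch of that idea: a file-like object that forwards the accumulated buffer to a callback on every write. The callback is an assumption on my part; in a real app it could be something like a placeholder's `code` method.

```python
import contextlib


class CallbackWriter:
    """File-like object that forwards the accumulated text to a callback
    on every write. The callback is kept generic so this sketch has no
    Streamlit dependency; in an app it might be a placeholder's render call."""

    def __init__(self, callback):
        self.callback = callback
        self._parts = []

    def write(self, text):
        self._parts.append(text)
        self.callback("".join(self._parts))  # re-render the full buffer
        return len(text)

    def flush(self):
        pass


# Usage sketch: collect each rendered snapshot in a list.
chunks = []
with contextlib.redirect_stdout(CallbackWriter(chunks.append)):
    print("step 1")
    print("step 2")
```

The last snapshot (`chunks[-1]`) holds the complete output, so a UI callback would always re-render the full log so far.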
+1 - I also have some use cases where this would be helpful. |
I created a gist for this problem:

```python
import streamlit.redirect as rd

with rd.stdout:
    print('blablabla')
    time.sleep(1)
    print('Hello from code block')
```

It can be configured easily:

```python
st.sidebar.text("Standard output message here:")
to_out = st.sidebar.empty()
...
with rd.stdout(to=to_out, format='markdown'):
    print('**Hello** from markdown')
```

It can be nested:

```python
garbage = st.empty()
with rd.stdout:
    print('important message')
    with rd.stdout(to=garbage):
        print("some garbage")
    garbage.empty()  # discard the garbage output
    print("another important message")
```

What do you think? |
Hi - I used your gist and it works excellently. Nice work! :) I'm wondering, since I have long output (which I don't have control over) - could I overwrite one line at a time rather than have the text just scroll down the page? Cheers. p.s. I am more of a data scientist than a computer scientist! |
Hi Michael, I extended the gist with two more parameters. It can be used as follows:

```python
with rd.stdout(max_buffer=55):
    i = 0
    while True:
        print("Hello ", i)
        i += 1
```

These new parameters truncate the stdout to a maximum buffer size. I think this is what you want 😄 |
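For readers who want line-based rather than size-based truncation (an assumption on my part - I have not checked exactly how the gist's `max_buffer` measures its limit), a tiny stdlib helper can keep just the tail of captured output:

```python
from collections import deque


def tail_lines(text: str, max_lines: int = 5) -> str:
    """Return only the last max_lines lines of some captured output.

    deque(maxlen=...) silently drops the oldest entries, so this stays
    O(number of lines) and never grows past the limit.
    """
    return "\n".join(deque(text.splitlines(), maxlen=max_lines))


# e.g. keep only the last 3 lines of a 10-line log
log = "\n".join(f"Hello {i}" for i in range(10))
tail = tail_lines(log, max_lines=3)
```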
G'day Bela (from Sydney, Aus) Thanks so much for your prompt and thoughtful reply. It is essentially exactly what I needed! I am tweaking it for my precise purposes (e.g. parsing the iterative output from an optimisation routine). Here is the repo if you're interested - https://github.com/Mjboothaus/PhD-Thesis I have an app front end on it - deployed here (I don't have it working precisely just yet): |
Is this possible now? I was trying to deploy some applications and Docker containers on a remote system using Streamlit + Ansible (I already have an existing playbook). |
Sharing @fredzannarbor's thoughts from #5842 -- |
Hi @carolinedlu! Thanks for sharing the comment. I extended my gist with another two arguments. A working example:

```python
import time
from random import randint

import streamlit as st
import streamlit.redirect as rd

debug, info, error = st.columns(3)
i = 0
with rd.stdout(to=debug, regex='DEBUG', duplicate_out=True):  # duplicate_out == True -> writes to original stdout
    with rd.stdout(to=info, regex='INFO', duplicate_out=True):  # duplicate_out == True -> writes to upper layer too
        with rd.stdout(to=error, regex='ERROR', duplicate_out=True):
            while True:
                severity = ('DEBUG', 'INFO', 'ERROR')[randint(0, 2)]
                i += 1
                print('[{}] Message {}'.format(severity, i))
                time.sleep(1)
```

streamlit-main-2022-12-16-15-12-55.webm |
Hey! We are unlikely to implement this directly since it seems like more of an edge case. This would be great for a custom component though! |
Hi @schaumb Thank you for creating this gist! I would like to know about the licensing of this gist. |
Hi @123sin123 , |
Thank you @schaumb. I'm sorry for asking you so many questions. Are there any restrictions on its use? |
@123sin123 Use the gist, as it has an Apache 2.0 license. |
I believe implementing this feature would greatly enhance the user experience, especially when running large models or processing many inputs, such as with certain HuggingFace pipelines. They often print useful information like progress bars or prompts to indicate progress. As someone who wants to showcase a model through an HF pipeline and Streamlit, I really hope to see official components or functions implemented to support this feature! |
I don't agree that this should be classified as an edge case. If you think of it as "make it easier for both developers and users to get reports from machine learning internals", it is squarely within Streamlit's core mission.

That said, I have experimented with the gist and with redirecting a boatload of stdout/stderr to Streamlit. A huge stream of stdout looks pretty clunky in Streamlit, and I find myself exploring alternatives.

The problem is that if you have a Streamlit front end and a pure Python back end module communicating, then to get back end info into the front end you have to either a) modify the function signature to add selected returns for Streamlit to consume, or b) put st.info calls inside the back end module. Both of these are a PITA. What one really wants, perhaps, is a way to dynamically and conditionally filter stdout from the back end into the front end, or maybe to use generative AI for the same task. |
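The "dynamically and conditionally filter stdout" idea can be sketched with a small line-buffered writer that forwards only matching lines to a callback. The forward callback here is generic and my own assumption; in a Streamlit app it might be `st.info` or a placeholder write, rather than any existing API:

```python
import contextlib
import re


class FilteredWriter:
    """Line-buffered writer that forwards only lines matching a regex.

    The forward callback is kept generic; in a Streamlit app it could
    be st.info or a placeholder's write (an assumption, not a real API).
    """

    def __init__(self, pattern, forward):
        self.rx = re.compile(pattern)
        self.forward = forward
        self._buf = ""

    def write(self, text):
        self._buf += text
        # Split off complete lines; keep the trailing partial line buffered.
        *lines, self._buf = self._buf.split("\n")
        for line in lines:
            if self.rx.search(line):
                self.forward(line)
        return len(text)

    def flush(self):
        pass


# Usage sketch: only INFO lines from the back end reach the "front end".
shown = []
with contextlib.redirect_stdout(FilteredWriter(r"\[INFO\]", shown.append)):
    print("[DEBUG] noisy detail")
    print("[INFO] epoch 1 done")
```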
Hi, this would be VERY helpful. I have an application that downloads data using an SDK, and it logs the progress to stdout. There is no way to show the download progress otherwise. |
See https://discuss.streamlit.io/t/how-can-a-python-script-detect-whether-is-being-run-by-streamlit/46333/3?u=fredzannarbor -- I have yet to try this, but it would be another option. |
Hi, I am also interested in this. We use a third-party solver for a linear problem, and it would be great to see its progress, which is currently only printed to the terminal; I can print the results only after the solver finishes. |
Problem
I tried to use Streamlit for an interactive clustering app.
In my case clustering takes a long time, and the only way to see any progress is to use `verbose=1`. In that case, the sklearn algorithm prints some information to stdout, but I see no way to show this information with Streamlit.
Community voting on feature requests enables the Streamlit team to understand which features are most important to our users.
If you'd like the Streamlit team to prioritize this feature request, please use the 👍 (thumbs up emoji) reaction in response to the initial post.