How to track the memory usage when running a flow #89
I have noticed quite high memory usage when running a flow. Instead of disabling all in-memory caching, is there an easy way to monitor the memory usage of each flow component?
Comments
Hi Jing! I'm afraid I'm not aware of a way to monitor memory usage for each component. Currently Bionic computes every flow component within the same Python process, so it's hard to separate out the memory used by one component. I don't think there's a good general technique for this. However, in the future, we plan to support running each component in a separate process, and in that case we should be able to see how much memory a given component is using.
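(Bionic doesn't support this today, but as a rough sketch of the per-process idea: run a component's function in a child process and read that process's peak memory with the standard library. The names _run_and_measure and run_in_subprocess below are made up for illustration; this is Unix-only because it uses resource, and the function and its result must be picklable.)

import resource
from multiprocessing import Pipe, Process

def _run_and_measure(func, conn, args, kwargs):
    # Runs in the child process, so RUSAGE_SELF covers only this component.
    result = func(*args, **kwargs)
    # ru_maxrss is in kilobytes on Linux, bytes on macOS.
    peak = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    conn.send((result, peak))

def run_in_subprocess(func, *args, **kwargs):
    parent_conn, child_conn = Pipe()
    proc = Process(target=_run_and_measure, args=(func, child_conn, args, kwargs))
    proc.start()
    result, peak = parent_conn.recv()  # receive before join to avoid a pipe deadlock
    proc.join()
    print(f"peak memory of child process: {peak}")
    return result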
Hi Janek, I was not able to find our email thread, so I put my question here 😂. I was wondering whether it would be reasonable to add a wrapper around a flow component, assuming the flow is single-threaded:

import logging
import tracemalloc

tracemalloc.start()

def get_mem_usage(func):
    # Decorator that logs the net memory allocated while func runs.
    def call(*args, **kwargs):
        pre, _ = tracemalloc.get_traced_memory()
        result = func(*args, **kwargs)
        after, _ = tracemalloc.get_traced_memory()
        logging.info("Memory usage is %s", str(after - pre))
        return result
    return call
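(For illustration, the wrapper could then be applied as a decorator; load_data below is a made-up stand-in for a real flow component, and the logged number is only approximate.)

import logging
logging.basicConfig(level=logging.INFO)

@get_mem_usage
def load_data():
    # Stand-in for a real flow component: roughly 8 MB of list pointers.
    return [0] * 1_000_000

data = load_data()  # logs something like "Memory usage is 8000056"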
Thanks for showing me the snippet! If I understand this code right, it would estimate the total size of the value returned by func, since tracemalloc reports the net memory allocated between the two measurement points.
So, overall this seems like a reasonable technique, but keep in mind that tracemalloc only tracks allocations made through Python's memory allocator, so some C/C++ libraries might produce misleading results. I don't expect Bionic to start using multiple threads anytime soon, so that part shouldn't be a problem.
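(To make that caveat concrete: the OS-level resident set size (RSS) includes native allocations that tracemalloc may not see. A rough cross-check, sketched here with the third-party psutil package; measure_both is a made-up helper, not part of Bionic.)

import tracemalloc
import psutil

def measure_both(func, *args, **kwargs):
    proc = psutil.Process()
    rss_before = proc.memory_info().rss
    tracemalloc.start()
    result = func(*args, **kwargs)
    traced, _peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    rss_delta = proc.memory_info().rss - rss_before
    # A large gap between rss_delta and traced suggests allocations that
    # bypassed Python's allocator (e.g. inside a C/C++ extension).
    print(f"tracemalloc: {traced} bytes; RSS delta: {rss_delta} bytes")
    return result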
Thanks for confirming, Janek! I will close the ticket for now.