There is one error in GUI with multiprocess #12
Hi, we previously encountered a similar issue randomly in our environment, though it hasn't appeared recently. The problem seems to be:
This issue occurs only when the GUI is enabled, so it should not be a problem when running in headless mode.

Thank you for your answer.

Same error here.

Hello, is there any progress on solving this issue? Additionally, I would like to ask whether I can run this project on an RTX 3070 (39GB). Regards.

I'm sorry, I can't resolve it.
I modified the function named `get_latest_queue` in gui_utils.py as follows:

```python
def get_latest_queue(q):
    message = None
    while True:
        try:
            message_latest = q.get_nowait()
            if message is not None:
                del message
            message = message_latest
        except queue.Empty:
            if q.qsize() < 1:
                break
        # zajia: for unsolved bug related to "torch.storage._UntypedStorage"
        except TypeError:
            print("get a torch.storage._UntypedStorage error!")
            break
    return message
```

I tried to catch the error without modifying the `message` variable, and it seems to work properly.
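For reference, the drain-to-latest pattern used above (empty the queue, keep only the most recent item, and bail out on the known `TypeError` from unpickling `torch.storage._UntypedStorage`) can be sketched in isolation. This is a minimal illustration with a plain `queue.Queue` rather than the project's `multiprocessing.Queue`, so the `qsize()` guard against spurious `Empty` exceptions is omitted; the function name `get_latest` is made up for the example:

```python
import queue

def get_latest(q):
    """Drain q and return only the most recent item, or None if q is empty."""
    message = None
    while True:
        try:
            message = q.get_nowait()
        except queue.Empty:
            # Queue is drained; return the last message we saw.
            break
        except TypeError:
            # Workaround for the unresolved bug: deserialization can raise
            # TypeError for torch.storage._UntypedStorage. Stop draining and
            # keep the last good message instead of crashing the GUI process.
            break
    return message

q = queue.Queue()
for i in range(5):
    q.put(i)
print(get_latest(q))          # -> 4 (only the newest item is kept)
print(get_latest(queue.Queue()))  # -> None
```

The key point is that both exception paths leave `message` holding the last successfully retrieved value, so a transient deserialization failure only drops the newest update rather than stalling the GUI loop.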
The hardware environment is a single 3090, and the timing of this error is uncertain. Although the SLAM process continues afterwards, the stuck GUI causes a memory overflow. This may be the reason why Issue #7 cannot run.