Looking for information on expected speed / RAM usage #6
Having dug into this some more, it looks like, for me, over 50% of the execution time is spent running the `enabled.observation_callable()` function in this line on the 3 camera sensors/observables, which, as I understand it, generates the RGB inputs. If I've understood this correctly, a few questions:
Making the edits in (1) above to remove the 3 cameras, I get the following execution times for rgb_stacking using the same code as in the original comment:
Hello! Thank you very much for the detailed message. Here are some answers I hope will be useful:
Hi! Thanks so much for your response, that's really helpful. I have added some replies to your bullets in order below:
Thanks again, really appreciate all your help!
No worries, glad to help! Here are further answers:
The back camera was indeed not used as input for the agent; its main use was with the blob detector, to improve the localisation of the object.
You can indeed increase the speed of the simulation by increasing the timestep. Please note, however, that this will make the contacts between the objects and the plane fairly unstable: they will start shaking and moving around significantly (they might even slide out of the gripper). But apart from that, there is nothing inherently wrong with increasing the physics timestep.
The environment is indeed large, so this doesn't surprise me too much.
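Since the physics timestep lives in the model's MJCF definition, one way to experiment with the speed/stability trade-off described above is to patch the XML before compiling. A minimal sketch; the tiny MJCF string and the `scale_timestep` helper are hypothetical illustrations, not the actual rgb_stacking code:

```python
import xml.etree.ElementTree as ET

# Hypothetical minimal MJCF model, for illustration only; the real
# rgb_stacking model is far larger and loaded from the package's assets.
MJCF = """
<mujoco>
  <option timestep="0.0005"/>
  <worldbody>
    <body><geom type="sphere" size="0.02"/></body>
  </worldbody>
</mujoco>
"""

def scale_timestep(mjcf_xml: str, factor: float) -> str:
    """Return a copy of the MJCF XML with <option timestep=...> scaled by factor."""
    root = ET.fromstring(mjcf_xml)
    option = root.find("option")
    option.set("timestep", str(float(option.get("timestep")) * factor))
    return ET.tostring(root, encoding="unicode")

# A 4x larger timestep means roughly 4x fewer physics steps per control step,
# at the cost of the contact instability described above.
faster = scale_timestep(MJCF, 4.0)
```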
Great, thanks, that's really helpful! I think that's all my questions for now, so I'm happy to close this issue. Just finally, on RAM (in case it is useful for anyone else who reads this): I dug into it some more, and it looks like the main thing driving the greater RAM usage is the setting of the MuJoCo parameters nconmax and njmax to 5000 in these lines. If I revert to the MuJoCo default (which I think is 1000), it cuts the env size from around 0.7 GB to more like 0.15 GB (because an n * n array is allocated for each of these parameters). I imagine this potentially leads to less accurate contact dynamics, though.
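For anyone wanting to try the same memory reduction, nconmax and njmax live in the model's MJCF `<size>` element, so they can be overridden before compiling. A minimal sketch; the one-line MJCF string and the `set_contact_limits` helper are hypothetical illustrations, not the actual rgb_stacking code:

```python
import xml.etree.ElementTree as ET

# Hypothetical MJCF fragment; the comment above says rgb_stacking
# sets both limits to 5000 in its model definition.
MJCF = '<mujoco><size nconmax="5000" njmax="5000"/></mujoco>'

def set_contact_limits(mjcf_xml: str, nconmax: int, njmax: int) -> str:
    """Override the <size> contact/constraint limits in an MJCF XML string."""
    root = ET.fromstring(mjcf_xml)
    size = root.find("size")
    if size is None:  # add the element if the model doesn't declare one
        size = ET.SubElement(root, "size")
    size.set("nconmax", str(nconmax))
    size.set("njmax", str(njmax))
    return ET.tostring(root, encoding="unicode")

# Revert to the MuJoCo-default-sized limits mentioned above.
smaller = set_contact_limits(MJCF, 1000, 1000)
```

Lowering these limits means MuJoCo can hold fewer simultaneous contacts/constraints, which is the accuracy trade-off noted above.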
Hi,
Apologies if I have missed something obvious.
When I run the rgb_stacking environment for 300 steps with STATE_ONLY observations (and a fixed action), it usually takes well over 20x longer than running 300 steps (with a fixed action) using the Meta-World benchmark environment, which also uses a Sawyer arm with the MuJoCo simulator to do pick-and-place (and other) tasks.
rgb_stacking is obviously a more complicated environment/simulation, but this seemed like a lot, so I wanted to check: is this about the slowdown you would expect compared to other MuJoCo Sawyer arm simulations (given the greater complexity), or does this look like an issue with my implementation?
On my machine, the rgb_stacking environment also uses much more RAM, c. 0.75 GB per instance - is this also what you would expect?
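For what it's worth, one stdlib-only (Unix) way to measure the kind of per-instance memory cost quoted here is to compare peak RSS before and after construction. A sketch; the `bytearray` is a hypothetical stand-in for building one environment instance, so the snippet runs without rgb_stacking installed:

```python
import resource

def peak_rss_mb() -> float:
    """Peak resident set size of this process in MB (Linux reports ru_maxrss in KB)."""
    return resource.getrusage(resource.RUSAGE_SELF).ru_maxrss / 1024

before = peak_rss_mb()
# Stand-in allocation: replace with constructing one environment instance.
big = bytearray(200 * 1024 * 1024)
after = peak_rss_mb()
print(f"approx. extra peak RAM: {after - before:.0f} MB")
```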
A couple of other questions:
Any help would be really appreciated.
Example Execution Times 300 steps (code below):
rgb_stacking:
real 1m13.886s
user 1m42.632s
meta-world:
real 0m1.571s
user 0m4.835s
rgb_stacking test code:
meta-world test code:
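The actual test scripts aren't shown above; a generic sketch of the kind of fixed-action timing loop described follows. The `time_fixed_action_rollout` helper and `_DummyEnv` stand-in are hypothetical, so the snippet runs without rgb_stacking or Meta-World installed; swap in a real environment and action to reproduce the comparison:

```python
import time

def time_fixed_action_rollout(env, action, n_steps: int = 300) -> float:
    """Reset env, then time n_steps of env.step(action); returns seconds elapsed."""
    env.reset()
    start = time.perf_counter()
    for _ in range(n_steps):
        env.step(action)
    return time.perf_counter() - start

# Dummy stand-in environment so the sketch is self-contained.
class _DummyEnv:
    def reset(self):
        pass
    def step(self, action):
        return action

elapsed = time_fixed_action_rollout(_DummyEnv(), action=0)
print(f"300 fixed-action steps took {elapsed:.4f}s")
```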