What is the recommended way to render a TensorFlow Agents TF-Agent? #59
Comments
Currently, I think the recommended way is to render directly from the underlying Python environment.
Thanks for the suggestion! Calling it doesn't really work, though - the render it produces is always the same, no matter how long training has gone on. It also looks the same as when I call my more complex expression above with any valid batch index (those all look identical too). Maybe there is no way to render through the TF environment.
My feeling is that they don't want the TF environment to be used for rendering; they only expect it to be used during agent training and evaluation, which doesn't need rendering. Rendering is, after all, the way a human user gets visual confirmation of the agent's performance. The agent sees the environment purely through state observations and rewards.
As @sidneyyan mentioned, the simplest way is to just render from the Python environment.
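To make the "render from the Python environment" pattern concrete, here is a minimal stand-alone sketch. It does not import TF-Agents; `ToyPyEnv` and `ToyTFWrapper` are hypothetical stand-ins for a Gym-wrapped Python environment and `TFPyEnvironment`. The point is that the same underlying env object is stepped through the wrapper but rendered directly:

```python
class ToyPyEnv:
    """Stand-in for a Python (Gym-wrapped) environment."""
    def __init__(self):
        self.state = 0

    def step(self, action):
        self.state += action
        return self.state

    def render(self):
        # render() reads the *current* state, so its output
        # changes as the environment is stepped.
        return f"state={self.state}"


class ToyTFWrapper:
    """Stand-in for tf_py_environment.TFPyEnvironment: it steps the
    wrapped Python env but does not implement render() itself."""
    def __init__(self, py_env):
        self._py_env = py_env

    def step(self, action):
        return self._py_env.step(action)


py_env = ToyPyEnv()
tf_env = ToyTFWrapper(py_env)  # train/evaluate through the wrapper
tf_env.step(1)
tf_env.step(1)
print(py_env.render())  # render from the Python env directly -> state=2
```

Because both objects share the one underlying env, stepping through the wrapper and rendering from the Python side stay in sync.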
@sibyjackgrove is somewhat correct. It isn't that we didn't want people to render through the TF env. However, I tried this today, and some environments have issues rendering through the wrapped tf_environment, because the render can be triggered outside of the main thread, which OpenGL generally doesn't like. @gnperdue Do you have a specific environment or example we could look at where the rendering isn't changing when you call the wrapped Python env? As mentioned above, we normally render directly from the Python env.
Hmmm... well, if I try with OpenAI Gym's 'CartPole-v0', the environment is not repeating, but with my custom Gym environment it is. Let me dig and see whether the problem is in my own code... (I'm returning a Matplotlib figure instead of an RGB array, so maybe that is part of the issue.) Thanks for all the comments!
Sounds good. Make sure the state of your env is actually changing as you step it. Best of luck, and feel free to open a new issue if there's something we can help with.
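One common way a custom Gym env ends up rendering the same frame forever is caching: if `render()` returns a figure or array built once at construction (or at reset) instead of re-deriving it from the current state, every call looks identical. The class names below are hypothetical illustrations, not anything from TF-Agents or Gym:

```python
import numpy as np


class CachedRenderEnv:
    """Buggy: render() returns a frame captured at construction."""
    def __init__(self):
        self.state = 0.0
        self._frame = np.full((2, 2), self.state)  # captured once, never rebuilt

    def step(self, action):
        self.state += action

    def render(self):
        return self._frame  # stale: does not reflect the updated state


class LiveRenderEnv:
    """Fixed: render() rebuilds the frame from the current state."""
    def __init__(self):
        self.state = 0.0

    def step(self, action):
        self.state += action

    def render(self):
        return np.full((2, 2), self.state)  # fresh frame every call


buggy, fixed = CachedRenderEnv(), LiveRenderEnv()
for env in (buggy, fixed):
    env.step(1.0)
print(buggy.render()[0, 0], fixed.render()[0, 0])  # 0.0 1.0
```

The same stale-frame symptom shows up when a Matplotlib figure is created once and returned on every call rather than redrawn from the current state.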
@gnperdue Hi, did you ever find a way to render your environment?
@grizzlybearg I don't think so (my memory is hazy on exactly what I was working on back then), but that's okay. I worked around it... thanks for checking in.
I tried to post this question on Stack Overflow, but I lack the reputation to create a 'tensorflow-agents' tag. So...
OpenAI Gym environments carry a `.render()` method that is directly accessible in the TF-Agents Python environment created by `gym_wrapper.GymWrapper`. However, when training with an agent in the TensorFlow environment created by `tf_py_environment.TFPyEnvironment`, calling `.render()` throws a not-implemented exception. If you dig a bit, you find that the environment underneath the TensorFlow environment is a batched Python env, and you can cheat your way down to the Gym environment at the bottom with something like:

where the `-1` index selects the position in the batch. However, no matter what index I provide, the render never updates. What is the recommended way to get a TF-Agents TensorFlow environment to render?
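The exact expression the question refers to did not survive extraction. In TF-Agents the chain is, as far as I can tell, roughly `tf_env.pyenv.envs[-1].render()` (via `TFPyEnvironment`'s `pyenv` property and the batched env's `envs` list); treat those attribute names as assumptions rather than confirmed API. A stand-alone toy mirror of that wrapper nesting, using hypothetical stand-in classes:

```python
class ToyGymEnv:
    """Stand-in for the bottom-level Gym environment."""
    def __init__(self, name):
        self.name = name

    def render(self):
        return f"rendering {self.name}"


class ToyBatchedEnv:
    """Stand-in for a batched Python environment holding several envs."""
    def __init__(self, envs):
        self.envs = envs


class ToyTFEnv:
    """Stand-in for TFPyEnvironment; exposes the wrapped py env."""
    def __init__(self, pyenv):
        self.pyenv = pyenv


tf_env = ToyTFEnv(ToyBatchedEnv([ToyGymEnv("env-0")]))
# Digging through the wrappers; the index picks a position in the batch:
print(tf_env.pyenv.envs[-1].render())  # -> rendering env-0
```

This reaches the bottom-level env, but as noted in the thread, the maintainers' recommendation is to keep a reference to the Python env yourself and render from it directly rather than digging through the wrappers.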
Thanks for any thoughts!