Thanks for your outstanding work. I would like to ask: how should we generate offline datasets, such as the medium or medium-expert versions in D4RL? Also, is it possible to render states into images to support learning offline policies from visual observations?
Thank you for the question. For dataset generation, please refer to this page in the documentation. Though we do not provide pre-trained agents for data collection, you can train a policy online and then use it as a data collection policy.
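The train-online-then-collect approach above can be sketched library-agnostically. Everything here is hypothetical for illustration: `ToyEnv` stands in for a real gym-style environment and `scripted_policy` for a policy you trained online; only the D4RL-style dict layout at the end mirrors the real dataset format.

```python
import numpy as np

class ToyEnv:
    """Hypothetical 1-D environment standing in for a real gym-style env."""
    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return np.zeros(2, dtype=np.float32)

    def step(self, action):
        self.t += 1
        obs = np.full(2, self.t, dtype=np.float32)
        reward = float(action)              # toy reward signal
        done = self.t >= self.horizon
        return obs, reward, done

def scripted_policy(obs):
    """Stand-in for a policy trained online (e.g. with SAC)."""
    return 1.0

def collect_dataset(env, policy, n_episodes=5):
    """Roll out the policy and record transitions in a D4RL-style dict."""
    observations, actions, rewards, terminals = [], [], [], []
    for _ in range(n_episodes):
        obs, done = env.reset(), False
        while not done:
            action = policy(obs)
            next_obs, reward, done = env.step(action)
            observations.append(obs)
            actions.append(action)
            rewards.append(reward)
            terminals.append(done)
            obs = next_obs
    return {
        "observations": np.stack(observations),
        "actions": np.asarray(actions),
        "rewards": np.asarray(rewards),
        "terminals": np.asarray(terminals),
    }

dataset = collect_dataset(ToyEnv(), scripted_policy)
```

A "medium" dataset would come from snapshotting the policy partway through online training, while "medium-expert" mixes transitions collected by that medium snapshot with transitions from the fully trained policy.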
For the rendering of visual inputs, we do not have functions specialized for visualizing image observations. However, the offline reinforcement learning module should be able to handle them by taking advantage of the PixelEncoderFactory provided in d3rlpy. For example, when training CQL, you can simply pass the instance as actor_encoder_factory=pixel_encoder_factory, critic_encoder_factory=pixel_encoder_factory, as described in d3rlpy's documentation.
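As a configuration sketch, that looks like the snippet below. Note that constructor details differ between d3rlpy versions (v2 uses `CQLConfig(...).create()` rather than `CQL(...)`), so check the documentation for your installed version before copying this verbatim.

```python
import d3rlpy
from d3rlpy.models.encoders import PixelEncoderFactory

# Shared CNN-based encoder factory for image observations.
pixel_encoder_factory = PixelEncoderFactory()

# Pass the factory to both the actor and the critic so each network
# consumes pixel inputs through a convolutional encoder.
cql = d3rlpy.algos.CQL(
    actor_encoder_factory=pixel_encoder_factory,
    critic_encoder_factory=pixel_encoder_factory,
)
```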