
[Questions] How can we generate offline datasets like D4RL? #24

Open
return-sleep opened this issue Dec 6, 2023 · 1 comment

@return-sleep

Thanks for your outstanding work. I would like to ask: how should we generate offline datasets, such as the medium or medium-expert versions in D4RL? Also, is it possible to render states into images to support learning offline policies from visual observations?

@aiueola
Collaborator

aiueola commented Dec 10, 2023

@return-sleep

Thank you for the question. For dataset generation, please refer to this page in the documentation. Although we do not provide pre-trained agents for data collection, you can train a policy online and then use it as the data-collection policy.
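For a rough idea of what that looks like in practice (a minimal sketch, not an official recipe from this repository): in D4RL, the "medium" datasets are logged by a partially trained policy, and "medium-expert" mixes those trajectories with expert ones. Assuming d3rlpy v2.x and a placeholder environment, one could train online, stop early, and dump the replay buffer:

```python
# A minimal sketch (not an official recipe from this repository): collect a
# D4RL-style "medium" dataset with d3rlpy v2.x. The environment and step
# counts are arbitrary placeholders.
import gym
import d3rlpy

env = gym.make("Pendulum-v1")

# Train SAC online, but stop well before convergence so the logged policy
# is only "medium" quality; the replay buffer doubles as the dataset.
sac = d3rlpy.algos.SACConfig().create()
buffer = d3rlpy.dataset.create_fifo_replay_buffer(limit=100000, env=env)
sac.fit_online(env, buffer, n_steps=100000)

# Save the logged interactions as the offline dataset. A "medium-expert"
# dataset would concatenate these episodes with ones from a fully trained
# (expert) policy.
with open("pendulum_medium.h5", "wb") as f:
    buffer.dump(f)
```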

For the rendering of visual inputs, we do not have functions specialized for visualizing image observations. However, the offline reinforcement learning module should be able to handle them by taking advantage of the PixelEncoderFactory provided in d3rlpy. For example, when training CQL, you can simply pass the instance as actor_encoder_factory=pixel_encoder_factory, critic_encoder_factory=pixel_encoder_factory, as described in d3rlpy's documentation.
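Concretely, a minimal sketch of the above, assuming d3rlpy v2.x and a placeholder `image_dataset` (an MDPDataset whose observations are channel-first image arrays):

```python
# A minimal sketch (not this repository's API): train CQL on image
# observations via d3rlpy's PixelEncoderFactory. Assumes d3rlpy v2.x;
# `image_dataset` is a placeholder MDPDataset whose observations are
# channel-first image arrays (e.g. uint8 with shape (3, 84, 84)).
import d3rlpy

pixel_encoder_factory = d3rlpy.models.encoders.PixelEncoderFactory()

cql = d3rlpy.algos.CQLConfig(
    actor_encoder_factory=pixel_encoder_factory,
    critic_encoder_factory=pixel_encoder_factory,
    # Normalizes uint8 pixels into [0, 1] before they reach the encoders.
    observation_scaler=d3rlpy.preprocessing.PixelObservationScaler(),
).create()

cql.fit(image_dataset, n_steps=100000)
```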
