How could I see the video in rig... #19

Closed
cww97 opened this issue Jan 11, 2019 · 2 comments

Comments


cww97 commented Jan 11, 2019

I see these settings:

            save_video=True,
            save_video_period=100,

I can only see the .png files in the data folder. I want to know how or where I can find the video files.

vitchyr (Collaborator) commented Jan 17, 2019

The videos should be in the same folder inside data as the .png files.

Note that you need to wait until the RL part of the algorithm is running. If you see output about "train/epoch", then it is still pretraining the VAE. Once you see the log outputting information about the "Policy Loss", then that means RL is running.

Lastly, save_video_period=100 means that videos will only be logged once every 100 epochs. If you want to save videos more frequently, reduce this number.
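
For reference, these flags are usually set in the experiment's variant dictionary in the example scripts. The sketch below is only illustrative; every key other than save_video and save_video_period is an assumption and may not match the exact script you are running:

    # Illustrative sketch only: key names other than save_video and
    # save_video_period are assumptions and may differ from the actual
    # example script in this repo.
    variant = dict(
        algo_kwargs=dict(
            num_epochs=500,
        ),
        save_video=True,
        # Log a rollout video every 10 epochs instead of every 100.
        save_video_period=10,
    )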

vitchyr closed this as completed Jan 17, 2019
cww97 (Author) commented Jan 23, 2019


Thank you, sir. I can see the videos for epochs 100, 200, and 300 now (due to my weak GPU, I need more time).

In reality I have two Dobot robotic arms and some cameras. I wonder where in the sample experiment I can swap in my own robotic arms and cameras, and how I could test in the real world.

Sorry if my question seems a little stupid; I have not learned much about reinforcement learning or PyTorch (or even machine learning and deep learning in general). In other words, I am a rookie. I read the RIG paper, and I can mostly follow the idea. Could you give me some advice on understanding the algorithm, or deep RL, better?
