
Video recording #360

Open

colllin opened this issue Jun 6, 2019 · 8 comments
colllin commented Jun 6, 2019

Hello 👋

I searched the repo a bit but I’m fairly new to it. I’m running on a headless server and I’m trying to understand if SLM-Lab has the capability for video recording of an episode. I saw that it installs the ffmpeg module in ubuntu_setup.sh, but don’t see if/where ffmpeg is being used. Does it have this capability? Can you point me to the code? If not, I can try to add it if you’d be interested.

kengz (Owner) commented Jun 7, 2019

Hi, the lab does not have this capability at the moment, but contributions are certainly welcome! If you'd like to work on it, I'd suggest a minimal method insertion like the one done for rendering here, just to ensure API consistency and simplicity.

colllin changed the title from "Video recording?" to "Video recording" on Jun 9, 2019
colllin (Author) commented Jun 9, 2019

I was able to get it working — the commit is here: colllin@b7d70cd

Example usage:

$ RECORD=true python run_lab.py data/dqn_cartpole_2019_123/dqn_cartpole_spec.json dqn_cartpole enjoy@dqn_cartpole_t0_s0_ckpt-best
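
For context, here is a minimal sketch of the approach, assuming the recording is done by wrapping the underlying gym env with gym.wrappers.Monitor when a RECORD environment variable is set; the helper name try_record and its exact placement are hypothetical and may differ from the actual commit:

import os
import gym

def try_record(env, log_dir):
    # Hypothetical helper: wrap a gym env with gym.wrappers.Monitor when
    # RECORD=true is set, so every episode is saved as a video.
    # Requires ffmpeg (installed by ubuntu_setup.sh) and, on a headless
    # server, a virtual display such as xvfb.
    if os.environ.get('RECORD', 'false').lower() == 'true':
        video_dir = os.path.join(log_dir, 'video')
        env = gym.wrappers.Monitor(env, video_dir, video_callable=lambda ep: True, force=True)
    return env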

I have some open questions that prevent me from submitting as a PR:

  • I can’t get the “enjoy” mode to work. I get an error about eval_df being empty. Should “enjoy” be able to read the eval_df from the data? That doesn’t seem to be working right now, or I’m doing it wrong. I commented out a few lines while I was testing the video recording.
  • Do you have a script or procedure for testing changes across environments? I should test video recording on vector envs and unity envs. I have no idea if the gym.wrappers.Monitor will work universally across envs.

Alternatively, if the integration is too complex, we could close this issue and you could point other people here to merge my commit into their project if they want to record video.

Thank you for your help & attention!

kengz (Owner) commented Jun 10, 2019

Hey, this looks good! It is also simple enough to integrate nicely. We'd be more than happy to accept a PR; this will be a wonderful addition to the lab.

The enjoy mode error is probably down to the eval_frequency. If you run with, say, max_frame of 10000 and eval_frequency of 1000, the eval_df will have been populated with 10 rows. If something else is causing that issue, I can take a look at the CI test and help fix the error.
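
For illustration, a hedged spec fragment along those lines; the exact nesting of these keys depends on the spec version, so treat it as a sketch rather than the canonical layout:

"env": [{
  "name": "CartPole-v0",
  "max_frame": 10000
}],
"meta": {
  "eval_frequency": 1000
}

With max_frame of 10000 and eval_frequency of 1000, eval_df gets 10 rows, which is more than the single datapoint needed for the metrics calculation.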

That said, it suffices to test just on a gym environment for now, since unity tests are mostly excluded. Most tests are located in test files which directly mirror the original file structure. Feel free to add just 1 simple test.

colllin (Author) commented Jun 11, 2019

Thanks @kengz, I’d be happy to write a test, add some docs, and submit a PR. Can you help me with something though? When I run the following commands (basically right out of the docs), enjoy mode seems to fail for me:

$ python run_lab.py slm_lab/spec/demo.json dqn_cartpole train
...
$ python run_lab.py data/dqn_cartpole_2019_.../dqn_cartpole_spec.json dqn_cartpole enjoy@dqn_cartpole_t0_s1
Traceback (most recent call last):                                                                                                                                                                 
  File "run_lab.py", line 84, in <module>                                                                                                                                                          
    main()                                                                                                                                                                                         
  File "run_lab.py", line 73, in main                                                                                                                                                              
    read_spec_and_run(*args)                                                                                                                                                                       
  File "run_lab.py", line 57, in read_spec_and_run                                                                                                                                                 
    run_spec(spec, lab_mode)                                                                                                                                                                       
  File "run_lab.py", line 42, in run_spec                                                                                                                                                          
    Session(spec).run()                                                                                                                                                                            
  File "/home/ubuntu/SLM-Lab/slm_lab/experiment/control.py", line 115, in run                                                                                                                      
    metrics = analysis.analyze_session(self.spec, self.agent.body.eval_df, 'eval')                                                                                                                 
  File "/home/ubuntu/SLM-Lab/slm_lab/experiment/analysis.py", line 232, in analyze_session                                                                                                         
    assert len(session_df) > 1, f'Need more than 1 datapoint to calculate metrics'                                                                                                                 
AssertionError: Need more than 1 datapoint to calculate metrics 

Am I missing something? It seems that self.agent.body.eval_df hasn’t been populated as expected.

kengz (Owner) commented Jun 12, 2019

Hey, I'm pretty sure the code you added is not causing that, since it doesn't have any side effects.
Could you double-check that you're on the latest commit? git log should show the latest SHA starting with 89edebc.
I suspect your demo.json has an eval_frequency that is too large, so it never collected enough eval checkpoint data rows.

kengz (Owner) commented Jun 29, 2019

Hi @colllin, are you still facing the issue?

colllin (Author) commented Jun 29, 2019 via email

kengz (Owner) commented Jun 29, 2019

Test it under the module of the added method; doing so might be cleaner. Just grab a test_env from the conftest to get an example environment and step through it with random actions, as done here:

env.reset()
done = False
while not done:
    _, reward, done, _ = env.step(env.action_space.sample())

However, if that turns out to be problematic, just add a minimal invocation test, or skip the test altogether, since the addition is short.
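
As a rough illustration, a minimal pytest-style test against a plain gym env; the test name, the use of the tmpdir fixture, and the Monitor-based wrapping are assumptions rather than existing lab code, and recording needs ffmpeg (plus a virtual display on headless machines):

import glob
import gym

def test_video_recording(tmpdir):
    # Hypothetical sketch: wrap a gym env so every episode is recorded to
    # tmpdir, run one episode with random actions, then check that a video
    # file was written.
    env = gym.wrappers.Monitor(
        gym.make('CartPole-v0'), str(tmpdir),
        video_callable=lambda ep: True, force=True)
    env.reset()
    done = False
    while not done:
        _, reward, done, _ = env.step(env.action_space.sample())
    env.close()
    assert len(glob.glob(str(tmpdir.join('*.mp4')))) > 0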
