
Performance on Hopper-v2 #41

Closed
quanvuong opened this issue Apr 8, 2019 · 11 comments

@quanvuong

Originally posted under another issue, but re-posting for visibility:

Sorry to open this up again, but I am unable to obtain results comparable to the TensorFlow implementation using the master branch. I've posted the training graphs for the PyTorch and TensorFlow implementations below for comparison. Both results were averaged over 5 seeds.

[plot: TSAC Hopper-v2, PyTorch]

[plot: TSAC Hopper-v2, TF]

The TF implementation's final performance is higher, and it also learns faster. The shape of the TF curve also closely matches the graph in the paper, i.e. it increases quickly and plateaus at around 400 epochs.

Does the PyTorch graph look similar to what you obtained?

I just want to mention that your repo is awesome. Answering pestering questions from me is not your responsibility :) and I really appreciate any help here.

@vitchyr (Collaborator) commented Apr 8, 2019

What hyperparameters did you run exactly? I got this after running over 5 seeds.
[image: Hopper-v2 training curve over 5 seeds]

Also, are you using the latest code? I pushed v0.2 only 3 days ago, which includes a few changes that seem to have helped.

Some relevant hyperparams:

  "batch_size": 256,
  "layer_size": 256,
  "max_path_length": 1000,
  "min_num_steps_before_training": 1000,
  "num_epochs": 3000,
  "num_eval_steps_per_epoch": 5000,
  "num_expl_steps_per_train_loop": 1000,
  "num_trains_per_train_loop": 1000,
  "replay_buffer_size": 1000000,
  "discount": 0.99,
  "policy_lr": 0.0003,
  "qf_lr": 0.0003,
  "reward_scale": 1,
  "soft_target_tau": 0.005,
  "target_update_period": 1,
  "use_automatic_entropy_tuning": true

@quanvuong (Author)

Thank you for the speedy reply! I ran v0.2 with the default hyperparameters. I'll double-check that the hyperparameters I ran with match what you posted.

To confirm, the performance metric is logged to “evaluation/Returns Mean”, right?

Also, would you be so kind as to share your plotting code? Did you have to do smoothing to get the solid blue line in your graph? If I plot "evaluation/Returns Mean" averaged over 5 seeds directly, I get a very jagged pattern in my graph.

@vitchyr (Collaborator) commented Apr 8, 2019

I'll run it again just to check. Yes, that's the correct metric. I used viskit for plotting and did temporal smoothing. I think the important thing is that the thick, shaded region is about the same width as yours.
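
If it helps, here's a minimal sketch of that kind of plot: average "evaluation/Returns Mean" over seeds, apply temporal smoothing, and shade the spread. The glob pattern is hypothetical; point it at wherever each seed's progress.csv actually lives.

```python
import glob
import pandas as pd
import matplotlib.pyplot as plt

# "hopper-sac/seed*/progress.csv" is a hypothetical layout.
runs = [pd.read_csv(path)["evaluation/Returns Mean"]
        for path in glob.glob("hopper-sac/seed*/progress.csv")]
returns = pd.concat(runs, axis=1)  # one column per seed, indexed by epoch

mean = returns.mean(axis=1)
smoothed = mean.rolling(window=10, min_periods=1).mean()  # temporal smoothing

plt.plot(smoothed, label="mean over seeds (smoothed)")
plt.fill_between(returns.index, returns.min(axis=1), returns.max(axis=1),
                 alpha=0.2)  # shaded band showing spread across seeds
plt.xlabel("epoch")
plt.ylabel("evaluation/Returns Mean")
plt.legend()
plt.show()
```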

@vitchyr (Collaborator) commented Apr 9, 2019

Ah, I think the issue is that the paths are sometimes not exactly 1000 steps long, e.g. if the agent terminates early. This biases the returns to look worse than they actually are, since the mean might average in a path that had only length 1 (and therefore a return of ~3). For example, looking at the average rewards, we see that they're basically the same:

[image: average-reward comparison]

So, this seems like a bug in the logging/eval code, but not in the training (phew!). I'll push a fix soon.
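
A toy illustration of the bias, with hypothetical numbers:

```python
# Four full-length paths plus one path that terminated after a single step.
# Averaging per-path returns drags the metric down; reward-per-step doesn't move.
returns = [3000.0, 3000.0, 3000.0, 3000.0, 3.0]  # last path terminated at step 1
lengths = [1000, 1000, 1000, 1000, 1]

mean_return = sum(returns) / len(returns)      # 2400.6 -- looks much worse
reward_per_step = sum(returns) / sum(lengths)  # ~3.0   -- basically unchanged
print(mean_return, reward_per_step)
```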

@quanvuong (Author)

Thanks! I'll rerun the code and let you know how it goes.

@vitchyr (Collaborator) commented Apr 9, 2019

I'm getting the following now:
[image: updated Hopper-v2 training curve]
Can you smooth out the TensorFlow results to see if the results are really that different?

@quanvuong (Author)

It still looks worse than the TensorFlow results, unfortunately, especially near the end of training.

[plot: TSAC Hopper-v2, TF]

@vitchyr (Collaborator) commented Apr 9, 2019

Yeah, it's a bit different... It's not a big difference, but I'll look into it. The only difference I can think of is that I switched to batch training rather than online training; I'm expecting to add support for online mode soon. A rough sketch of the two loop structures I mean is below.
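
The stubs here are toys, not the actual rlkit classes; they just show how the two modes interleave environment steps and gradient steps differently:

```python
import random

buffer = []

def env_step():
    """Stand-in for taking one step in the environment."""
    return random.random()

def train_step(batch):
    """Stand-in for one gradient update on a sampled batch."""
    pass

def batch_mode(steps_per_loop=1000, batch_size=256):
    # Collect a block of env steps, then do a block of gradient steps.
    while True:
        buffer.extend(env_step() for _ in range(steps_per_loop))
        for _ in range(steps_per_loop):
            train_step(random.sample(buffer, min(batch_size, len(buffer))))

def online_mode(batch_size=256):
    # Interleave: one gradient step per environment step.
    while True:
        buffer.append(env_step())
        train_step(random.sample(buffer, min(batch_size, len(buffer))))
```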

@quanvuong (Author)

Okie, thanks so much!

@vitchyr (Collaborator) commented Apr 13, 2019

Looks like the problem was that the refactored v0.2 code was missing the future entropy term. See #43. Closed with 99e080f.

In particular, here's the hopper plot that I got:

[image: Hopper-v2 plot after the fix]
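
Concretely, the missing future entropy term is the -alpha * log pi(a'|s') inside the Q-function target. A minimal sketch of the corrected soft Bellman backup, with hypothetical stand-in networks rather than rlkit's exact code:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in target networks so the sketch runs.
# Hopper-v2 has obs_dim=11, act_dim=3.
obs_dim, act_dim = 11, 3
qf1_target = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, 1))
qf2_target = nn.Sequential(nn.Linear(obs_dim + act_dim, 256), nn.ReLU(), nn.Linear(256, 1))

def soft_q_backup(reward, done, next_obs, next_action, next_log_pi,
                  discount=0.99, alpha=0.2):
    """y = r + gamma * (1 - d) * (min_i Q_target_i(s', a') - alpha * log pi(a'|s'))."""
    with torch.no_grad():
        sa = torch.cat([next_obs, next_action], dim=-1)
        min_q = torch.min(qf1_target(sa), qf2_target(sa))
        # The "- alpha * next_log_pi" below is the future entropy term
        # that the refactored v0.2 code had dropped.
        return reward + (1.0 - done) * discount * (min_q - alpha * next_log_pi)
```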

vitchyr closed this as completed Apr 13, 2019
@ZhenhuiTang

> What hyperparameters did you run exactly? [...] Some relevant hyperparams: [...]

Hi, I was wondering how to change the random seeds?
