
Fix GPU compat, Question on NN __init__ #6

Merged
merged 3 commits into from May 18, 2019
Conversation

josiahls
Contributor

Section 1 of 2

When the GPU is enabled, `assert isinstance(x, torch.FloatTensor)` fails, since a CUDA tensor is a `torch.cuda.FloatTensor`, not a `torch.FloatTensor`.
This is fixed by calling:
`self.check_input_data_into_forward_once(x.cpu())`
However, perhaps it would be better to change the assertion itself instead:
`assert isinstance(x.cpu(), torch.FloatTensor)`?
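A device-agnostic alternative worth considering (a sketch, not the current nn_builder implementation; the helper name `check_input_dtype` is hypothetical) is to compare the tensor's dtype rather than its concrete class, which also avoids copying the tensor to the CPU just for the check:

```python
import torch

def check_input_dtype(x):
    # Comparing dtype works for both CPU and CUDA tensors: a CUDA tensor
    # fails an isinstance check against torch.FloatTensor but still
    # reports dtype torch.float32.
    assert x.dtype == torch.float32, "input must be a float32 tensor"

check_input_dtype(torch.zeros(2, 3))  # passes on CPU, and for the same tensor moved to CUDA
```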

Section 2 of 2

There are parameter conflicts during initialization.
Note in https://github.com/p-christ/Deep-Reinforcement-Learning-Algorithms-with-PyTorch/blob/80c09bdac501af3bc47d74901ceadd2f778cf3cb/agents/Base_Agent.py#L310 the initialization is:

return NN(input_dim=input_dim, hidden_layers_info=hyperparameters["linear_hidden_units"],
                  output_dim=output_dim, output_activation=hyperparameters["final_layer_activation"],
                  batch_norm=hyperparameters["batch_norm"], dropout=hyperparameters["dropout"],
                  hidden_activations=hyperparameters["hidden_activations"], initialiser=hyperparameters["initialiser"],
                  columns_of_data_to_be_embedded=hyperparameters["columns_of_data_to_be_embedded"],
                  embedding_dimensions=hyperparameters["embedding_dimensions"], y_range=hyperparameters["y_range"],
                  random_seed=seed).to(self.device)

However the current initialization is:

    def __init__(self, input_dim: int, layers_info: list, output_activation=None,
                 hidden_activations="relu", dropout: float =0.0, initialiser: str ="default", batch_norm: bool =False,
                 columns_of_data_to_be_embedded: list =[], embedding_dimensions: list =[], y_range: tuple = (),
                 random_seed=0, print_model_summary: bool =False)

Two issues:

  • hidden_layers_info does not exist
  • output_dim does not exist

So we can either:
a. change Deep-Reinforcement-Learning-Algorithms-with-PyTorch to match this code, or
b. change this code to match Deep-Reinforcement-Learning-Algorithms-with-PyTorch.

Currently I changed a little of both; however, I like how Deep-Reinforcement-Learning-Algorithms-with-PyTorch natively inputs layer information. I am curious about the rationale behind the differences between the two. Based on your response, I might change this to better match Deep-Reinforcement-Learning-Algorithms-with-PyTorch's usage of this library. I like having separate parameters for the input, hidden, and output layers.

One advantage of a single `layers_info` input might be greater flexibility across layer types, possibly supporting list or dictionary inputs in the future.
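Option (a) would only need a small adapter on the caller's side. A hypothetical sketch (`build_layers_info` is not part of either library; it assumes `layers_info` is a flat list of layer sizes ending with the output size):

```python
def build_layers_info(hidden_units, output_dim):
    # Fold the old-style separate arguments (hidden_layers_info and
    # output_dim) into the single layers_info list that the current
    # NN.__init__ signature expects: hidden sizes first, output size last.
    return list(hidden_units) + [output_dim]

# e.g. two hidden layers of 64 units and a 4-dimensional output:
layers_info = build_layers_info([64, 64], 4)  # [64, 64, 4]
```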

josiahls and others added 3 commits May 15, 2019 15:07
@p-christ
Owner

Great, thanks a lot! Deep-Reinforcement-Learning-Algorithms-with-PyTorch is just using an out-of-date version of nn_builder. It should be `layers_info` in both; I will change it.

@p-christ p-christ merged commit a3f5005 into p-christ:master May 18, 2019