With your help, I was able to solve the CartPole problem with FQF in legacy mode.
Another question: how do I add L2 regularization or dropout to prevent overfitting?
I see that there is a norm_layer in the MLP of Net. Is L2 regularization added here?
No. In PyTorch, L2 regularization (aka, "weight decay") is a function of the optimizer. See the following code for a proper example, which doesn't add L2-reg/weight decay to biases or norm layer weights:
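A minimal sketch of that pattern (the original code block was not preserved, so this is an illustrative reconstruction, not the exact snippet referenced above): parameters are split into two optimizer groups, and weight decay is applied only to weight matrices, not to biases or normalization-layer parameters. The helper name `make_optimizer` and the hyperparameter values are assumptions for illustration.

```python
import torch
import torch.nn as nn


def make_optimizer(model: nn.Module, lr: float = 1e-3, weight_decay: float = 1e-4):
    """Adam optimizer with weight decay applied only to weight matrices,
    skipping biases and norm-layer (LayerNorm/BatchNorm) parameters."""
    decay, no_decay = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        # Biases and norm-layer scale/shift parameters are 1-D tensors;
        # exclude them from weight decay.
        if param.ndim <= 1 or name.endswith(".bias"):
            no_decay.append(param)
        else:
            decay.append(param)
    return torch.optim.Adam(
        [
            {"params": decay, "weight_decay": weight_decay},
            {"params": no_decay, "weight_decay": 0.0},
        ],
        lr=lr,
    )


# Dropout, by contrast, goes inside the network itself:
net = nn.Sequential(
    nn.Linear(4, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # randomly zeroes activations during training
    nn.Linear(128, 2),
)
optim = make_optimizer(net)
```

You would then pass this optimizer to the policy in place of a plain `torch.optim.Adam(net.parameters(), lr=...)`. Note that `nn.Dropout` is only active in training mode; calling `net.eval()` disables it for evaluation.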