Hello! Thank you for the great paper. Could you please explain the idea behind dividing the loss value by eta (gradinit/gradinit_utils.py, line 231 at commit cb46853)? I didn't see anything related to this in the paper, and in my experiments I haven't observed any significant difference between running the code with and without this division, but maybe I am missing something important.
Hello, thanks for your interest! Indeed, this part is not thoroughly discussed in the paper. Originally we tried minimizing $L(\theta - \eta g) - L(\theta) \approx -\eta \|g\|^2$, so dividing by $\eta$ makes the objective less sensitive to the choice of the target learning rate $\eta$. However, that objective can be unstable, since it does not require either the initial loss or the updated loss to be small. We later switched to the current version but kept the division by $\eta$ for controlled experiments, and we have not thoroughly analyzed whether keeping it is desirable. It should have a similar effect of making the objective less sensitive to the choice of $\eta$: when $\eta$ is very small, the updated loss is very close to the initial loss, and vice versa.
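For concreteness, here is a minimal toy sketch (not the repository's actual code) contrasting the two objectives discussed above. The names `loss_fn`, `theta`, and `eta` are illustrative placeholders, and the quadratic loss just stands in for $L(\theta)$:

```python
import torch

def loss_fn(theta):
    # Toy quadratic loss standing in for L(theta).
    return 0.5 * (theta ** 2).sum()

theta = torch.randn(10, requires_grad=True)
eta = 0.1  # target learning rate

loss0 = loss_fn(theta)
# create_graph=True keeps the gradient differentiable, so the
# one-step-lookahead loss can itself be optimized.
(g,) = torch.autograd.grad(loss0, theta, create_graph=True)

# Loss after one SGD step with the target learning rate.
loss1 = loss_fn(theta - eta * g)

# Original objective: (L(theta - eta*g) - L(theta)) / eta  ~=  -||g||^2.
# Dividing by eta cancels the leading eta factor in the first-order Taylor
# expansion, so the objective's scale does not depend on eta.
obj_original = (loss1 - loss0) / eta

# Current version, keeping the division: L(theta - eta*g) / eta. When eta is
# very small, loss1 stays close to loss0, and the division again rescales the
# objective so it is less sensitive to the choice of eta.
obj_current = loss1 / eta
```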