While I was working on creating some samples for #1287, I noticed that the gradients coming back to LSTM layers differ depending on whether the LSTM layer is broadcast or the map function is used. I suspect that the gradients from broadcasting are the incorrect ones, because the nets I'm working on struggle to optimize the loss when broadcasting but optimize it fine when using map.
Here is a minimal working example comparing the gradients from using map with the gradients from dot broadcasting:
using Flux
using Flux: mse
using Zygote: gradient
using Statistics
using Random
Random.seed!(20)
x = [rand(4) for x in 1:3]
y = [rand(4) for x in 1:3]
forward = LSTM(4, 4)
m1(x) = forward.(x)
m2(x) = map(forward, x) |> collect
loss1(x, y) = begin Flux.reset!(forward); sum(mse.(m1(x), y)) end
loss2(x, y) = begin Flux.reset!(forward); sum(mse.(m2(x), y)) end
ps = params(forward)
g1 = gradient(ps) do
    loss1(x, y)
end
g2 = gradient(ps) do
    loss2(x, y)
end
for (i, p) in enumerate(ps)
    diff = g1[p] .- g2[p]
    abs_diff = abs.(diff)
    max_vals = maximum.(zip(g1[p], g2[p]))
    rel_diff = abs.(diff ./ max_vals)
    println("param $i mean abs diff: ", mean(abs_diff))
    println("param $i mean rel diff: ", mean(rel_diff))
    println()
end
which gives me the results
param 1 mean abs diff: 0.008737461794316823
param 1 mean rel diff: 0.41276081573440826
param 2 mean abs diff: 0.0010637529608087419
param 2 mean rel diff: 0.4532311729338062
param 3 mean abs diff: 0.01340231641668706
param 3 mean rel diff: 0.27217502820508244
param 4 mean abs diff: 0.024221922410463972
param 4 mean rel diff: 1.2236773487876056
param 5 mean abs diff: 0.07576487501903266
param 5 mean rel diff: 6.1682256880252115
This is with Julia 1.4.1, Flux 0.11.0, and Zygote 0.5.4. I get the same results with Julia 1.5.1, Flux 0.11.1, and Zygote 0.5.5.
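As a sanity check (not part of the original example), one way to judge which of the two gradients is correct is to compare a single parameter entry against a central finite difference. The sketch below reuses the forward, x, y, ps, loss1, g1, and g2 defined above; since the LSTM parameters are Float32, the comparison is only good to a few digits, but that is enough to distinguish gradients that differ by tens of percent.

# Hedged sketch: compare one analytic gradient entry against a central
# finite difference to see whether the broadcast or the map gradient matches.
p = first(ps)        # one parameter array of the LSTM cell
i = 1                # single entry to check
ϵ = 1e-3             # coarse step; parameters are Float32
orig = p[i]
p[i] = orig + ϵ; lp = loss1(x, y)   # loss1 and loss2 return the same value
p[i] = orig - ϵ; lm = loss1(x, y)
p[i] = orig                          # restore the parameter
fd = (lp - lm) / (2ϵ)
println("finite diff:    ", fd)
println("broadcast grad: ", g1[p][i])
println("map grad:       ", g2[p][i])

Repeating this for a few entries of each parameter array should show which of the two analytic gradients the numerical estimate agrees with.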