Make autodiff insertion more versatile #181

Open · KnutAM wants to merge 7 commits into master

Conversation

@KnutAM (Member) commented Mar 25, 2022

Two problems are clear from #179:

  1. The @implement_gradient macro doesn't work if the original function is defined for anything more specific than AbstractTensor (e.g. f(::Tensor{2,3})).
  2. No error is thrown if the output type of the user-specified gradient function is incorrect.

This PR therefore:

  • Exports the propagate_gradient function to give the user full control over the type specification when dispatching on dual numbers. This makes it possible to solve point 1 above, and it is documented with an example (a rough sketch follows this list).
  • Throws an error for incorrect output from the user-supplied gradient function.
  • Adds tests that dimension mismatches are caught.
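
As a rough illustration of the exported-function route (a sketch only, not the example added in the PR documentation; the analytical gradient shown, the use of Tensors.Dual for dispatch, and the qualified Tensors.gradient / Tensors.propagate_gradient calls are illustrative assumptions):

using Tensors, LinearAlgebra

# Original function, deliberately defined for a specific tensor type rather than AbstractTensor.
f(x::Tensor{2,3}) = norm(x) * x

# User-supplied value-and-gradient pair: d(norm(x)*x)/dx = x⊗x/norm(x) + norm(x)*I.
f_dfdx(x::Tensor{2,3}) = (f(x), otimes(x, x) / norm(x) + norm(x) * one(Tensor{4,3}))

# With propagate_gradient available to the user, the dual-number hook can be written by
# hand, with full control over the dispatched type (here matching the specific signature).
f(x::Tensor{2,3,<:Tensors.Dual}) = Tensors.propagate_gradient(f_dfdx, x)

# Tensors.gradient now reaches the analytical gradient through the hand-written hook.
Tensors.gradient(f, rand(Tensor{2,3}))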

@KnutAM marked this pull request as ready for review March 25, 2022 16:18
@KnutAM closed this Mar 25, 2022
@KnutAM reopened this Mar 25, 2022
@codecov-commenter commented Mar 25, 2022

Codecov Report

Base: 97.85% // Head: 97.86% // Increases project coverage by +0.01% 🎉

Coverage data is based on head (5c90d4d) compared to base (7a67a82).
Patch coverage: 100.00% of modified lines in pull request are covered.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #181      +/-   ##
==========================================
+ Coverage   97.85%   97.86%   +0.01%     
==========================================
  Files          16       16              
  Lines        1211     1219       +8     
==========================================
+ Hits         1185     1193       +8     
  Misses         26       26              
Impacted Files                      Coverage Δ
src/Tensors.jl                      82.85% <ø> (ø)
src/automatic_differentiation.jl    99.07% <100.00%> (+0.03%) ⬆️


@KnutAM (Member, Author) commented Mar 26, 2022

Ready for review, @fredrikekre or @KristofferC

@KnutAM (Member, Author) commented Jun 30, 2022

Found AllTensors today; it could also potentially be used here, like:

macro implement_gradient(f, f_dfdx)
    return :($(esc(f))(x :: Union{AllTensors{<:Any, <:Dual}, Dual}) = _propagate_gradient($(esc(f_dfdx)), x))
end

This is still less specific than f(::Tensor{2,3}), but it seems like it should solve #179 while keeping the syntactic convenience.
Does anyone see any drawbacks to this approach compared to the current implementation?
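
For context, AllTensors appears to be a union alias over the concrete tensor types with free dimension and element-type parameters; the paraphrase below (the exact members in src/Tensors.jl are an assumption) shows why this narrows the dispatch compared to AbstractTensor:

# Rough paraphrase of the alias (check src/Tensors.jl for the exact definition):
# const AllTensors{dim, T} = Union{Vec{dim, T},
#                                  Tensor{2, dim, T}, SymmetricTensor{2, dim, T},
#                                  Tensor{4, dim, T}, SymmetricTensor{4, dim, T}}
# Hence AllTensors{<:Any, <:Dual} covers only concrete tensor types with a Dual element
# type, whereas the current macro dispatches on the abstract AbstractTensor{<:Any, <:Any, <:Dual}.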

@@ -250,11 +250,26 @@ be of symmetric type

"""
macro implement_gradient(f, f_dfdx)
-    return :($(esc(f))(x :: Union{AbstractTensor{<:Any, <:Any, <:Dual}, Dual}) = _propagate_gradient($(esc(f_dfdx)), x))
+    return :($(esc(f))(x :: Union{AbstractTensor{<:Any, <:Any, <:Dual}, Dual}) = propagate_gradient($(esc(f_dfdx)), x))
@KnutAM (Member, Author) commented:

Suggested change:
-    return :($(esc(f))(x :: Union{AbstractTensor{<:Any, <:Any, <:Dual}, Dual}) = propagate_gradient($(esc(f_dfdx)), x))
+    return :($(esc(f))(x :: Union{AbstractTensor{<:Any, <:Any, <:Dual}, Dual}, args...) = propagate_gradient($(esc(f_dfdx)), x, args...))

Note from #197: this requires a corresponding update to propagate_gradient (a rough sketch follows).
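
A minimal sketch of what that corresponding update could look like, assuming propagate_gradient currently has roughly the structure below; the helper names (_extract_value, _check_gradient_output, _insert_gradient) are placeholders for whatever the actual internals are called:

# Sketch only: thread extra positional arguments through to the user-supplied function.
function propagate_gradient(f_dfdx::Function,
                            x::Union{AbstractTensor{<:Any, <:Any, <:Dual}, Dual},
                            args...)
    fval, dfdx = f_dfdx(_extract_value(x), args...)  # evaluate at the primal value, forwarding args
    _check_gradient_output(fval, dfdx, x)            # the new error check for incorrect output
    return _insert_gradient(fval, dfdx, x)           # rebuild the Dual-valued result
end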
