implement FiniteDiffs submodule: gradient, divergence and laplacian operators #22
Codecov Report
```
@@            Coverage Diff             @@
##           master      #22      +/-   ##
==========================================
+ Coverage   90.59%   92.07%   +1.47%
==========================================
  Files           6        5       -1
  Lines         202      227      +25
==========================================
+ Hits          183      209      +26
+ Misses         19       18       -1
```
Continue to review full report at Codecov.
Force-pushed from baae567 to 2dadaee
Benchmark against the optimized `Images.div`:

```julia
X = rand(256, 256);
∇X = fgradient(X);
p = cat(∇X..., dims=3);
@btime Images.div($p);  # 48.573 μs (2 allocations: 512.05 KiB)
@btime fdiv($∇X);       # 146.490 μs (6 allocations: 1.50 MiB)

X = rand(1024, 1024);
∇X = fgradient(X);
p = cat(∇X..., dims=3);
@btime Images.div($p);  # 1.314 ms (2 allocations: 8.00 MiB)
@btime fdiv($∇X);       # 4.385 ms (6 allocations: 24.00 MiB)
```

I'll work out
Force-pushed from 2dadaee to d2a42b8
Force-pushed from d2a42b8 to a86e651
Force-pushed from a86e651 to 2eea7af
I hope to get to this tomorrow morning, sorry for the delay.
No worries. I'm starting a lecture series introducing Julia to students at our school every Sunday, so I'll task-switch to that in the meantime. BTW, your new "Why Julia" introduction https://www.youtube.com/watch?v=x4oi0IKf52w is super insightful 😆
The next lecture in the series is Monday, and then they start coming quickly. Steal anything from https://github.com/timholy/AdvancedScientificComputing that's useful to you (see the schedule link to see what's ahead).
It's quite telling that almost all my comments are about documentation and tests. Nice work!
src/diff.jl (outdated):

```julia
- forward/backward difference [`fdiff`](@ref) are the Images-flavor of `Base.diff`
- gradient operator [`fgradient`](@ref) and its adjoint via keyword `adjoint=true`.
- divergence operator [`fdiv`](@ref) is the negative sum of the adjoint gradient operator of
```
I wonder if people will be surprised by "negative sum", given that divergence is typically written without appeal to the adjoint. That said, from a technical standpoint you are correct, with one quibble: it's not really the gradient, because that would produce n components from a scalar-valued input. Technically, I guess `fdiv` is defined by `⟨∇u, ∇v⟩ = -⟨u, fdiv(∇v)⟩`, right?

I wonder if it would be better to be a bit vague here, possibly linking to something? Perhaps something like "the divergence operator `fdiv` is a sum of discrete derivatives of vector fields", and specify the more precise meaning below?
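For concreteness, here is a numerical sanity check of that adjoint identity. This is a sketch, not PR code: it assumes `fdiv` accepts the gradient tuple returned by `fgradient` (as in the benchmark above) and that both operators use matching boundary handling, so the identity holds up to roundoff.

```julia
# Check ⟨∇u, v⟩ = -⟨u, fdiv(v)⟩ numerically (sketch; see assumptions above).
using ImageBase.FiniteDiff: fgradient, fdiv
using LinearAlgebra: dot

u = rand(16, 16)
v = (rand(16, 16), rand(16, 16))   # an arbitrary discrete vector field

lhs = sum(dot(gi, vi) for (gi, vi) in zip(fgradient(u), v))
rhs = -dot(u, fdiv(v))
lhs ≈ rhs                          # should hold up to floating-point roundoff
```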
This is definitely an oversight; I should make this a comment in the implementation details instead of the docstring. Thanks for catching it.
src/diff.jl (outdated):

```julia
!!! tips Non-allocating
    This function will allocate a new set of memories to store the intermediate
    gradient fields `∇X`, if you pre-allcoate the memory for `∇X`, then this function
    will use it and is thus non-allcating.
```
Suggested change:

```julia
!!! tip Avoiding allocations
    The two-argument method will allocate memory to store the intermediate
    gradient fields `∇X`. If you call this repeatedly with images of consistent size and type,
    consider using the three-argument form with pre-allocated memory for `∇X`,
    which will eliminate allocation by this function.
```
Note it's `!!! tip` and not `!!! tips`: https://juliadocs.github.io/Documenter.jl/stable/showcase/#Tip-admonition
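As an illustration of the pattern the suggested tip describes: the three-argument in-place signature below is an assumption based on the docstring text, not a confirmed API from this PR, so verify against the merged code before relying on it.

```julia
# Hypothetical usage of the pre-allocated form described in the tip.
using ImageBase.FiniteDiff: flaplacian!

X   = rand(256, 256)
out = similar(X)
∇X  = (similar(X), similar(X))   # pre-allocated intermediate gradient fields

flaplacian!(out, ∇X, X)          # reuses `out` and `∇X`: no fresh allocation
```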
Every now and then I feel bad for taking up your limited open-source time with language checks. 😢
```julia
The in-place version of divergence operator [`fdiv`](@ref).
"""
function fdiv!(out::AbstractArray, V₁::AbstractArray, Vs::AbstractArray...)
```
I'm guessing this should be `@inline`, otherwise I think you'll get a bit of splat penalty from callers like `fdiv(::Tuple)` and `flaplacian!`.
I didn't observe a clear performance difference when trying the `flaplacian` benchmark, so I chose to keep it unchanged for now.

However, I'm quite curious what leads you to think this way. Are there any references or other materials I can take a look at?
Not sure about references, other than maybe checking the `MethodInstance`s that get created via MethodAnalysis.jl. If you see abstract instances then it's possible that things will be better with forced inlining. However, varargs typically work out well when the types are homogeneous, and that seems likely to be true in this case, which may explain why you didn't see this.
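A minimal sketch of that check, assuming MethodAnalysis.jl is installed; inspecting `fdiv!` here is just an example of the workflow:

```julia
# List the specializations the compiler created for `fdiv!`.
using MethodAnalysis, ImageBase

mis = methodinstances(ImageBase.FiniteDiff.fdiv!)

# Non-concrete signatures (e.g. `Vararg{AbstractArray}`) would hint that
# forced inlining could help; concrete homogeneous ones suggest it won't.
filter(mi -> !Base.isdispatchtuple(mi.specTypes), mis)
```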
```julia
@testset "fdiff entrypoints" begin
    A = rand(Float32, 5)
    @test ImageBase.fdiff(A, rev=true) == ImageBase.FiniteDiff.fdiff(A, rev=true)
```
You could even just check `ImageBase.fdiff === ImageBase.FiniteDiff.fdiff` and so on.
Unfortunately, this doesn't hold true if I do `@deprecate fdiff ImageBase.FiniteDiff.fdiff`. Without inspecting the deprecation details, it seems that `@deprecate` only passes positional arguments and not keyword arguments:

```julia
julia> A = rand(Float32, 5);

julia> ImageBase.fdiff(A, rev=true)
ERROR: MethodError: no method matching fdiff(::Vector{Float32}; rev=true)
Closest candidates are:
  fdiff(::Any...) at deprecated.jl:45 got unsupported keyword argument "rev"
```
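One possible workaround (my sketch, not something from this PR) is a hand-written deprecation method that forwards keyword arguments explicitly, meant to live inside the `ImageBase` module where `FiniteDiff` is in scope:

```julia
# Unlike plain `@deprecate`, this forwards both positional and keyword arguments.
function fdiff(args...; kwargs...)
    Base.depwarn("`ImageBase.fdiff` is deprecated, use `ImageBase.FiniteDiff.fdiff` instead.", :fdiff)
    return FiniteDiff.fdiff(args...; kwargs...)
end
```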
```julia
@@ -107,3 +111,76 @@
    @test fdiff(A, dims=1) == fdiff(float.(A), dims=1)
end
end

@testset "fgradient" begin
    for T in generate_test_types([N0f8, Float32], [Gray, RGB])
```
Suggested change:

```diff
- for T in generate_test_types([N0f8, Float32], [Gray, RGB])
+ for T in generate_test_types(Any[N0f8, Float32], Any[Gray, RGB])
```

(a tiny bit easier on the compiler, and speeds up the tests)
I'll fix this, and similar cases elsewhere, in a separate PR.

Edit: Actually, comparing the first-run times, this doesn't change anything; in both cases it takes about 0.07s to compile the function. I'll keep them unchanged for now.
That's fine. If you're fixing inference triggers in SnoopCompile these may show up without the `Any`s, but that's just "analysis noise" and not anything concerning.
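For reference, a sketch of how such triggers would surface with SnoopCompile's documented workflow; the test file path here is illustrative:

```julia
# Collect deep inference timing while running the tests, then list triggers.
using SnoopCompileCore
tinf = @snoopi_deep include("test/runtests.jl")

using SnoopCompile
itrigs = inference_triggers(tinf)   # non-`Any` containers may show up here
```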
```julia
    19 -8 20 -17 -5 -18 -10
]
ΔX = ref_laplacian(X)
@test eltype(ΔX) == Int
```
Is this important to test here?
Yep, to ensure that we don't promote the `Int` to float types. This gives a result consistent with Base's `diff`.
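For example, Base's `diff` keeps an integer eltype for integer input:

```julia
julia> eltype(diff([1 2; 3 4], dims=1))
Int64
```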
To clarify, my point is that if it doesn't pass it's actually a bug in ImageFiltering, but that isn't something that can be fixed here.
You're right; I just keep it here as a detector to help trace potential bugs when the test fails. If it turns out to be annoying, we can always remove it.
New operators:
- gradient and adjoint gradient: `fgradient`
- divergence: `fdiv`
- laplacian: `flaplacian`

To better organize the symbols, I deprecate two entrypoints:
- `ImageBase.fdiff` => `ImageBase.FiniteDiff.fdiff`
- `ImageBase.fdiff!` => `ImageBase.FiniteDiff.fdiff!`

Maintaining backward compatibility becomes quite troublesome, especially now that 1.6 is the new LTS version.
Force-pushed from 412952e to 0c8ffb7
Yay!
Just responding to a couple of your excellent points.
Four legacy methods are deprecated in favor of `ImageBase.fdiff`:
- `forwarddiffx`
- `forwarddiffy`
- `backdiffx`
- `backdiffy`

See the following two PRs for more information:
- JuliaImages/ImageBase.jl#11
- JuliaImages/ImageBase.jl#22

This commit bumps ImageBase compatibility to v0.1.5
This is a rework of #19 on top of the `fdiff` function. The advantages of this version are that it's simpler and that it's GPU-friendly.

To better organize the symbols, I introduce a submodule `FiniteDiffs` (definitely not the FiniteDiffs.jl package). I also deprecate the entrypoints `ImageBase.fdiff` and `ImageBase.fdiff!`.

If we don't consider the memory limit, a benchmark on the Laplacian operator shows that this version is more performant than ImageFiltering's version on gray images, and less performant on RGB images.

closes #19
closes #20
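As a quick orientation, a usage sketch of the new operators; names follow this PR's conversation, and the final merged API may differ:

```julia
using ImageBase.FiniteDiff: fgradient, fdiv, flaplacian

X  = rand(64, 64)
∇X = fgradient(X)   # tuple of finite-difference fields, one per dimension
dX = fdiv(∇X)       # divergence of a discrete vector field
ΔX = flaplacian(X)  # laplacian, i.e. the divergence of the gradient
```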