
Rename TensorProduct and implement TensorCore.tensor #232

Merged — 13 commits merged into master on Jan 20, 2021

Conversation

@devmotion (Member) commented on Jan 17, 2021

This PR renames TensorProduct to KernelTensorProduct (analogously to KernelSum and KernelProduct) and implements and exports TensorCore.tensor === TensorCore.:⊗ for tensor product kernels (similar to the overloads for KernelSum and KernelProduct).

IMO the name TensorProduct is too generic to be owned and exported by KernelFunctions. The overload is nice to have, I think, especially given the more verbose KernelTensorProduct. TensorCore is a lightweight package that contains default implementations for tensor operations. The code was originally merged in JuliaLang/julia#35150, later reverted in JuliaLang/julia#35744, and moved into its own package. Other packages with tensor operations, such as Kronecker.jl, should also overload TensorCore.tensor, or rename their functions if they commit type piracy, to avoid conflicts.
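For reference, a short sketch of how the renamed kernel and the new overload are intended to be used after this PR (a minimal sketch; the evaluation comment assumes KernelFunctions' convention of splitting the input dimensions across the component kernels):

```julia
using KernelFunctions  # re-exports TensorCore.tensor === ⊗ after this PR

k1 = SqExponentialKernel()
k2 = LinearKernel()

# Construct a tensor product kernel explicitly or via the new overload.
k = KernelTensorProduct(k1, k2)  # renamed from `TensorProduct`
k′ = k1 ⊗ k2                     # equivalent, via TensorCore.tensor === ⊗

# A tensor product kernel evaluates each component kernel on one input
# dimension and multiplies the results:
x, y = [1.0, 2.0], [3.0, 4.0]
k(x, y)  # == k1(x[1], y[1]) * k2(x[2], y[2])
```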

@@ -0,0 +1,145 @@
"""
devmotion (Member Author):

I only changed TensorProduct to KernelTensorProduct and added a new docstring. I renamed the file in a separate PR, but, in contrast to the tests, GitHub somehow does not display the changes nicely.

@@ -1,5 +1,3 @@
hadamard(x, y) = x .* y
devmotion (Member Author):

TensorCore already contains an implementation of hadamard, so this local definition is no longer needed.
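TensorCore's version behaves like the removed one-liner; a minimal check (assuming a current TensorCore release, which exports both the function and the ⊙ operator alias):

```julia
using TensorCore: hadamard, ⊙

hadamard([1, 2], [3, 4])  # elementwise product: [3, 8]
[1, 2] ⊙ [3, 4]           # same result, via the operator alias
```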

x -> SqExponentialKernel() * LinearKernel(; c=x[1]),
rand(1);
ADs=[:ForwardDiff, :ReverseDiff, :Zygote],
x -> KernelProduct(SqExponentialKernel(), LinearKernel(; c=exp(x[1]))), rand(1)
devmotion (Member Author):

It seems AD errors are fixed by enforcing non-negativity of the constant offset. Same below for KernelSum and KernelTensorProduct.

@@ -0,0 +1,36 @@
@testset "overloads" begin
devmotion (Member Author):

Consistent with the current test setup, I created a separate file that tests only the implementations of +, *, and ⊗.

@@ -1,7 +1,7 @@
using KernelFunctions
using AxisArrays
using Distances
using Kronecker
using Kronecker: Kronecker
devmotion (Member Author):

Again this avoids conflicts between Kronecker.:⊗ and TensorCore.:⊗ while still loading kernelkronmat.

devmotion (Member Author):

I also opened an issue: MichielStock/Kronecker.jl#80

@eval begin
$M.$op(k1::Kernel, k2::Kernel) = $T(k1, k2)

$M.$op(k1::$T, k2::$T) = $T(k1.kernels..., k2.kernels...)
Reviewer (Member):

The default behavior is (and was) that if we combine a kernel whose kernels field is <:Tuple with one whose field is <:AbstractVector, the result stores a <:Tuple. Is that really a sensible choice?

devmotion (Member Author):

The nice thing about this default is that you don't end up with abstract element types unexpectedly, but, of course, it might not be compiler-friendly. I haven't seen any instances of this combination in our examples and tests, so I am not sure how often it happens in practice.
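A small illustration of the behavior under discussion (a hypothetical sketch; it assumes the Vector-based constructor is available and that the overload above splats both kernels fields into a new Tuple-backed kernel):

```julia
using KernelFunctions

kt = KernelTensorProduct(SqExponentialKernel(), LinearKernel())      # kernels::Tuple
kv = KernelTensorProduct([SqExponentialKernel(), Matern32Kernel()])  # kernels::Vector

# The overload `⊗(k1::KernelTensorProduct, k2::KernelTensorProduct) =
# KernelTensorProduct(k1.kernels..., k2.kernels...)` splats both
# containers, so the merged kernel stores a Tuple:
merged = kt ⊗ kv
merged.kernels isa Tuple  # true — concrete types, but a fresh type per combination
```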

devmotion (Member Author):
AD errors on Julia 1.3 are caused by GaborKernel and seem unrelated.

devmotion (Member Author):
Is there anything left that should be addressed?

devmotion (Member Author):
This is ready for another round of review, most (all?) unrelated changes are included in the master branch.

@devmotion merged commit 5fd5157 into master on Jan 20, 2021
@devmotion deleted the dw/tensorproduct branch on Jan 20, 2021 at 16:46