Unintuitive ncon result when scalar #152

Closed
lkdvos opened this issue Oct 3, 2023 · 3 comments

lkdvos (Collaborator) commented Oct 3, 2023

Noted by @leburgel:

When contracting a network that results in a scalar, ncon currently does not insert a final call to tensorscalar, so the following unintuitive behaviour occurs:

julia> using TensorOperations
julia> A = rand(2,2);
julia> ncon([A, A], [[1, 2], [1, 2]])
0-dimensional Array{Float64, 0}:
1.42602

It is probably easiest to just catch this case and wrap the result in tensorscalar; since ncon is inherently type-unstable anyway, this should not make a big difference.
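The proposed behaviour can be sketched in plain Julia. `unwrap_scalar` is a hypothetical helper used only for illustration; in TensorOperations this role is played by tensorscalar:

```julia
# Minimal sketch of the proposal (hypothetical helper, not the actual
# TensorOperations implementation): unwrap a 0-dimensional array to its
# single element, and leave all other results untouched.
unwrap_scalar(x::AbstractArray{T,0}) where {T} = x[]
unwrap_scalar(x) = x

A = fill(1.42602)         # 0-dimensional Array{Float64,0}
unwrap_scalar(A)          # 1.42602, a plain Float64
unwrap_scalar(rand(2, 2)) # a matrix, returned unchanged
```

Applying this at the end of ncon would make a scalar-valued contraction return a `Float64` instead of a `0`-dimensional `Array{Float64,0}`.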

Jutho (Owner) commented Oct 4, 2023

While it doesn't make a big difference, it is technically a breaking change. Other than that, I have no strong feelings. I thought that always knowing the return type is some kind of array or tensor was at least somewhat consistent, but since the compiler infers Any anyway, it makes no difference in practice.

lkdvos (Collaborator, issue author) commented Oct 4, 2023

My main argument would be that @tensor does in fact automatically insert tensorscalar, so this change would improve consistency in that regard:

julia> using TensorOperations
julia> A = rand(2,2);
julia> ncon([A, A], [[1, 2], [1, 2]])
0-dimensional Array{Float64, 0}:
1.60394
julia> @tensor A[1 2] * A[1 2]
1.60394

As a side note: in principle we could make this "type-stable" by promoting the output kwarg to an optional positional argument and supplying an Index2Tuple. While this would not really affect performance, it would at least let us assert the output type. As a second side note, this would also be useful for making TensorMaps output with the desired codomain and domain.
All of this because writing @tensor expressions for W×H×D PEPOs is apparently hard 😁
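The type-assertion part of this idea can be illustrated in isolation. Both `untyped` and `typed_result` below are hypothetical names, not TensorOperations API; the sketch only shows how an explicit assertion on the result lets inference narrow an otherwise `Any`-typed return value:

```julia
# Illustrative sketch, assuming the caller knows the expected result type
# (as it would when supplying the output structure explicitly).
untyped(x) = x[1]                # infers as Any for a Vector{Any} input

function typed_result(x, ::Type{T}) where {T}
    return untyped(x)::T         # the assertion restores inferability
end

v = Any[1.0, "two"]
typed_result(v, Float64)         # 1.0, inferred as Float64
```

The assertion does not remove the underlying instability; it only moves the check to runtime while giving the compiler a concrete return type to work with.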

lkdvos (Collaborator, issue author) commented Jul 16, 2024

Fixed since 0390086

lkdvos closed this as completed on Jul 16, 2024.