
Tensor Class and tensorProduct between vectors #194

Open
TheFausap opened this issue Mar 21, 2017 · 39 comments

Comments

@TheFausap
Contributor

Hello,

would it be possible to implement a tensorProduct operation between two Vectors?
I defined such a function for my own purposes, but I don't know how to add it to math-php, or whether that is OK with you.

function tensorProduct(Vector $v1, Vector $v2)
{
        $lv1 = $v1->getN();
        $lv2 = $v2->getN();
        $k = 0;
        $ra = array_fill(0,$lv1*$lv2-1,0.0);

        for ($i = 0; $i < $lv1; $i++) {
                for ($j = 0; $j < $lv2; $j++) {
                        $ra[$k] = $v1[$i]*$v2[$j];
                        $k += 1;
                }
        }
        $r = new Vector($ra);
        return $r;
}
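
A quick usage sketch (an assumption on my side: MathPHP's Vector supports array access like $v1[$i], which the function above relies on):

$v1 = new Vector([1.0, 0.0]);
$v2 = new Vector([1.0, 0.0]);
$r  = tensorProduct($v1, $v2);
echo $r;  // [1, 0, 0, 0]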

regards,
Fausto

@Beakerboy
Contributor

Beakerboy commented Mar 21, 2017

I believe this is already implemented as Vector::outerProduct() and it returns a Matrix.

@markrogoyski
Owner

As @Beakerboy says, try out Vector outerProduct(). Also of interest may be Matrix kroneckerProduct().

Let us know if this works for you.

@TheFausap
Contributor Author

The tensorProduct of two vectors is a vector: if the vectors are defined in R^2, the tensorProduct is defined in R^(2*2).

(1,0) tensorProduct (1,0) = (1,0,0,0)

I'll try with kroneckerProduct (using 1-column matrices as vectors) and let you know. :)

@TheFausap
Contributor Author

OK, kroneckerProduct works, but it returns a matrix, and now I need a vector. I tried asVectors(), but when I try to print that value I get an error.

$kk0 = new Matrix(
        [
                [1.0],
                [0.0]
        ]
);
$kk00 = $kk0->kroneckerProduct($kk0);
$kkk00 = $kk00->asVectors();

If I echo that value, I get

PHP Notice:  Array to string conversion in /home/vandine/PHP-QC/ql.php on line 90
Array done 

I also tried to create a new Vector from that array with: $kkkk00 = new Vector($kkk00);
and if I print it, I get [[1, 0, 0, 0]]. Why are there two square brackets?

@Beakerboy
Contributor

Beakerboy commented Mar 21, 2017

Maybe outerProduct is not what you need. Is there a Mathematica or Wikipedia article that describes the operation? Wikipedia calls the outer product the tensor product of two vectors. In your code snippet above, $j is never incremented, so the inner for loop will never halt. Also, the use of '-1' in the array_fill function appears off. Using this with two vectors, each of length 2, $ra is initialized with only 3 elements. It looks like you're saying:
$A = [a,b];
$B = [c,d];
$AB = $A->tensorProduct($B); // should return [a*c, a*d, b*c, b*d]
Is this right? Two vectors returning a vector of their products? If you have a link so we know the order of the results is correct, it can certainly be added.

The double square brackets tell you it is an array of arrays. In this case the outer array contains only one array:
$vector = [1,0,0,0];
$matrix[0] = $vector;
print_r($matrix); // a nested array, conceptually [[1,0,0,0]]

@Beakerboy
Contributor

I just looked at the Kronecker product Wikipedia article, and for two vectors it looks like it should return what you want. The asVectors method returns an array of rows. If you need a Vector, you might want:

$kk0 = new Matrix(
    [
        [1.0],
        [0.0]
    ]
);
$kk00 = $kk0->kroneckerProduct($kk0);
$kkk00 = new Vector($kk00->getColumn(0));

We should probably make the asVectors method allow for an array of columns as well as an array of rows.

@TheFausap
Contributor Author

@Beakerboy I tried the function and it works. array_fill creates the array starting from index 0 up to $lv1 * $lv2 - 1 ... so for K0, $lv1 and $lv2 are both 2, so it's 2*2-1 = 3.
$j is incremented in the second for loop, which is nested inside the for loop over $i.

@TheFausap
Contributor Author

@Beakerboy you can check this page, in the example section:

https://www.quantiki.org/wiki/tensor-product

@Beakerboy
Contributor

Beakerboy commented Mar 21, 2017

array_fill creates $lv1 * $lv2 - 1 INSTANCES of the value provided. It's array_fill(start, num, value), not array_fill(from, to, value). Stupid me, yes, $j is incremented (I've been doing too much VBA lately, where every for loop needs a "Next i" at the end). The array_fill is unnecessary though. If you added a print_r you would see that you initialized the array to 3 elements, but the for loop adds a fourth one to it.

Your link confuses me towards the bottom: (ϕ1,ϕ2)⊗(ψ1,ψ2) equals two different things, the Kronecker product above and the outer product below. I guess it's a case of the ⊗ operator having many meanings.

@markrogoyski it looks like Vector::directProduct and Vector::outerProduct are the same thing; both return $A->kroneckerProduct($Bᵀ). What is missing is return new Vector($A->kroneckerProduct($B)->getColumn(0));
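
A minimal standalone sketch of the missing operation (the function name vectorKroneckerProduct and the use of getVector() for the underlying array are assumptions on my part, not existing library API):

// Kronecker product of two column vectors: a column vector with
// entries aᵢbⱼ in row-major order.
function vectorKroneckerProduct(Vector $A, Vector $B): Vector
{
    $C = [];
    foreach ($A->getVector() as $a) {
        foreach ($B->getVector() as $b) {
            $C[] = $a * $b;
        }
    }
    return new Vector($C);
}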

@TheFausap
Contributor Author

OK... that's because there's an equivalence in some cases :) Sorry for the confusion.
Maybe this PDF is a better reference:

http://hitoshi.berkeley.edu/221a/tensorproduct.pdf

Chapter 3 (page 6) defines the tensor product for two general vectors.

@Beakerboy
Contributor

@TheFausap As you use the library, if you have any suggestions on extending it to be more useful for tensors, let me know. I began a class a while back, but I'm not very familiar with the subject.

https://github.com/Beakerboy/math-php/blob/Tensor/src/LinearAlgebra/Tensor.php

@TheFausap
Contributor Author

TheFausap commented Mar 21, 2017

@Beakerboy Interesting... just an idea: you defined a mixed tensor T^h_k (sorry, I have no idea how to do indices with this markup language). With it you could generalize everything, because:

  • tensor (0,0) is a scalar
  • tensor (0,1) is a vector of V (vector space - column vectors, for example)
  • tensor (1,0) is a vector of V* (dual vector space - row vectors, for example)
  • tensor (1,1) is a generic map: it takes a vector and generates a vector, T(w,v) = w(f(v)), where f is a generic function that depends on the matrix describing the tensor (for example the Kronecker symbol)
  • tensor (2,0) is a scalar product: for example the standard (Euclidean) scalar product is described by the (2,0) tensor g_ij = [[1,0,0],[0,1,0],[0,0,1]]
  • tensor (2,1) is the vector product in 3D space
  • tensor (3,0) is the mixed product (scalar and vector together): (a × b) · c

About the (2,0) tensor: by changing it (the metric tensor) you change the scalar product (for example, you could easily define a scalar product with a spherical metric).
First of all you need to define the contraction, but remember that contraction happens only when an upper index and a lower index are equal (Einstein convention). A contracted (h,k) tensor becomes an (h-1,k-1) tensor, where h and k > 0; with no indices there is nothing to contract.

If there were a way to render some math inside the comments, it would be easier. :)

@markrogoyski
Owner

@TheFausap

kroneckerProduct returns a matrix, which you can print as is:

$matrix = new Matrix([
    [1,2],
    [3,4],
]);
$kroneckerProduct = $matrix->kroneckerProduct($matrix);
echo $kroneckerProduct;
// [1, 2, 2, 4]
// [3, 4, 6, 8]
// [3, 6, 4, 8]
// [9, 12, 12, 16]

Matrix asVectors() returns an array of matrix columns each as vectors. They are not rows. Each vector can be printed individually.

$matrix = new Matrix([
    [1, 2, 3],
    [4, 5, 6],
    [7, 8, 9],
]);
echo $matrix;
// [1, 2, 3]
// [4, 5, 6]
// [7, 8, 9]

$vectors = $matrix->asVectors();
foreach ($vectors as $vector) {
    echo $vector . \PHP_EOL;
}
// [1, 4, 7]
// [2, 5, 8]
// [3, 6, 9]

However, the return value $vectors is itself an array, which you will need to either iterate, or use something like print_r or var_dump or var_export.

print_r($vectors);
/*
Array
(
    [0] => MathPHP\LinearAlgebra\Vector Object
        (
            [n:MathPHP\LinearAlgebra\Vector:private] => 3
            [A:MathPHP\LinearAlgebra\Vector:private] => Array
                (
                    [0] => 1
                    [1] => 4
                    [2] => 7
                )
        )

    [1] => MathPHP\LinearAlgebra\Vector Object
        (
            [n:MathPHP\LinearAlgebra\Vector:private] => 3
            [A:MathPHP\LinearAlgebra\Vector:private] => Array
                (
                    [0] => 2
                    [1] => 5
                    [2] => 8
                )
        )

    [2] => MathPHP\LinearAlgebra\Vector Object
        (
            [n:MathPHP\LinearAlgebra\Vector:private] => 3
            [A:MathPHP\LinearAlgebra\Vector:private] => Array
                (
                    [0] => 3
                    [1] => 6
                    [2] => 9
                )
        )
)
*/

I can see how asVectors() can be confusing. So perhaps I'll add asColumnVectors and asRowVectors and remove asVectors().

Also, as @Beakerboy said, if you have any feedback about improvements or feature requests, please don't hesitate to let us know. Thanks for your interest in MathPHP and the great discussion about functionality.

@Beakerboy
Contributor

Beakerboy commented Mar 22, 2017

@TheFausap Right now I don't think my Tensor class knows if a dimension is covariant or contravariant, so it needs to be improved, i.e. tᵝᵞ vs tᵝᵧ vs tᵦᵧ. I need to read up more on the subject to figure out how to design the class to best represent the object and accurately perform the operations. Any examples you could provide would be great.

@TheFausap
Contributor Author

TheFausap commented Mar 22, 2017

@markrogoyski thanks a lot! :) Your package is full of very useful functions; if I find a nice addition, I'll let you know.
About asVectors, maybe you could rename it to asArray.

@TheFausap
Contributor Author

TheFausap commented Mar 22, 2017

@Beakerboy Basically, each linear map is described by a matrix (you can think of a vector as a 1-column matrix, if you want to extend the idea). So from a programming point of view a tensor, IMHO, cannot be a simple matrix, but a dictionary (for example):
Tijkl = array( cov => array(2,3), ctrv => array(1,4), m => Matrix ); the total number of indices determines the shape of the components (i.e. 1 index -> vector, 2 indices -> matrix, 3 indices -> 3D matrix, and so on).
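
A minimal sketch of that dictionary-style structure for a (1,1) tensor in three dimensions (the keys cov, ctrv and m follow the example above and are not existing library API):

// Hypothetical (1,1) tensor: one contravariant index (position 1),
// one covariant index (position 2), components stored as a 3x3 Matrix.
$T = [
    'ctrv' => [1],
    'cov'  => [2],
    'm'    => new Matrix([
        [1, 0, 0],
        [0, 1, 0],
        [0, 0, 1],
    ]),
];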
Multiplying tensors gives a tensor with more indices. A tensorProduct B (where A and B are quadri-vectors, i.e. (1,0) tensors) gives a (2,0) tensor, according to the following rule:
A = ( a , b , c , d )
and
B = ( p , q , r , s )

Call the resulting tensor Cνμ. The indices are incremented separately, so fixing the A index to 1 and multiplying by all the elements of B, we get the tensor elements C1,1, C1,2, C1,3 and C1,4.

C = (ap  aq  ar  as
     --  --  --  --
     --  --  --  --
     --  --  --  --) 
and so on.

C = ( ap , aq , ar , as 
      bp , bq , br , bs 
      cp , cq , cr , cs 
      dp , dq , dr , ds )

The contraction is the opposite of multiplication; it reduces the number of indices. A (2,0) tensor can be contracted to a (1,0) tensor (i.e. a vector).

Now if you tensorMultiply Cνμ with a contravariant At, you get Dνμt, a (3,0) tensor; or
tensorMultiply Cνμ with a covariant At, and you get Fνμt, a (2,1) tensor; this too is a 3D matrix.

PS: the tensorProduct in my first request is a different one: it is a tensor product between vector spaces, not between tensors, so it behaves differently; it transforms vectors into vectors (of bigger dimension).
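
For reference, the matrix C above (Cᵢⱼ = AᵢBⱼ) is what the existing Vector::outerProduct() produces; a quick sketch with concrete numbers standing in for a, b, c, d and p, q, r, s:

$A = new Vector([1, 2, 3, 4]);   // (a, b, c, d)
$B = new Vector([5, 6, 7, 8]);   // (p, q, r, s)
$C = $A->outerProduct($B);       // Matrix with C[i][j] = A[i] * B[j]
echo $C;
// [5, 6, 7, 8]
// [10, 12, 14, 16]
// [15, 18, 21, 24]
// [20, 24, 28, 32]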

@Beakerboy
Contributor

Beakerboy commented Mar 22, 2017

I read some Tensor articles last night, and I think you verified what I came away thinking. The characteristics which make a specific tensor unique are: the number of contravariant indices (or dimensions), the number of covariant indices, the order of the indices (although convention seems to indicate that contravariant indices should all come first), the "length" of each dimension, and the specific value at each position.

I've updated my class to include a few options:

  1. Separate parameters to save which of the indices are covariant and which are contravariant (like you showed in your "definition array")
  2. One array, where a value of +1 indicates contravariance (by convention that index is superscripted) and -1 indicates it is covariant (subscripted)

Now I need to find a general tensor multiplication that will determine how mixtures of covariant and contravariant indices combine into the new tensor. I have code to create an "empty" new Tensor given the two tensors which will be combined. The thing I need is a general process of, given Aᵝᵩ ⊗ Bᵪᵞ = Tᵝᵩᵪᵞ, what combination of elements from A and B is used to calculate, say, T⁴₂₃⁶? You said above that if we have two contravariant indices, Aᵝ ⊗ Bᵞ = Tᵝᵞ, then T⁴⁶ = A⁴ × B⁶. What about Aᵝ ⊗ Bᵪ, or other mixes?

Can contraction only occur when the two specified indices have dissimilar variance types? Also, does the input Tensor need to be "square" along the contracted dimensions?

For example, can you contract:

    ( [ap , aq , ar , as]
C =   [bp , bq , br , bs]  = ap + bq + cr
      [cp , cq , cr , cs] )

@TheFausap
Contributor Author

TheFausap commented Mar 22, 2017

@Beakerboy Generally the product of two tensors is based on the KroneckerProduct.
About the indices, I'll give you an example with the Riemann tensor, a (1,3) [rank 4] tensor. It's defined as

R^a_rgb = (G^a_gs G^s_br + etc)

As you can see, the indices on the left side are the same as those on the right side (except for the index "s", which is contracted away in the multiplication; and yes, the contraction can only occur between a covariant and a contravariant index).

The size of the matrix (rows and columns) representing the tensor depends on the space where the tensor is defined; for example, the metric tensor, a (2,0) [rank 2] tensor, in general relativity (3+1 dimensions) is a 4x4 2D matrix.

To visualize something, I found a nice PDF at http://www.ita.uni-heidelberg.de/~dullemond/lectures/tensor/tensor.pdf
Look at pages 18 and 19 (the paper assumes a three-dimensional space, so the indices go from 1 to 3).
If you look at page 21, there's a simple formula for the number of components, 3^(n+m), provided that the space is three-dimensional. If the space has a different dimension d, the formula is d^(n+m).

@TheFausap
Contributor Author

@Beakerboy if you have a rank 2 tensor of type (1,1), Ckl, and you contract it, you get a (1-1,1-1) tensor, i.e. a scalar: Ckk.
According to the Einstein convention, this means a sum over the repeated index (for example from 1 to 3): C11 + C22 + C33.
Because this is a regular matrix, this is C(1,1) + C(2,2) + C(3,3).
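
In matrix terms that contraction is just the trace. A minimal plain-PHP sketch, assuming the (1,1) tensor's components are stored as a square nested array (the function name is illustrative):

// Contract a (1,1) tensor C^k_l stored as a square nested array:
// the result is the scalar C^1_1 + C^2_2 + ... (the trace).
function contractRank2(array $C): float
{
    $sum = 0.0;
    foreach ($C as $i => $row) {
        $sum += $row[$i];
    }
    return $sum;
}

echo contractRank2([[1, 0, 0], [0, 1, 0], [0, 0, 1]]);  // 3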

@Beakerboy
Contributor

@TheFausap so you are saying, yes, you CAN contract a rectangular Tensor that is 4 elements by 3, for example? I'll update my Tensor class to accommodate this. I think the class has a working constructor, and the contraction method is pretty close to correct.

@TheFausap
Contributor Author

TheFausap commented Mar 22, 2017

@Beakerboy when you set up the space dimension, all the indices run over the same range. So if we are handling tensors in three-dimensional space, all the indices run from 1 to 3, and the matrices are always square. For a (1,1) tensor the number of elements is 3^(1+1) = 9, i.e. a 3x3 matrix.

@Beakerboy
Contributor

From stack exchange:

Firstly, a tensor is simply an element of the tensor product of some vector spaces or bimodules or something. In this sense, of course there are non-square tensors. For example an element of V⊗kW would be called a tensor, for any k-vector spaces V and W. But the words covariant and contravariant don't have any meaning here.

Is this worth considering in a general Tensor class? Should there be a parent class and a SquareTensor subclass?

@markrogoyski
Owner

@Beakerboy
I think you may be right. The direct product and outer product definitions look the same, and my unit test cases pass on each other. I implemented them at different times using different source definitions. Searching around, there are multiple sources defining direct product or outer product, but I didn't come across any authoritative source that mentions both terms and says they are in fact the same thing.

I'm not sure what you mean by the "what is missing" part. I went by this definition:
http://mathworld.wolfram.com/VectorDirectProduct.html

@Beakerboy
Contributor

@markrogoyski what I meant was, we have two copies of the outer product, while the vector version of the Kronecker product is missing.

@markrogoyski
Owner

Can you provide a link to something that shows the definition of the Kronecker product for vectors? How would it be different from the outer product? The Kronecker product generalizes the outer product (of Vectors) and applies it to matrices.
Thanks.

@Beakerboy
Contributor

Beakerboy commented Mar 23, 2017

@markrogoyski The outer product is the Kronecker product of a column matrix and a row matrix. The Kronecker product of two column vectors will also be a column vector. If you feel that the Kronecker product is only truly defined for matrix math, then I guess that's fine, and we can keep it out of the Vector class.

@markrogoyski
Owner

OK. I think I understand what you mean.
For example:

[1]   [3]     [3]
[2] ⊗ [4]   = [4]
              [6]
              [8]
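
Until a Vector-level method exists, a sketch of getting that result through the existing Matrix API, following the getColumn(0) idea above:

$A  = new Matrix([[1], [2]]);   // column vector [1, 2]
$B  = new Matrix([[3], [4]]);   // column vector [3, 4]
// The Kronecker product of two column matrices is a column matrix;
// its single column is the vector [3, 4, 6, 8].
$AB = new Vector($A->kroneckerProduct($B)->getColumn(0));
echo $AB;  // [3, 4, 6, 8]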

@TheFausap
Contributor Author

@Beakerboy The difference between covariant and contravariant indices is useful for the contraction, but from the component point of view (the matrix, or whatever else is associated with the tensor description) it doesn't matter.
I mean, a (1,1) tensor or a (2,0) tensor defined in a three-dimensional vector space is always a 3x3 matrix, because what identifies it is the rank (the total number of covariant and contravariant indices).
A (2,3) tensor is a rank-5 tensor (a very strange object) with 2 contravariant indices and 3 covariant indices.

@TheFausap
Contributor Author

Your example in the code is useful:
// For each element in $N, sum the values of $this where $m = $n and the rest of the indices match the position in N.
// ie Tᵝᵩᵪᵞ->contract(1,2) = Nᵪᵞ = T¹₁ᵪᵞ + T²₂ᵪᵞ + T³₃ᵪᵞ ... for all χ and γ.

so the tensor Tᵝᵩᵪᵞ is a rank-4 tensor of type (2,2). Contracting, it becomes a (1,1) tensor, a rank-2 tensor (i.e. a matrix), with components:
N(1,1) = T¹₁ᵪᵞ + T²₂ᵪᵞ + T³₃ᵪᵞ ... with χ and γ = 1 and 1 respectively
N(1,2) = T¹₁ᵪᵞ + T²₂ᵪᵞ + T³₃ᵪᵞ ... with χ and γ = 1 and 2 respectively
N(1,3) = T¹₁ᵪᵞ + T²₂ᵪᵞ + T³₃ᵪᵞ ... with χ and γ = 1 and 3 respectively
N(2,1) = T¹₁ᵪᵞ + T²₂ᵪᵞ + T³₃ᵪᵞ ... with χ and γ = 2 and 1 respectively
etc.

@Beakerboy
Contributor

@TheFausap I believe the code does exactly what you explain.

@TheFausap
Contributor Author

@Beakerboy the contraction can also happen between two objects: Tij Aj = Ki, following the same rule.

@Beakerboy
Contributor

@TheFausap is this the same as performing a tensor product first, then contracting, or is there a clever way to discard the 'j' indices first and make it a first-order tensor Ti times a scalar A?

@TheFausap
Contributor Author

@Beakerboy In my opinion it's better to apply the definition directly: Ti1A1 + Ti2A2 + Ti3A3 = Ki,
so you get a 3-vector with components (K1, K2, K3).
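
In plain PHP that definition is just a matrix-vector multiplication; a minimal sketch with nested arrays ($T, $A and $K are illustrative names):

// K_i = Σ_j T_ij A_j : contract the second index of T against A.
$T = [[1, 2, 3], [4, 5, 6], [7, 8, 9]];
$A = [1, 0, 2];
$K = [];
foreach ($T as $i => $row) {
    $K[$i] = 0;
    foreach ($row as $j => $Tij) {
        $K[$i] += $Tij * $A[$j];
    }
}
print_r($K);  // [7, 16, 25]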

@Beakerboy
Contributor

@TheFausap I just submitted code for the Kronecker Sum of two square matrices. Wikipedia says it "appears naturally in physics when considering ensembles of non-interacting systems."

@Beakerboy
Contributor

@TheFausap The document you linked to states:

The result of a product between tensors is again a tensor if in each summation the summation takes place over one upper index and one lower index.

When I asked if one can multiply first, then contract, I'm now thinking the answer is no, because multiplication necessitates contraction.

If this is the case, a tensor product function should look something like this:

function product(Tensor $T, int $my_index, int $t_index) : Tensor
{
    // Check that the dimensions match at the m and n positions.
    $N_dimensions = array_merge(A dimensions with "my_index" removed, T dimensions with "t_index" removed);
    $N_cov_or_con = array_merge(A cov_or_con with "my_index" removed, T cov_or_con with "t_index" removed);
    $N_zero_definition = Mult::multiply($N_dimensions, $N_cov_or_con);
    $N = $this->zeroes($N_zero_definition);
    for ($i = 0; $i < number of elements in N; $i++) {
        //$N_value = sum like in the contract function
        $N->setValue($N_value, $N_position);
    }
    return $N;
}

I can fill in the rest and test it out. Could you give me a list of tensors and the expected result of their tensor product so I can test the function?

@TheFausap
Contributor Author

TheFausap commented Mar 24, 2017

@Beakerboy this could be an interesting approach, and it could also be used to implement other operations between vectors and tensors, scalars and tensors, and tensors and tensors.

https://people.rit.edu/pnveme/personal/EMEM851n/constitutive/tensors_rect.html

As tests, you can use:

  1. the generalized Pythagorean theorem: s = g_ij x^i x^j, with the metric tensor g_ij, which can express the length in spaces with different curvatures (e.g. in Euclidean space g_ij = d_ij, where d is the Kronecker tensor defined as d_ii = 1 and d_ij = 0 for i <> j).

  2. the cross product: to implement this we need the Levi-Civita tensor e_ijk, defined as follows:
    e_ijk = 0 if two indices are the same, 1 if the indices are an even permutation of 1 2 3, -1 if they are an odd permutation of 1 2 3. The i-th element of the cross product is then (a x b)_i = e_ijk a_j b_k

This also easily extends the cross product to higher dimensions, if the indices run from 1 to 4, for example. A small sketch of the 3D case is below.
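
A minimal plain-PHP sketch of the Levi-Civita construction in three dimensions (the function names are illustrative, not library API):

// Levi-Civita symbol e_ijk for 0-based indices 0..2:
// +1 for even permutations of (0,1,2), -1 for odd, 0 if any index repeats.
function leviCivita(int $i, int $j, int $k): int
{
    return ($j - $i) * ($k - $i) * ($k - $j) / 2;
}

// (a x b)_i = Σ_jk e_ijk a_j b_k
function crossProduct(array $a, array $b): array
{
    $c = [0, 0, 0];
    for ($i = 0; $i < 3; $i++) {
        for ($j = 0; $j < 3; $j++) {
            for ($k = 0; $k < 3; $k++) {
                $c[$i] += leviCivita($i, $j, $k) * $a[$j] * $b[$k];
            }
        }
    }
    return $c;
}

print_r(crossProduct([1, 0, 0], [0, 1, 0]));  // [0, 0, 1]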

@Beakerboy
Contributor

Beakerboy commented Mar 24, 2017

@TheFausap Can the Kronecker delta be defined as a higher-order tensor, or just second order? If so, is it 1 iff all indices match, or just two? Also, can it be defined with any assortment of covariant and contravariant indices, or is it always (0,2)?

@TheFausap
Contributor Author

TheFausap commented Mar 24, 2017

@Beakerboy the Kronecker tensor is always a rank 2 tensor, i.e. the identity matrix. It can be defined in higher-dimensional spaces, and it has the property of being the same even if you change coordinate system (Cartesian to spherical). I'm not aware of a (0,3) tensor, for example, defined in the same way.

@markrogoyski markrogoyski changed the title tensorProduct between vectors. Tensor Class and tensorProduct between vectors Mar 28, 2017