python3.9 and mttkrp #48

Closed · berryj5 opened this issue Jan 27, 2023 · 2 comments

berryj5 commented Jan 27, 2023

Several people were unable to run the code below under Python 3.9, probably across different 3.9.x patch releases. One person reported fixing the issue by reverting to an older version of numpy, though I didn't catch which version.
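If the failures stem from numpy removing the deprecated np.float alias in 1.24 (the eventual fix in #49 below points that way), a minimal check and workaround would be:

import numpy as np

print(np.__version__)
try:
    np.float  # deprecated in numpy 1.20, removed in 1.24
except AttributeError:
    # pyttb's int-to-float conversion trips over this on numpy >= 1.24;
    # reverting numpy is the workaround people reported:
    #   pip install "numpy<1.24"
    print("np.float alias removed; pin numpy<1.24 or upgrade pyttb")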

import pyttb as ttb
import numpy as np


# 2D Example dense
# ---------------------------------------------------------------
print("--------------------------------------------------------")
print("DENSE EXAMPLE (high-level; we'll dig into MTTKRP later):")
print("--------------------------------------------------------")

weights = np.array([1,2])
fm0 = np.array([[3,3], [4,5]])
fm1 = np.array([[3,3], [4,5], [6,7]])

K = ttb.ktensor.from_data(weights, [fm0, fm1])

print(f'dense Kruskal tensor: {K.full()}')

print("--------------------------------------------------------")
dcmp, init, stats = ttb.cp_als(K, 2)
print("--------------------------------------------------------")

print(f'decomposition after cp_als: {dcmp}')

print(dcmp)

ValidateK = ttb.ktensor.from_data(dcmp.weights, dcmp.factor_matrices)

print("--------------------------------------------------------")
print(f'validation of solution: {ValidateK.full()}')
print("--------------------------------------------------------")


# same 2D Example sparse
# ----------------------------------------------------------------
print("--------------------------------------------------------")
print("SPARSE EXAMPLE (high-level; we'll dig into MTTKRP later):")
print("--------------------------------------------------------")
weights = np.array([1,2])
subs = np.array([[0,0], [0,1], [0,2], [1,0], [1,1], [1,2]])
vals = np.array([[27],[42],[60],[42],[66],[94]])
shape = (2,3)

spk = ttb.sptensor.from_data(subs, vals, shape)

print(spk.full())

sp_dcmp, sp_init, sp_stats = ttb.cp_als(spk, 2)

print(sp_dcmp)

sp_ValidateK = ttb.ktensor.from_data(sp_dcmp.weights, sp_dcmp.factor_matrices)

print(sp_ValidateK.full())

print("-----------------------------------")
print("BACK TO DENSE EXAMPLE : MTTKRP demo")
print("-----------------------------------")

A = np.array([[3,3], [4,5]])
B = np.array([[3,3], [4,5], [6,7]])

print(f'A: {A}')
print(f'B: {B}')

kr = ttb.khatrirao(A, B)
print(f'Khatri-Rao(A,B): {kr}')
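# Sanity check: the Khatri-Rao product is the column-wise Kronecker product,
# so we can rebuild it with plain numpy (illustrative only; assumes
# ttb.khatrirao returns a numpy array, as the printed output suggests):
kr_ref = np.stack([np.kron(A[:, r], B[:, r]) for r in range(A.shape[1])], axis=1)
assert np.array_equal(kr, kr_ref)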

fm0 = np.array([[1,1], [1,1]])
fm1 = np.array([[1,1], [1,1], [1,1]])
fm2 = np.array([[1,1], [1,1], [1,1], [1,1]])
weights = np.array([1,1])
Kd = ttb.ktensor.from_data(weights, [fm0, fm1, fm2])
print(f'Kd.full: {Kd.full()}')

print(f'Show that MTTKRP is the time bottleneck')
Kf = ttb.ktensor.from_function(np.random.random_sample, (200, 30, 40), 2)
kfdcmp, kfinit, kfstats = ttb.cp_als(Kf.full(), 3)

U = [np.ones((2, 2)), np.ones((3, 2)), np.ones((4, 2))]
print(f'U: {U}')
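# What the rest of this demo verifies: the mode-n MTTKRP is the mode-n
# matricized tensor times the Khatri-Rao product of all the OTHER matrices
# in U, i.e. mttkrp(X, U, n) = X_(n) @ khatrirao(U[k] for all k != n).
# First the naive route (tenmat + khatrirao), then the efficient one.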

print("---------------------------------------------")
print(f'COMPUTE MTTKRP using the pyttb tenmat class:')
print("---------------------------------------------")

Kdmat0 = ttb.tenmat.from_tensor_type(Kd.full(), rdims = np.array([0]))
print(f'-------------------------------------------:')
print(f'Matricized Tensor along way 0 is this matrix:')
print(f'-------------------------------------------:')
print(f'{Kdmat0}')
Kdmat1 = ttb.tenmat.from_tensor_type(Kd.full(), rdims = np.array([1]))
print(f'-------------------------------------------:')
print(f'Matricized Tensor along way 1 is this matrix:')
print(f'-------------------------------------------:')
print(f'{Kdmat1}')
print(f'-------------------------------------------------:')
print(f"Khatri-Rao product of U[1], U[2] (we wouldn't actually compute this):")
print(f'-------------------------------------------------:')
Ukr0 = ttb.khatrirao([U[1], U[2]])
print(f'{Ukr0}')
print(f'-------------------------------------------:')
print(f'Matricized tensor times Khatri-Rao product:')
print(f'-------------------------------------------:')
print(f'{Kdmat0.data @ Ukr0}')
print(f'-------------------------------------------:')

Kdmat1 = ttb.tenmat.from_tensor_type(Kd.full(), rdims = np.array([1]))
print(f'-------------------------------------------:')
print(f'Matricized Tensor along way 1 is this matrix:')
print(f'-------------------------------------------:')
print(f'{Kdmat1}')
Ukr1 = ttb.khatrirao([U[0], U[2]])
print(f'-------------------------------------------:')
print(f"Khatri-Rao product of U[0], U[2] (we wouldn't actually compute this):")
print(f'-------------------------------------------:')
print(f'{Ukr1}')
print(f'-------------------------------------------:')
print(f'Matricized tensor times Khatri-Rao product:')
print(f'-------------------------------------------:')
print(f'{Kdmat1.data @ Ukr1}')
print(f'-------------------------------------------:')

print(f'Next, COMPUTE MTTKRP using the pyttb mttkrp function:')
print(f'this uses indexing to avoid computing the Khatri-Rao product')
print(f'after that, we will break down exactly what an efficient MTTKRP does')
print(f'-----------along way0:---------------------:')
print(f'Kd.mttkrp(U, 0):')
print(f'{Kd.mttkrp(U, 0)}')
print(f'-----------along way1:---------------------:')
print(f'Kd.mttkrp(U, 1):')
print(f'{Kd.mttkrp(U, 1)}')

print(f'-------------------------------------------:')
print(f'-dissecting an efficient MTTKRP-------------:')
print(f'-------------------------------------------:')
print(f'The idea is to have a factoring of the tensor in mind,')
print(f'then iteratively multiply each factor rather than computing')
print(f'the Khatri-Rao product')
print(f'-------------------------------------------:')
print(f'recall that this is our tensor.  We show the whole tensor first:')
print(f'{Kd.full()}')
print(f'-------------------------------------------:')
print(f'then show its factor matrices.  These typically start out with some') 
print(f'random init, but we happened to define this example with structure:')
print(f'-------------------------------------------:')
print(f'{Kd}')
print(f'-------------------------------------------:')
print(f'To compute CP_ALS, we could start with U matrices equal to the factor')
print(f'matrices:')
print(f'-------------------------------------------:')
U = Kd.factor_matrices
print(f'U: {U}')
print(f'-------------------------------------------:')
print(f'First, we find the right dimensions for the mttkrp internal computation')
print(f'this will be the number of tensor factors by the number of columns')
print(f'in a factor matrix (these are the same; thus square).')
print(f'There are TWO factors in this tensor even though there are THREE factor matrices')
print(f'The number of weights is equal to the number of factors (TWO) and')
print(f'also equal to the number of columns in a factor matrix.')
print(f'The number of factor matrices corresponds to the number of "toes" in')
print(f'a "chicken foot" (outer product) in the tensor decomposition')
print(f'We initialize a 2 x 2 matrix with the factor weights')
print(f'The ultimate MTTKRP result along way 0 will have the dimensions of ')
print(f'factor_matrices[0] @ W =   (2 x 2) x (2 x 2) = (2 x 2)')
print(f'-------------------------------------------:')
print(f'Start with the factor weights W (dim 2 x 2)')
print(f'-------------------------------------------:')
W = np.tile(Kd.weights[:, None], (1, 2))
print(W) 
print(f'-------------------------------------------:')
print(f'we now compute factor_matrices[1].T @ U[1] (full matrix mult)')
print(f'-------------------------------------------:')
W1 = Kd.factor_matrices[1].T @ U[1]
print(f'{W1}')
W *= W1
print(f'-------------------------------------------:')
print(f'running total of W (element-wise multiplication): {W}')
print(f'-------------------------------------------:')
print(f'we next compute factor_matrices[2].T @ U[2] (full matrix mult)')
W2 = Kd.factor_matrices[2].T @ U[2]
W *= W2
print(f'{W2}')
print(f'-------------------------------------------:')
print(f'running total: {W}')
print(f'-------------------------------------------:')
print(f'the final mttkrp result is now factor_matrices[0] @ W (full matrix mult):')
print(f'{Kd.factor_matrices[0] @ W}')
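# Sanity check (assumes Kd.mttkrp returns a numpy array, as printed earlier):
# the step-by-step result should match the library call from above.
assert np.allclose(Kd.factor_matrices[0] @ W, Kd.mttkrp(U, 0))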

print(f'-------------------------------------------:')
print(f'Repeat for MTTKRP along way 1:')
print(f'-------------------------------------------:')
print(f'U: {U}')
print(f'-------------------------------------------:')
print(f'Again, we find the right dimensions for the mttkrp internal computation')
print(f'this is still the number of tensor factors by the number of columns')
print(f'in a factor matrix. (these are the same, thus square)')
print(f'There are TWO factors in this tensor even though there are THREE factor matrices')
print(f'The number of weights is equal to the number of factors (TWO) and')
print(f'also equal to the number of columns in a factor matrix.')
print(f'The number of factor matrices corresponds to the number of "toes" in')
print(f'a "chicken foot" (outer product) in the tensor decomposition')
print(f'We initialize a 2 x 2 matrix with the factor weights')
print(f'The ultimate MTTKRP result along way 1 will have the dimensions of ')
print(f'factor_matrices[1] @ W =   (3 x 2) x (2 x 2) = (3 x 2)')
print(f'-------------------------------------------:')
W = np.tile(Kd.weights[:, None], (1, 2))
print(W) 
print(f'-------------------------------------------:')
print(f'we now compute factor_matrices[0].T @ U[0] (full matrix mult)')
print(f'-------------------------------------------:')
W1 = Kd.factor_matrices[0].T @ U[0]
print(f'{W1}')
W *= W1
print(f'-------------------------------------------:')
print(f'running total of W (element-wise multiplication): {W}')
print(f'-------------------------------------------:')
print(f'we next compute factor_matrices[2].T @ U[2] (full matrix mult)')
print(f'-------------------------------------------:')
W2 = Kd.factor_matrices[2].T @ U[2]
W *= W2
print(f'{W2}')
print(f'-------------------------------------------:')
print(f'running total: {W}')
print(f'-------------------------------------------:')
print(f'the final mttkrp result is now factor_matrices[1] @ W (full matrix mult):')
print(f'-------------------------------------------:')
print(f'{Kd.factor_matrices[1] @ W}')
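For reference, the whole dissection above collapses to a short numpy-only routine. This is an illustrative sketch of the efficient Kruskal-tensor MTTKRP (it mirrors the loop just demonstrated; it is not pyttb's actual implementation):

import numpy as np

def ktensor_mttkrp(weights, fms, U, n):
    # Efficient MTTKRP for a Kruskal tensor: never forms the Khatri-Rao product.
    # weights: (R,); fms and U: lists of (I_k, R) matrices; n: target mode.
    R = weights.size
    W = np.tile(weights[:, None], (1, R))  # start from the factor weights
    for k in range(len(fms)):
        if k == n:
            continue                       # skip the target mode
        W *= fms[k].T @ U[k]               # fold in one factor at a time
    return fms[n] @ W                      # (I_n x R) result

# Reproduces the all-ones demo above:
fms = [np.ones((2, 2)), np.ones((3, 2)), np.ones((4, 2))]
U = [np.ones((2, 2)), np.ones((3, 2)), np.ones((4, 2))]
print(ktensor_mttkrp(np.ones(2), fms, U, 0))  # [[24. 24.], [24. 24.]]
print(ktensor_mttkrp(np.ones(2), fms, U, 1))  # three rows of [16. 16.]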
jdtuck (Contributor) commented Jan 27, 2023

Do you have a copy of the error, or of what they couldn't run? A listing of the currently installed packages and their versions would also be helpful.
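A quick way to capture that in one place (assuming pyttb exposes __version__, which its version-bump commits suggest):

import platform
import subprocess
import sys

import numpy
import pyttb

print("Python:", platform.python_version())
print("numpy:", numpy.__version__)
print("pyttb:", pyttb.__version__)
# Full environment, equivalent to running `pip freeze`:
print(subprocess.run([sys.executable, "-m", "pip", "freeze"],
                     capture_output=True, text=True).stdout)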

berryj5 (Author) commented Jan 27, 2023

I never observed the errors myself; remote people following the tutorial reported them. Siva Rajamanickam is one person who hit the error on his machine. Here is the correct output I see with Python 3.9.12:

--------------------------------------------------------
DENSE EXAMPLE (high-level; we'll dig into MTTKRP later):
--------------------------------------------------------
converting weights from int64 to np.float
converting factor_matrices[0] from int64 to np.float
converting factor_matrices[1] from int64 to np.float
dense Kruskal tensor: tensor of shape 2 x 3
data[:, :] = 
[[27. 42. 60.]
 [42. 66. 94.]]

--------------------------------------------------------
CP_ALS:
 Iter 0: f = 1.000000e+00 f-delta = 1.0e+00
 Iter 1: f = 1.000000e+00 f-delta = 1.9e-08
 Final f = 1.000000e+00
CP_ALS: total time: 0.0009767500000000817
CP_ALS: mttkrp time: 0.00012987499999983498
CP_ALS: linsol time: 0.00017400000000000748
--------------------------------------------------------
decomposition after cp_als: ktensor of shape 2 x 3
weights=[105.15455954  59.11356865]
factor_matrices[0] =
[[0.53658636 0.54105985]
 [0.84384541 0.84098409]]
factor_matrices[1] =
[[0.03213326 0.78748479]
 [0.69594937 0.08539911]
 [0.71737154 0.61038897]]
ktensor of shape 2 x 3
weights=[105.15455954  59.11356865]
factor_matrices[0] =
[[0.53658636 0.54105985]
 [0.84384541 0.84098409]]
factor_matrices[1] =
[[0.03213326 0.78748479]
 [0.69594937 0.08539911]
 [0.71737154 0.61038897]]
--------------------------------------------------------
validation of solution: tensor of shape 2 x 3
data[:, :] = 
[[27. 42. 60.]
 [42. 66. 94.]]

--------------------------------------------------------
--------------------------------------------------------
SPARSE EXAMPLE (high-level; we'll dig into MTTKRP later):
--------------------------------------------------------
tensor of shape 2 x 3
data[:, :] = 
[[27. 42. 60.]
 [42. 66. 94.]]

CP_ALS:
 Iter 0: f = 1.000000e+00 f-delta = 1.0e+00
 Iter 1: f = 1.000000e+00 f-delta = 1.9e-08
 Final f = 1.000000e+00
CP_ALS: total time: 0.23705037500000004
CP_ALS: mttkrp time: 0.23634658399999975
CP_ALS: linsol time: 5.820900000030882e-05
ktensor of shape 2 x 3
weights=[168.0176826  131.17954436]
factor_matrices[0] =
[[0.53771329 0.53900504]
 [0.84312776 0.84230254]]
factor_matrices[1] =
[[-0.40364516  0.89761928]
 [ 0.76734537 -0.3864723 ]
 [ 0.4982486   0.21194055]]
tensor of shape 2 x 3
data[:, :] = 
[[27. 42. 60.]
 [42. 66. 94.]]

-----------------------------------
BACK TO DENSE EXAMPLE : MTTKRP demo
-----------------------------------
A: [[3 3]
 [4 5]]
B: [[3 3]
 [4 5]
 [6 7]]
Khatri-Rao(A,B): [[ 9  9]
 [12 15]
 [18 21]
 [12 15]
 [16 25]
 [24 35]]
converting weights from int64 to np.float
converting factor_matrices[0] from int64 to np.float
converting factor_matrices[1] from int64 to np.float
converting factor_matrices[2] from int64 to np.float
Kd.full: tensor of shape 2 x 3 x 4
data[0, :, :] = 
[[2. 2. 2. 2.]
 [2. 2. 2. 2.]
 [2. 2. 2. 2.]]
data[1, :, :] = 
[[2. 2. 2. 2.]
 [2. 2. 2. 2.]
 [2. 2. 2. 2.]]

Show that MTTKRP is the time bottleneck
CP_ALS:
 Iter 0: f = 8.685222e-01 f-delta = 8.7e-01
 Iter 1: f = 9.306343e-01 f-delta = 6.2e-02
 Iter 2: f = 9.925696e-01 f-delta = 6.2e-02
 Iter 3: f = 9.996233e-01 f-delta = 7.1e-03
 Iter 4: f = 9.999833e-01 f-delta = 3.6e-04
 Iter 5: f = 9.999993e-01 f-delta = 1.6e-05
 Final f = 9.999993e-01
CP_ALS: total time: 0.024467000000000017
CP_ALS: mttkrp time: 0.015223917000000142
CP_ALS: linsol time: 0.004003876000000295
U: [array([[1., 1.],
       [1., 1.]]), array([[1., 1.],
       [1., 1.],
       [1., 1.]]), array([[1., 1.],
       [1., 1.],
       [1., 1.],
       [1., 1.]])]
---------------------------------------------
COMPUTE MTTKRP using the pyttb tenmat class:
---------------------------------------------
-------------------------------------------:
Matricized Tensor along way 0 is this matrix:
-------------------------------------------:
matrix corresponding to a tensor of shape 2 x 3 x 4
rindices = [ 0 ] (modes of tensor corresponding to rows)
cindices = [ 1, 2 ] (modes of tensor corresponding to columns)
data[:, :] = 
[[2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2. 2.]]

-------------------------------------------:
Matricized Tensor along way 1 is this matrix:
-------------------------------------------:
matrix corresponding to a tensor of shape 2 x 3 x 4
rindices = [ 1 ] (modes of tensor corresponding to rows)
cindices = [ 0, 2 ] (modes of tensor corresponding to columns)
data[:, :] = 
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]

-------------------------------------------------:
Khatri-Rao product of U[1], U[2] (we wouldn't actually compute this):
-------------------------------------------------:
[[1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]]
-------------------------------------------:
Matricized tensor times Khatri-Rao product:
-------------------------------------------:
[[24. 24.]
 [24. 24.]]
-------------------------------------------:
-------------------------------------------:
Matricized Tensor along way 1 is this matrix:
-------------------------------------------:
matrix corresponding to a tensor of shape 2 x 3 x 4
rindices = [ 1 ] (modes of tensor corresponding to rows)
cindices = [ 0, 2 ] (modes of tensor corresponding to columns)
data[:, :] = 
[[2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]
 [2. 2. 2. 2. 2. 2. 2. 2.]]

-------------------------------------------:
Khatri-Rao product of U[0], U[2] (we wouldn't actually compute this):
-------------------------------------------:
[[1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]]
-------------------------------------------:
Matricized tensor times Khatri-Rao product:
-------------------------------------------:
[[16. 16.]
 [16. 16.]
 [16. 16.]]
-------------------------------------------:
Next, COMPUTE MTTKRP using the pyttb mttkrp function:
this uses indexing to avoid computing the Khatri-Rao product
after that, we will break down exactly what an efficient MTTKRP does
-----------along way0:---------------------:
Kd.mttkrp(U, 0):
[[24. 24.]
 [24. 24.]]
-----------along way1:---------------------:
Kd.mttkrp(U, 1):
[[16. 16.]
 [16. 16.]
 [16. 16.]]
-------------------------------------------:
-dissecting an efficient MTTKRP-------------:
-------------------------------------------:
The idea is to have a factoring of the tensor in mind,
then iteratively multiply each factor rather than computing
the Khatri-Rao product
-------------------------------------------:
recall that this is our tensor.  We show the whole tensor first:
tensor of shape 2 x 3 x 4
data[0, :, :] = 
[[2. 2. 2. 2.]
 [2. 2. 2. 2.]
 [2. 2. 2. 2.]]
data[1, :, :] = 
[[2. 2. 2. 2.]
 [2. 2. 2. 2.]
 [2. 2. 2. 2.]]

-------------------------------------------:
then show its factor matrices.  These typically start out with some
random init, but we happened to define this example with structure:
-------------------------------------------:
ktensor of shape 2 x 3 x 4
weights=[1. 1.]
factor_matrices[0] =
[[1. 1.]
 [1. 1.]]
factor_matrices[1] =
[[1. 1.]
 [1. 1.]
 [1. 1.]]
factor_matrices[2] =
[[1. 1.]
 [1. 1.]
 [1. 1.]
 [1. 1.]]
-------------------------------------------:
To compute CP_ALS, we could start with U matrices equal to the factor
matrices:
-------------------------------------------:
U: [array([[1., 1.],
       [1., 1.]]), array([[1., 1.],
       [1., 1.],
       [1., 1.]]), array([[1., 1.],
       [1., 1.],
       [1., 1.],
       [1., 1.]])]
-------------------------------------------:
First, we find the right dimensions for the mttkrp internal computation
this will be the number of tensor factors by the number of columns
in a factor matrix (these are the same; thus square).
There are TWO factors in this tensor even though there are THREE factor matrices
The number of weights is equal to the number of factors (TWO) and
also equal to the number of columns in a factor matrix.
The number of factor matrices corresponds to the number of "toes" in
a "chicken foot" (outer product) in the tensor decomposition
We initialize a 2 x 2 matrix with the factor weights
The ultimate MTTKRP result along way 0 will have the dimensions of 
factor_matrices[0] @ W =   (2 x 2) x (2 x 2) = (2 x 2)
-------------------------------------------:
Start with the factor weights W (dim 2 x 2)
-------------------------------------------:
[[1. 1.]
 [1. 1.]]
-------------------------------------------:
we now compute factor_matrices[1].T @ U[1] (full matrix mult)
-------------------------------------------:
[[3. 3.]
 [3. 3.]]
-------------------------------------------:
running total of W (element-wise multiplication): [[3. 3.]
 [3. 3.]]
-------------------------------------------:
we next compute factor_matrices[2].T @ U[2] (full matrix mult)
[[4. 4.]
 [4. 4.]]
-------------------------------------------:
running total: [[12. 12.]
 [12. 12.]]
-------------------------------------------:
the final mttkrp result is now factor_matrices[0] @ W (full matrix mult):
[[24. 24.]
 [24. 24.]]
-------------------------------------------:
Repeat for MTTKRP along way 1:
-------------------------------------------:
U: [array([[1., 1.],
       [1., 1.]]), array([[1., 1.],
       [1., 1.],
       [1., 1.]]), array([[1., 1.],
       [1., 1.],
       [1., 1.],
       [1., 1.]])]
-------------------------------------------:
Again, we find the right dimensions for the mttkrp internal computation
this is still the number of tensor factors by the number of columns
in a factor matrix. (these are the same, thus square)
There are TWO factors in this tensor even though there are THREE factor matrices
The number of weights is equal to the number of factors (TWO) and
also equal to the number of columns in a factor matrix.
The number of factor matrices corresponds to the number of "toes" in
a "chicken foot" (outer product) in the tensor decomposition
We initialize a 2 x 2 matrix with the factor weights
The ultimate MTTKRP result along way 1 will have the dimensions of 
factor_matrices[1] @ W =   (3 x 2) x (2 x 2) = (3 x 2)
-------------------------------------------:
[[1. 1.]
 [1. 1.]]
-------------------------------------------:
we now compute factor_matrices[0].T @ U[0] (full matrix mult)
-------------------------------------------:
[[2. 2.]
 [2. 2.]]
-------------------------------------------:
running total of W (element-wise multiplication): [[2. 2.]
 [2. 2.]]
-------------------------------------------:
we next compute factor_matrices[2].T @ U[2] (full matrix mult)
-------------------------------------------:
[[4. 4.]
 [4. 4.]]
-------------------------------------------:
running total: [[8. 8.]
 [8. 8.]]
-------------------------------------------:
the final mttkrp result is now factor_matrices[1] @ W (full matrix mult):
-------------------------------------------:
[[16. 16.]
 [16. 16.]
 [16. 16.]]

dmdunla added a commit that referenced this issue Jun 2, 2023
* Update nvecs to use tenmat.

* Full implementation of collapse. Required implementation of tensor.from_tensor_type for tenmat objects. Updated tensor tests. (#32)

* Update __init__.py

Bump version.

* Create CHANGELOG.md

Changelog update

* Update CHANGELOG.md

Consistent formatting

* Update CHANGELOG.md

Correction

* Create ci-tests.yml

* Update README.md

Adding coverage statistics from coveralls.io

* Create requirements.txt

* 33 use standard license (#34)

* Use standard, correctly formatted LICENSE

* Delete LICENSE

* Create LICENSE

* Update and rename ci-tests.yml to regression-tests.yml

* Update README.md

* Fix bug in tensor.mttkrp that only showed up when ndims > 3. (#36)

* Update __init__.py

Bump version

* Bump version

* Adding files to support pypi dist creation and uploading

* Fix PyPi installs. Bump version.

* Fixing np.reshape usage. Adding more tests for tensor.ttv. (#38)

* Fixing issues with np.reshape; requires order='F' to align with Matlab functionality. (#39)

Closes #30 .

* Bump version.

* Adding tensor.ttm. Adding use case in tenmat to support ttm testing. (#40)

Closes #27

* Bump version

* Format CHANGELOG

* Update CHANGELOG.md

* pypi publishing action on release

* Allowing rdims or cdims to be empty array. (#43)

Closes #42

* Adding tensor.ttt implementation. (#44)

Closes #28

* Bump version

* Implement ktensor.score and associated tests.

* Changes to supporting pyttb data classes and associated tests to enable ktensor.score.

* Bump version.

* Compatibility with numpy 1.24.x (#49)

Close #48 

* Replace "numpy.float" with equivalent "float"

numpy.float was deprecated in 1.20 and removed in 1.24

* sptensor.ttv: support 'vector' being a plain list

(rather than just numpy.ndarray). Backwards compatible - an ndarray
argument still works. This is because in newer numpy, it's not allowed to do
np.array(list) where the elements of list are ndarrays of different shapes.

* Make ktensor.innerprod call ttv with 'vector' as plain list

(instead of numpy.ndarray, because newer versions don't allow ragged arrays)

* tensor.ttv: avoid ragged numpy arrays

* Fix two unit test failures due to numpy related changes

* More numpy updates

- numpy.int is removed - use int instead
- don't try to construct ragged/inhomogeneous numpy arrays in tests.
  Use plain lists of vectors instead

* Fix typo in assert message

* Let ttb.tt_dimscheck catch empty input error

In the three ttv methods, ttb.tt_dimscheck checks that 'vector' argument
is not an empty list/ndarray. Revert previous changes that checked for this
before calling tt_dimscheck.

* Bump version

* TENSOR: Fix slices ref when return value isn't scalar or vector. #41 (#50)

Closes #41

* Ttensor implementation (#51)

* TENSOR: Fix slices ref when return value isn't scalar or vector. #41

* TTENSOR: Add tensor creation (partial support of core tensor types) and display

* SPTENSOR: Add numpy scalar type for multiplication filter.

* TTENSOR: Double, full, isequal, mtimes, ndims, size, uminus, uplus, and partial innerprod.

* TTENSOR: TTV (finishes innerprod), mttkrp, and norm

* TTENSOR: TTM, permute and minor cleanup.

* TTENSOR: Reconstruct

* TTENSOR: Nvecs

* SPTENSOR:
* Fix argument mismatch for ttm (modes s.b. dims)
* Fix ttm for rectangular matrices
* Make error message consistent with tensor
TENSOR:
* Fix error message

* TTENSOR: Improve test coverage and corresponding bug fixes discovered.

* Test coverage (#52)

* SPTENSOR:
* Fix argument mismatch for ttm (modes s.b. dims)
* Fix ttm for rectangular matrices
* Make error message consistent with tensor
TENSOR:
* Fix error message

* SPTENSOR: Improve test coverage, replace prints, and some doc string fixes.

* PYTTB_UTILS: Improve test coverage

* TENMAT: Remove impossible condition. Shape is a property, the property handles the (0,) shape condition. So ndims should never see it.

* TENSOR: Improve test coverage. One line left, but logic of setitem is unclear without MATLAB validation of behavior.

* CP_APR: Add tests for sptensor, and corresponding bug fixes to improve test coverage.

---------

Co-authored-by: Danny Dunlavy <dmdunla@sandia.gov>

* Bump version

* TUCKER_ALS: Add tucker_als to validate ttensor implementation. (#53)

* Bump version of actions (#55)

actions/setup-python@v4 to avoid deprecation warnings

* Tensor docs plus Linting and Typing and Black oh my (#54)

* TENSOR: Apply black and enforce it

* TENSOR: Add isort and pylint. Fix to pass then enforce

* TENSOR: Variety of linked fixes:
* Add mypy type checking
* Update infrastructure for validating package
* Fix doc tests and add more examples

* DOCTEST: Add doctest automatically to regression
* Fix existing failures

* DOCTEST: Fix non-uniform array

* DOCTEST: Fix precision errors in example

* AUTOMATION: Add test directory otherwise only doctests run

* TENSOR: Fix bad rebase from numpy fix

* Auto formatting (#60)

* COVERAGE: Fix some coverage regressions from pylint PR

* ISORT: Run isort on source and tests

* BLACK: Run black on source and tests

* BLACK: Run black on source and tests

* FORMATTING: Add tests and verification for autoformatting

* FORMATTING: Add black/isort to root to simplify

* Add preliminary contributor guide instructions

Closes #59

* TUCKER_ALS: TTM with negative values is broken in ttensor (#62) (#66)

* Replace usage in tucker_als
* Update test for tucker_als to ensure result matches expectation
* Add early error handling in ttensor ttm for negative dims

* Hosvd (#67)

* HOSVD: Preliminary outline of core functionality

* HOSVD: Fix numeric bug
* Was slicing incorrectly
* Update test to check convergence

* HOSVD: Finish output and test coverage

* TENSOR: Prune numbers real
* Real and mypy don't play nice python/mypy#3186
* This allows partial typing support of HOSVD

* Add test that matches TTB for MATLAB output of HOSVD (#79)

This closes #78

* Bump version (#81)

Closes #80

* Lint pyttb_utils and lint/type sptensor (#77)

* PYTTB_UTILS: Fix and enforce pylint

* PYTTB_UTILS: Pull out utility only used internally in sptensor

* SPTENSOR: Fix and enforce pylint

* SPTENSOR: Initial pass at typing support

* SPTENSOR: Complete initial typing coverage

* SPTENSOR: Fix test coverage from typing changes.

* PYLINT: Update test to lint files in parallel to improve dev experience.

* HOSVD: Negative signs can be permuted for equivalent decomposition (#82)

* Pre commit (#83)

* Setup and pyproject are redundant. Remove and resolve install issue

* Try adding pre-commit hooks

* Update Makefile for simplicity and add notes to contributor guide.

* Make pre-commit optional opt-in

* Make regression tests use simplified dependencies so we track fewer places.

* Using dynamic version in pyproject.toml to reduce places where version is set. (#86)

* Adding shell=True to subprocess.run() calls (#87)

* Adding Nick to authors (#89)

* Release prep (#90)

* Fix author for PyPI. Bump to dev version.

* Exclude dims (#91)

* Explicit Exclude_dims:
* Updated tt_dimscheck
* Update all uses of tt_dimscheck and propagate interface

* Add test coverage for exclude dims changes

* Tucker_als: Fix workaround that motivated exclude_dims

* Bump version

* Spelling

* Tensor generator helpers (#93)

* TENONES: Add initial tenones support

* TENZEROS: Add initial tenzeros support

* TENDIAG: Add initial tendiag support

* SPTENDIAG: Add initial sptendiag support

* Link in autodocumentation for recently added code: (#98)

* TTENSOR, HOSVD, TUCKER_ALS, Tensor generators

* Remove warning for nvecs: (#99)

* Make debug level log for now
* Remove test enforcement

* Rand generators (#100)

* Non-functional change:
* Fix numpy deprecation warning, logic should be equivalent

* Tenrand initial implementation

* Sptenrand initial implementation

* Complete pass on ktensor docs. (#101)

* Bump version

* Bump version

* Trying to fix coveralls

* Trying coveralls github action

* Fixing arrange and normalize. (#103)

* Fixing arrange and normalize.

* Merge main (#104)

* Trying to fix coveralls

* Trying coveralls github action

* Rename contributor guide for github magic (#106)

* Rename contributor guide for github magic

* Update reference to contributor guide from README

* Fixed the mean and stdev typo for cp_als (#117)

* Changed cp_als() param 'tensor' to 'input_tensor' to avoid ambiguity (#118)

* Changed cp_als() param 'tensor' to 'input_tensor' to avoid ambiguity

* Formatted changes with isort and black.

* Updated all `tensor`-named parameters to `input_tensor`, including in docs (#120)

* Tensor growth (#109)

* Tensor.__setitem__: Break into methods
* Non-functional change to make logic flow clearer

* Tensor.__setitem__: Fix some types to resolve edge cases

* Sptensor.__setitem__: Break into methods
* Non-functional change to make flow clearer

* Sptensor.__setitem__: Catch additional edge cases in sptensor indexing

* Tensor.__setitem__: Catch subtensor additional dim growth

* Tensor indexing (#116)

* Tensor.__setitem__/__getitem__: Fix linear index
* Before required numpy array now works on value/slice/Iterable

* Tensor.__getitem__: Fix subscripts usage
* Consistent with setitem now
* Update usages (primarily in sptensor)

* Sptensor.__setitem__/__getitem__: Fix subscripts usage
* Consistent with tensor and MATLAB now
* Update test usage

* sptensor: Add coverage for improved indexing capability

* tensor: Add coverage for improved indexing capability

---------

Co-authored-by: brian-kelley <brian.honda11@gmail.com>
Co-authored-by: ntjohnson1 <24689722+ntjohnson1@users.noreply.github.com>
Co-authored-by: Dunlavy <dmdunla@s1075069.srn.sandia.gov>
Co-authored-by: DeepBlockDeepak <43120318+DeepBlockDeepak@users.noreply.github.com>
dmdunla added a commit that referenced this issue Jun 3, 2023
* Merge latest updates (#124)

* Adding tests and data for import_data, export_data, sptensor, ktensor. Small changes in code that was unreachable.

* Updating formatting with black

* More updates for coverage.

* Black formatting updates

* Update regression-tests.yml

Adding verbose to black and isort calls

* Black updated locally to align with CI testing

* Update regression-tests.yml

---------

Co-authored-by: brian-kelley <brian.honda11@gmail.com>
Co-authored-by: ntjohnson1 <24689722+ntjohnson1@users.noreply.github.com>
Co-authored-by: Dunlavy <dmdunla@s1075069.srn.sandia.gov>
Co-authored-by: DeepBlockDeepak <43120318+DeepBlockDeepak@users.noreply.github.com>