Initialization of orthogonal tensors with respect to a pivot #931

Open · wants to merge 6 commits into base: master
3 changes: 3 additions & 0 deletions tensornetwork/backends/abstract_backend.py
@@ -1044,3 +1044,6 @@ def eps(self, dtype: Type[np.number]) -> float:

Collaborator: Why did you add this function to the backend? I don't think we need it here.

raise NotImplementedError(
f"Backend {self.name} has not implemented eps.")

def initialize_orthogonal_tensor_wrt_pivot(
    self,
    shape: Sequence[int],
    dtype: Optional[Type[np.number]] = None,
    pivot_axis: int = -1,
    seed: Optional[int] = None,
    backend: Optional[Union[Text, AbstractBackend]] = None,
    non_negative_diagonal: bool = False) -> Tensor:
mganahl marked this conversation as resolved.
  raise NotImplementedError(
      "Backend '{}' has not implemented "
      "initialize_orthogonal_tensor_wrt_pivot.".format(self.name))
10 changes: 10 additions & 0 deletions tensornetwork/backends/numpy/numpy_backend.py
@@ -795,3 +795,13 @@ def eps(self, dtype: Type[np.number]) -> float:
float: Machine epsilon.
"""
return np.finfo(dtype).eps
def initialize_orthogonal_tensor_wrt_pivot(
    self,
    shape: Sequence[int],
    dtype: Optional[Type[np.number]] = None,
    pivot_axis: int = -1,
    seed: Optional[int] = None,
    backend: Optional[Union[Text, AbstractBackend]] = None,
    non_negative_diagonal: bool = False) -> Tensor:
Collaborator: I don't think we need this function.

  if seed:
    np.random.seed(seed)
  dtype = dtype if dtype is not None else np.float64
  if ((np.dtype(dtype) is np.dtype(np.complex128)) or
      (np.dtype(dtype) is np.dtype(np.complex64))):
    q, r = decompositions.qr(
        np,
        np.random.randn(*shape).astype(dtype) +
        1j * np.random.randn(*shape).astype(dtype),
        pivot_axis, non_negative_diagonal)
Collaborator: there is an else clause missing, otherwise line 804 gets overwritten

  q, r = decompositions.qr(np, np.random.randn(*shape).astype(dtype),
                           pivot_axis, non_negative_diagonal)
  return q
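
Following the comment above, here is a minimal sketch of how the missing else clause could look inside the NumPy backend method, so the complex-dtype result is not overwritten. This is a suggested fix for illustration, not the code in this PR; the backend argument from the PR's signature is omitted here for brevity.

def initialize_orthogonal_tensor_wrt_pivot(
    self,
    shape: Sequence[int],
    dtype: Optional[Type[np.number]] = None,
    pivot_axis: int = -1,
    seed: Optional[int] = None,
    non_negative_diagonal: bool = False) -> Tensor:
  # Sketch of the fix suggested in the review: keep the real-dtype branch in
  # an else clause so the complex-dtype QR result is not overwritten.
  if seed:
    np.random.seed(seed)
  dtype = dtype if dtype is not None else np.float64
  if np.dtype(dtype) in (np.dtype(np.complex128), np.dtype(np.complex64)):
    tensor = (np.random.randn(*shape).astype(dtype) +
              1j * np.random.randn(*shape).astype(dtype))
  else:
    tensor = np.random.randn(*shape).astype(dtype)
  q, _ = decompositions.qr(np, tensor, pivot_axis, non_negative_diagonal)
  return q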
12 changes: 12 additions & 0 deletions tensornetwork/backends/numpy/numpy_backend_test.py
@@ -974,3 +974,15 @@ def test_item(dtype):
def test_eps(dtype):
  backend = numpy_backend.NumPyBackend()
  assert backend.eps(dtype) == np.finfo(dtype).eps


@pytest.mark.parametrize("dtype", np_dtypes)
def test_initialize_orthogonal_tensor_wrt_pivot_dtype(dtype):
  backend = numpy_backend.NumPyBackend()
  a = backend.initialize_orthogonal_tensor_wrt_pivot(
      (4, 4), dtype=dtype, pivot_axis=-1, seed=10, non_negative_diagonal=False)
  assert a.dtype == dtype


@pytest.mark.parametrize("dtype", np_dtypes)
def test_initialize_orthogonal_tensor_wrt_pivot_shape(dtype):
  backend = numpy_backend.NumPyBackend()
  a = backend.initialize_orthogonal_tensor_wrt_pivot(
      (4, 4), dtype=dtype, pivot_axis=-1, seed=10, non_negative_diagonal=False)
  assert a.shape[0] == 4
5 changes: 5 additions & 0 deletions tensornetwork/linalg/initialization.py
@@ -21,6 +21,7 @@
from tensornetwork import backend_contextmanager
from tensornetwork import backends
from tensornetwork.tensor import Tensor
from tensornetwork.linalg import linalg

AbstractBackend = abstract_backend.AbstractBackend

@@ -200,3 +201,7 @@ def random_uniform(shape: Sequence[int],
the_tensor = initialize_tensor("random_uniform", shape, backend=backend,
seed=seed, boundaries=boundaries, dtype=dtype)
return the_tensor
def initialize_orthogonal_tensor_wrt_pivot(
    shape: Sequence[int],
    dtype: Optional[Type[np.number]] = None,
    pivot_axis: int = -1,
    seed: Optional[int] = None,
    backend: Optional[Union[Text, AbstractBackend]] = None,
    non_negative_diagonal: bool = False) -> Tensor:
Collaborator: I'm wondering if we could find a less clunky name. Some possibilities that come to my mind are random_orthogonal or random_isometry @alewis?

Collaborator: Pls add a docstring that explains what the function is doing, what the arguments are, and what the returned values are.

  the_tensor = initialize_tensor("randn", shape, backend=backend,
                                 seed=seed, dtype=dtype)
  q, r = linalg.qr(the_tensor, pivot_axis, non_negative_diagonal)
Collaborator: use _ instead of r (unused variable)

  return q
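
Folding in the review comments above (docstring added, the unused r replaced with _), a possible revised version of the free function might look like the sketch below. The docstring wording is illustrative, not taken from the PR, and the PR's original name is kept rather than the reviewer's suggested alternatives.

def initialize_orthogonal_tensor_wrt_pivot(
    shape: Sequence[int],
    dtype: Optional[Type[np.number]] = None,
    pivot_axis: int = -1,
    seed: Optional[int] = None,
    backend: Optional[Union[Text, AbstractBackend]] = None,
    non_negative_diagonal: bool = False) -> Tensor:
  """Return a random tensor whose matricization at `pivot_axis` is orthogonal.

  A Gaussian random tensor of the given shape is generated, a QR
  decomposition is taken with respect to `pivot_axis`, and the Q factor is
  returned, so the matricized tensor has orthonormal columns.

  Args:
    shape: Shape of the random tensor fed to the QR decomposition.
    dtype: Optional dtype of the tensor.
    pivot_axis: Axis at which the tensor is matricized for the QR step.
    seed: Optional seed for the random number generator.
    backend: Optional backend name or backend object.
    non_negative_diagonal: If True, gauge the QR factors so that R has a
      non-negative diagonal.

  Returns:
    Tensor: The orthogonally initialized tensor (the Q factor).
  """
  the_tensor = initialize_tensor("randn", shape, backend=backend,
                                 seed=seed, dtype=dtype)
  q, _ = linalg.qr(the_tensor, pivot_axis, non_negative_diagonal)
  return q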
1 change: 1 addition & 0 deletions tensornetwork/linalg/tests/TensorNetwork
Submodule TensorNetwork added at 20dc78
15 changes: 15 additions & 0 deletions tensornetwork/linalg/tests/initialization_test.py
@@ -177,3 +177,18 @@ def inner_zero_test(dtype):
numpyCheck = backend_obj.zeros(n.shape, dtype=dtype)
np.testing.assert_allclose(tensor.array, tensorCheck)
np.testing.assert_allclose(numpyT.array, numpyCheck)

def test_initialize_orthogonal_tensor_wrt_pivot(backend):
  shape = (5, 10, 3, 2)
  pivot_axis = 1
Collaborator: pls extend test to several values of the pivot axis

seed = int(time.time())
Collaborator: pls use deterministic seed initialization

np.random.seed(seed=seed)
Collaborator: that line seems superfluous

backend_obj = backends.backend_factory.get_backend(backend)
for dtype in dtypes[backend]["rand"]:
tnI = tensornetwork.initialize_orthogonal_tensor_wrt_pivot(
shape,
dtype=dtype,pivot_axis,
Collaborator: that line should throw a syntax error because you're passing an argument between named arguments

seed=seed,
backend=backend,non_negative_diagonal)
Collaborator: same here

npI = backend_obj.initialize_orthogonal_tensor_wrt_pivot(shape, dtype=dtype, pivot_axis, seed=seed,non_negative_diagonal)
Collaborator: remove the function from the backend

np.testing.assert_allclose(tnI.array, npI)
Collaborator: pls replace with a test that checks if the initialized tensor has the desired properties
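
A sketch of the kind of property check the reviewer is asking for: matricize the returned tensor and verify its columns are orthonormal, across several pivot axes and with a deterministic seed. This is hypothetical test code, not part of the PR; it assumes the returned tensor follows the QR convention used above (left axes of the requested shape followed by a single center axis) and that tnI.array converts to a NumPy array.

def test_initialize_orthogonal_tensor_wrt_pivot_is_isometric(backend):
  shape = (5, 10, 3, 2)
  seed = 10  # deterministic seed, as requested in the review
  for dtype in dtypes[backend]["rand"]:
    for pivot_axis in range(1, len(shape)):
      tnI = tensornetwork.initialize_orthogonal_tensor_wrt_pivot(
          shape, dtype=dtype, pivot_axis=pivot_axis, seed=seed,
          backend=backend)
      arr = np.array(tnI.array)
      # The Q factor is matricized at its last axis; its columns should be
      # orthonormal, i.e. Q^H Q should equal the identity.
      mat = arr.reshape(-1, arr.shape[-1])
      np.testing.assert_allclose(np.conj(mat.T) @ mat,
                                 np.eye(mat.shape[1]), atol=1e-5)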