Make TensorVariable interface more similar to that of numpy.ndarray #1080

Open
abalkin opened this Issue Nov 18, 2012 · 21 comments

@abalkin
Contributor

abalkin commented Nov 18, 2012

See also gh-1216

Numpy's ndarray instances support many convenience methods, most of which are already implemented as global functions in theano.tensor but are not available as TensorVariable members.

Here is the summary:

 +-----------------+-----------------------------+--------------------------------+
 | ndarray         | Theano                      | Status                         |
 +-----------------+-----------------------------+--------------------------------+
 | x.argmax()      | T.argmax(x)                 | DONE                           |
 | x.argmin()      | T.argmin(x)                 | DONE                           |
 | x.argsort()     | T.sort.argsort(x)           | DONE                           |
 | x.choose()      | ---                         | T.switch(x) is numpy.where(x)  |
 | x.clip()        | T.clip(x)                   | DONE                           |
 | x.compress()    | ---                         |                                |
 | x.conj()        | T.conj(x)                   | DONE                           |
 | x.cumprod()     | T.cumprod(x)                | DONE w/o dtype=                |
 | x.cumsum()      | T.cumsum(x)                 | DONE w/o dtype=                |
 | x.diagonal()    | T.diagonal(x)               | DONE                           |
 | x.dot(y)        | T.dot(x, y) or x.__dot__(y) | DONE                           |
 | x.fill(sval)    | T.fill(x, sval)             | DONE                           |
 | x.imag          | T.imag(x)                   | DONE                           |
 | x.nonzero()     | T.nonzero(x)                | DONE                           |
 | x.ptp()         | T.ptp(x, axis)              | DONE                           |
 | x.put()         | ---                         |                                |
 | x.ravel()       | T.flatten(x)                | DONE w/o order=                |
 | x.real          | T.real(x)                   | DONE                           |
 | x.repeat()      | T.repeat(x)                 | DONE                           |
 | x.round()       | T.round(x)                  | DONE w/o decimals=             |
 | x.searchsorted()| T.searchsorted(x)           | DONE                           |
 | x.sort()        | T.sort(x)                   | DONE                           |
 | x.squeeze()     | T.squeeze(x)                | DONE                           |
 | x.swapaxes()    | T.swapaxes(x)               | DONE                           |
 | x.std()         | T.std(x)                    | DONE                           |
 | x.take(i)       | T.take(x, i)                | DONE                           |
 | x.trace()       | SB.linalg.trace(x)          | DONE                           |
 +-----------------+-----------------------------+--------------------------------+
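For readers coming from NumPy, here is what the left and middle columns assert, as a runnable NumPy-only session (the Theano spellings are the ones in the table):

```python
import numpy as np

x = np.array([[3, 1, 4],
              [1, 5, 9]])

# Each ndarray method in the left column has a module-level twin; the
# issue asks TensorVariable to mirror this pairing.  A few table rows:
assert x.argmax() == np.argmax(x)                        # flattened argmax
assert (x.clip(2, 5) == np.clip(x, 2, 5)).all()
assert (x.cumsum(axis=0) == np.cumsum(x, axis=0)).all()
assert (x.repeat(2) == np.repeat(x, 2)).all()
assert (x.ravel() == np.ravel(x)).all()
```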
@nouiz
Member

nouiz commented Nov 19, 2012

That is a good idea. We started that, but we didn't finish.

@nouiz
Member

nouiz commented Nov 19, 2012

I set the 0.6.1 milestone, as this is quick to do by reusing existing code. But we need someone to do it.

@abalkin
Contributor

abalkin commented Nov 19, 2012

I'll pick some low-hanging fruit like dot = dot, focusing on features that will help implement the linalg functions. It looks like some of them can be copied from numpy.linalg verbatim once the ndarray methods are implemented.

@nouiz
Member

nouiz commented Nov 19, 2012

I'm not sure I understand what you wrote. I think that in your table, everything that has an entry in the Theano column is easy, as the implementation already exists in Theano. What needs to be added are the methods in _tensor_py_operators in the file tensor/basic.py. Or do you mean you will try the ones without a Theano implementation?

I'll add the Theano equivalents of take and trace to the table.
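As a rough illustration of the pattern described above: the module-level function already exists, and the "method" is just a thin wrapper collected on a mixin class. The names t_argmax, Tensor, and TensorOperatorsMixin below are invented for this sketch; Theano's real mixin is _tensor_py_operators in theano/tensor/basic.py, and the real ops build symbolic graphs rather than computing values.

```python
def t_argmax(x):
    """Stand-in for an existing module-level op such as T.argmax."""
    return max(range(len(x.data)), key=x.data.__getitem__)

class TensorOperatorsMixin:
    def argmax(self):
        # The method simply delegates to the module-level function.
        return t_argmax(self)

class Tensor(TensorOperatorsMixin):
    def __init__(self, data):
        self.data = list(data)

v = Tensor([3, 1, 4, 1, 5, 9, 2])
assert v.argmax() == t_argmax(v) == 5  # method and function agree
```

This is why most rows of the table are "trivial": the right-hand implementation exists, and only the one-line delegating method is missing.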

@abalkin
Contributor

abalkin commented Nov 19, 2012

There are four levels of difficulty here: trivial - just add an alias to an existing method; simple - adapt an existing function to work as a method or attribute (x.T, x.conj(), x.real, etc.); implementation required - methods with --- in the Theano column; design question - mutating methods like .sort() - should these be implemented at all?

I think it will all become clearer once I show the code, but the above is the rough order in which I intend to tackle this issue. I will probably switch back to #1057 once I have what I need.

@nouiz
Member

nouiz commented Nov 19, 2012

OK. I'll check the code when you make a new PR.

Thanks.

nouiz added a commit that referenced this issue Nov 27, 2012

Merge pull request #1088 from abalkin/issue-1080
Issue #1080: Make TensorVariable interface more similar to that of numpy.ndarray
@abalkin
Contributor

abalkin commented Dec 7, 2012

I started implementing the take op and ran into the following inconsistency in numpy:

>>> x = np.zeros((2,3,4))
>>> x[:,:,1]
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])
>>> x.take(1,axis=2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: object of too small depth for desired array

I am not sure I understand this error message, and numpy.take() seems to be under-documented. Does anyone know if there are any other cases where take() is not equivalent to advanced indexing?

@lamblin
Member

lamblin commented Dec 7, 2012

Apparently, x.take needs a list of indices:

>>> a.take([1], axis=2)
array([[[ 0.2],
        [ 0.2],
        [ 0.2]],

       [[ 0.2],
        [ 0.2],
        [ 0.2]]])
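To spell out the distinction (checked against a recent NumPy; the 2012 ValueError above appears to have been a NumPy limitation that has since been lifted, so a scalar index works today):

```python
import numpy as np

x = np.zeros((2, 3, 4))

# A list of indices keeps the indexed axis (here with length 1), which is
# what the take([1], axis=2) output above shows:
assert x.take([1], axis=2).shape == (2, 3, 1)

# Basic indexing drops the axis:
assert x[:, :, 1].shape == (2, 3)

# In recent NumPy a scalar index is accepted as well, and it drops the
# axis just like x[:, :, 1] does:
assert x.take(1, axis=2).shape == (2, 3)
```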

@abalkin abalkin referenced this issue Dec 7, 2012

Merged

Take op [WIP] #1127

@nouiz
Member

nouiz commented Feb 18, 2013

gh-1181 adds x.nonzero()

@jsalvatier
Contributor

jsalvatier commented Jan 10, 2014

I could really use the cumsum function.

@jsalvatier
Contributor

jsalvatier commented Jan 12, 2014

Here's a quick implementation of cumsum for vectors: https://gist.github.com/jsalvatier/8378901. I think grad only works for the 1d case.
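On the grad question: for the 1d case, the Jacobian of cumsum is a lower-triangular matrix of ones, so the vector-Jacobian product is just a reversed cumsum of the upstream gradient. A plain-NumPy check of that identity (independent of the gist above; not Theano code):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.cumsum(x)                 # array([1., 3., 6.])

# y = J @ x with J lower-triangular ones: dy_i/dx_j = 1 for j <= i.
J = np.tril(np.ones((3, 3)))
assert np.allclose(y, J @ x)

# Hence, for an upstream gradient g, the VJP J.T @ g equals a reversed
# cumsum of g -- the rule a 1d cumsum op's grad would implement.
g = np.array([0.1, 1.0, 10.0])
vjp = np.cumsum(g[::-1])[::-1]
assert np.allclose(vjp, J.T @ g)
```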

@nouiz
Member

nouiz commented Jan 13, 2014

Can you make a PR out of that?

You can put it in theano/tensor/extra_ops.py

Thanks.

@nouiz
Member

nouiz commented Jan 18, 2014

Another user asked for this today, so it seems you are not the only one wanting this :)

@tsirif
Contributor

tsirif commented Mar 22, 2016

How much still needs to be done for this? Does anybody know which calls or operations are left?
I would like to take it on for a GSoC '16 PR.

@gokul-uf
Contributor

gokul-uf commented Mar 22, 2016

@tsirif check out the ideas in the low-priority section here: https://github.com/Theano/Theano/wiki/GSoC2016

@tsirif
Contributor

tsirif commented Mar 22, 2016

@gokul-uf I didn't describe this well, sorry. I mean I want to do a PR in order to be eligible to participate in GSoC. Is there anything available concerning this issue?

@gokul-uf
Contributor

gokul-uf commented Mar 22, 2016

I'm not sure; I have not been following this. @nouiz or @lamblin would be able to answer your query.

@MarcCote
Contributor

MarcCote commented Mar 22, 2016

T.searchsorted doesn't seem to exist. I just remembered I had started it a while back. This might help:
https://github.com/MarcCote/Theano/blob/searchsorted/theano/tensor/extra_ops.py#L11

@tsirif
Contributor

tsirif commented Mar 22, 2016

Thanks, I will look this up!

@hlin117

hlin117 commented May 2, 2016

@tsirif searchsorted is now available in the master branch. However, it does not work on the GPU yet:
https://github.com/Theano/Theano/blob/master/theano/tensor/extra_ops.py#L177
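For context, the NumPy semantics the new op mirrors: searchsorted takes a sorted array and returns, via binary search, the index at which each value would have to be inserted to keep the array sorted (plain NumPy, not the Theano op):

```python
import numpy as np

a = np.array([1, 3, 5, 7])  # must already be sorted

assert np.searchsorted(a, 4) == 2                     # 4 slots between 3 and 5
assert np.searchsorted(a, 5) == 2                     # side='left' is the default
assert np.searchsorted(a, 5, side='right') == 3       # insert after equal entries
assert np.searchsorted(a, [0, 8]).tolist() == [0, 4]  # vectorized over values
```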

@tsirif
Contributor

tsirif commented May 2, 2016

@hlin117 Yes, I wrote a note in the docs referring explicitly to the fact that there is only a CPU implementation yet. Check #4422.

Do you have something to suggest for a GPU implementation? Is there a GPU library that implements it already? (I guess it should not use binary search.)
