A new PEP for infix matrix multiplication #4351

Merged

merged 34 commits on Jul 6, 2014

Conversation

@njsmith
Member

njsmith commented Feb 22, 2014

This is now PEP 465, and a properly rendered version can be found here.

A poorly rendered version of the latest draft can be found here.

Let's use this PR as a central place to discuss this draft PEP for adding an infix matrix multiplication operator to Python. The PR format is nice in that it allows line-by-line comments, updating, etc.

Hopefully this goes without saying, but just in case: We especially welcome comments about how the proposal will work for non-numpy projects (or could work, with fixes); this proposal started in the numpy community but the idea here is to build a consensus about how to make Python better for all projects that care about linear algebra.

Some possible points of discussion:

  • Does this in fact seem like a good idea?
  • Do people agree with the points where I claim "there is consensus that..."? (You can search on "consensus")
  • Any suggestions on making the argument stronger and clearer?
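A minimal before/after sketch of what the proposal buys (the names are arbitrary; the @ form assumes the proposed operator, which ultimately landed in Python 3.5 and numpy 1.10):

```python
import numpy as np

S = np.random.rand(4, 4)
X = np.random.rand(4, 4)
beta = np.random.rand(4)

# Status quo: matrix multiplication via nested function calls
y_old = np.dot(np.dot(X.T, S), beta)

# With the proposed infix operator
y_new = X.T @ S @ beta

assert np.allclose(y_old, y_new)
```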
@jtratner

jtratner commented Feb 22, 2014

👍 well written and I'd be for it (for what that's worth), plus it'd be a big incentive to move to Python 3 and reduce the 2/3 fragmentation. (even if the libraries themselves couldn't use those operators because of compatibility issues)

@fperez

Contributor

fperez commented Feb 22, 2014

Hey @njsmith, we had some discussions ages ago at SciPy'07 or '08, I did my best to summarize them here. That stuff is in an old bazaar repo, but you can get the reST sources here.

I never really followed this up beyond summarizing the conversations in that document. Just mentioning it here in case any of that ancient history is useful, feel free to ignore it in light of the last 5-6 years' worth of experience :)

@fperez

Contributor

fperez commented Feb 22, 2014

BTW, there's a trivial makefile too that I was using to make rendered versions of the rst sources; it would be handy to have such a view somewhere, as it does make it much nicer to read.

@msarahan

msarahan commented Feb 23, 2014

+1 from me. Nice work. Thanks for putting this together.

@toddrjen

toddrjen commented Feb 23, 2014

This leaves out things like cross product and matrix division. What about using something like @ as a prefix for an operator to define it as a matrix operation? So @*, @/, @**. You could then use @@ for the "reverse" version of that operator, so @@* would be cross product, @@/ would be back division.

@pv

Member

pv commented Feb 23, 2014

@toddrjen: the suggestion of using a prefix for multiple operators is similar to PEP-225, and it has not made progress.

Also, I think cross product and matrix division are lower priority than matrix multiplication.

@toddrjen

toddrjen commented Feb 23, 2014

Probably, but I would say matrix exponent is below all of those in importance.

I think the argument for including @@ is applicable to other mathematical operators as well, and I would say they are far more useful.

So I would either cut out @@, or better justify including it but not any of the other operators.

@abalkin

Contributor

abalkin commented Feb 23, 2014

It will be helpful to include a survey of what other languages use. For example, there is a precedent for using $ for dot/matrix multiply. R uses %*%. APL used an equivalent of +.* (multiply compose add).

@mforbes

mforbes commented Feb 23, 2014

@fperez You can view a Rendered version on github through the Files Changed tab.

@pv

Member

pv commented Feb 23, 2014

@fperez

Contributor

fperez commented Feb 23, 2014

Thanks, @pv, @mforbes; it would be good to add that link to the PR description at the top for readability; I don't have the permissions to edit that box. @njsmith?

@gdmcbain

gdmcbain commented Feb 24, 2014

In addition to the APL, Kx, and R syntax given by @abalkin above, I skimmed some from Rosetta Code:

| language | operator |
| --- | --- |
| APL | +.× |
| IDL | # |
| J | +/ .* |
| Julia | * |
| K | _mul |
| Mathematica | . |
| MATLAB/Octave | * (with .* for elementwise) |
| Maxima | . (with * for elementwise) |
| Pari/GP | * |
| R | %*% |
| Ruby | * (but only for a special matrix class) |
| TI-83 BASIC | * |

I'd say APL's use of non-ASCII characters was historically unpopular. IDL's hash clashes with Python's comment. J uses an ASCII reworking of APL, I think, so it's got a much more elaborate idea of combining multiple infix operators together; powerful but perhaps not really Pythonic. The dot of Mathematica and Maxima is nice but clashes with Python's object attribute/method/module accessor. Julia, Octave, Pari/GP, and TI-83 BASIC's star is nice but would be backward-incompatible with NumPy's elementwise multiplication; Ruby's star relies on the subclass idea eloquently rebutted by the original poster. R's and K's multicharacter things are pretty ugly.
I don't think any of these alternatives are superior to the proposed at-sign, given that star and dot are unavailable.
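As a side note on the R-style spellings: the closest today's Python gets to a %*%-like pseudo-operator is the well-known "infix object" hack, sketched below purely for illustration (its clumsiness is part of why a real operator is being proposed):

```python
class Infix:
    """Wrap a binary function so it can be spelled  a |op| b  (a classic hack)."""
    def __init__(self, func):
        self.func = func
    def __ror__(self, left):                 # left | op  ->  partially applied op
        return Infix(lambda right: self.func(left, right))
    def __or__(self, right):                 # (left | op) | right  ->  call it
        return self.func(right)

import numpy as np

dot = Infix(np.dot)
A = [[1, 0], [0, 1]]
B = [[2, 3], [4, 5]]
print(A |dot| B)                             # [[2 3]
                                             #  [4 5]]
```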

@cjw43

cjw43 commented Mar 18, 2014

I would like to see a more complete examination of np.Matrix before considering new operators. The PEP dismisses the Matrix class without sufficient justification.

There are perhaps warts with np.Matrix; can these be identified and removed with less pain than introducing two new operators?

Colin W.

On 15-Mar-2014 3:33 PM, Robert Bradshaw wrote:

> As has been pointed out, the issue with np.matrix vs np.ndarray is largely an illustration of http://en.wikipedia.org/wiki/Composition_over_inheritance . It is rare for the two different kinds of multiplication to be needed in the same formula. Also, this doesn't solve A + r where r is a scalar--if A is a matrix then this is A + Ir where I is the identity matrix. Subtraction too. Do we need @+ and @-?
>
> Similarly, when one writes A == 1 in a matrix context, we want to know if it's equal to the identity matrix (not the singular all-one matrix) and want a truth value (not an array of booleans that doesn't have a boolean operator, which, by the way, is another operator that could behave differently: the all-zero matrix is False). We're up to quite a few operators that have both matrix and array versions. Hmm... how to assign different meanings to the same set of operators in an object-oriented language?
>
> Do you have evidence that the pain of np.matrix is not due to the fact that it (sometimes, inconsistently) pretends to be an np.ndarray? If not, this PEP is not needed, and adding new (seldom used) operators is not to be done lightly.

@rkern

Member

rkern commented Mar 18, 2014

@cjw43 There have been several attempts over the years. The PEP acknowledges that history without attempting to recapitulate it, proposal for proposal. The PEP is not the beginning of the examination of the problems with a two-type solution, but the culmination of many years of experience with them (yes, plural). The past threads on the subject can be found in the mailing list archive.

By all means, please make concrete proposals if you have them. In the absence of concrete proposals, I think we have a reasonable case not to hold out hope that a good one will show up after all of these years.

@rlamy

Contributor

rlamy commented Mar 19, 2014

@alan-isaac

> In what sense? How many Python users have any real awareness of the bitwise operators (which have additionally been completely and usefully hijacked for other uses).

Every user of a library that hijacks these operators needs to be aware of both the hijacked and original meanings. Anybody who is used to a language where the power operator is written '^' gets painfully acquainted with operator.xor at some point. I bet that thousands of man-hours have been wasted answering the question "Why does 2^2 return 0 instead of 4?"

Any new operator increases the total complexity of the language, takes valuable developer time to implement, uses up some memory in the interpreter, etc. There are costs. They are small but they affect everyone in the Python ecosystem. Given that the benefits will only be felt by a small subset of the Python community, they had better be significant.
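The '^' trap referred to above is easy to reproduce in an interpreter:

```python
>>> 2 ^ 2            # bitwise XOR, not exponentiation
0
>>> 2 ** 2           # Python's actual power operator
4
>>> import operator
>>> operator.xor(2, 2)
0
```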

@toddrjen

toddrjen commented Mar 19, 2014

On Mar 19, 2014 5:13 AM, "Ronan Lamy" notifications@github.com wrote:

> Every user of a library that hijacks these operators needs to be aware of both the hijacked and original meanings. Anybody who is used to a language where the power operator is written '^' gets painfully acquainted with operator.xor at some point. I bet that thousands of man-hours have been wasted answering the question "Why does 2^2 return 0 instead of 4?"

What specific confusions are likely with this proposal?

Yes, in a general sense that could be an issue with new operators, but we are talking about a specific proposal here, so if you have such criticisms you need to explain how they are relevant to the current proposal.

> Any new operator increases the total complexity of the language, takes valuable developer time to implement, uses up some memory in the interpreter, etc. There are costs. They are small but they affect everyone in the Python ecosystem. Given that the benefits will only be felt by a small subset of the Python community, they had better be significant.

First, I don't think the scientific community is a small subset; I think it is a pretty large subset. Even when they don't directly use linear algebra in their code, a huge improvement in an operation that underlies a lot of other common data analysis operations will have a positive impact. The discussions covered this in a lot of detail. This operator will greatly simplify and improve key parts of the scientific Python software stack, which benefits all scientific Python users.

Second, you need to demonstrate that these changes will be anything other than negligible. For the memory example, are we talking megabytes, kilobytes, bytes? The question isn't just whether there are effects, the question is whether the effects are actually large enough to be noticeable in the real world. If someone really cared about every last byte they wouldn't be using Python to begin with.

Third, in terms of developer time, you can't just look at the time needed to implement the operator; you need to compare that to the developer time needed to maintain the status quo. Again, the mailing list discussions looked at this, and there is a lot of time wasted dealing with the current situation. And it is ongoing: as long as the status quo is maintained, further development in the scientific software stack will have to work around the current limitations.

@rkern

Member

rkern commented Mar 19, 2014

@rlamy I think that it's worth noting that no one on python-ideas, except perhaps yourself, has objected to the PEP on these grounds or asked us to do more soul searching about whether we really need the operator. These are people most of whom will never use matrix multiplication. It's nice to be concerned for them, but they can speak for themselves and have spoken in favor of the PEP.

@rkern

Member

rkern commented Mar 19, 2014

@toddrjen Implementing @ and @= will require two new pointers added to the PyNumberMethods table, thus increasing the size of many type objects by 8 or 16 bytes (e.g. all sequences that implement + for concatenation). This does not affect instances, only types, but with things like namedtuple, dynamically creating types is getting more common.
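At the Python level those slots correspond to the special methods the PEP proposes (__matmul__, __rmatmul__, __imatmul__). A minimal sketch of a third-party type hooking into them, purely for illustration and not numpy's implementation:

```python
class TinyMatrix:
    """Toy dense matrix stored as nested lists, just to show the new hooks."""
    def __init__(self, rows):
        self.rows = rows

    def __matmul__(self, other):             # self @ other
        n, k, m = len(self.rows), len(other.rows), len(other.rows[0])
        return TinyMatrix([[sum(self.rows[i][x] * other.rows[x][j]
                                for x in range(k))
                            for j in range(m)]
                           for i in range(n)])

    def __imatmul__(self, other):            # self @= other
        self.rows = (self @ other).rows
        return self

A = TinyMatrix([[1, 2], [3, 4]])
B = TinyMatrix([[5, 6], [7, 8]])
print((A @ B).rows)                          # [[19, 22], [43, 50]]
```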

@argriffing

Contributor

argriffing commented Mar 25, 2014

> @ should definitely have right precedence.

@pchanial they are suggesting left precedence and asking for compelling arguments against it on the mailing list.

@njsmith

Member

njsmith commented Apr 6, 2014

I've just added text for "same-left" associativity/precedence:
http://mail.scipy.org/pipermail/numpy-discussion/2014-April/069834.html
and sent the updated version of the PEP in to the official repository.
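For readers who haven't followed the list thread, "same-left" means @ gets the same precedence as * and groups left to right; a tiny sketch (matching what eventually shipped in Python 3.5) makes the grouping visible:

```python
class Sym:
    """Record how Python groups an expression (requires Python 3.5+)."""
    def __init__(self, name):
        self.name = name
    def __matmul__(self, other):
        return Sym('({} @ {})'.format(self.name, other.name))
    def __mul__(self, other):
        return Sym('({} * {})'.format(self.name, other.name))

a, b, c = Sym('a'), Sym('b'), Sym('c')
print((a @ b @ c).name)   # ((a @ b) @ c)  -- left-associative
print((a * b @ c).name)   # ((a * b) @ c)  -- @ binds exactly like *
print((a @ b * c).name)   # ((a @ b) * c)
```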

In doc/neps/return-of-revenge-of-matmul-pep.rst:

> converting back and forth all the time, is incredibly cumbersome and impossible to get right at any scale. Functions that defensively try to handle both types as input and DTRT, find themselves floundering into a swamp of ``isinstance`` and ``if`` statements.

@larsmans

larsmans Apr 11, 2014

Contributor

Actually handling both types as input is not so hard if you do it like scikit-learn, where matrices may come in, but never come out. The trouble starts when you also produce np.matrix when you get it as input.

@stefanv

stefanv Apr 11, 2014

Contributor

The point is still that libraries have to be aware of this double-class convention; otherwise only the broken sub-class solution works (or doesn't).

@rkern

rkern Apr 11, 2014

Member

Dealing with matrix in this fashion is not really an additional overhead to allowing lists or other sequences as inputs: you just always use np.asarray() on the inputs. If everyone did this, though, matrix would be useless since it would disappear whenever you passed one to a function. scikit-learn accepts this; it doesn't care about supporting matrix users except to transparently consume their data. It also doesn't have many functions that one would really use in expressions, so it's not much loss for matrix users. But there are a lot of other functions that do need to output matrix objects to be useful to matrix users, and this is difficult so few people do it right. That's why the status quo sucks. You can choose between having an operator for matrix multiplication by using the matrix type or having the full range of the community's code available to you. You can't have both. And just to be clear, there is nothing about this situation that depends on the implementation details of numpy.matrix. Subclass or not, all of these issues remain as long as you have two types that you want to interoperate.
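A minimal sketch of the np.asarray() idiom described above (the function name is just for illustration):

```python
import numpy as np

def gram(X):
    """Accept lists, ndarrays, or np.matrix instances; always compute
    (and return) with plain ndarray semantics."""
    X = np.asarray(X)               # np.matrix input quietly becomes an ndarray
    return np.dot(X.T, X)

gram([[1.0, 2.0], [3.0, 4.0]])               # list in, ndarray out
gram(np.matrix([[1.0, 2.0], [3.0, 4.0]]))    # matrix in, ndarray out
```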

@larsmans

larsmans Apr 11, 2014

Contributor

@rkern I agree with you. What I was really trying to point out is that DTRT is ambiguous in this context, because to a new user there may seem to be two "obvious" ways of "doing it right".

@rkern

rkern Apr 11, 2014

Member

Point.

@njsmith

njsmith Apr 11, 2014

Member

Ah, I see. That sentence was meant as an alternative to the previous one, i.e., the first point is that keeping track of which functions expect what is impossible, so the obvious alternative is to write functions in such a way that you don't need to keep track, but that is also impossible...

@cjw43

cjw43 Apr 11, 2014

On 11-Apr-2014 7:39 AM, Stefan van der Walt wrote:

> The point is still that libraries have to be aware of this double-class convention, otherwise only the broken sub-class solution works (or doesn't).

Stefan,

Thanks for this clarification. You say "Code that expects an ``ndarray`` and gets a ``matrix``, or vice-versa, may crash or return incorrect results." I would look at this as an error in the existing code. Would it not be simpler to correct this than to construct code for a new operator? Has this been explored? It's been a long time since I looked at the code.

It would help if some example of the need for "converting back and forth" could be given.

Colin W.

@njsmith

njsmith Apr 11, 2014

Member

No, it wouldn't be simpler. Fixing every piece of code to handle both types as inputs and outputs means that every function must have special checks, which creates a combinatorial explosion in code complexity and is a serious obstacle to maintainability, testing, etc.

This is why, if you look at actual code, there are few (possibly zero) libraries that actually handle mixed ndarray and matrix objects correctly (even numpy itself doesn't in many cases) -- it's just not doable. Everyone prefers to use ndarray objects with the cumbersome dot() function call syntax for matrix multiplication. It's annoying, but it's still better than checking that every single code path can handle arrays and matrices (and combinations thereof, etc.). (In fact, there are some places where scikit-learn is forced to handle matrix objects because scipy.sparse only provides a matrix-style API, and even in these cases they don't actually use '*' for matrix multiplication -- which is the point that started this whole thread. Instead they have a special safe_sparse_dot() function which provides a dot()-style API that works for both arrays and matrices.)
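For readers unfamiliar with it, that kind of helper boils down to something like the following sketch (illustrative only, not the project's actual code; the name is marked as such):

```python
import numpy as np
from scipy import sparse

def safe_sparse_dot_sketch(a, b):
    # scipy.sparse matrices spell matrix multiplication as '*',
    # plain ndarrays need np.dot(); hide the difference behind one call.
    if sparse.issparse(a) or sparse.issparse(b):
        return a * b
    return np.dot(a, b)
```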

@stefanv

stefanv Apr 11, 2014

Contributor

@cjw43 Here's a library's code snippet for squaring the elements of an input array:

def square_it(A):
    return A * A

That code is now broken, even though the library author has never even heard of a Matrix class. The new, correct code becomes:

def square_it(A):
    A = np.asarray(A)
    return A * A

That's not a problem as long as all consumers out there are aware of this possibility, but you are forcing a convention on programmers you don't control, and one which is auxiliary to their intention. Contrast that to the following library code:

def square_it(A):
    return A * A

We know what this does. It squares the elements of the array.

def square_it(A):
    return A @ A

We know what this does: it performs a dot product between the two matrices.
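For concreteness, the failure mode described above can be reproduced in a couple of lines:

```python
import numpy as np

def square_it(A):
    return A * A                    # intended: elementwise square

a = np.array([[1, 2], [3, 4]])
m = np.matrix([[1, 2], [3, 4]])

print(square_it(a))   # [[ 1  4]
                      #  [ 9 16]]   -- elementwise, as the author intended
print(square_it(m))   # [[ 7 10]
                      #  [15 22]]   -- silently a matrix product instead
```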

@samwyse

samwyse commented Apr 21, 2014

This is probably a day late and a dollar short (as my Mom used to say) but has any serious consideration ever been given to defining dot (".") as an inner product operator? That's the way that APL does it, but every time it's brought up, it is dismissed out of hand. Yes, it would give an additional, different, meaning to ".", but "%" is used for very different purposes without too much confusion. Using the string "+.*" in Python code is a syntax error, so we don't have to worry about breaking pre-existing code. I envision A+.*B as syntactic sugar for

inner_product(operator.add, operator.mul)(A, B)

where inner_product is defined (for simple sequence objects) as equivalent to this:

class inner_product(object):
    def __init__(self, f, g):
        self.f, self.g = f, g
    def __call__(self, a, b):
        return reduce(self.f, (self.g(*pair) for pair in zip(a, b)))

The compiler would be allowed to instantiate +.* only once per sequence type, no matter how many times it appeared in the code; however, non-standard sequence types (such as numpy's ndarrays) would be able to provide alternative implementations which would only be used by those types. (I envision a dict attached to the PySequenceMethods struct that would cache, for example, inner_product(operator.add, operator.mul) under the key "+.*".)
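As a usage note, the inner_product class above already works for plain sequences today (on Python 3 it needs reduce imported from functools); repeated here so the demo is self-contained:

```python
from functools import reduce     # reduce is a builtin only on Python 2
import operator

class inner_product(object):     # samwyse's class, restated for a runnable demo
    def __init__(self, f, g):
        self.f, self.g = f, g
    def __call__(self, a, b):
        return reduce(self.f, (self.g(*pair) for pair in zip(a, b)))

dot = inner_product(operator.add, operator.mul)   # the "+.*" combination
print(dot([1, 2, 3], [4, 5, 6]))                  # 1*4 + 2*5 + 3*6 == 32
```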

@seberg

Member

seberg commented Apr 21, 2014

@samwyse, I am not sure how this could be an option. . already is the __getattr__ operator. There would be no way I can see to distinguish between A @ B and A . B (or as it would normally be written: A.B), neither for the user nor for the language.
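The point is easy to check: whitespace around '.' doesn't change how it parses, so "A . B" is already attribute access, not an operator application:

```python
class Demo:
    B = 42

A = Demo()
print(A . B)   # 42 -- parsed exactly like A.B: '.' is attribute lookup,
print(A.B)     # 42    so it cannot double as a multiplication operator
```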

@njsmith

Member

njsmith commented Apr 21, 2014

Hi Sam,

I think the challenge you'd face if you tried proposing this to python-ideas/-dev is to justify what all the extra complexity gets you. This proposal would require a lot of extra machinery that Python doesn't otherwise use (there are no other "meta-operators" like this), and it's not clear that the extra complexity provides extra value -- like, it allows a lot of other weird operators like *.+ and /./ and so forth, but would anyone actually use these? If you look at the discussion of the PEP on the Python lists, the core devs spoke explicitly about how they felt it hit the "sweet spot" of being just the minimal bit of added functionality needed to provide 99% of the value.

-n

@samwyse

samwyse commented Apr 21, 2014

@seberg: I'm proposing "." as a meta-operator, which would not be used by itself. That particular character is currently used to access attributes, and also to separate the integral and fractional parts of floating point numbers. This would be a third use.

@samwyse

samwyse commented Apr 21, 2014

@njsmith: Way back when I learned APL, I recall lots of ways that f,g could be used besides matrix multiplication. For example, '<.&' would tell you if every number in a list was smaller than the corresponding number in another list. That said, my use of '&' in my example (as a logical and) has different semantics from Python's definition ('&' is bitwise and). Really, you'd want to use '<.and' which I suspect would really only be parsable because of 'and' being a keyword; and yes, Python already allows 'all(x < y for x, y in zip(a,b))' which means the same thing (albeit not as concisely). Iverson provides several examples, but they tend to use APL's 'max' and 'min' operators, which in Python aren't even keywords, and so would look to the compiler a lot like attribute access; I suspect that a proposal to implement GCC's minimum and maximum operators ('<?' and '>?') would be met with cat-calls.

@njsmith

Member

njsmith commented Apr 21, 2014

I think a relevant data point here is that back in the day, Guido was reluctant to even add the power operator ** -- it didn't show up until Python 1.4. Since then, the only new punctuation operators have been // (which was necessary to resolve the int/float mess), and now @. So yeah, I don't think a proposal for a full APL-like system, min/max operators, etc., is likely to go very far :-).

@cjw43

cjw43 commented Apr 22, 2014

I see the relative merits of a
    Matrix Class and the proposed operator as having no been
    adequately explored.
    The destruction of the Class was not made clear in the PEP.
    Colin W.
On 4/21/2014 12:39 PM, Sam Denton
  wrote:

  This is probably a day late and a dollar short (as my Mom used
    to say) but has any serious consideration ever been given to
    defining dot (".") as an inner product operator? That's the way
    that APL does it, but every time it's brought up, it is
    dismissed out of hand. Yes, it would give an additional,
    different, meaning to ".", but "%" is used for very different
    purposes without too much confusion. Uisng the string "+.*" in
    Python code is a syntax error, so we don't have to worry about
    breaking pre-existing code. I envision A+.*B as syntactic sugar
    for
  inner_product(operator.add, operator.mul)(A, B)

  where inner_product is defined (for simple sequence objects) as
    equivalent to this:
  class inner_product(object):
def __init__(self, f, g):
    self.f, self.g = f, g
def __call__(self, a, b):
    return reduce(self.f, (self.g(*pair) for pair in zip(a, b)))

  The compiler would be allowed to instantiate +.* only once per
    sequence type, no matter how many times it appeared in the code,
    however non-standard sequence types (such as numpy's ndarrays)
    would be able to provide alternative implementations which would
    only be used by those types. (I envision a dict attached to the
    PySequenceMethods struct that would cache, for example,
    inner_product(operator.add, operator.mul) under the key '"+.*")
  —
    Reply to this email directly or view
      it on GitHub.
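
For concreteness, here is a small usage sketch of the inner_product helper quoted above, applied to plain Python lists; the example vectors are invented, and the point is only that a hypothetical A+.*B over add and mul reduces to the ordinary dot product:

    import operator
    from functools import reduce

    class inner_product(object):
        # Generalized inner product: fold f over pairwise applications of g.
        def __init__(self, f, g):
            self.f, self.g = f, g
        def __call__(self, a, b):
            return reduce(self.f, (self.g(*pair) for pair in zip(a, b)))

    # A +.* B would desugar to inner_product(operator.add, operator.mul)(A, B):
    dot = inner_product(operator.add, operator.mul)
    print(dot([1, 2, 3], [4, 5, 6]))        # 32, i.e. 1*4 + 2*5 + 3*6

    # The '<.and' reading mentioned earlier: are all pairwise comparisons true?
    all_less = inner_product(lambda x, y: x and y, operator.lt)
    print(all_less([1, 2, 3], [4, 5, 6]))   # True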

@rkern

Member

rkern commented Apr 22, 2014

@cjw43 There is no "destruction of the Class" to be discussed in the PEP. If you are talking about the possible deprecation of the matrix class, it will only come quite a few years from now, if at all, and is irrelevant to the PEP, as it is entirely internal to numpy.

Please, please, please use the GitHub interface for commenting, or else trim your email replies to just your content. But as the PEP has already been accepted, continuing to press this particular point does not appear to be a good use of anyone's time. The decision-makers in the core Python dev team consider the amount of discussion in the PEP as it stands perfectly adequate.

@charris

Member

charris commented Jul 6, 2014

@njsmith Ready to merge?

@njsmith

Member

njsmith commented Jul 6, 2014

Oh right, this PR.

Sure, I guess that makes sense to save the history, though I suppose the version in the PEPs repository is the canonical one now. They should be identical though.

charris added a commit that referenced this pull request Jul 6, 2014

Merge pull request #4351 from njsmith/matmul-pep
A new PEP for infix matrix multiplication

@charris charris merged commit 1c71d46 into numpy:master Jul 6, 2014

1 check passed

continuous-integration/travis-ci The Travis CI build passed
@charris

Member

charris commented Jul 6, 2014

History saved! Thanks for that.

charris added a commit to charris/numpy that referenced this pull request Jul 7, 2014

Merge pull request numpy#4351 from njsmith/matmul-pep
A new PEP for infix matrix multiplication
@jiahao

jiahao commented Nov 24, 2015

Super late to the party, but I'd like to cross-reference JuliaLang/julia#4774 here, where it looks like very similar discussions have taken place in the context of how to define vector transposition.

@owendavey

owendavey commented Jan 30, 2016

Re the use of a potential @@ operator: in a lot of mathematical contexts we actually consider exp(A) for some matrix A, where exp(A) = I + A + A@@2/2! + A@@3/3! + A@@4/4! + ..., continuing like the Taylor series. Other useful contexts for the @@ operator exist, but it is definitely specifically for math people and may be noise to most non-scientific programmers.

@dimpase

dimpase commented Jan 30, 2016

On 30 January 2016 at 17:12, owendavey notifications@github.com wrote:

    Re the use of a potential @@ operator: in a lot of mathematical contexts we actually consider exp(A) for some matrix A, where exp(A) = I + A + A@@2/2! + A@@3/3! + A@@4/4! + ..., continuing like the Taylor series. Other useful contexts for the @@ operator exist, but it is definitely specifically for math people and may be noise to most non-scientific programmers.

In computer graphics, one multiplies matrices a lot. Considering that this is certainly going beyond "science", I don't quite understand what you meant to say.

@owendavey

owendavey commented Jan 30, 2016

@dimpase - I was not referring to matrix multiplication; I was referring to matrix exponentiation - hence the double @ operator, which above seems to have been considered not important enough to add in addition to the single @ matrix multiplication operator in this PEP.

@rkern

Member

rkern commented Jan 30, 2016

The discussion here has ended. The PEP has been accepted and implemented. If you would like to propose or discuss the merits of further changes, please find an appropriate mailing list. Thanks.
