ENH: vectorize cov
#507
Conversation
Should address the lint failures.
thanks Matt, LGTM!
BTW @lucascolley, while we were at it, I wanted to ask about 1) enforcing use of double precision and 2) squeezing all singleton dimensions at the end.
I've heard it argued that double precision arithmetic is essential for accurate covariance calculations.
Double precision arithmetic is essentially always important for "accurate" calculations. Personally, I've never used float32 intentionally, but I understand it's important, and I don't see why fundamental libraries should make the choice for the user.
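To make the trade-off concrete, here is a minimal sketch (not this PR's implementation; `cov_preserving_dtype` is a hypothetical helper) contrasting NumPy's default upcast with arithmetic that stays in the user's dtype:

```python
import numpy as np

def cov_preserving_dtype(x):
    # Hypothetical sketch: covariance of the rows of x, computed in
    # x's own dtype rather than forcing float64 on the user.
    x = x - x.mean(axis=-1, keepdims=True)
    n = x.shape[-1]
    return (x @ x.T) / (n - 1)

x32 = np.random.default_rng(0).random((3, 100), dtype=np.float32)
print(np.cov(x32).dtype)                # float64: the library chose
print(cov_preserving_dtype(x32).dtype)  # float32: the user's choice stands
```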
As for the "indiscriminate squeeze": I can see why you might want to eliminate a dimension if the covariance between two univariate samples were viewed as a reducing operation. But that calculation isn't really possible with one-argument cov, and there is no reason to use cov to take the variance of a single univariate sample. So I don't see a good use case for 1D input anyway, and for any other dimensionality, it's not good to eliminate singleton batch dimensions. A sketch of the failure mode follows below.
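As an illustration of why an indiscriminate squeeze is harmful for batched input, here is a hypothetical batched cov sketch (the function name and semantics are assumptions, not this PR's code): a batch of size one keeps its batch axis, while squeezing silently drops it.

```python
import numpy as np

def batched_cov(x):
    # Hypothetical sketch: observations along the last axis, variables
    # along the second-to-last, leading axes treated as batch dimensions.
    x = x - x.mean(axis=-1, keepdims=True)
    n = x.shape[-1]
    return x @ np.swapaxes(x, -1, -2) / (n - 1)

batch = np.random.default_rng(0).random((1, 3, 10))  # batch of one sample
out = batched_cov(batch)
print(out.shape)              # (1, 3, 3): batch axis preserved
print(np.squeeze(out).shape)  # (3, 3): the batch axis is silently gone
```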
+1
Agreed.
Closes gh-502