least squares implementation #57
Using the Cholesky is typically much faster, and most often the statistical error is much larger than the error due to roundoff, so in many cases the faster version can be preferable.
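A minimal sketch of the two strategies under discussion (assumed for illustration, not this package's actual implementation); the matrix size and noise level below are arbitrary choices:

```julia
using LinearAlgebra

# Normal equations: form the p×p matrix X'X and Cholesky-factorize it.
# Cheap for tall X, but the condition number of X is effectively squared.
solve_cholesky(X, y) = cholesky(Symmetric(X' * X)) \ (X' * y)

# QR-based solve: factorizes X directly; more stable, somewhat more flops.
solve_qr(X, y) = qr(X) \ y

X = randn(10_000, 50)
y = X * randn(50) + 0.1 * randn(10_000)
maximum(abs, solve_cholesky(X, y) - solve_qr(X, y))  # tiny when X is well conditioned
```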
No, Cholesky is notoriously bad. No serious library I know of (in other languages) uses it. It is very easy to come up with examples like

```julia
using MultivariateStats
using LinearAlgebra  # for norm

function makeX(ϵ, N = 10)
    X = zeros(N, 3)
    k = N ÷ 2
    X[:, 1] .= 1.0
    X[:, 2] = 1:N
    X[1:k, 3] = 1:k
    X[(k+1):N, 3] = ((k+1):N) .+ ϵ
    X
end

X = makeX(1e-6)
β = Float64.(1:3)
Y = X * β
β1 = llsq(X, Y; bias = false)
β2 = X \ Y
norm(β - β1, Inf)  # ouch, 0.2
norm(β - β2, Inf)  # 1e-10
```

But the worst thing is that this phenomenon happens a lot in practice, if you have a few tens of covariates.
Well, in your example you chose zero statistical error, so of course the errors due to rounding will dominate. If your data matrix is practically singular and your explained variable is in the range of the data matrix then sure, use QR, but if one of the conditions doesn't hold then it typically doesn't matter much, and the speedup can be quite significant.
What is the basis of this statement? SAS's
I am not sure I understand your argument. Are you saying that because estimation has statistical error anyway, we might as well ignore potentially significant numerical error? I can't check SAS because it is closed source, and I don't have time to dive into SPARK's source (it would be great if you could provide a link to the relevant part, since you seem to be familiar with it). But R uses QR, of course Julia uses QR, and GSL uses SVD (probably a bit overcautious). You are of course correct about speed. My understanding is that this is pretty standard: numerical analysis textbooks always caution against following the naive formulas for regression, and recommend at least QR. But if your mind is set about this, feel free to close this issue; I will just write my own routines.
I think the fear of rounding errors in least squares is exaggerated in statistical applications. If your design matrix is that ill-conditioned, then your estimates will, in general, be so uncertain that the rounding errors don't matter. Do you often have data where the outcome is in the range of the regressors, i.e. where the residual is zero? I think it can happen in experimental settings, but in an experimental setting there is not really a good excuse for having an ill-conditioned design matrix. Indeed, I should have provided some links. Here is a brief description of the computational method of PROC REG in SAS, and if I read the source correctly then this thing here is called when there is no regularization in SPARK's regression.
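As an illustration of this argument (a sketch only: it reuses `makeX` from the example above and an arbitrary noise level σ = 0.1), one can compare the disagreement between the two solvers to the statistical error of the estimate itself:

```julia
using MultivariateStats, LinearAlgebra

X = makeX(1e-6)                        # the ill-conditioned design from the example above
β = Float64.(1:3)
Y = X * β + 0.1 * randn(size(X, 1))    # add statistical noise (illustrative σ = 0.1)

β1 = llsq(X, Y; bias = false)          # the normal-equations/Cholesky path under discussion
β2 = X \ Y                             # QR-based solve

norm(β1 - β2, Inf)   # disagreement between the two solvers
norm(β - β2, Inf)    # statistical error of the estimate itself
```

With noise present, the claim above is that the second quantity typically dominates the first unless the residual is essentially zero.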
I don't have strong feelings about this package (I mainly use
I think you are correct about the ML focus. Perhaps an explanation in the README of favoring speed over accuracy would clarify this.
GLM.jl offers both
AFAICT the implementation of linear regression forms the `X'X` matrix, then calculates `(X'X)⁻¹(X'y)` using Cholesky factorization. I am curious why this was chosen. To the best of my knowledge, orthogonal (QR) methods are more stable (if a bit more costly), and of course SVD is the best solution for nearly rank-deficient matrices.
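For reference, a small sketch (assumed, not taken from this package) of the three approaches mentioned above, applied to a random well-posed problem:

```julia
using LinearAlgebra

X = randn(100, 5)
y = randn(100)

# Normal equations via Cholesky, as described above:
β_chol = cholesky(Symmetric(X' * X)) \ (X' * y)

# QR: factorizes X directly, so the condition number is not squared:
β_qr = qr(X) \ y

# SVD-based pseudoinverse with a tolerance, for nearly rank-deficient X:
β_svd = pinv(X; rtol = sqrt(eps(Float64))) * y
```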