Most of the computation in fitting models with the MixedModels package occurs in two block matrices, A and L. Given a value of the parameter vector, the array L is updated from information stored in A and then converted to the lower Cholesky factor, from which the objective is evaluated. During optimization of the objective with respect to the parameters, this update can be repeated thousands of times.
The block structure and block sizes of A and L are identical; both the blocking pattern and the overall shape of these matrices are square. A is symmetric and L is lower triangular.
The efficiency of the method derives from the fact that the [1, 1] block of both matrices is always the largest block and is either diagonal or, at worst, block diagonal with small, square, similarly sized diagonal blocks.
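To make the point concrete, here is a minimal sketch of the blocked Cholesky step for a 2×2 block layout (illustrative only, not MixedModels' actual code; the block values are hypothetical). Because the [1, 1] block is Diagonal, its factor stays Diagonal and incurs no fill-in:

```julia
using LinearAlgebra

# Hypothetical blocks: a large Diagonal [1, 1] block, a dense [2, 1]
# block, and a small dense symmetric [2, 2] block.
d = Diagonal([4.0, 9.0, 16.0])
B = [1.0 0.5 0.25; 0.5 1.0 0.5]
C = [4.0 1.0; 1.0 3.0]

# Blocked Cholesky: L11 = chol(A11), L21 = A21 / L11', L22 = chol(A22 - L21*L21')
L11 = Diagonal(sqrt.(d.diag))            # stays Diagonal -- the cheap case
L21 = B / L11'                           # triangular solve against a Diagonal
L22 = cholesky(Symmetric(C - L21 * L21')).L
```

The whole saving hinges on L11 remaining Diagonal: if the [1, 1] block were densified, both the factorization and the solve for L21 would become O(n^2) or worse in the (large) dimension of that block.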
At present I am using a funky, home-baked block-matrix representation, and I would like to use the capabilities of BlockArrays instead. It seems that if I define these as
A = BlockArray(AbstractMatrix{T}, sz, sz)
L = similar(A)
for j in 1:nt, i in j:nt
    Aij = densify(trms[i]'trms[j])
    setblock!(A, Aij, i, j)
    setblock!(L, deepcopy(Aij), i, j)
end
where sz is a vector of block sizes and nt is the number of blocks in each dimension, the [1, 1] block ends up being converted from Diagonal to Matrix.
Is this a requirement of the BlockMatrix type? That is, are the blocks in a BlockMatrix required to be homogeneous in actual type?
If not, could someone give me a hint of how I could avoid the conversion of blocks that are diagonal or SparseMatrixCSC to full storage?
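For comparison, this is the behavior I am after, sketched with a plain untyped container rather than the BlockArrays API (all names here are hypothetical): when the element type of the block storage is the abstract AbstractMatrix{T}, each block keeps its concrete type instead of being densified.

```julia
using LinearAlgebra, SparseArrays

T = Float64
# A 2x2 container of blocks with abstract element type; only the lower
# triangle is populated, matching the structure of A and L.
blocks = Matrix{AbstractMatrix{T}}(undef, 2, 2)
blocks[1, 1] = Diagonal(ones(T, 3))    # stays Diagonal, not densified
blocks[2, 1] = sprand(T, 2, 3, 0.5)    # stays SparseMatrixCSC
blocks[2, 2] = rand(T, 2, 2)           # dense block stays Matrix
```

The question is whether a BlockMatrix can be made to store its blocks this way, or whether its block storage forces a single concrete block type.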
The alternative is to define my own block matrix type as a subtype of AbstractBlockMatrix. I haven't yet been successful doing that.
I just realized that I may have been misled by the way a matrix was being printed. I need to do some more checking to see if my problem is indeed a problem.