Matrix dialect #180
Conversation
The current issue is fixed in fd73b1e.
Really cool :)
Side note: I need to adjust my email settings. Sometimes I only see that you tagged me as a reviewer days later ... Sorry for that.
A simple matrix dialect.
It exposes a matrix type
Mat: Π [n: .Nat, S: «n; .Nat», T: *] -> *
such that Mat (n, s, T)
is an n-dimensional tensor of element type T with size s_0 * ... * s_{n-1}. The matrix operations (among them init, read, insert, constMat, and mapReduce) are described below.
All operations are inside the memory monad, allowing for a matrix implementation involving side effects.
In fact, the current matrices are nested pointers to arrays that are manipulated in-place.
An alternative might be an immutable array implementation, such as a skew binary random-access list, or one of Haskell's array implementations (e.g., diff arrays).
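For intuition, here is a minimal Python sketch of how the current representation can be thought of: nested arrays that init, read, and insert manipulate in place. The names and the zero-initialisation are illustrative assumptions, not the dialect's actual definitions.

```python
# Illustrative model of Mat (n, s, T): an n-dimensional tensor with
# sizes s[0], ..., s[n-1], stored as nested lists and updated in place.
def init(s, zero=0.0):
    """Allocate a tensor with sizes s, filled with a placeholder value."""
    if len(s) == 1:
        return [zero] * s[0]
    return [init(s[1:], zero) for _ in range(s[0])]

def read(m, idx):
    """Follow one index per dimension to reach the stored element."""
    for i in idx:
        m = m[i]
    return m

def insert(m, idx, v):
    """In-place update, mirroring the side-effecting implementation."""
    for i in idx[:-1]:
        m = m[i]
    m[idx[-1]] = v
```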
MapReduce
mapReduce is inspired by Einstein summation notation and existing implementations of it. It takes m input matrices, a zero element, and a combination function.
The combination function takes the accumulated value (initially the zero element) and elements from the input matrices and returns the new accumulator.
The result is a matrix.
Pseudocode:
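Roughly, in terms of the Python helpers sketched above (the shape arguments and the per-matrix index functions are illustrative assumptions about how the indices are wired up):

```python
from itertools import product

def map_reduce(zero, comb, inputs, out_shape, red_shape, index_fns):
    """Einsum-style fold: for each output position, reduce over red_shape."""
    out = init(out_shape, zero)
    for oi in product(*map(range, out_shape)):        # every output position
        acc = zero
        for ri in product(*map(range, red_shape)):    # every reduction position
            elems = [read(m, f(oi, ri)) for m, f in zip(inputs, index_fns)]
            acc = comb(acc, *elems)                   # fold with the combination function
        insert(out, oi, acc)
    return out
```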
Optimization Pipeline
The matrix operations and type are translated using a staging approach that allows intercepting the process at different levels.
High-Level Rewrites
First, high-level operations like transpose, sum, and prod are rewritten into the mapReduce form.
To do so, pre-defined functions of the form
internal_mapRed_matrix_[name]
are looked up. These functions should agree in type with the corresponding axiom.
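As an illustration of such rewrites (expressed with the Python sketch above rather than the dialect's surface syntax; the shapes and index functions are assumptions), sum and transpose can both be phrased as mapReduce instances:

```python
import operator

# Sum of an r x c matrix: reduce over both indices into a 1-element output.
def mat_sum(a, r, c):
    return map_reduce(0.0, operator.add, [a], (1,),
                      (r, c), [lambda oi, ri: ri])

# Transpose of an r x c matrix: no real reduction; output (i, j) reads (j, i).
def mat_transpose(a, r, c):
    return map_reduce(0.0, lambda acc, x: x, [a], (c, r),
                      (1,), [lambda oi, ri: (oi[1], oi[0])])
```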
High-Level Externalization
Alternatively, certain operations like prod could be dispatched to external libraries like BLAS.
This is, however, not implemented in the current version.
Medium-Level Lowering
The next step is to lower
mapReduce
to affine for loops. The conceptual idea corresponds to the pseudocode above.
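For example, a matrix product phrased as a mapReduce would conceptually lower to a loop nest like the following (a sketch in Python rather than the dialect's affine loops; the names are illustrative):

```python
def mat_prod(a, b, n, k, m):
    # C[i, j] = sum over l of A[i, l] * B[l, j], written as the loop nest
    # that the mapReduce lowering conceptually produces.
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            acc = 0.0                           # the zero element
            for l in range(k):                  # reduction loop
                acc = acc + a[i][l] * b[l][j]   # combination function
            c[i][j] = acc
    return c
```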
Low-Level Lowering
The last step is to eliminate all remnants of the matrix dialect.
We remove the remaining
internal_mapRed_
functions (due to a missing association dialect). Afterward, we lower the low-level matrix operations and types.
- init is replaced with alloc
- read becomes lea + load
- insert becomes lea + store
- constMat becomes alloc + pack + store
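To make the read and insert lowerings concrete, here is one plausible sketch of the address computation behind lea, assuming a flat row-major buffer for illustration; the actual implementation uses nested pointers to arrays, so the exact address arithmetic may differ:

```python
def linear_offset(sizes, idx):
    # Row-major linearization: ((idx[0] * sizes[1] + idx[1]) * sizes[2] + ...) + idx[n-1]
    off = 0
    for s, i in zip(sizes, idx):
        off = off * s + i
    return off

# read (m, idx)      ~ load(base_ptr + element_size * linear_offset(sizes, idx))
# insert (m, idx, v) ~ store(base_ptr + element_size * linear_offset(sizes, idx), v)
```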
Low-Level Functional Lowering
At this point, we could lower matrices to a functional array representation like Haskell arrays or random-access lists.
Additional Operations
One could implement further operations either deeply (as dedicated axioms) or shallowly (in terms of existing operations like mapReduce).
Known Issues
Edge cases like zero inputs or outputs are not handled correctly in every case for mapReduce.