
MachSuite Tracker #45

Closed
sa2257 opened this issue Jan 31, 2019 · 11 comments

sa2257 (Collaborator) commented Jan 31, 2019

This issue keeps track of the status of the MachSuite app implementations. Subsequent posts will track each app.

Key to the columns:

  • C = "Something Compiles to Fuse"
  • T = "The Fuse program was tested with the harness"
  • R = "The Fuse program is a straightforward 'translation' from the original C"
| App        | C   | T | R | Notes         |
| ---------- | --- | - | - | ------------- |
| gemm ncubed | Yes |   |   | Simple kernel |
| stencil2d  |     |   |   |               |
| gemm block |     |   |   |               |
| stencil3d  |     |   |   |               |
| fft        | Yes |   |   |               |
| kmp        | Yes |   |   |               |
| radix sort | Yes |   |   |               |
| merge sort |     |   |   |               |
| spmv       | Yes |   |   |               |
| bfs        | Yes |   |   |               |
| md         | Yes |   |   |               |
| backprop   |     |   |   |               |
| nw         | Yes |   |   |               |
| aes        |     |   |   |               |
| viterbi    |     |   |   |               |

sa2257 (Collaborator, Author) commented Jan 31, 2019

gemm ncubed

  • Remove the library inclusion #include "apcint.h". It is needed by the host file too, so the header file seems a better place for it.
  • Local float variables? The fix Rachit mentioned works for this.
  • Reduction operation on a temp: reductions on local variables don't throw an error right now. The compiler should make the distinction once reduction operators are implemented.
  • Should get an error for multi-write due to unrolling, as in the multi-write example discussed during the 1/31 meeting.
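For reference, the kernel shape under discussion (a reduction into a local temp inside nested loops, with a single write per output element) can be sketched in plain C; the names and the 4x4 size are illustrative, not MachSuite's actual code:

```c
#include <assert.h>

#define N 4

// Naive O(n^3) dense matrix multiply, mirroring the gemm/ncubed kernel
// shape discussed above (names and size are illustrative).
void gemm_ncubed(const int a[N][N], const int b[N][N], int c[N][N]) {
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
            int sum = 0;                      // local temp; the reduction target
            for (int k = 0; k < N; k++) {
                sum += a[i][k] * b[k][j];     // reduction on the local temp
            }
            c[i][j] = sum;                    // single write per (i, j)
        }
    }
}
```

Unrolling the `k` loop multiplies the writers of `sum`, which is the multi-write situation the last bullet refers to.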

sampsyo (Contributor) commented Jan 31, 2019

Hi! I don't have a strong feeling about whether MachSuite goes in #23 or its own issue (here). But either way, let's list all the benchmarks. This can help you prioritize the ones to work on next and keep a global view of how far along you are.

For the list of "gaps" indicating what we need before we can fully implement the benchmark, let's try to be as specific as possible and, when it's available, link to the existing issue that tracks that problem. For example, does "Local float variables?" mean that it's impossible to declare local variables with type float? If so, that probably deserves its own issue (or at least a full explanation here).

rachitnigam (Member) commented:
Yeah, I'd prefer specific issues that block benchmarks from being written.

sa2257 (Collaborator, Author) commented Feb 1, 2019

Hi, so Rachit prefers to have specific issues listed in #23. I'll repurpose this issue to keep track of MachSuite benchmark implementation.

sa2257 (Collaborator, Author) commented Feb 11, 2019

Language Features Required for MachSuite Apps

This post lists language features required to write each MachSuite app in Fuse.

  1. gemm/ncubed ("Naive, O(n^3) algorithm for dense matrix multiplication."):
    • nested loops
    • reductions
  2. gemm/blocked ("A blocked version of matrix multiplication, with better locality."):
    • the features needed for gemm/ncubed
    • tiling (requires index addition to access arrays; therefore, views are needed)
  3. fft/strided ("Recursive formulation of the Fast Fourier Transform."):
    • the features needed for gemm/ncubed
    • exponential (T: would you mind clarifying what this is @sa2257 ?)
      (S: The app uses a complex for loop to avoid exponentials. Uncertain whether that's the best strategy for us. Maybe LUTs can be used to hardcode exponentials)
    • serialization
    • bitwise operations
    • the C code has more complex loops than Fuse supports; for example:
      for(span=FFT_SIZE>>1; span; span>>=1, log++) { ... }
  4. stencil/stencil2d ("A two-dimensional stencil computation, using a 9-point square stencil."):
    • the features needed for gemm/ncubed
    • index addition (views needed?)
    • mismatching matrix multiply (filtering)
  5. stencil/stencil3d ("A three-dimensional stencil computation, using a 7-point von Neumann stencil."):
    • the features needed for stencil/stencil2d
  6. spmv/ellpack ("Sparse matrix-vector multiplication, using fixed-size neighbor lists."):
    • the features needed for gemm/ncubed
    • indirection (T: @sa2257, I'm not sure what this is referring to):
      (S: `si := val[j] * vec[cols[j]];` at spmv.fuse line 22. Do we support reasoning about this? Also, I haven't yet reasoned out whether we can find parallelism in this.)
  7. spmv/crs ("Sparse matrix-vector multiplication, using variable-length neighbor lists.")
    • the features needed for spmv/ellpack
    • index addition
  8. kmp ("The Knuth-Morris-Pratt string matching algorithm."):
    • the features needed for stencil/stencil2d
    • while loops
    • if conditions
  9. fft/transpose ("A two-level FFT optimized for a small, fixed-size butterfly."):
    • the features needed for fft/strided
    • LUTs
  10. sort/merge ("The mergesort algorithm, on an integer array."):
    • the features needed for stencil/stencil2d
    • if conditions
    • a method of serialization
    • recursion
  11. sort/radix ("Sorts an integer array by comparing 4-bit blocks at a time."):
    • the features needed for sort/merge
    • bitwise operation support
  12. bfs/bulk ("Data-oriented version of breadth-first search."):
    • the features needed for sort/merge
    • some kind of product ("and") type, like a struct; we need a good way to represent the nodes of a graph
  13. bfs/queue ("The “expanding-horizon” version of breadth-first search."):
    • the features needed for bfs/bulk
    • queues
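The complex loop header quoted in item 3 can be unpacked into a simpler form that loop-structured languages handle more easily: `span` halves every iteration until it reaches zero while `log` counts the FFT stages. A sketch, assuming FFT_SIZE is a power of two (the function name `count_stages` is mine, not MachSuite's):

```c
#include <assert.h>

#define FFT_SIZE 8

// Equivalent of the MachSuite-style header
//   for (span = FFT_SIZE >> 1; span; span >>= 1, log++) { ... }
// rewritten as a plain while loop.
int count_stages(void) {
    int log = 0;
    int span = FFT_SIZE >> 1;   // start at half the FFT size
    while (span != 0) {         // loop while span is nonzero
        span >>= 1;             // halve span each stage
        log++;                  // count which stage we are in
    }
    return log;                 // number of butterfly stages
}
```

For FFT_SIZE = 8 the loop body runs three times, matching log2(8) stages.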

Apps left:

  • md
  • backprop
  • nw
  • aes
  • viterbi
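The "indirection" in item 6 is the access of `vec` through values stored in `cols`. A minimal C sketch of the ELLPACK pattern; sizes and names are illustrative, not the actual spmv.fuse or MachSuite code:

```c
#include <assert.h>

#define ROWS 3
#define L 2   // fixed neighbor-list length (ELLPACK format)

// Sparse matrix-vector multiply: cols[] holds column indices, so vec
// is indexed by another array's value (the indirection in question).
void spmv_ellpack(const int val[ROWS][L], const int cols[ROWS][L],
                  const int vec[], int out[ROWS]) {
    for (int i = 0; i < ROWS; i++) {
        int sum = 0;
        for (int j = 0; j < L; j++) {
            sum += val[i][j] * vec[cols[i][j]];  // indirect access
        }
        out[i] = sum;
    }
}
```

The compiler cannot know statically which bank of `vec` an unrolled iteration touches, which is why reasoning about parallelism here is hard.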

sa2257 (Collaborator, Author) commented Feb 11, 2019

Optimizations Needed
gemm/ncubed - resource allocation (multiplier), memory allocation, array partitioning, loop unroll, pipeline
gemm/blocked - resource allocation (multiplier), memory allocation, array partitioning, loop unroll with tiling, pipeline
stencil/stencil2d - resource allocation (multiplier), memory allocation, array partitioning, loop unroll, pipeline
stencil/stencil3d - resource allocation (multiplier), memory allocation, array partitioning, loop unroll, pipeline
spmv/ellpack - resource allocation (multiplier), memory allocation, array partitioning, loop unroll, pipeline
spmv/crs - resource allocation (multiplier), memory allocation, array partitioning, loop unroll, pipeline
kmp - memory allocation, array partitioning, loop unroll, pipeline
bfs/bulk - memory allocation, array partitioning, loop unroll, pipeline
bfs/queue - memory allocation, array partitioning, loop unroll, pipeline
fft/strided - resource allocation (multiplier), memory allocation, array partitioning, loop unroll, pipeline
fft/transpose - allocation (multiplier and adder), memory allocation, array partitioning, loop unroll, pipeline
sort/merge - memory allocation, array partitioning, loop unroll, pipeline
sort/radix - memory allocation, array partitioning, loop unroll, pipeline
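For concreteness, here is one way the listed optimizations are commonly spelled in Vivado-HLS-style C (the pragma spellings are Vivado conventions, not anything from this thread; an ordinary C compiler ignores unknown pragmas, so the sketch still runs on a host):

```c
#include <assert.h>

#define N 8

// gemm kernel annotated with Vivado-HLS-style directives illustrating
// array partitioning (banking), loop unroll, and pipelining. The
// pragmas are no-ops under a standard C compiler.
void gemm_opt(const int a[N][N], const int b[N][N], int c[N][N]) {
#pragma HLS ARRAY_PARTITION variable=a cyclic factor=2 dim=2   // banking
    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++) {
#pragma HLS PIPELINE II=1                                      // pipelining
            int sum = 0;
            for (int k = 0; k < N; k++) {
#pragma HLS UNROLL factor=2                                    // loop unroll
                sum += a[i][k] * b[k][j];
            }
            c[i][j] = sum;
        }
    }
}
```

Resource allocation (binding a specific multiplier to an operation) would be a further directive on the multiply itself; the example above only covers the memory-side optimizations.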

sampsyo (Contributor) commented Feb 11, 2019

Cool! Maybe a big table would be a good way to represent this information? The rows could be benchmarks, and the columns could be features. Then the cell could contain a checkmark if the benchmark needs that feature. (There could be two groups of columns: expressiveness/features and optimizations.) This way, we could easily see what will affect the most benchmarks.

Some things where I could use expanded definitions:

  • "Memory allocation": Do the benchmarks call malloc? If so, do they really need to do this, or is it just a convenience?
  • "Resource allocation (multiplier)": What does this look like? What's the effect on the generated hardware?
  • "Serialization": Is this like data structure serialization, or like preventing parallelism?
  • "LUTs": What do these look like in the code? (Presumably you don't mean FPGA LUTs?)
  • "mismatching matmul (filtering) views": This could use a little more explanation—is this within scope for our current thoughts on views, or beyond them?
  • "queues": Are these memories, accessed in a special way? Do they also do synchronization?

sa2257 (Collaborator, Author) commented Feb 11, 2019

Okay. Sounds good.

  • Memory and resource allocation just means binding a resource to a particular array/function. The examples in the user guide are very ambiguous; I'm still trying to tease out how to use them.
  • By serialization, I simply mean some way to ensure that a certain block of logic happens strictly after another block, i.e., blocking statements.
  • LUTs: some data structure that will already be populated at the time of execution. For example, to get sine and cosine, it is easier to store them than to implement a function to generate them.
  • I'm not entirely sure if it's within scope. For instance, to run a 3x1 filter on a 32-element array, an inner unroll of 3 should allow access to the 32-element array if it is banked by 3 (a mismatch with 32) or larger. (We can doctor the array size for simplicity.)
  • Queues: I need to understand how queues get implemented in HLS. HLS allows FIFOs, so maybe it will be a simple use of that, but I'm not clear on how they are implemented in MachSuite.
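The sine/cosine storage idea might look like the following in C; the table size and sample angles are made up for illustration, not taken from MachSuite:

```c
#include <assert.h>
#include <math.h>

#define TWIDDLE_N 4

// Precomputed sine values at fixed angles: a lookup replaces a trig
// computation in hardware. Entries here are sin(k*pi/4), k = 0..3.
static const double SIN_LUT[TWIDDLE_N] = {
    0.0,                 // sin(0)
    0.7071067811865476,  // sin(pi/4)
    1.0,                 // sin(pi/2)
    0.7071067811865476,  // sin(3*pi/4)
};

// Table lookup in place of computing sine; the index wraps around
// the table, which is the usual twiddle-factor access pattern.
double lut_sin(int i) {
    return SIN_LUT[i % TWIDDLE_N];
}
```

In hardware this becomes a small ROM initialized at synthesis time, which is why it is "already populated at the time of execution."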

sa2257 (Collaborator, Author) commented Feb 11, 2019

This table outlines the features needed for the benchmark applications:

Features (columns): nested loops, reductions, tiling, blocking operations, index addition, filter, while loops, if conditions, bitwise operations, indirection, unroll, banking, pipelining, queues, LUTs, resource bind, recursion.

Benchmarks (rows): gemm/ncubed, spatial/fir, gemm/blocked, stencil/stencil2d, stencil/stencil3d, fft/strided, fft/transpose, spmv/ellpack, spmv/crs, kmp/kmp, sort/merge, sort/radix, bfs/bulk, bfs/queue, spatial/kmeans, spatial/gda, spatial/bs, spatial/pagerank, spatial/sw, spatial/tq6, md/knn, md/grid, backprop/backprop, nw/nw, aes/aes, viterbi/viterbi.

rachitnigam (Member) commented:
I assume a combination of https://github.com/cucapra/fuse-benchmarks/issues/74 and direct issues are being used to track this now. @sa2257 reopen if you still need this.
