
Introduce forEach multi-stage domain specific language #4

Merged
merged 13 commits into from Nov 5, 2018

Conversation


@mratsim mratsim commented Nov 4, 2018

The goal is to remove reduceEach, which was a workaround for the Nim limitation reported in nim-lang/Nim#9490; that limitation was fixed in nim-lang/Nim#9493.

The domain-specific language should allow a "multi-stage" parallel section with a variadic for loop, similar to:

proc reduction_localvar(s: seq[int]): int =
  omp_parallel:
    ### initialization
    var local_sum = 0

    ### for loop
    for i in `||`(0, s.len-1, "for"):
      local_sum += s[i]

    ### Finalization
    omp_critical:
      result += local_sum
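For readers unfamiliar with Nim's OpenMP lowering, here is a hedged C sketch of what such a section is expected to compile to (assuming `||` emits an OpenMP work-shared for loop and `omp_critical` emits `#pragma omp critical`; without `-fopenmp` the pragmas are ignored and the code runs serially with the same result):

```c
#include <assert.h>
#include <stddef.h>

/* Sketch of the C code the multi-stage DSL is expected to emit:
   a parallel region with a thread-local accumulator, a work-shared
   for loop, and a critical section to combine partial sums. */
int reduction_localvar(const int *s, size_t len) {
  int result = 0;
  #pragma omp parallel
  {
    int local_sum = 0;                    /* initialization stage */
    #pragma omp for
    for (long i = 0; i < (long)len; i++)  /* for-loop stage */
      local_sum += s[i];
    #pragma omp critical
    result += local_sum;                  /* finalization stage */
  }
  return result;
}
```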

Currently reduceEach requires nb_chunks to be passed as a parameter to omp_parallel_chunks (9ba351a), but this workaround was removed (dbd483c). Furthermore, reduceEach requires allocating a temporary seq, while the DSL would leave the partial_sums tradeoffs at the user's discretion:

  • Padding avoids false sharing and locks, but requires allocating a temporary seq, which may be slow or restricted (embedded devices), and needs an extra pass over that seq.
  • OpenMP critical requires locking.
  • OpenMP atomic supports only a narrow subset of operations.
  • Nim builtin atomics.
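The padding option above can be sketched in C; the `padded_int` type and the 64-byte cache-line size are illustrative assumptions, not part of this PR. Each thread writes to its own cache-line-sized slot (no false sharing, no locks), at the cost of a temporary allocation and a final serial pass:

```c
#include <assert.h>
#include <stdlib.h>
#ifdef _OPENMP
#include <omp.h>
#endif

#define CACHE_LINE 64  /* assumed cache-line size, for illustration */
typedef struct {
  int value;
  char pad[CACHE_LINE - sizeof(int)];  /* pad slot to a full line */
} padded_int;

/* Padded partial-sums reduction: one padded slot per thread,
   combined in an extra serial pass over the temporary buffer.
   Without -fopenmp the pragmas are ignored and thread id is 0. */
int reduction_padded(const int *s, size_t len, int num_threads) {
  padded_int *partial = calloc((size_t)num_threads, sizeof(padded_int));
  #pragma omp parallel num_threads(num_threads)
  {
    int tid = 0;
    #ifdef _OPENMP
    tid = omp_get_thread_num();
    #endif
    #pragma omp for
    for (long i = 0; i < (long)len; i++)
      partial[tid].value += s[i];
  }
  int result = 0;
  for (int t = 0; t < num_threads; t++)  /* extra pass over the temporary */
    result += partial[t].value;
  free(partial);
  return result;
}
```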

Parallel reduction on multiple tensors is used for vector dot product and all loss functions.
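As an illustration of such a multi-tensor reduction, here is a hypothetical C sketch of a dot product using the same local-accumulator and critical-section pattern (names are illustrative; the loop stage reads from two arrays per iteration, which is why the DSL's for loop needs to be variadic over inputs):

```c
#include <assert.h>
#include <stddef.h>

/* Dot product as a parallel reduction over two input arrays.
   Without -fopenmp the pragmas are ignored and this runs serially. */
double dot(const double *a, const double *b, size_t len) {
  double result = 0.0;
  #pragma omp parallel
  {
    double local = 0.0;
    #pragma omp for
    for (long i = 0; i < (long)len; i++)
      local += a[i] * b[i];  /* two tensors read per iteration */
    #pragma omp critical
    result += local;
  }
  return result;
}
```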

Also closes #3

@mratsim mratsim merged commit cd96db9 into master Nov 5, 2018
@mratsim mratsim deleted the foreach-dsl branch November 5, 2018 09:58
mratsim added a commit that referenced this pull request Nov 5, 2018