Possible anomaly running the mt_word_counter example #199
Comments
The following workaround resolves the issue, with output completing in 0.001 seconds.

```diff
97,98c97,98
< [&word_counts](std::vector<std::string>&& lines) {
<     for (auto& line : lines) {
---
> [&word_counts, &lines_array, i] {
>     for (auto& line : lines_array[i]) {
121,122c121
<     },
<     std::move(lines_array[i])
---
>     }
```

Ah, C++ boggles my mind :). Thank you for the parallel hash map library.
Previously, seeing output in:

8 threads: Clear Linux (left), Fedora 38 (right) [screenshot]
24 threads: Clear Linux (left), Fedora 38 (right) [screenshot]
I just realized that the individual threads can update a local map, then update the shared map after exiting the loop. This is something I tried in order to scale better. The gist was updated to showcase both:

8 threads: Clear Linux (left), Fedora 38 (right) [screenshot]
24 threads: Clear Linux (left), Fedora 38 (right) [screenshot]
I'm closing the issue. I was able to work around the anomaly.
I created a new gist, C++ parallel chunking demonstrations, inspired by your `mt_word_counter.cc` example. Once parallel processing is completed, the time to output for `mt_word_counter` takes longer than expected. I'm not sure if this is a bug or nothing more than an anomaly. This anomaly caused me to create another chunking variant to factor out OpenMP. Both `omp_word_counter` and `thr_word_counter` work as expected, time-wise. There are fewer than 8,000 (7,652 to be exact) key-value pairs in the `word_counts` map table. I also ran on two different Linux distributions.
8 threads: Clear Linux (left), Fedora 38 (right) [screenshot]
24 threads: Clear Linux (left), Fedora 38 (right) [screenshot]