
Partially parallel context clustering. #465

Merged
merged 1 commit into from
Feb 4, 2022
Conversation

Olga-Yakovleva
Member

  • In this version only each parameter type is clustered independently, not each state, so the script can take advantage of at most 4 cores.
  • The Parallel::ForkManager Perl module is now required to run the training script.
  • Set the $NUMPROC variable in scripts/Config.pm to let the script run the clustering steps in parallel. As noted above, there is no point in setting it to a value greater than 4.
  • Keep in mind that running the steps in parallel means higher memory usage, so be careful: if the training crashes, your server most likely ran out of memory while executing those steps at the same time.
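The fan-out described above follows the standard Parallel::ForkManager pattern. Below is a minimal, self-contained sketch of that pattern, not the actual training script: the @param_types list and the cluster_param_type subroutine are hypothetical placeholders for the per-parameter-type clustering steps.

```perl
use strict;
use warnings;
use Parallel::ForkManager;

# In the real script $NUMPROC comes from scripts/Config.pm;
# hard-coded here for illustration.
my $NUMPROC = 4;

# Hypothetical list of parameter types that are clustered independently.
my @param_types = qw(mgc lf0 bap dur);

# Placeholder for one clustering step.
sub cluster_param_type {
    my ($type) = @_;
    print "clustering $type\n";
}

my $pm = Parallel::ForkManager->new($NUMPROC);

foreach my $type (@param_types) {
    # Fork a child for this type; the parent immediately moves on.
    $pm->start and next;
    cluster_param_type($type);
    $pm->finish;    # child process exits here
}
$pm->wait_all_children;    # parent blocks until every child is done
```

Because each child is a full fork of the parent process, peak memory use grows with $NUMPROC, which is why the warning about memory above applies.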

@Olga-Yakovleva Olga-Yakovleva enabled auto-merge (squash) February 4, 2022 16:00
@Olga-Yakovleva Olga-Yakovleva merged commit 60232b0 into master Feb 4, 2022
@Olga-Yakovleva Olga-Yakovleva deleted the par-cx-cl branch February 4, 2022 16:12
zstanecic pushed a commit to zstanecic/RHVoice that referenced this pull request May 16, 2022