
cupSODA multi-GPU and chunk support #409

Merged 5 commits into pysb:master from alubbock:cupsoda_multigpu on Mar 21, 2019



commented Feb 2, 2019

Add multi-GPU support and chunk support to cupSODA. Chunking allows the user to specify the maximum number of simulations per GPU per run. Larger numbers of simulations are split into chunks. This is useful for overcoming GPU RAM limitations.
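The chunking behavior described above can be sketched as follows. This is an illustrative sketch only, not the actual PySB/cupSODA API: `chunk_simulations`, `max_sims_per_gpu`, and the round-robin GPU assignment are hypothetical names and choices made here to show the idea of splitting a large batch into GPU-sized chunks.

```python
def chunk_simulations(n_sims, gpus, max_sims_per_gpu):
    """Split n_sims simulations into chunks of at most max_sims_per_gpu,
    assigning chunks to the given GPU ids in round-robin order.

    Returns a list of (gpu_id, start, end) half-open index ranges that
    together cover all simulations. Bounding chunk size keeps each run's
    GPU memory footprint below the card's RAM limit.
    """
    chunks = []
    start = 0
    i = 0
    while start < n_sims:
        end = min(start + max_sims_per_gpu, n_sims)
        chunks.append((gpus[i % len(gpus)], start, end))
        start = end
        i += 1
    return chunks

# Example: 10 simulations, 2 GPUs, at most 4 simulations per GPU per run
# → [(0, 0, 4), (1, 4, 8), (0, 8, 10)]
```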



commented Feb 3, 2019

Coverage Status

Coverage decreased (-0.2%) to 78.814% when pulling 3c82ae3 on alubbock:cupsoda_multigpu into 3c68232 on pysb:master.


left a comment

Looks good, just a small init cleanup suggestion.

@@ -250,58 +334,57 @@ def run(self, tspan=None, initials=None, param_values=None):

if isinstance(self.gpu, collections.Iterable):


jmuhlich Mar 21, 2019


This kind of normalization probably belongs in __init__ -- can you just apply this directly to self.gpu so it's always a list by the time the code gets to run?
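The reviewer's suggestion can be sketched like this. The class skeleton below is hypothetical (the real `CupSodaSimulator` has many more parameters); it only shows moving the iterable check into `__init__` so `self.gpu` is always a list by the time `run` executes. Note that `collections.abc.Iterable` is used here rather than the deprecated `collections.Iterable` alias.

```python
import collections.abc


class CupSodaSimulator:
    """Illustrative skeleton only, not the real PySB class."""

    def __init__(self, gpu=0):
        # Normalize once at construction time: accept either a single
        # GPU id or an iterable of ids, and store a list either way.
        if isinstance(gpu, collections.abc.Iterable):
            self.gpu = list(gpu)
        else:
            self.gpu = [gpu]

    def run(self):
        # run() can now assume self.gpu is always a list and skip the
        # isinstance check entirely.
        for gpu_id in self.gpu:
            pass  # dispatch a chunk of simulations to gpu_id
```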

@alubbock alubbock merged commit 91fefad into pysb:master Mar 21, 2019

3 checks passed

continuous-integration/appveyor/pr AppVeyor build succeeded
continuous-integration/travis-ci/pr The Travis CI build passed
coverage/coveralls Coverage decreased (-0.2%) to 78.814%

@alubbock alubbock deleted the alubbock:cupsoda_multigpu branch Mar 21, 2019
