Parallelization of simulation-based indexes and inference wrappers #35
Comments
This might be relevant, which also raises the issue that we might want to allow a
I was just surfing the web reading some posts and came across something that left me wondering if we could use some kind of just-in-time compilation for these functions: http://numba.pydata.org/numba-doc/0.17.0/user/jit.html, or something similar to what is possible for R loops with the compiler package: https://www.r-statistics.com/2012/04/speed-up-your-r-code-using-a-just-in-time-jit-compiler/
I'm familiar with Numba in concept, but I've never been sure what kind of code it's able to speed up. Maybe we could try some speed tests?
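As a starting point for such a speed test, here is a minimal sketch of the kind of loop Numba tends to accelerate well: a plain-Python simulation loop recomputing a statistic over many independent draws, which is the shape of the bootstrap/simulation code discussed in this issue. `simulate_draws` is a hypothetical stand-in for the real index computations, and the `try`/`except` fallback lets the sketch run even where Numba is not installed.

```python
import numpy as np

# Fall back to a no-op decorator if numba is unavailable, so the
# sketch runs either way (this fallback is illustrative, not required).
try:
    from numba import jit
except ImportError:
    def jit(*args, **kwargs):
        def wrap(func):
            return func
        return wrap

@jit(nopython=True)
def simulate_draws(p, n_draws, n_units):
    # Hypothetical stand-in for "draw from a probability distribution and
    # recompute the index": each iteration draws n_units Bernoulli samples
    # and records their mean.
    out = np.empty(n_draws)
    for i in range(n_draws):
        total = 0.0
        for j in range(n_units):
            if np.random.random() < p:
                total += 1.0
        out[i] = total / n_units
    return out

means = simulate_draws(0.3, 500, 200)
```

Timing this function with and without the `@jit(nopython=True)` decorator (e.g. with `%timeit`) would give a concrete answer to whether Numba helps for this style of code; explicit scalar loops like this are exactly the pattern Numba's nopython mode is designed to compile.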
Solved for the inference classes by #174
Resolved by #183
Some indexes, such as Modified Dissimilarity (Dct), Modified Gini (Gct), and Bias-Corrected Dissimilarity (Dbc), could be parallelized, since they rely on independent draws from probability distributions followed by recalculation of the index.
Also, the inference wrappers (Infer_Segregation and Compare_Segregation) could be parallelized, since they rely on a framework of independent simulations. One possibility is to use Dask (https://github.com/dask/dask), concurrent.futures (https://docs.python.org/3/library/concurrent.futures.html), etc.
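Since each simulation is independent, the standard-library route could look roughly like the sketch below. `one_draw` is a hypothetical placeholder for "draw and recompute the index"; the real computations in the wrappers would replace it. A thread pool is used here so the snippet stays self-contained, but for CPU-bound pure-Python loops a `ProcessPoolExecutor` (or Dask's distributed scheduler) would be the natural swap-in.

```python
from concurrent.futures import ThreadPoolExecutor

import numpy as np

def one_draw(seed):
    # Hypothetical stand-in for one independent simulation: reseed,
    # draw synthetic data, and recompute a toy statistic.
    rng = np.random.default_rng(seed)
    sample = rng.binomial(1, 0.3, size=1000)
    return sample.mean()

def simulate_parallel(n_sims, max_workers=4):
    # The draws are independent, so they map cleanly onto a pool of
    # workers; seeds keep each simulation's stream reproducible.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(one_draw, range(n_sims)))

results = simulate_parallel(200)
```

The same shape works with Dask by replacing `pool.map` with `dask.bag.from_sequence(...).map(...).compute()`, which would also let the simulations scale beyond a single machine.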