Precision targets for bootstrap functions #131
Comments
Hi @HDembinski, I would be happy to work on this. Could you please guide me on what I need to do (and also point me to any references on this topic)? Thanks,
Hi Aman, thank you for your interest and the offer! I am trying to recover my thoughts on this issue, because I eventually abandoned this idea. The problem is that the number of samples that need to be generated to achieve a given precision grows quadratically as the precision target shrinks. I will close this issue now.
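A quick way to see this scaling (a minimal NumPy sketch, not code from resample itself): the Monte Carlo error of a bootstrap estimate shrinks like 1/sqrt(n_boot), so quadrupling the number of replicates only halves the error. The helper name `mc_error_of_boot_variance` and all constants here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=100)

def mc_error_of_boot_variance(n_boot):
    # Repeat the whole bootstrap 200 times and measure the spread
    # (Monte Carlo error) of the resulting variance estimates.
    reps = []
    for _ in range(200):
        idx = rng.integers(0, len(data), size=(n_boot, len(data)))
        means = data[idx].mean(axis=1)  # one bootstrap replicate per row
        reps.append(means.var())
    return np.std(reps)

e1 = mc_error_of_boot_variance(100)
e2 = mc_error_of_boot_variance(400)  # 4x the replicates
print(e1 / e2)  # ratio close to 2: error halves for 4x the work
```

So pushing the precision target down by a factor k costs roughly k² more replicates, which is why a fixed precision target can become arbitrarily expensive.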
Oh, I see.
Hi @HDembinski, is there anything else that I can contribute to in this package?
@amanmdesai Apart from the other three open issues, I have nothing on my mind right now. None of the three is an easy target for a simple contribution, though. If you want to tackle one of them anyway, let me know and I can tell you what should be done. Otherwise, you may have ideas of your own about what could be improved in resample. Open a new issue if you want to discuss an idea for a new feature or improvement. If you are not set on contributing to resample, but are also open to contributing to other libraries: I also maintain iminuit, pyhepmc, numba-stats, and jacobi.
Hi @HDembinski, I have found two issues, one in iminuit and one in numba-stats, that I would like to contribute to.
I worked out how to iterate the permutation tests until a precision target is reached. It would be great to implement such functionality also for the functions

- `bootstrap.bias`
- `bootstrap.bias_corrected`
- `bootstrap.variance`
- `bootstrap.confidence_interval`

This means adding the keywords `precision` and `max_size` to these functions and deprecating `size` (which would act like `max_size=size`, `precision=0`). We cannot do this for `bootstrap.bootstrap`, because we don't know what the user is computing.

With the keyword `return_error` (default False) we can optionally return the calculated uncertainty in a backward-compatible way.