
randomize subject data sets to even out 'cost' of big data sets #11

Open
ctb opened this issue Nov 11, 2022 · 0 comments

Comments

@ctb
Owner

ctb commented Nov 11, 2022

the benchmarking for the paper revealed that big data sets consume the most time (because of the cost of loading them) as well as the most memory; this is unsurprising, and it might be easy to address by randomizing the input list so the big data sets are spread evenly across the run instead of clustering together.
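A minimal sketch of what "randomizing the input list" could look like — the function name and the idea of passing a seed for reproducibility are assumptions, not anything from this repo:

```python
import random


def randomize_subjects(subject_paths, seed=None):
    """Return a shuffled copy of the subject data set list.

    Shuffling spreads the big (slow-to-load, memory-hungry) data sets
    evenly across the run instead of letting them cluster together.
    An optional seed keeps the ordering reproducible across runs.
    NOTE: hypothetical helper; names and signature are illustrative.
    """
    shuffled = list(subject_paths)  # copy; don't mutate the caller's list
    random.Random(seed).shuffle(shuffled)
    return shuffled
```

Using a seeded `random.Random` instance (rather than the module-level `random.shuffle`) keeps runs reproducible without disturbing global random state elsewhere in the program.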
