[Discussion] I made a graph that allows me to estimate how big a cubes_n.npy file will get (in bytes) when given n cubes. #14
Here is how to optimize the current storage code, but this still doesn't solve the fast growth of the sheer number of cubes:

```python
np.save(cache_path, np.packbits(np.asarray(polycubes, dtype=np.int8), axis=-1), allow_pickle=False)
```

Notes:
**About storing the cubes**

For n=16 (the current record), the baseline is roughly 50 billion polycubes, so the theoretical minimum is to store 50 billion *different things* (not even the polycubes themselves, just unique ids for them).

For n=20: let's say we want to make progress until n=20, and assume the number of polycubes grows by a factor of ~7 with each n:

```python
n_cubes = 50e9 * 7**4  # approx.
n_bits = np.log2(n_cubes)
need_bytes = n_cubes * n_bits / 8
need_bytes / 1e12  # terabytes ~ 700 TB
```

For n=30:

```python
n_cubes = 50e9 * 7**14  # approx.
n_bits = np.log2(n_cubes)
need_bytes = n_cubes * n_bits / 8
need_bytes / 1e21  # zettabytes ~ 317 ZB
```

That number is on the scale of the whole internet.
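For completeness, here is a minimal sketch of what the packed save/load roundtrip could look like. It assumes the polycubes have already been stacked into a single 0/1 array of shape `(count, n, n, n)`; `cache_path` and `polycubes` are the names from the snippet above, everything else is illustrative:

```python
import numpy as np

def save_packed(cache_path, polycubes):
    # polycubes: 0/1 (or boolean) array of shape (count, n, n, n).
    # packbits collapses the last axis to ceil(n / 8) bytes per row,
    # roughly an 8x reduction versus one byte per voxel.
    packed = np.packbits(np.asarray(polycubes, dtype=np.uint8), axis=-1)
    np.save(cache_path, packed, allow_pickle=False)

def load_packed(cache_path, n):
    # n is needed to trim the zero padding that packbits adds on the last axis.
    packed = np.load(cache_path, allow_pickle=False)
    return np.unpackbits(packed, axis=-1, count=n).astype(bool)
```

The packing only buys a constant factor of ~8, so the growth estimates above are unaffected.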
And that's not to mention that you have to store all of that in RAM before you actually write it, meaning that if it remains uncompressed, your computer (or program) will crash once the polycubes get too big. Even then it's not a matter of "if", it's a matter of "when". Compressing it will only make it crash earlier and may make it run slower. I'm not against the idea of compression, these are just things to consider.
You don't actually have to store it all in RAM at the same time. Polycubes can be processed and counted separately from each other (it will just take longer), and the work can be distributed across multiple machines. See an example algorithm I wrote here: mikepound/opencubes#7 (comment) (maybe an even better approach exists).
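The gist of that kind of approach, as a rough sketch: assign each candidate polycube to a shard by hashing its canonical id, so every worker only has to keep its own shard's deduplication set in memory. All names here are hypothetical and not taken from the linked comment:

```python
import hashlib
from typing import Iterable

def shard_of(canonical_id: bytes, num_shards: int) -> int:
    # Deterministically map a canonical polycube id to a shard, so the
    # deduplication work can be split across processes or machines.
    digest = hashlib.blake2b(canonical_id, digest_size=8).digest()
    return int.from_bytes(digest, "big") % num_shards

def count_unique_in_shard(candidate_ids: Iterable[bytes],
                          my_shard: int, num_shards: int) -> int:
    # candidate_ids can be a generator that expands cached (n-1)-cubes one at
    # a time, so the full set of candidates never has to live in RAM at once.
    seen = set()
    for cid in candidate_ids:
        if shard_of(cid, num_shards) == my_shard:
            seen.add(cid)
    return len(seen)

# Total count for n = sum of count_unique_in_shard(...) over all shards.
```

Each shard still grows with the overall count, so this trades RAM for time and machines rather than removing the problem.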
https://www.desmos.com/calculator/fea4uymhix
According to this graph, file sizes will get ridiculously large even by the 12th iteration. Perhaps the storage format should be optimized?
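For reference, here is a small sketch of how such an estimate can be reproduced. It assumes the ~50 billion polycubes at n=16 and ~7x growth per step from the note above, plus a naive layout of one byte per voxel in an n×n×n bounding box; the real on-disk format may differ:

```python
import numpy as np

def estimated_count(n: int, base_n: int = 16, base_count: float = 50e9,
                    growth: float = 7.0) -> float:
    # Rough extrapolation of the number of polycubes of size n.
    return base_count * growth ** (n - base_n)

def naive_npy_bytes(n: int) -> float:
    # Assumption: every polycube padded to an n*n*n bounding box,
    # one byte per voxel, no compression.
    return estimated_count(n) * n ** 3

def theoretical_min_bytes(n: int) -> float:
    # Information-theoretic floor: just enough bits to give each
    # distinct polycube a unique id.
    count = estimated_count(n)
    return count * np.log2(count) / 8

print(f"n=20: ~{naive_npy_bytes(20) / 1e15:.0f} PB naive, "
      f"~{theoretical_min_bytes(20) / 1e12:.0f} TB floor")
```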