
[Discussion] I made a graph that allows me to estimate about how big a cubes_n.npy file will get (in bytes) when given n cubes. #14

Open
TheoCGaming opened this issue Jul 13, 2023 · 4 comments


@TheoCGaming

TheoCGaming commented Jul 13, 2023

https://www.desmos.com/calculator/fea4uymhix
According to this graph, file sizes will get ridiculously large even by the 12th iteration. Perhaps the storage format should be optimized?

@VladimirFokow

VladimirFokow commented Jul 14, 2023

Here is how to optimize the current storage code, though it still doesn't address how fast the sheer number of cubes grows:

np.save(cache_path, np.packbits(np.asarray(polycubes, dtype=np.int8), axis=-1), allow_pickle=False)

notes:

  • it uses np.packbits
  • allow_pickle in np.save defaults to True, but we have no object arrays, so we don't need pickling.
  • the matching np.load should also pass allow_pickle=False. Its output then needs np.unpackbits() with axis=-1, and you'll have to undo the zero-padding that packbits adds when the last-axis size is not a multiple of 8: either pass the count parameter or manually crop the trailing zeros (see the round-trip sketch below).
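
A minimal round-trip sketch of the above, assuming polycubes is a 0/1 array whose last axis holds the voxel bits and cache_path is the existing cache file path:

import numpy as np

# save: pack 8 voxels into each byte along the last axis
arr = np.asarray(polycubes, dtype=np.uint8)
orig_size = arr.shape[-1]  # must be known (or stored) to crop on load
np.save(cache_path, np.packbits(arr, axis=-1), allow_pickle=False)

# load: unpack, then crop off the zero-padding added by packbits
packed = np.load(cache_path, allow_pickle=False)
restored = np.unpackbits(packed, axis=-1, count=orig_size)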

@VladimirFokow

VladimirFokow commented Jul 14, 2023

About storing the cubes

Note:
Here I've ignored

  • lossless compression
  • information theory: I have no idea whether a cleverer encoding could store each polycube in fewer bits, or how one would construct it.

For n=16 (the current record):

The theoretical minimum to store 50 billion DIFFERENT THINGS (not even the polycubes themselves, just their ids):
To uniquely identify each of the 50 billion things we need at least ceil(log2(50e9)) = 36 bits per id. Needed storage space:
~ 50e9 * 36 bits = 225 GB
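
The same arithmetic as a quick check, in the style of the snippets below:

import numpy as np

n_cubes = 50e9                        # ~number of polycubes at n=16
n_bits = np.ceil(np.log2(n_cubes))    # 36 bits per id
need_bytes = n_cubes * n_bits / 8

need_bytes / 1e9  # gigabytes -> ~225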

For n=20:

Let's say we want to make progress until n=20.

Assuming the number of polycubes grows by roughly a factor of 7 with each increment of n.

import numpy as np

n_cubes = 50e9 * 7**4  # approx. number of polycubes at n=20
n_bits = np.log2(n_cubes)
need_bytes = n_cubes * n_bits / 8

need_bytes / 1e12  # terabytes

~ 700 TB

For n=30:

n_cubes = 50e9 * 7**14  # approx.
n_bits = np.log2(n_cubes)
need_bytes = n_cubes * n_bits / 8 

need_bytes / 1e21  # zettabytes

~ 317 ZB

This number is on the scale of the whole internet.

  • So it's safe to assume that nobody will ever want to store all the cubes for large n.
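
The same estimate as a reusable sketch (the base count of 50e9 at n=16 and the 7x growth factor are the assumptions above):

import numpy as np

def estimate_min_storage_bytes(n, base_count=50e9, base_n=16, growth=7):
    # rough lower bound: one log2(count)-bit id per polycube,
    # assuming ~7x growth per increment of n beyond n=16
    n_cubes = base_count * growth ** (n - base_n)
    n_bits = np.log2(n_cubes)
    return n_cubes * n_bits / 8

estimate_min_storage_bytes(20) / 1e12  # ~700 terabytes
estimate_min_storage_bytes(30) / 1e21  # ~317 zettabytes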

@TheoCGaming

TheoCGaming commented Jul 15, 2023

Not to mention that you have to hold all of that in RAM before you actually write it, so if it stays uncompressed, your computer (or program) will crash once the polycubes get big enough. It's not even a matter of "if", it's a matter of "when". Compressing it will only make it crash earlier and may make it run slower.

I'm not against the idea of compression; these are just things to consider.

@VladimirFokow

VladimirFokow commented Jul 15, 2023

you have to store all of that in RAM before you actually write it

You don't actually have to store it all in RAM at the same time.

Polycubes can be processed and counted separately from each other (it will just take longer), and the work can be distributed across multiple machines.

See an example algorithm I wrote here: mikepound/opencubes#7 (comment) (maybe an even better approach exists).
It also links to the paper that describes useful ideas for reaching n=16.
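
As a purely illustrative sketch (not the algorithm from that comment; generate_batches() is a hypothetical generator), the idea of keeping only a bounded amount in RAM looks roughly like this:

import numpy as np

def process_in_batches(generate_batches, out_dir):
    # generate_batches() is assumed to yield (index, boolean array of polycubes)
    # one batch at a time, so only one batch is ever held in RAM
    total = 0
    for i, batch in generate_batches():
        total += len(batch)  # counting doesn't require keeping the batch around
        packed = np.packbits(np.asarray(batch, dtype=np.uint8), axis=-1)
        np.save(f"{out_dir}/batch_{i}.npy", packed, allow_pickle=False)
    return total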
