Theoretically compressing files losslessly by nearly 100%.
Because pi is infinite and never repeats, every possible combination of 1s and 0s should appear in it somewhere (strictly speaking this requires pi to be a normal number, which is widely conjectured but unproven), meaning every file is inside pi somewhere!
This means 1 TB could be compressed to maybe 1 megabyte, which is about 0.0001% of its original size.
Maybe the biggest problem is that our computers are not fast enough to do this with current algorithms (e.g. Chudnovsky). Even with my new method, storing each letter's position won't take more than 5-6 bytes, while the old method could only find one, maybe two letters if lucky.
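A back-of-the-envelope check on that 5-6 byte figure, assuming (my assumption, not stated above) that each letter's pattern is found within the first trillion digits of pi:

```python
import math

# An offset anywhere in the first 10**12 digits of pi needs
# log2(10**12) ~= 39.9 bits to address, i.e. 40 bits = 5 bytes.
bits_needed = math.ceil(math.log2(10 ** 12))
bytes_needed = math.ceil(bits_needed / 8)
print(bits_needed, bytes_needed)  # 40 bits, 5 bytes
```

So 5 bytes per stored offset is consistent with searching a trillion-digit index; a deeper index would push it toward 6 bytes.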
I have no idea; it will probably continue to be impossible for a few more decades... ): Anyway, I will continue to optimize it, for example by lowering the bits required per letter! (:
Well, my first attempt calculated pi in real time while searching it, which made it incredibly slow (20k digits in an hour, slowing down immensely with each search). My new and more successful method requires a txt file with an enormous index of pi digits, but this makes it super fast: importing 100,000 digits of pi at a time is obviously much faster than trying to generate them in real time. The new speed is about 50-200k digits a second!
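A minimal sketch of the chunked-search idea described above, assuming a plain text file `pi_digits.txt` that contains only the digits after the decimal point (the filename and function name are my own placeholders, not from the project):

```python
CHUNK = 100_000  # digits read per pass, matching the description above


def find_in_pi(pattern: str, path: str = "pi_digits.txt") -> int:
    """Return the 0-based digit offset of `pattern` in the index file, or -1."""
    offset = 0  # total digits consumed so far
    tail = ""   # carry the last len(pattern)-1 digits so matches can span chunks
    with open(path) as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                return -1
            window = tail + chunk
            pos = window.find(pattern)
            if pos != -1:
                return offset - len(tail) + pos
            tail = window[-(len(pattern) - 1):] if len(pattern) > 1 else ""
            offset += len(chunk)
```

Reading in fixed-size chunks keeps memory flat no matter how big the index file grows, and the `tail` carry-over makes sure a pattern straddling a chunk boundary is still found.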
Your CPU can most likely calculate pi faster than you can download it, so I recommend y-cruncher. The minimum index size I would suggest is 500 million digits, which translates into about 0.5 GB and only takes around 5 minutes to generate.
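If you just want a small index without installing anything, pure Python can generate digits too. This is a sketch using Gibbons' streaming spigot algorithm (my choice of algorithm, not the project's); it is far too slow for hundreds of millions of digits, which is exactly why y-cruncher is the right tool at that scale:

```python
from itertools import islice


def pi_digits():
    """Yield decimal digits of pi one at a time (Gibbons' streaming spigot)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), 10 * (3 * q + r) // t - 10 * n
        else:
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)


def write_index(path: str, count: int) -> None:
    """Write `count` digits of pi (after the leading 3) to a text file."""
    gen = pi_digits()
    next(gen)  # drop the integer part, "3"
    with open(path, "w") as f:
        f.write("".join(str(d) for d in islice(gen, count)))
```

For example, `write_index("pi_digits.txt", 500_000)` produces a half-megabyte test index in a few seconds.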
https://en.wikipedia.org/wiki/Approximations_of_%CF%80
https://en.wikipedia.org/wiki/Category:Pi_algorithms
http://www.angio.net/pi/whynotpi.html