
Compression Algorithm

SilasLock edited this page Jan 10, 2018 · 6 revisions

I've designed a lossy compression algorithm that outperforms standard JPEG compression for a wide class of images. It uses the locality-preserving properties of Hilbert curves to capitalize on the fact that partial sums of a function's Fourier series, built from the first few coefficients, converge faster when the function is continuous. Because of this, I'm calling the method "Hilbert compression" until I can think of a better name.
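The page doesn't publish the implementation, but the idea it describes can be sketched: read the pixels of a square, power-of-two-sized grayscale image in Hilbert-curve order to get a 1D signal (the curve's locality keeps this signal relatively smooth), take its Fourier transform, and keep only the first fraction of the coefficients. This is a minimal sketch under those assumptions; the function names (`d2xy`, `compress`, `decompress`) and the hard-truncation scheme are mine, not necessarily the author's actual method.

```python
import numpy as np

def d2xy(n, d):
    """Map distance d along a Hilbert curve to (x, y) on an n-by-n grid
    (n a power of two). Standard iterative construction."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:           # rotate the quadrant when needed
            if rx == 1:
                x = s - 1 - x
                y = s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def compress(img, keep_frac):
    """Fourier-truncate the image read in Hilbert order.
    Returns the truncated spectrum and the traversal order."""
    n = img.shape[0]                              # assumes a square image
    order = [d2xy(n, d) for d in range(n * n)]
    signal = np.array([img[y, x] for x, y in order], dtype=float)
    coeffs = np.fft.rfft(signal)
    k = max(1, int(round(len(coeffs) * keep_frac)))
    coeffs[k:] = 0.0                              # drop high frequencies
    return coeffs, order

def decompress(coeffs, order, n):
    """Invert the transform and scatter the signal back along the curve."""
    signal = np.fft.irfft(coeffs, n * n)
    img = np.zeros((n, n))
    for d, (x, y) in enumerate(order):
        img[y, x] = signal[d]
    return img
```

In a real codec the surviving coefficients would still need to be quantized and entropy-coded to realize the size reduction; the sketch only shows the transform-and-truncate step that the continuity argument applies to.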

Here's Hilbert compression compared to JPEG compression:

[Image: comparison of the two methods]

Both are compressed to around 6% of their original size, yet JPEG compression is already starting to show its flaws. The 8 by 8 squares of pixels which are individually compressed in the left-most image have become unpleasantly visible, while Hilbert compression displays no such artifacts.
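The block artifacts described above come from JPEG transforming each 8×8 tile independently, so discarding coefficients creates discontinuities at tile boundaries. That effect can be imitated with a simplified stand-in (real JPEG quantizes coefficients rather than zeroing them, and adds entropy coding; the `keep` cutoff here is my simplification):

```python
import numpy as np

def dct_matrix(n=8):
    """Orthonormal DCT-II basis matrix, as used by JPEG on 8x8 blocks."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2)
    return C * np.sqrt(2.0 / n)

def blockwise_truncate(img, keep=3):
    """Zero every 2D DCT coefficient whose index sum is >= keep,
    independently in each 8x8 block -- the source of block seams."""
    C = dct_matrix(8)
    mask = np.add.outer(np.arange(8), np.arange(8)) < keep
    out = np.zeros(img.shape, dtype=float)
    for i in range(0, img.shape[0], 8):
        for j in range(0, img.shape[1], 8):
            block = img[i:i + 8, j:j + 8].astype(float)
            coef = C @ block @ C.T          # forward 2D DCT
            out[i:i + 8, j:j + 8] = C.T @ (coef * mask) @ C  # inverse
    return out
```

Because each block is truncated in isolation, neighboring blocks disagree at their shared edge; a curve-order transform over the whole image has no such seams to expose.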

Hilbert compression also retains a decent level of image quality for very heavily compressed images:

[Image: comparison at 2% of original size]

The left-most image is the original, while the right-most image has been compressed to a mere 2% of its original size. While there is some loss in quality, it is negligible when one considers how much information is being thrown away.

If you're interested in learning how this compression algorithm works, feel free to contact me.
