Add compression to embedding export #53
Initial check with the use case four fasta, lengths [300, 544, 184, 1584, 518]: the file goes from 37 MB to 26 MB. That's significantly more than I expected and really good for this small sample. The reduced embeddings go from 24 KB to 32 KB, but at that size it's not even a real benchmark. What I don't understand from the documentation is whether the compression is applied across datasets or to each dataset individually; that will most likely decide whether it is at all useful for the reduced embeddings. Code used:

```python
import sys

import h5py
from tqdm import tqdm

lengths = []
with h5py.File(sys.argv[1], "r") as uncompressed, h5py.File(sys.argv[2], "w") as compressed:
    for key, value in tqdm(uncompressed.items()):
        # Per-residue embeddings are 3D; record the sequence length
        if len(value.shape) == 3:
            lengths.append(value.shape[1])
        # Copy every dataset into the new file with gzip compression
        compressed.create_dataset(key, data=value, compression="gzip")
print(lengths)
```
This already sounds exceptionally good. These are actually the biggest files we produce, so we don't even have to worry about zipping results via the pipeline after the run for space reasons if we can compress the embeddings directly. The only other "worthy" big file produced is the pairwise distance matrix (which, on an internal SwissProt-vs-human test, occupies upwards of 10 GB in CSV form; so we might want to save this as h5 soon).
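As a sketch of that last idea (file and dataset names here are made up, not the pipeline's actual output), the pairwise distance matrix could go into a compressed h5 dataset instead of a CSV:

```python
import h5py
import numpy as np

# Hypothetical stand-in for a real pairwise distance matrix
distances = np.random.rand(500, 500).astype(np.float32)

with h5py.File("distances.h5", "w") as f:
    # gzip-compressed, chunked dataset; far smaller on disk than a CSV dump
    f.create_dataset("distances", data=distances, compression="gzip")

# Unlike CSV, a slice can be read back without loading the whole matrix
with h5py.File("distances.h5", "r") as f:
    row = f["distances"][0]
```

Besides size, this also gives cheap partial reads, which a 10 GB CSV cannot.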
New numbers! Reduced embeddings: 148M -> 207M (yes, this file has apparently grown through compression). Normal embeddings: 19G -> 17G; also 17G by just gzipping the whole file, which was faster. Both took a couple of minutes.
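For reference, "gzipping the whole file" can also be done from Python; a minimal self-contained sketch (the demo file and its contents are made up):

```python
import gzip
import os
import shutil

import h5py
import numpy as np

# Create a small demo embeddings file (stand-in for a real 19G one)
with h5py.File("demo.h5", "w") as f:
    f.create_dataset("emb", data=np.zeros((256, 1024), dtype=np.float32))

# Compress the file wholesale, as an alternative to per-dataset compression
with open("demo.h5", "rb") as src, gzip.open("demo.h5.gz", "wb") as dst:
    shutil.copyfileobj(src, dst)

print(os.path.getsize("demo.h5"), os.path.getsize("demo.h5.gz"))
```

Trade-off: per-dataset compression keeps the file directly readable by h5py, while the `.gz` must be decompressed before use.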
Do you still want this now that we have |
I guess that we can zip files, too. Closing for now |
An easy improvement when storing `embeddings_file` and `reduced_embeddings_file`: compression is supported out of the box and may impact speed (but that's acceptable). https://docs.h5py.org/en/stable/high/dataset.html#filter-pipeline
Also, while at it: double-check that the stored datasets use the most fitting dtype.
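A quick way to do that dtype check (file and dataset names are illustrative): downcast to float32 before writing, then verify what actually got stored:

```python
import h5py
import numpy as np

emb = np.random.rand(10, 1024)    # numpy defaults to float64
emb32 = emb.astype(np.float32)    # halves the raw size; usually enough precision for embeddings

with h5py.File("dtype_check.h5", "w") as f:
    f.create_dataset("emb", data=emb32, compression="gzip")

# Inspect the stored dtype without reading the data
with h5py.File("dtype_check.h5", "r") as f:
    stored_dtype = f["emb"].dtype
```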
P.S.: preference for `gzip`.
P.P.S.: it would be nice to run this as a test to see "how much it buys". Easy test: take an h5 file and copy all datasets into a new h5 file, applying compression. Then we see if this is useful...