Investigate if we should skip zipping of parquet dependency table #397
Related to #181
To answer the first question, we create a PARQUET file and a corresponding ZIP file, and compare their sizes.

```python
import audb
import audeer
import os
deps = audb.dependencies("musan", version="1.0.0")
parquet_file = "deps.parquet"
zip_file = "deps.zip"
deps.save(parquet_file)
audeer.create_archive(".", parquet_file, zip_file)
parquet_size = os.stat(parquet_file).st_size
zip_size = os.stat(zip_file).st_size
print(f"Parquet file size: {parquet_size >> 10:.0f} kB")
print(f"Zip file size: {zip_size >> 10:.0f} kB") returns
I repeated it with
Regarding the second question, we would need to change the code block at lines 753 to 759 in fa14acc.
There we could simply upload the PARQUET file directly instead of creating a ZIP archive first. Slightly more complicated will be the case of loading the dependency table, as we might have either a ZIP file or a PARQUET file on the server, which is not ideal. The affected code block is at lines 275 to 282 in fa14acc.
There we could first try to load the PARQUET file (or check if it exists), and otherwise load the ZIP file (see the sketch below). Then there are also two affected parts at lines 492 to 500 and lines 550 to 560 in fa14acc.
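For illustration, a rough sketch of that fallback logic, assuming a backend object that offers exists() and get_file() methods, remote paths like "db/deps.parquet", and that deps.load() accepts the local PARQUET path; all names are hypothetical and the actual code block at fa14acc looks different:

```python
import os

import audeer


def download_dependencies(backend, db_root, version, deps):
    # Sketch only: prefer the PARQUET file on the backend
    # and fall back to the legacy ZIP archive.
    parquet_remote = "db/deps.parquet"
    zip_remote = "db/deps.zip"
    parquet_local = os.path.join(db_root, "deps.parquet")
    if backend.exists(parquet_remote, version):
        # New layout: PARQUET file stored directly on the server
        backend.get_file(parquet_remote, parquet_local, version)
    else:
        # Legacy layout: dependency table wrapped in a ZIP archive
        zip_local = os.path.join(db_root, "deps.zip")
        backend.get_file(zip_remote, zip_local, version)
        audeer.extract_archive(zip_local, db_root)
    deps.load(parquet_local)
    return deps
```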
To answer the third question, I created the benchmark script shown below, which tests different ways to store and load the dependency table of a dataset containing 292,381 files. When running the script, it returns:
The zipped CSV file, currently used to store the dependency table of the same dataset, has a size of 14390 kB.

Benchmark script:

```python
import os
import time
import zipfile
import pyarrow.parquet
import audb
import audeer
parquet_file = "deps.parquet"
zip_file = "deps.zip"
def clear():
    for file in [parquet_file, zip_file]:
        if os.path.exists(file):
            os.remove(file)
deps = audb.dependencies("librispeech", version="3.1.0")
print("parquet snappy")
clear()
t0 = time.time()
table = deps._dataframe_to_table(deps._df, file_column=True)
pyarrow.parquet.write_table(table, parquet_file, compression="snappy")
t = time.time() - t0
print(f"Writing time: {t:.4f} s")
t0 = time.time()
table = pyarrow.parquet.read_table(parquet_file)
df = deps._table_to_dataframe(table)
t = time.time() - t0
print(f"Reading time: {t:.4f} s")
size = os.stat(parquet_file).st_size
print(f"File size: {size >> 10:.0f} kB")
print()
print("parquet snappy + zip no compression")
clear()
t0 = time.time()
table = deps._dataframe_to_table(deps._df, file_column=True)
pyarrow.parquet.write_table(table, parquet_file, compression="snappy")
with zipfile.ZipFile(zip_file, "w", zipfile.ZIP_STORED) as zf:
    full_file = audeer.path(".", parquet_file)
    zf.write(full_file, arcname=parquet_file)
t = time.time() - t0
print(f"Writing time: {t:.4f} s")
t0 = time.time()
audeer.extract_archive(zip_file, ".")
table = pyarrow.parquet.read_table(parquet_file)
df = deps._table_to_dataframe(table)
t = time.time() - t0
print(f"Reading time: {t:.4f} s")
size = os.stat(zip_file).st_size
print(f"File size: {size >> 10:.0f} kB")
print()
print("parquet snappy + zip")
clear()
t0 = time.time()
table = deps._dataframe_to_table(deps._df, file_column=True)
pyarrow.parquet.write_table(table, parquet_file, compression="snappy")
with zipfile.ZipFile(zip_file, "w", zipfile.ZIP_DEFLATED) as zf:
    full_file = audeer.path(".", parquet_file)
    zf.write(full_file, arcname=parquet_file)
t = time.time() - t0
print(f"Writing time: {t:.4f} s")
os.remove(parquet_file)
t0 = time.time()
audeer.extract_archive(zip_file, ".")
table = pyarrow.parquet.read_table(parquet_file)
df = deps._table_to_dataframe(table)
t = time.time() - t0
print(f"Reading time: {t:.4f} s")
size = os.stat(zip_file).st_size
print(f"File size: {size >> 10:.0f} kB")
print()
print("parquet gzip")
clear()
t0 = time.time()
table = deps._dataframe_to_table(deps._df, file_column=True)
pyarrow.parquet.write_table(table, parquet_file, compression="GZIP")
t = time.time() - t0
print(f"Writing time: {t:.4f} s")
t0 = time.time()
table = pyarrow.parquet.read_table(parquet_file)
df = deps._table_to_dataframe(table)
t = time.time() - t0
print(f"Reading time: {t:.4f} s")
size = os.stat(parquet_file).st_size
print(f"File size: {size >> 10:.0f} kB") "zip no compression" is referring to the solution proposed in #181, to still be able to upload the files as ZIP files to the server. In #181 we discuss media files, for which it is important to store them in a ZIP file, as we also have to preserve the underlying folder structure. This is not the case for the dependency table, and also the file extension will always be the same for the dependency table. Our current approach is "parquet snappy + zip". If we switch to any of the other approaches, reading time would be halved. |
In general, I think that disk storage is normally quite cheap, so I would find it a good move to be able to read data faster. So I would be open to departing from "parquet snappy + zip" and optimizing for reading time by going in the snappy direction. The SOV post here also suggests that excessive zipping is for cold data. I think we have something in between - lukewarm data - but CPU is normally more expensive. Apart from that, the SOV post discusses "splittability". Concerning determinism, in order to be able to md5sum a file, I have not been able to find an answer.
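The determinism question could at least be probed empirically with a small sketch like the following (my own addition, not from the discussion): it writes the same dependency table twice with snappy compression and compares the md5 checksums of the resulting files.

```python
import hashlib

import audb
import pyarrow.parquet


def md5sum(path):
    # Compute the md5 checksum of a file
    with open(path, "rb") as fp:
        return hashlib.md5(fp.read()).hexdigest()


deps = audb.dependencies("musan", version="1.0.0")
table = deps._dataframe_to_table(deps._df, file_column=True)
pyarrow.parquet.write_table(table, "deps-a.parquet", compression="snappy")
pyarrow.parquet.write_table(table, "deps-b.parquet", compression="snappy")

# If the two checksums match, the PARQUET output is byte-identical
# for repeated writes of the same table on this machine.
print(md5sum("deps-a.parquet") == md5sum("deps-b.parquet"))
```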
I agree that compressing the PARQUET file with SNAPPY and storing it directly on the backend seems to be the best solution.
We decided to no longer zip the dependency table, and to store it instead directly on the server, as implemented in #398.
In #372 we introduced storing the dependency table as a PARQUET file instead of a CSV file.
When the file is uploaded to the server, a ZIP file is still created first. As PARQUET already comes with compression, we should check:

1. How the size of the PARQUET file compares to the size of the corresponding ZIP file.
2. Which parts of the code would need to be changed to skip zipping.
3. How different compression approaches compare regarding reading and writing time.
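As a side note, which codec a given PARQUET dependency file already uses can be checked from its metadata with pyarrow (a small sketch, assuming a local deps.parquet file):

```python
import pyarrow.parquet

# Read the file metadata and print the compression codec
# of the first column chunk in the first row group.
metadata = pyarrow.parquet.ParquetFile("deps.parquet").metadata
print(metadata.row_group(0).column(0).compression)  # e.g. "SNAPPY"
```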