This repository has been archived by the owner on Sep 1, 2022. It is now read-only.
I would like to sequentially read records from a large GRIB-2 file (too large to read all at once) and extract data for a particular variable. I use NetcdfFile.openInMemory(String, byte[]) to do that. However, because the call writes an index file to disk even for small records, reading the whole file takes hours.
Is there any way to disable the index file creation? Or any way to circumvent openInMemory() by using some lower-level methods?
NetcdfFile.open will write disk indices, but it never reads all the data into memory at once, so normal netCDF calls should work fine no matter how big the data file is.

Use Grib1RecordScanner or Grib2RecordScanner for one-record-at-a-time processing.
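To make the suggestion concrete, here is a minimal sketch of the one-record-at-a-time approach, assuming netcdf-java's grib module is on the classpath. The file path "input.grib2" is a placeholder, and the exact method names (hasNext/next on the scanner, readData on the record) should be verified against the netcdf-java version in use.

```java
import ucar.nc2.grib.grib2.Grib2Record;
import ucar.nc2.grib.grib2.Grib2RecordScanner;
import ucar.unidata.io.RandomAccessFile;

public class ScanGrib2 {
    public static void main(String[] args) throws Exception {
        // Open the GRIB-2 file directly; no CDM index file is created this way.
        try (RandomAccessFile raf = new RandomAccessFile("input.grib2", "r")) {
            Grib2RecordScanner scanner = new Grib2RecordScanner(raf);
            while (scanner.hasNext()) {
                // Each call advances to the next GRIB-2 message in the file.
                Grib2Record gr = scanner.next();
                // Decode this record's values as a flat float array.
                float[] data = gr.readData(raf);
                System.out.printf("discipline=%d npts=%d%n",
                        gr.getDiscipline(), data.length);
            }
        }
    }
}
```

Because each record is decoded independently, the per-record byte ranges could also be handed out to Spark tasks, which is the parallel processing pattern discussed below.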
Thanks, John. Actually I am trying to process all the records in parallel using Spark; the disk reads and writes add a lot of overhead when processing the small records.

Thanks for the suggestion about Grib2RecordScanner. Are there any helper classes you can recommend that make it easy to parse the Grib2Record objects and extract the variable, x, y, time, and values?