
Disable NetcdfFile.openInMemory() writing index file to disk #93

Closed
innuo opened this issue Jan 8, 2015 · 3 comments

@innuo

innuo commented Jan 8, 2015

I would like to sequentially read records from a large GRIB2 file (too large to read all at once) and extract data for a particular variable. I use NetcdfFile.openInMemory(String, byte[]) to do that. However, because the call writes an index file to disk even for small records, reading the whole file takes hours.

Is there any way to disable the index file creation? Or any way to circumvent openInMemory() by using some low-level methods?
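
For reference, a minimal sketch (not from the thread) of the pattern described above, using the NetcdfFile.openInMemory(String, byte[]) overload; the file name is illustrative:

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import ucar.nc2.NetcdfFile;

public class OpenRecordInMemory {
  public static void main(String[] args) throws Exception {
    // Bytes of a single GRIB2 record, obtained elsewhere; the path is illustrative.
    byte[] recordBytes = Files.readAllBytes(Paths.get("one-record.grib2"));

    // This is the call at issue: for GRIB data it also triggers writing
    // a GRIB index file to disk, which dominates the cost for small records.
    NetcdfFile ncf = NetcdfFile.openInMemory("one-record.grib2", recordBytes);
    try {
      System.out.println(ncf.getVariables());
    } finally {
      ncf.close();
    }
  }
}
```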

@JohnLCaron
Collaborator

1. NetcdfFile.open will write disk indices, but it never reads all the data into memory at once, so normal netCDF calls should work just fine no matter how big the data file is.

2. Use Grib1RecordScanner or Grib2RecordScanner for one-record-at-a-time processing (see the sketch below).

John
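
A minimal sketch of the second suggestion, assuming the 4.x-era ucar.nc2.grib.grib2.Grib2RecordScanner API (a constructor taking a ucar.unidata.io.RandomAccessFile, with hasNext()/next()); the file name is illustrative:

```java
import ucar.nc2.grib.grib2.Grib2Record;
import ucar.nc2.grib.grib2.Grib2RecordScanner;
import ucar.unidata.io.RandomAccessFile;

public class ScanGrib2 {
  public static void main(String[] args) throws Exception {
    // Scan the GRIB2 file record by record; no CDM index file is
    // written at this level.
    RandomAccessFile raf = new RandomAccessFile("big.grib2", "r");
    try {
      Grib2RecordScanner scanner = new Grib2RecordScanner(raf);
      int count = 0;
      while (scanner.hasNext()) {
        Grib2Record gr = scanner.next();
        float[] data = gr.readData(raf); // decode just this record's values
        // process data here ...
        count++;
      }
      System.out.println("records read: " + count);
    } finally {
      raf.close();
    }
  }
}
```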


@innuo
Author

innuo commented Jan 9, 2015

Thanks, John. Actually, I am trying to process all the records in parallel using Spark, and the disk reads and writes add a lot of overhead when the records are small.

Thanks for the suggestion about Grib2RecordScanner. Can you recommend any helper classes that make it easy to parse the Grib2Record objects and extract the variables, x, y, time, and values?

@JohnLCaron
Collaborator

We don't officially support those low-level APIs, so you'll have to look at the source, but it should be straightforward.

Just look for other examples that use the record scanners, e.g. GribIndex.
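
Following the GribIndex pattern, a sketch of pulling identifying metadata and decoded values out of each Grib2Record. The method names (getPDSsection, getDiscipline, getReferenceDate, readData) follow the library source of that era; since these low-level APIs are unsupported, verify them against the version you build with:

```java
import ucar.nc2.grib.grib2.Grib2Pds;
import ucar.nc2.grib.grib2.Grib2Record;
import ucar.nc2.grib.grib2.Grib2RecordScanner;
import ucar.unidata.io.RandomAccessFile;

public class DumpGrib2Records {
  public static void main(String[] args) throws Exception {
    RandomAccessFile raf = new RandomAccessFile(args[0], "r");
    try {
      Grib2RecordScanner scanner = new Grib2RecordScanner(raf);
      while (scanner.hasNext()) {
        Grib2Record gr = scanner.next();

        // Identify the variable: GRIB2 keys a parameter by discipline,
        // category, and number rather than by name.
        Grib2Pds pds = gr.getPDSsection().getPDS();
        System.out.printf("discipline=%d category=%d parameter=%d refDate=%s forecast=%d%n",
            gr.getDiscipline(), pds.getParameterCategory(), pds.getParameterNumber(),
            gr.getReferenceDate(), pds.getForecastTime());

        // The decoded grid values for this record; the x/y layout is
        // described by the grid definition section (gr.getGDSsection()).
        float[] values = gr.readData(raf);
        System.out.println("  npoints=" + values.length);
      }
    } finally {
      raf.close();
    }
  }
}
```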

