pre-millennial time-series files no longer need to be broken up into smaller segments... #755
Labels: enhancement (new capability or improved behavior of existing capability)
Previously we broke the pre-millennial time-series files into smaller segments of 250 years each. We are now using 64-bit offset NetCDF for the time-series files, so the old file-size limit no longer applies, and I don't think there is a reason to break them up this way anymore. The resulting file will be about 100 GB, but that isn't as big a problem as it used to be, and it's probably easier to manage one big file than four files, which also forced simulations to be broken up into four segments. Since the model only reads each time segment into memory while it runs, I don't see a reason we still have to split the file now that we can read larger ones.
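A minimal sketch of the format change this relies on, assuming the files are produced with the `netCDF4` Python library (the actual tooling isn't specified here). The format string `NETCDF3_64BIT_OFFSET` selects the 64-bit offset variant, which removes the ~2 GiB total-size limit of classic NetCDF; the file name, variable names, and dimension sizes below are hypothetical.

```python
import numpy as np
from netCDF4 import Dataset

path = "premillennial_timeseries.nc"  # hypothetical output file

# 'NETCDF3_64BIT_OFFSET' lifts the ~2 GiB total-size limit of the
# classic format. Each record along the unlimited dimension must
# still stay under 4 GiB, which a ~100 GB file satisfies as long as
# no single timestep exceeds that.
with Dataset(path, "w", format="NETCDF3_64BIT_OFFSET") as ds:
    ds.createDimension("time", None)        # unlimited record dimension
    ds.createDimension("gridcell", 1000)    # hypothetical spatial size
    tbot = ds.createVariable("TBOT", "f4", ("time", "gridcell"))
    tbot[0, :] = np.zeros(1000, dtype="f4") # one example record

# Confirm the on-disk format really is 64-bit offset.
with Dataset(path) as ds:
    assert ds.data_model == "NETCDF3_64BIT_OFFSET"
```

From the command line, `ncdump -k <file>` reports the same information (it prints `64-bit offset` for this variant).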
@dlawrenncar @lawrencepj1