Even for high-frequency data, the 'ts' output option still averages over the specified sub-annual range. It would be nice to be able to save the data at every timestep at which it was computed. This is a necessary condition for using aospy-generated data as input for other aospy functions, cf. #3. The logic should be relatively straightforward to implement: compute the function at each timestep as usual, and then just skip all further time reductions.
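The "skip all further time reductions" idea can be sketched as an extra branch in the time-reduction step. This is a hypothetical sketch using xarray, not aospy's actual API: `compute_field`, `reduce_time`, and the `'full_ts'` option name are all assumptions for illustration.

```python
import numpy as np
import pandas as pd
import xarray as xr


def compute_field(ds):
    # Placeholder for an aospy-computed function (hypothetical):
    # here, surface temperature converted from K to degC at every timestep.
    return ds["t_surf"] - 273.15


def reduce_time(arr, dtype_out_time):
    # 'ts' today averages over the sub-annual range; a hypothetical
    # 'full_ts' option would return the data at native time resolution,
    # skipping all further time reductions.
    if dtype_out_time == "full_ts":
        return arr
    if dtype_out_time == "ts":
        return arr.mean("time")
    raise ValueError(f"unknown time reduction: {dtype_out_time!r}")


# Synthetic 6-hourly data for demonstration.
times = pd.date_range("2000-01-01", periods=8, freq="6h")
ds = xr.Dataset({"t_surf": ("time", 280.0 + np.arange(8.0))},
                coords={"time": times})

full = reduce_time(compute_field(ds), "full_ts")  # keeps all 8 timesteps
avg = reduce_time(compute_field(ds), "ts")        # collapses the time dim
```

With this shape, the full-resolution output retains its `time` coordinate, which is exactly what downstream aospy computations would need to consume it.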
Perhaps not right away, but ultimately there would need to be the ability to specify (or automatically determine) time chunks for splitting the output into different files. For high-frequency data, e.g. 3-hourly or 6-hourly, even a 1-2 yr file is on the order of 1 GB, so without some chunking the files will become huge for longer time durations.
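A simple version of the chunking could split the full timeseries by calendar year, one output file per year. Again a hypothetical sketch with xarray, not aospy code; `make_path` and `chunk_by_year` are invented names for illustration, and the actual write (e.g. via `to_netcdf`) is left to the caller.

```python
import numpy as np
import pandas as pd
import xarray as xr


def make_path(name, year):
    # Hypothetical naming scheme for per-year output files.
    return f"{name}.{year}.nc"


def chunk_by_year(arr, name):
    """Yield (path, yearly chunk) pairs, ready for chunk.to_netcdf(path)."""
    for year, chunk in arr.groupby("time.year"):
        yield make_path(name, int(year)), chunk


# Two years of synthetic 6-hourly data.
times = pd.date_range("2000-01-01", "2001-12-31", freq="6h")
arr = xr.DataArray(np.random.rand(times.size), coords={"time": times},
                   dims="time", name="precip")

chunks = dict(chunk_by_year(arr, "precip"))  # {'precip.2000.nc': ..., ...}
```

Fixed calendar-year chunks keep each file to roughly 1 GB at 6-hourly resolution; an "automatically determine" variant could instead target a byte budget per file and group years accordingly.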