Example on using preprocess with mfdataset #2313
There is a related question on Stack Overflow. I think it would be a good idea to add an example to our docs.
Edit: copied and pasted from a duplicate issue I opened; closing that and moving the conversation here.

@jhamman's Stack Overflow answer from circa 2018 helped me this week: https://stackoverflow.com/a/51714004/6046019

I wonder whether it's worth providing an example (not sure where) of how to use preprocess. Perhaps add an Examples entry to the docstring? (http://xarray.pydata.org/en/latest/generated/xarray.open_mfdataset.html / Line 895 in 5296ed1)
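For a docstring Examples entry, a minimal self-contained sketch might look like the following. Everything here is invented for illustration (the file paths, the `tas`/`extra` variable names, the `keep_tas` helper); the point is only that `preprocess` runs on each file's dataset before the datasets are combined:

```python
import os
import tempfile

import numpy as np
import pandas as pd
import xarray as xr

# Write two small yearly files to stand in for a real multi-file dataset
# (paths and variable names are made up for this sketch).
tmpdir = tempfile.mkdtemp()
paths = []
for year in (2000, 2001):
    time = pd.date_range(f"{year}-01-01", periods=3)
    ds = xr.Dataset(
        {"tas": ("time", np.arange(3.0) + year), "extra": ((), 0.0)},
        coords={"time": time},
    )
    path = os.path.join(tmpdir, f"tas_{year}.nc")
    ds.to_netcdf(path)
    paths.append(path)


def keep_tas(ds):
    # preprocess is applied to each file's dataset before concatenation;
    # here we drop everything except the one variable we care about.
    return ds[["tas"]]


combined = xr.open_mfdataset(sorted(paths), preprocess=keep_tas, combine="by_coords")
```

After opening, `combined` holds only `tas`, concatenated along `time` across both files.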
While not a small example (as the remote files are large), this is how I used it:
with one file looking like:
A smaller example could be (WIP; note I was hoping ds would concatenate along t, but it doesn't do what I expect):
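A guess at what the smaller example may have run into: if each file carries `t` as a scalar coordinate rather than a dimension, concatenation along `t` is not automatic, and `preprocess` can promote it with `expand_dims` first. A sketch with invented files and variable names:

```python
import os
import tempfile

import numpy as np
import xarray as xr

# Each fabricated file has a scalar "t" coordinate, not a "t" dimension.
tmpdir = tempfile.mkdtemp()
paths = []
for t in (0, 1, 2):
    ds = xr.Dataset(
        {"a": ("x", np.full(4, float(t)))},
        coords={"t": t, "x": np.arange(4)},
    )
    path = os.path.join(tmpdir, f"file_{t}.nc")
    ds.to_netcdf(path)
    paths.append(path)

# Promote the scalar "t" to a length-1 dimension in preprocess so that
# nested combination can concatenate the files along it.
combined = xr.open_mfdataset(
    sorted(paths),
    preprocess=lambda ds: ds.expand_dims("t"),
    combine="nested",
    concat_dim="t",
)
```

The result has `t` as a real dimension of length 3, with `x` untouched.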
Seconding @dcherian's comment in #4901 on an example for
On that note, the example above seems to work with some slight changes:
Hello: I have to find the maximum precipitation of each year (for example, 2007 and 2008; dataset links: 2007 and 2008). I have done this using the resample method. Following along with the SO answer, I am wondering if I can use preprocess to find the maximum (or minimum, or average) for each file first and then concatenate along the time dimension. I tried the following code and was not successful. Can someone help me with this?
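One way to attempt this (a sketch, not tested against the linked datasets; the `precip` variable name, the file layout, and the `yearly_max` helper are all assumptions) is to reduce each file inside `preprocess` and attach a `year` coordinate to concatenate along:

```python
import os
import tempfile

import numpy as np
import pandas as pd
import xarray as xr

# Fabricate one small file per year; "precip" is an assumed variable name.
tmpdir = tempfile.mkdtemp()
paths = []
for year in (2007, 2008):
    time = pd.date_range(f"{year}-01-01", periods=5)
    data = np.arange(5.0).reshape(5, 1, 1) + (year - 2007) * 10
    ds = xr.Dataset(
        {"precip": (("time", "y", "x"), data)},
        coords={"time": time},
    )
    path = os.path.join(tmpdir, f"precip_{year}.nc")
    ds.to_netcdf(path)
    paths.append(path)


def yearly_max(ds):
    # Reduce each file to its maximum over time, then add a length-1
    # "year" dimension so the reduced datasets can be concatenated.
    year = int(ds["time.year"].values[0])
    return ds.max("time").expand_dims(year=[year])


result = xr.open_mfdataset(
    sorted(paths), preprocess=yearly_max, combine="nested", concat_dim="year"
)
```

Each file is collapsed to its per-year maximum before the files are stitched together along the new `year` dimension, so the `time` dimension no longer appears in the result.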
I bet you need to
I'd like to work on this, @TomNicholas. Where do I start?
I wrote this little notebook today while trying to get some satellite data into a form that was nice to work with: https://gist.github.com/dcherian/66269bc2b36c2bc427897590d08472d7
I think it would make a useful example for the docs.
A few questions:
Also open to other feedback...
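One pattern that kind of satellite workflow often relies on (a sketch with made-up file names, not a summary of the notebook itself): xarray records each file's path in `ds.encoding["source"]`, so `preprocess` can parse a timestamp out of the filename and promote it to a concatenable `time` dimension.

```python
import os
import re
import tempfile

import numpy as np
import pandas as pd
import xarray as xr

# Fabricate two granules whose only time information is in the filename.
tmpdir = tempfile.mkdtemp()
paths = []
for day in ("20200101", "20200102"):
    ds = xr.Dataset({"radiance": (("y", "x"), np.ones((2, 2)))})
    path = os.path.join(tmpdir, f"granule_{day}.nc")
    ds.to_netcdf(path)
    paths.append(path)


def add_time_from_filename(ds):
    # xarray stores the source path in ds.encoding["source"]; pull the
    # 8-digit date stamp out of the basename and make it a "time" dim.
    stamp = re.search(r"(\d{8})", os.path.basename(ds.encoding["source"])).group(1)
    return ds.expand_dims(time=[pd.Timestamp(stamp)])


by_time = xr.open_mfdataset(
    sorted(paths),
    preprocess=add_time_from_filename,
    combine="nested",
    concat_dim="time",
)
```

This keeps the per-file logic in one function, which is exactly the kind of thing a docs example could showcase.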