ENH: add iris/sigmet xarray backend #520
Pinging some folks who might be interested in testing this branch. Any comments, suggestions, and other reports are very much appreciated. This will need quite a bit more work before merge. Todo:
Hello Kai! I tried to test the functions you provided in the first post. The second option failed with this (truncated) traceback:

```
~/wradlib-iris_backend/wradlib-iris_backend/wradlib/io/iris.py in open_iris_mfdataset(filename_or_obj, group, **kwargs)
~/wradlib-iris_backend/wradlib-iris_backend/wradlib/io/xarray.py in open_radar_mfdataset(paths, **kwargs)
~/wradlib-iris_backend/wradlib-iris_backend/wradlib/io/xarray.py in _get_h5group_names(filename, engine)
ValueError: wradlib: unknown engine
```

The second option kind of confused me. Hope this helps. Maybe you can comment on what I am doing wrong when calling the second option.
Codecov Report
```diff
@@            Coverage Diff             @@
##             main     #520      +/-   ##
==========================================
- Coverage   89.43%   88.46%   -0.98%
==========================================
  Files          36       36
  Lines        8592     8874     +282
==========================================
+ Hits         7684     7850     +166
- Misses        908     1024     +116
```
@jorahu I've fixed some issues and the tests. It should work a bit faster too, since the coordinate data (azimuth, elevation etc.) is only decoded once per sweep.
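The decode-once behaviour mentioned above is essentially per-sweep caching. A minimal sketch of that pattern, assuming one object per sweep; the class and attribute names here are illustrative, not wradlib's actual code:

```python
from functools import cached_property

import numpy as np

class SweepCoords:
    """Decode the angular coordinates of one sweep at most once."""

    def __init__(self, raw):
        self.raw = raw
        self.decode_calls = 0  # only for demonstration

    @cached_property
    def azimuth(self):
        # the (potentially expensive) decode runs on first access only;
        # subsequent accesses return the cached array
        self.decode_calls += 1
        return np.frombuffer(self.raw, dtype="f8")

sweep = SweepCoords(np.arange(4, dtype="f8").tobytes())
_ = sweep.azimuth  # decodes
_ = sweep.azimuth  # served from cache
print(sweep.decode_calls)  # 1
```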
This is now ready for merge. It might still have some rough edges, though.
Sorry for the delayed answer, I was out of office for some time. Thanks for the detailed notebook, it worked well, except for some files that I was unable to read for some reason:

```
TypeError: The DType <class 'numpy.dtype[void]'> could not be promoted by <class 'numpy.dtype[float64]'>. This means that no common DType exists for the given inputs. For example they cannot be stored in a single array unless the dtype is
```

Got the same error while trying to read RHI files. Other options seemed to work without issues. But am I missing something about the overall metadata? RAW files include a myriad of metadata, but I could not find it in the xarray datasets (the RadarVolume objects). Is it just not there (yet?) or can I just not find it?
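For context, this promotion failure can be reproduced with plain numpy: a structured (void) dtype has no common type with float64. The record layout below is illustrative, not the arrays wradlib actually builds:

```python
import numpy as np

# a structured record array, similar to what decoding raw ray headers yields
rays = np.zeros(3, dtype=[("time", "f8"), ("azimuth", "f4")])

try:
    # asking numpy for a common dtype of void and float64 fails
    np.result_type(rays.dtype, np.float64)
except TypeError as err:
    print(f"promotion failed: {err}")
```

Any operation that tries to combine such a structured array with a plain float array (concatenation, stacking, filling with NaN) will trigger the same TypeError.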
@jorahu Thanks for testing. Are you able to share some of the problematic files? Regarding metadata, it's not there at the moment. Let's see what WMO/CfRadial2 will bring us: https://community.wmo.int/wmo-jet-owr-seminar-series-weather-radar-data-exchange
Sent you 2 files by email.
@jorahu Those files have been very helpful. OK, the issue is that for the ODIM/GAMIC reader I've introduced a magic reindexing of the major angle (azimuth or elevation) to overcome issues with jittering values in subsequent timesteps. Now the problem is that the default tolerance value (0.4) does not work for all datasets, so I added another kwarg to parametrize it.
This will not reindex the angle and will return the data as is. If you give a floating point value, the angle will be reindexed under some assumptions (resolution, number of rays etc.) using a nearest-neighbour search. In some cases this search doesn't find a value within the tolerance and fills the voids with NaN. That fires the error you are seeing, because the time array is then filled with NaT in some places. With your data it doesn't work at all for any given tolerance, so there might be another issue/bug in the processing. I'll dig further.
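The nearest-neighbour reindexing described above can be sketched in a few lines of numpy. This is a simplified illustration of the mechanism, not wradlib's implementation; the function name, the `span` parameter (restricting the grid to a small sector for the demo), and the grid construction are all assumptions:

```python
import numpy as np

def reindex_angle(angle, values, res=1.0, tolerance=0.4, span=360.0):
    """Snap measured rays onto a regular angle grid via nearest neighbour,
    filling grid slots with no match inside `tolerance` with NaN."""
    grid = np.arange(res / 2, span, res)  # expected beam centers
    # index of the nearest measured angle for every grid point
    idx = np.abs(angle[None, :] - grid[:, None]).argmin(axis=1)
    out = values[idx].astype(float)
    # mask grid points whose nearest ray is farther away than the tolerance
    out[np.abs(angle[idx] - grid) > tolerance] = np.nan
    return out

# three jittered rays; there is no measurement near the 2.5 deg slot,
# so that slot becomes NaN (and a NaT if the values were times)
angle = np.array([0.6, 1.4, 3.6])
vals = np.array([10.0, 20.0, 30.0])
print(reindex_angle(angle, vals, res=1.0, span=4.0))
```

With a tolerance that is too small for the jitter in a given file, many slots end up NaN, which matches the failure mode described above.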
I take that back, maybe it is introduced by this part of the code:

```python
# calculate beam center values
zero_index = np.where(azi_stop < azi_start)
azi_stop[zero_index[0]] += 360
az = (azi_start + azi_stop) / 2.0
az[az >= 360] -= 360
```
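To illustrate what that snippet computes, here is a self-contained numpy run with made-up ray limits, including one ray crossing north (the 0/360 boundary):

```python
import numpy as np

# start/stop azimuth per ray; the last ray crosses the 0/360 boundary
azi_start = np.array([10.0, 20.0, 359.0])
azi_stop = np.array([20.0, 30.0, 1.0])

# unwrap rays whose stop angle is smaller than their start angle
zero_index = np.where(azi_stop < azi_start)
azi_stop[zero_index[0]] += 360

# beam center is the midpoint, folded back into [0, 360)
az = (azi_start + azi_stop) / 2.0
az[az >= 360] -= 360

print(az)  # [15. 25.  0.]
```

The last ray becomes (359 + 361) / 2 = 360, which folds back to 0. If `azi_start` and `azi_stop` are inconsistent with each other, this midpoint is where the error would surface.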
Root cause is finally some inconsistencies in the azimuth values of both azi_start (left) and azi_stop (right).
It will not be an easy task to work around this. Maybe I have to apply the reindexing to both arrays separately before merging…
@jorahu I changed the reader implementation a bit, but the output should be the same.
Ah, the "0" rays: I just drop them now, as they do not contain any data. That should be fixed by reindexing anyway.
@jorahu You might test the latest changes in this branch. I think we are approaching a usable state.
This error was induced by It is checked if
Final step: split and recreated the commits in a reasonable order.
…lisecond resolution if available, fix `decode_time` to correctly decode milliseconds and timezone, minor fixes
Another set of minor tweaks. Moved xarray Variable creation from

@jorahu Would it be possible to get permission to use the two files (PPI/RHI) as test data via
@jorahu I'll merge this into main now, as we've reached a usable state. An example notebook will be added to https://github.com/wradlib/wradlib-notebooks soon. Please open an issue if you find any flaws in the implementation. The plan is to get this out with wradlib 1.12.
Sorry for the delay again. Yes, of course you can use the files. I can provide more if needed (to test the mfdataset/timeseries reading, for example).
@jorahu I'll add the two files for testing to the wradlib-data repo, thanks! Great to hear that no immediate crashes have been observed. The problem with reading multiple files with different scan setups will need some testing. From your explanation I think this might not be an easy task; it might even not be possible at all. At least your proposed solution to only read similar files together should work well. For testing the multi-file reader it would be great to have a one- or two-hour dataset, PPI and RHI. It would be nice if there were some precipitation going on over that time. The idea would be to make a dedicated repository (
This PR covers #361.

- `open_iris_dataset(filename)`, `open_iris_mfdataset(filename)` - read sweeps into `RadarVolume`
- `xr.open_dataset(filename, engine="iris", group=1)`, `xr.open_mfdataset(filename, engine="iris", group=1)` - read sweep (group) into `xr.Dataset`

Please refer to the function docstrings for further explanations.

Update: Fixed group numbers, they are int.