Handle archives when downloading met data #114

Closed
buseyet opened this issue Feb 22, 2019 · 5 comments

buseyet commented Feb 22, 2019

When I run the GIS4WRF Amsterdam tutorial for 1 day and 6 hours (from July 15, 2018, at 12:00 to July 16, 2018, at 18:00), I get a Python error:

RuntimeError: /path/to/file/gis4wrf/datasets/met/ds083.3/Analysis/201807151200-201807161800/gdas1.fnl0p25.2018071512-2018071618.f00.grib2.spasub.yetisti348231.tar is a grib file, but no raster dataset was successfully identified.

It creates a tar file because the file is large.

I am using macOS 10.13.6.

Is there a limit on the time range or area when using GIS4WRF? I will need to run it for a whole year later on.

dmey (Contributor) commented Feb 24, 2019

We currently do not handle this case -- that is, when met data are downloaded as an archive.

This is definitely something that we will look into implementing in the next couple of releases, but I cannot give you a definite date at the moment.

If this is a blocking issue for you at the moment, I would suggest configuring the domain in GIS4WRF so that you can get the extent of your domain, and then downloading the met data manually from NCAR RDA (you can set the extent there too, so you won't have to download data for the whole globe). After you have downloaded the data, uncompress them and move all your grib files under gis4wrf/datasets/met/ds083.3/Analysis/<START_DATE>-<END_DATE>/, where <START_DATE>-<END_DATE> is the name of the folder in the format YYYYMMDDHHMM-YYYYMMDDHHMM, e.g. 201807151200-201807161800 for the data downloaded in the tutorial. A rough sketch of these steps is shown below.
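As an illustration only (the archive name and target folder below are placeholders for your own setup, not paths created by GIS4WRF), the manual unpacking step could look like this:

import tarfile
from pathlib import Path

# Placeholder paths -- adjust to where you saved the RDA download and to your
# own gis4wrf datasets folder and date range.
archive = Path.home() / 'Downloads' / 'your_rda_subset_download.tar'
target = Path.home() / 'gis4wrf' / 'datasets' / 'met' / 'ds083.3' / 'Analysis' / '201807151200-201807161800'
target.mkdir(parents=True, exist_ok=True)

# Unpack the grib files from the archive into the folder GIS4WRF expects.
with tarfile.open(archive) as tar:
    tar.extractall(target)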

Having said that, we would also welcome any contributions to the project. You can start by looking at the Development notes or at the two functions below, which are responsible for the behaviour you are asking about, to familiarise yourself with the code:

def rda_download_dataset(request_id: str, auth: tuple, path: Path) -> Iterable[Tuple[float,float,str]]:
    path_tmp = path.with_name(path.name + '_tmp')
    if path_tmp.exists():
        remove_dir(path_tmp)
    path_tmp.mkdir(parents=True)
    urls = rda_get_urls_from_request_id(request_id, auth)
    with requests_retry_session() as session:
        login_data = {'email': auth[0], 'passwd': auth[1], 'action': 'login'}
        response = session.post('https://rda.ucar.edu/cgi-bin/login', login_data)
        response.raise_for_status()
        for i, url in enumerate(urls):
            file_name = url.split('/')[-1]
            for file_progress in download_file_with_progress(url, path_tmp / file_name, session=session):
                dataset_progress = (i + file_progress) / len(urls)
                yield dataset_progress, file_progress, url

and

def download_file_with_progress(url: str, path: str, session=None) -> Iterable[float]:
    new_session = session is None
    if new_session:
        session = requests_retry_session()
    try:
        response = session.get(url, stream=True)
        response.raise_for_status()
        total = response.headers.get('content-length')
        if total is not None:
            total = int(total)
        downloaded = 0
        with open(path, 'wb') as f:
            for data in response.iter_content(chunk_size=1024*1024):
                downloaded += len(data)
                f.write(data)
                if total is not None:
                    yield downloaded / total
        if total is None:
            yield 1.0
        else:
            assert total == downloaded, f'Did not receive all data: {total} != {downloaded}'
    finally:
        if new_session:
            session.close()
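For anyone wanting to pick this up: one possible approach (a sketch only, not the project's actual design; the helper name is an assumption) would be to check each file after download and unpack it in place when it turns out to be a tar archive, for example:

import tarfile
from pathlib import Path

def extract_archive_if_needed(path: Path) -> None:
    # Hypothetical helper: RDA subset requests may deliver the grib files wrapped
    # in a tar archive; if so, unpack its members next to it and remove the archive.
    if tarfile.is_tarfile(path):
        with tarfile.open(path) as tar:
            tar.extractall(path.parent)
        path.unlink()

Such a helper could be called from the download loop in rda_download_dataset once download_file_with_progress has completed for each url.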

dmey added the bug and enhancement labels on Feb 24, 2019
dmey added this to the Future milestone on Feb 24, 2019
dmey changed the title from "Longer time interval than the tutorial" to "Handle archives when downloading met data" on Feb 24, 2019

pulsinger commented Feb 26, 2019

I get the same runtime error as above. When trying to run a new custom domain (e.g. in Germany), the met data (from NCAR, ds083.3) are downloaded successfully for the selected domain and time range. However, they remain as a .tar file in the specific <START_DATE>-<END_DATE> folder and are not extracted. Hence, metgrid cannot locate and map the files automatically and no VTables are generated.

Here is the error:
RuntimeError: C:\Users******\Documents\gis4wrf\datasets\met\ds083.3\Analysis\201806010000-201806080000\gdas1.fnl0p25.2018060100-2018060800.f00.grib2.spasub.******349000.tar is a grib file, but no raster dataset was successfully identified

The data from NCAR are actually there, but the whole process stops because of the above. The data cover just 1 week.

There is an option to add the met data manually for the domain, however it requires the proper VTable, which is not generated in the previous step.

buseyet (Author) commented Feb 26, 2019

The above solution works: download the data from the NCAR RDA website, put the unzipped files in a folder named <START_DATE>-<END_DATE> under met/ds083.3/Analysis, then in the model choose that folder and click "use dataset selection from list".

@pulsinger

It worked when I unzipped the data, saved the project as a GIS4WRF project, quit QGIS, and started it again. Then the data were visible again and I could run the model.

letmaik modified the milestones: Future, 0.14.0 on Mar 9, 2019
letmaik closed this as completed in 9376e0a on Mar 9, 2019
letmaik (Contributor) commented Mar 9, 2019

This issue is fixed and will be part of the next release.
