This preprocessing library takes source 1-meter digital elevation model (DEM) data and splits, crops, buffers, and reprojects it to individual hydrologic basins (identified by their unique identifier, the "HUC12" ID).
It also produces ancillary data products corresponding to each new HUC12 DEM raster, describing its sub-basins (i.e. "catchments"), its streams (i.e. "flowlines"), and the roughness of each streambed.
Taken together, these are the major inputs needed to run GeoFlood, which creates short-term flood projections.
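To give a rough sense of what the crop-and-buffer step involves, here is a minimal sketch using geopandas and rasterio. This is not dem2basin's own implementation; the filenames, the example HUC12 ID, and the "HUC12" field name are assumptions made for the example, and the source DEM is assumed to be in a projected CRS with units of meters.

```python
# Minimal sketch (not dem2basin's own code): crop a source DEM to one HUC12
# basin, buffered by 500 m, and write the result. Filenames, the example
# HUC12 ID, and the "HUC12" field name are illustrative assumptions.
import geopandas as gpd
import rasterio
from rasterio.mask import mask

# Read the WBD HUC12 boundaries and select a single basin by its HUC12 ID
huc12s = gpd.read_file("WBD-HUC12s.shp")
basin = huc12s.loc[huc12s["HUC12"] == "120902050408"]

with rasterio.open("source_dem_mosaic.tif") as src:
    # Buffer the basin boundary by 500 m in the DEM's CRS before cropping
    buffered = basin.to_crs(src.crs).buffer(500)
    clipped, transform = mask(src, buffered, crop=True)
    profile = src.profile.copy()
    profile.update(height=clipped.shape[1], width=clipped.shape[2],
                   transform=transform)

# Write the cropped-and-buffered DEM for this HUC12; reprojection to the
# study area's CRS would follow, e.g. with rasterio.warp.reproject
with rasterio.open("120902050408_dem.tif", "w", **profile) as dst:
    dst.write(clipped)
```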
The underlying Python library is available on PyPI and can be installed with:

```bash
pip install dem2basin
```
The recommended way to run `dem2basin.py`:

```bash
python3 dem2basin.py \
    --shapefile study_area_polygon.shp \
    --huc12 WBD-HUC12s.shp \
    --nhd NHD_catchments_and_flowlines.gdb/ \
    --raster TNRIS-LIDAR-Datasets/ \
    --availability TNRIS-LIDAR-Dataset_availability.shp \
    --directory HUC12-DEM_outputs/ \
    --restart dem2basin-study_area.pickle
```
There are 5 required inputs; the remaining arguments are optional.
- `--shapefile`
  Study area polygon vector GIS file: a vector GIS file of a single polygon which defines the study area
- `--huc12`
  USGS Watershed Boundary Dataset (WBD): a HUC12 vector GIS file from USGS's WBD or a subset of the WBD
- `--nhd`
  NHD Medium Resolution (MR): a GeoDataBase of NHD MR catchments and flowlines
- `--raster`
  TNRIS Lidar: the parent directory of a collection of TNRIS Lidar datasets
- `--availability`
  TNRIS Lidar availability: a vector GIS file of TNRIS Lidar availability, provided by TNRIS here
- `--directory`
  Outputs directory: a directory to store outputs, each sorted by HUC12
- `--restart`
  Restart file: a Python pickle file from which you can restart the preprocessing if it is interrupted
- `--overwrite`
  Overwrite flag: optional flag to overwrite all files found in the output directory
- `--overwrite_rasters`
  Overwrite rasters flag: optional flag to overwrite just the raster outputs
- `--overwrite_flowlines`
  Overwrite flowlines flag: optional flag to overwrite just the flowline outputs
- `--overwrite_catchments`
  Overwrite catchments flag: optional flag to overwrite just the catchment outputs
- `--overwrite_roughnesses`
  Overwrite roughness table flag: optional flag to overwrite the roughness table
- `--log`
  Log file: a file to store the runtime log
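If you want to launch the same run from Python, for example to batch over several study areas, one straightforward approach is to call the script as a subprocess. All paths below are placeholders, and only the command-line arguments documented above are used.

```python
# Launch dem2basin.py as a subprocess; the paths are placeholders and only the
# documented command-line arguments are used.
import subprocess

cmd = [
    "python3", "dem2basin.py",
    "--shapefile", "study_area_polygon.shp",
    "--huc12", "WBD-HUC12s.shp",
    "--nhd", "NHD_catchments_and_flowlines.gdb/",
    "--raster", "TNRIS-LIDAR-Datasets/",
    "--availability", "TNRIS-LIDAR-Dataset_availability.shp",
    "--directory", "HUC12-DEM_outputs/",
    "--restart", "dem2basin-study_area.pickle",
    "--log", "dem2basin-study_area.log",
]
subprocess.run(cmd, check=True)
```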
There are 4 outputs per HUC12.
- Cropped & buffered DEM:
    - buffered by 500 m
    - cropped to each HUC12 intersecting the study area
    - at least 1 m resolution
    - mosaicked with preference for lowest resolution tiles
    - reprojected to the study area's projection
- Corresponding NHD MR flowlines:
    - subset of NHD MR flowlines
    - each flowline's median point along the line lies within the HUC12 (see the sketch after this list)
    - reprojected to the study area's projection
- Corresponding NHD MR catchments:
    - subset of NHD MR catchments
    - correspond with the NHD MR flowlines above
    - reprojected to the study area's projection
- Manning's n roughness table:
    - organized by flowline using their ComIDs
    - roughness values vary by stream order
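To make two of these outputs concrete, here is a simplified sketch of selecting the flowlines whose median point falls inside a HUC12 and assigning a Manning's n to each flowline by stream order. It is not dem2basin's own code; the layer name, the field names (`HUC12`, `COMID`, `StreamOrde`), the example HUC12 ID, and the roughness values are assumptions made for the example.

```python
# Simplified sketch (not dem2basin's own code) of the flowline selection and
# roughness-table outputs. Layer name, field names, the example HUC12 ID, and
# the Manning's n values are illustrative assumptions.
import geopandas as gpd
import pandas as pd

# Select one HUC12 basin and read the NHD MR flowlines (layer name assumed)
huc12s = gpd.read_file("WBD-HUC12s.shp")
basin = huc12s[huc12s["HUC12"] == "120902050408"]
flowlines = gpd.read_file("NHD_catchments_and_flowlines.gdb/", layer="Flowline")
flowlines = flowlines.to_crs(basin.crs)

# Point halfway along each flowline (its "median point"), tested against the basin
midpoints = flowlines.geometry.interpolate(0.5, normalized=True)
inside = midpoints.within(basin.geometry.unary_union)
basin_flowlines = flowlines[inside]

# Manning's n roughness, keyed by ComID and varying with stream order
n_by_order = {1: 0.06, 2: 0.05, 3: 0.045, 4: 0.04, 5: 0.035}
roughness = pd.DataFrame({
    "COMID": basin_flowlines["COMID"],
    "StreamOrde": basin_flowlines["StreamOrde"],
})
roughness["Roughness"] = roughness["StreamOrde"].map(n_by_order)
```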
Here is an example of these outputs, originally visualized by Prof David Maidment.
Preprocessed DEMs are already available for the vast majority of Texas's HUC12s if you are a TACC user. You can request a TACC account here.
- The DEMs are not provided for any HUC12s that have any gap in 1-meter resolution data.
- All of the DEMs are reprojected to WGS 84 / UTM 14N, even if the HUC12 lies outside of UTM zone 14.
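As a quick sanity check on any DEM you pull from there, you can confirm its CRS and resolution with rasterio. The path below is just a placeholder, not the actual file layout; EPSG:32614 is the code for WGS 84 / UTM zone 14N.

```python
# Confirm a delivered HUC12 DEM's CRS and resolution; the path is a placeholder.
import rasterio

with rasterio.open("TX-HUC12-DEM_outputs/120902050408.tif") as dem:
    print(dem.crs)   # expect EPSG:32614 (WGS 84 / UTM zone 14N)
    print(dem.res)   # pixel size in meters; 1 m or finer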
The DEMs are located on Stampede2 at `/scratch/projects/tnris/dhl-flood-modelling/TX-HUC12-DEM_outputs`.
Please submit a ticket if you have trouble accessing this data. You may also contact me directly at @dhardestylewis or dhl@tacc.utexas.edu.
These HUC12 DEMs are available right now on Stampede2.
These HUC12 DEMs have been successfully preprocessed in the past, and will soon be available once again on Stampede2. If you need any of these right now, please contact me.
If you would like a better understanding of the preprocessing workflow, I provide a simplified but representative example in this Jupyter notebook. The notebook was presented at the inaugural TACC Institute on Planet Texas 2050 Cyberecosystem Tools in August 2020. Please contact me if you would like a recording.