PAVICS climate datasets are hosted on a THREDDS data server at https://pavics.ouranos.ca/thredds. Although THREDDS provides a user interface for browsing datasets, it is often more practical to navigate the catalogue programmatically. This tutorial introduces the siphon library to browse the THREDDS catalog, and xarray to open a streaming connection to the remote data.
More specifically, the tutorial demonstrates how to access an ensemble of climate simulations, namely Ouranos' standard ensemble of bias-adjusted climate scenarios version 1.0 (cb-oura-1.0), using Python commands. The ensemble contains 22 bias-adjusted CMIP5 simulations in netCDF format, each with three variables (tasmin, tasmax, pr) and dimensions of longitude, latitude and time (1064, 700, 55175). The server provides multiple data access and metadata services, but the following steps focus on the OPeNDAP service: instead of downloading huge volumes of data locally, only the relevant portion is accessed using a standard called the Data Access Protocol (DAP).
To conserve any modifications to tutorial notebooks in the PAVICS JupyterLab, they need to be copied into your writable-workspace directory.
In [1]:

from siphon.catalog import TDSCatalog

url = "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/catalog/datasets/simulations/bias_adjusted/cmip5/ouranos/cb-oura-1.0/catalog.xml"  # TEST_USE_PROD_DATA

# Create Catalog
cat = TDSCatalog(url)

# List of datasets
print(f"Number of datasets: {len(cat.datasets)}")

# Access mechanisms - here we are interested in OPENDAP, a data streaming protocol
cds = cat.datasets[0]
print(f"Access URLs: {tuple(cds.access_urls.keys())}")

In [2]:

# NBVAL_IGNORE_OUTPUT
import xarray as xr

# This does not download the entire dataset, just the metadata and attributes describing the content.
ds = xr.open_dataset(cds.access_urls["OPENDAP"], chunks="auto")

# What we see here is an in-memory representation of the full content; the actual data is still on the server.
ds
Out[2]:

[Output: xarray representation of the remote dataset - dimensions, coordinates, variables and attributes only; no data values are downloaded at this point.]
In [3]:

# Extract a subset of the file
# Again, this only creates an in-memory representation of the data
sub = ds.pr.sel(time="2050").sel(lon=-80, lat=46, method="nearest")

# The data is only downloaded when we actually need it for a computation.
sub.mean(keep_attrs=True).compute()
Upcoming versions of PAVICS will include a data catalog facilitating queries for simulations that meet certain criteria (model, GHG emission scenario, variable, etc.). For now, one simple way to filter datasets is to look for patterns in their filenames. For example, the following shows how to get all simulations driven by GHG emission scenario RCP 4.5.
Filtering datasets based on filenames
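The filtering cell itself is not preserved in this export. A minimal sketch, reusing the TDSCatalog object cat created earlier and matching the rcp45 marker embedded in each filename, could look like the following (the list below is the corresponding output):

# Iterating over cat.datasets yields the dataset names (filenames),
# so a simple substring test selects the RCP 4.5 simulations.
rcp45_sims = [name for name in cat.datasets if "rcp45" in name]
print(rcp45_sims)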
day_BNU-ESM_historical+rcp45_r1i1p1_na10kgrid_qm-moving-50bins-detrend_1950-2100.ncml,
day_ACCESS1-3_historical+rcp45_r1i1p1_na10kgrid_qm-moving-50bins-detrend_1950-2100.ncml]
Info! NetCDF Markup Language (NcML): PAVICS datasets are often provided via user-friendly NcML aggregations, a convenient way to combine multiple files into a single logical dataset. For example, the catalog used in this tutorial exposes 22 climate datasets, where each simulation is an NcML aggregation. From a user's standpoint, each cb-oura-1.0 simulation is accessible as a single .ncml, while in reality it consists of an aggregation of individual yearly netCDF files for each of the three variables (tasmin, tasmax, pr) over the period 1950-2100.
Info! Writeable-workspace location: to conserve any modifications, and to avoid permission errors for notebooks which write output to disk, it is necessary to copy tutorial notebooks from the "pavics-homepage" folder to a location within your "writeable-workspace".
In this second tutorial we will demonstrate PAVICS subsetting tools, again accessing Ouranos' cb-oura-1.0 ensemble. PAVICS subsetting relies on the clisops library, enabling data extraction by grid point, polygon (shape) and time period.
This tutorial uses clisops for subsetting operations and geopandas to manipulate region geometries. We re-use part of the data-access tutorial to select a dataset from the cb-oura-1.0 ensemble on the PAVICS THREDDS server.
To conserve any modifications to tutorial notebooks in the PAVICS JupyterLab, they need to be copied into your writable-workspace directory.
In [1]:

import warnings

import xarray as xr
from IPython.display import display  # Fancy representation of xarray objects
from siphon.catalog import TDSCatalog

warnings.simplefilter("ignore")

url = "https://pavics.ouranos.ca/twitcher/ows/proxy/thredds/catalog/datasets/simulations/bias_adjusted/cmip5/ouranos/cb-oura-1.0/catalog.xml"  # TEST_USE_PROD_DATA

# Create Catalog
cat = TDSCatalog(url)

# DAP link for this demo
ds_url = cat.datasets[0].access_urls["OPENDAP"]

# xarray.Dataset
ds = xr.open_dataset(ds_url, chunks=dict(time=256 * 2, lon=32, lat=32))
display(ds)

a = ds.tasmin.isel(time=0).plot(figsize=(10, 4))
[Output: global attributes of the cb-oura-1.0 dataset, as shown by display(ds)]

Conventions :
CF-1.5
title :
Ouranos standard ensemble of bias-adjusted climate scenarios version 1.0 (cb-oura-1.0)
history :
2011-06-01T01:08:07Z CMOR rewrote data to comply with CF standards and CMIP5 requirements.
2016-01-18T18:16:47: Interpolate to nrcan_livneh grid.
2016-02-10T09:50:14: Bias correction using nrcan_livneh.
institution :
Ouranos Consortium on Regional Climatology and Adaptation to Climate Change
In order to satisfy growing demand and ensure the availability of climate scenarios that meet the needs of vulnerability, impact and adaptation (VIA) studies, a standard ensemble of bias-adjusted climate scenarios has been produced. Resulting climate scenarios are constructed using outputs of 11 individual global climate models and two future greenhouse gas emissions scenarios (RCP 4.5 and RCP 8.5) to cover the range of plausible of future changes and post-processed to refine the spatial scales and correct biases. The selection of a subset of simulations from the entire CMIP5 ensemble was carried out using a cluster analysis. The selection criteria include projected monthly changes in maximum and minimum temperatures as well as in total precipitations for two future horizons (2041 to 2070 and 2071 to 2100)
bias_adjustment_method :
1D-Quantile Mapping
bias_adjustment_reference :
http://doi.org/10.1002/2015JD023890
project_id :
CMIP5
license_type :
permissive non-commercial
license :
Creative Commons Attribution-NonCommercial 4.0 International Public License: https://creativecommons.org/licenses/by-nc/4.0/legalcode
terms_of_use :
In addition to the provided licence, the data used for the realization of climate scenarios are subject to the conditions of use of each organization that is the source of this data, and that you must respect. For more details, please refer to: https://pcmdi.llnl.gov/mips/cmip5/terms-of-use.html
attribution :
Use of this dataset should be acknowledged as 'Data produced and provided by the Ouranos Consortium on Regional Climatology and Adaptation to Climate Change'. Furthermore, the modeling groups from which the bias-adjusted climate scenarios were consrtucted must also be acknowledged, please refer to: The Coupled Model Intercomparison Project https://pcmdi.llnl.gov/mips/cmip5/citation.html
frequency :
day
modeling_realm :
atmos
target_dataset :
CANADA : ANUSPLIN interpolated Canada daily 300 arc second climate grids; USA : Livneh_et_al_2013
target_dataset_references :
CANADA : https://doi.org/10.1175/2011BAMS3132.1; USA : https://doi.org/10.1175/JCLI-D-12-00508.1
driving_institution :
Norwegian Climate Centre
driving_institute_id :
NCC
30-day moving window 50-bins quantile mapping with detrending.
The subset_gridpoint function returns the grid cell whose centre lies closest to the given coordinates (latitude, longitude). When multiple coordinates are given, the grid points are ordered along a new site dimension, as sketched after the use cases below.
Use cases:
Compare gridded model output to observations at different weather station locations
Extract climate time series for cities or other sites of interest
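The cell that demonstrates this is not preserved in this export. A minimal sketch, assuming the subsetting helpers are imported from clisops.core (the subset module used by the cells further down) and using illustrative coordinates that are not from the original notebook:

from clisops.core import subset

# Two illustrative sites (lon/lat values are examples only)
ds_pt = subset.subset_gridpoint(ds, lon=[-73.6, -71.2], lat=[45.5, 46.8])

# The selected grid cells are stacked along a new "site" dimension
display(ds_pt)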
The distance to the closest grid point can sometimes be so large that it is no longer meaningful. To avoid such cases, a maximum distance can be set using the tolerance argument. The actual distance can also be added as a coordinate for manual inspection, as in the sketch below.
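A hedged sketch of these two options, assuming the tolerance is given in metres and the added coordinate is exposed as distance (parameter names tolerance and add_distance as in clisops' subset_gridpoint):

# Reject matches farther than ~30 km from the requested point and keep the
# distance to the selected grid cell for inspection.
ds_pt = subset.subset_gridpoint(
    ds, lon=-80.0, lat=46.0, tolerance=30000, add_distance=True
)
print(ds_pt.distance.values)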
The subset_shape function can extract data within an arbitrary shape defined by one or more polygons. It crops the original grid to the polygon, then masks everything outside it. Supported file types include most common GIS formats, including shapefile, GeoJSON and GeoPackage.
Use cases:
Analyze climate variables within a country, state or watershed
import geopandas as gpd
import matplotlib.pyplot as plt

# Explore the polygon layer
regions = gpd.GeoDataFrame.from_file(
    "/notebook_dir/pavics-homepage/tutorial_data/test_regions.geojson"
)
# Display the first few columns, e.g. with regions.head()

# Subset over all regions
ds_poly = subset.subset_shape(
    ds, shape="/notebook_dir/pavics-homepage/tutorial_data/test_regions.geojson"
)  # use path to layer
a = ds_poly.tasmin.isel(time=0).plot(figsize=(10, 4))
The subset_time function is used to extract a time span. It uses start_date and/or end_date arguments, which can be given as year, year-month or year-month-day strings. If the start or end date is not given, it defaults to the start or end date of the dataset, respectively.
In [7]:

# Align data to a starting year
ds_sub = subset.subset_time(ds, start_date="1981")
print(
    f"Subset time using start_date only\nstart: {ds_sub.time.min().values};\tend: {ds_sub.time.max().values}\n"
)

# Select a temporal slice
ds_sub = subset.subset_time(ds, start_date="1981-08-05", end_date="2084-06-15")
print(
    f"Subset time using both start_date & end_date\nstart: {ds_sub.time.min().values};\tend: {ds_sub.time.max().values}"
)
Subset time using start_date only
start: 1981-01-01 00:00:00; end: 2100-12-31 00:00:00
When using subset_shape, it can sometimes be useful to manipulate the polygon layer before subsetting. For example, subsetting with a polygon layer in non-geographic coordinates will result in an error. By loading the layer first as a GeoDataFrame, we can reproject it to geographic coordinates (longitude, latitude) on the fly using to_crs().
regions_lcc = gpd.GeoDataFrame.from_file(
    "/notebook_dir/pavics-homepage/tutorial_data/test_regions_lambert.geojson"
)

plt.figure(figsize=(15, 6))
ax1 = plt.subplot(1, 2, 1)
regions_lcc.plot(ax=ax1)
plt.title("Lambert projection")

ax2 = plt.subplot(1, 2, 2)
regions_lcc.to_crs(epsg=4326).plot(ax=ax2)
plt.title("WGS84 geographic")

try:
    ds_poly1 = subset.subset_shape(ds, shape=regions_lcc)
except:
    print(
        f"There was a problem... the polygon layer projection is {regions_lcc.crs.name}\ntry projecting to WGS 84 ... "
    )
There was a problem... the polygon layer projection is NAD83 / Canada Atlas Lambert
try projecting to WGS 84 ...
In [9]:

# use projected polygon layer to subset
ds_poly1 = subset.subset_shape(ds, shape=regions_lcc.to_crs(epsg=4326))
a = ds_poly1.tasmin.isel(time=0).plot(figsize=(10, 4))
plt.title("subset via geodataframe using reprojection on the fly")
print("success")
success
We may also wish to subset the climate data using only a selection of the available polygons, for example the Montérégie and Estrie subregions.
# Select the Montérégie and Estrie polygons, then subset with the reprojected layer
subreg = regions_lcc.loc[(regions_lcc["Region"].isin(["Montérégie", "Estrie"]))]
ds_poly1 = subset.subset_shape(ds, shape=subreg.to_crs(epsg=4326))
a = ds_poly1.tasmin.isel(time=0).plot(figsize=(10, 4))
t = plt.title("Estrie / Montérégie subset via geodataframe w/ reprojection on the fly")
# Create a 2-D mask assigning, to each grid cell, the index of the region that contains it
mask = subset.create_mask(x_dim=ds_poly.lon, y_dim=ds_poly.lat, poly=regions).transpose()
a = mask.plot(figsize=(10, 4))
t = plt.title("Mask of regions")
Assign region_id and region_name coordinates to the data using the mask. These coordinates can then be used to select or summarize data by region.
In [12]:

ds_poly = ds_poly.assign_coords(region_id=mask)

# create region_name coordinates
name_array = xr.full_like(ds_poly.region_id, "", dtype=object)
name_field = "Region"
for i, name in enumerate(regions[name_field]):
    name_array = xr.where(ds_poly["region_id"] == i, name, name_array)
ds_poly = ds_poly.assign_coords(region_name=name_array)

# Plot only a single region by selecting w/ region_name coordinate
a = (
    ds_poly.where(ds_poly.region_name == "Montérégie", drop=True)
    .tasmin.isel(time=0)
    .plot(figsize=(10, 4))
)
Climate model data are often produced using rotated-pole coordinates, resulting in irregular or curvilinear lat/lon grids. Considerable effort has been made in the subsetting tools to handle this type of data seamlessly, as sketched below.
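A hedged sketch of what this looks like in practice, assuming ds_rot is a dataset on a rotated-pole grid (dimensions such as rlat/rlon with two-dimensional lat/lon coordinate variables); ds_rot is not defined in this tutorial, and the calls mirror those used above for the regular cb-oura-1.0 grid:

# The same helpers accept curvilinear grids: the nearest cell (or the cells
# inside the bounds) is located using the 2-D lat/lon coordinates.
ds_rot_pt = subset.subset_gridpoint(ds_rot, lon=-75.0, lat=48.0)
ds_rot_box = subset.subset_bbox(ds_rot, lon_bnds=[-80, -70], lat_bnds=[45, 50])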