
Improvements in preprocessing #236

Merged
merged 11 commits into from
Jun 9, 2017
251 changes: 49 additions & 202 deletions docs/notebooks/flowline_model.ipynb

Large diffs are not rendered by default.

218 changes: 201 additions & 17 deletions docs/notebooks/getting_started.ipynb
Original file line number Diff line number Diff line change
Expand Up @@ -437,7 +437,8 @@
" tasks.catchment_area,\n",
" tasks.initialize_flowlines,\n",
" tasks.catchment_width_geom,\n",
" tasks.catchment_width_correction\n",
" tasks.catchment_width_correction,\n",
" tasks.compute_downstream_bedshape\n",
" ]\n",
"for task in list_talks:\n",
" workflow.execute_entity_task(task, gdirs)"
Expand Down Expand Up @@ -530,7 +531,9 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from oggm.core.preprocessing.inversion import mass_conservation_inversion"
Expand Down Expand Up @@ -661,6 +664,26 @@
"ax2.set_xlim([0, 100]), ax2.set_ylim([0, 100]);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Finalize the inversion "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Use the optimal parameters for inverting all glaciers and apply a simple correction filter\n",
"workflow.execute_entity_task(tasks.volume_inversion, gdirs)\n",
"workflow.execute_entity_task(tasks.filter_inversion_output, gdirs)"
]
},
{
"cell_type": "markdown",
"metadata": {
Expand Down Expand Up @@ -929,28 +952,189 @@
"source": [
"# Reinitialize the model (important!)\n",
"fls = gdir_hef.read_pickle('model_flowlines')\n",
"commit_model_08 = FluxBasedModel(fls, mb_model=today_model, glen_a=cfg.A*1)\n",
"commit_model_1 = FluxBasedModel(fls, mb_model=today_model, glen_a=cfg.A*1)\n",
"fls = gdir_hef.read_pickle('model_flowlines')\n",
"commit_model_10 = FluxBasedModel(fls, mb_model=today_model, glen_a=cfg.A*2)\n",
"commit_model_2 = FluxBasedModel(fls, mb_model=today_model, glen_a=cfg.A*2)\n",
"fls = gdir_hef.read_pickle('model_flowlines')\n",
"commit_model_12 = FluxBasedModel(fls, mb_model=today_model, glen_a=cfg.A*3)\n",
"commit_model_3 = FluxBasedModel(fls, mb_model=today_model, glen_a=cfg.A*3)\n",
"# Run and store\n",
"years = np.arange(200) * 2\n",
"volume_08 = np.array([])\n",
"volume_10 = np.array([])\n",
"volume_12 = np.array([])\n",
"volume_1 = np.array([])\n",
"volume_2 = np.array([])\n",
"volume_3 = np.array([])\n",
"for y in years:\n",
" commit_model_1.run_until(y)\n",
" volume_1 = np.append(volume_1, commit_model_1.volume_m3)\n",
" commit_model_2.run_until(y)\n",
" volume_2 = np.append(volume_2, commit_model_2.volume_m3)\n",
" commit_model_3.run_until(y)\n",
" volume_3 = np.append(volume_3, commit_model_3.volume_m3)\n",
"# Plot\n",
"plt.figure(figsize=(8, 5))\n",
"plt.plot(years, volume_1, label='1.0 A')\n",
"plt.plot(years, volume_2, label='2.0 A')\n",
"plt.plot(years, volume_3, label='3.0 A')\n",
"plt.ylabel('Volume (m3)');\n",
"plt.xlabel('Time (years)');\n",
"plt.legend(loc='best');"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Random runs "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Equilibrium runs are of course not realistic. The natural variability of climate can lead to retreats and advances without any external forcing. OGGM therefore implements a random mass-balance model, which simply shuffles the *observed* years during a selected period of time."
]
},
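The shuffling idea described above can be sketched outside of OGGM. The snippet below is a minimal illustration only: the 1970–2000 year range and the NumPy calls are assumptions for the example, not OGGM's actual implementation.

```python
import numpy as np

# Hypothetical "observed" period (illustrative, not from OGGM)
observed_years = np.arange(1970, 2001)

# Draw a long random sequence of those years with replacement:
# the climate values themselves are unchanged, only their order
# (and repetition) is randomized.
rng = np.random.RandomState(seed=0)
random_sequence = rng.choice(observed_years, size=1000)

# Every simulated year is one of the observed years
assert set(random_sequence.tolist()) <= set(observed_years.tolist())
```

Because only observed years can occur, the resulting mass-balance series has no trend by construction, which is exactly what the plot below shows.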
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from oggm.core.models.massbalance import RandomMassBalanceModel"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Define the mass balance model\n",
"random_today = RandomMassBalanceModel(gdir, y0=1985, seed=0)\n",
"\n",
"# Plot the specific mass-balance\n",
"h, w = gdir.get_inversion_flowline_hw()\n",
"years = np.arange(1000)\n",
"mb_ts = random_today.get_specific_mb(h, w, year=years)\n",
"plt.figure(figsize=(10, 4))\n",
"plt.plot(years, mb_ts);"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"As you can see, the mass-balance has no visible trend. The time series is not strictly Gaussian, since only \"observed\" years can occur: the randomness lies in the sequence of events.\n",
"\n",
"Let's make a run with this mass-balance:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"fls = gdir_hef.read_pickle('model_flowlines')\n",
"random_model = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*3)\n",
"# Run and store\n",
"years = np.arange(500) * 2\n",
"volume = np.array([])\n",
"for y in years:\n",
" random_model.run_until(y)\n",
" volume = np.append(volume, random_model.volume_m3)\n",
"# Plot\n",
"plt.figure(figsize=(8, 5))\n",
"plt.plot(years, volume)\n",
"plt.ylabel('Volume (m3)');\n",
"plt.xlabel('Time (years)');"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"Let's see what influence Glen's parameter A has on the glacier's evolution. Note that if we use the same mass-balance model for all runs, they will all have the same random climate sequence! This is very useful for many applications."
]
},
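Reusing one seeded mass-balance model for every run is the same idea as fixing a random seed: identical seed, identical draw sequence, hence identical climate forcing for all three models. A minimal sketch, independent of OGGM (the NumPy generator stands in for the model's internal randomness):

```python
import numpy as np

# Two generators created with the same seed produce the exact same
# sequence of draws -- so two model runs sharing a seed experience
# the same "random" climate.
seq_a = np.random.RandomState(seed=0).permutation(31)
seq_b = np.random.RandomState(seed=0).permutation(31)
assert (seq_a == seq_b).all()

# A different seed gives a different sequence
seq_c = np.random.RandomState(seed=1).permutation(31)
assert not (seq_a == seq_c).all()
```

This is why the experiment below isolates the effect of A: the only difference between the runs is the deformation parameter, not the weather.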
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reinitialize the model (important!)\n",
"fls = gdir_hef.read_pickle('model_flowlines')\n",
"random_model_1 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*1)\n",
"fls = gdir_hef.read_pickle('model_flowlines')\n",
"random_model_2 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*2)\n",
"fls = gdir_hef.read_pickle('model_flowlines')\n",
"random_model_3 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*3)\n",
"# Run and store\n",
"years = np.arange(250) * 2\n",
"volume_1 = np.array([])\n",
"volume_2 = np.array([])\n",
"volume_3 = np.array([])\n",
"for y in years:\n",
" random_model_1.run_until(y)\n",
" volume_1 = np.append(volume_1, random_model_1.volume_m3)\n",
" random_model_2.run_until(y)\n",
" volume_2 = np.append(volume_2, random_model_2.volume_m3)\n",
" random_model_3.run_until(y)\n",
" volume_3 = np.append(volume_3, random_model_3.volume_m3)\n",
"# Plot\n",
"plt.figure(figsize=(8, 5))\n",
"plt.plot(years, volume_1, label='1.0 A')\n",
"plt.plot(years, volume_2, label='2.0 A')\n",
"plt.plot(years, volume_3, label='3.0 A')\n",
"plt.ylabel('Volume (m3)');\n",
"plt.xlabel('Time (years)');\n",
"plt.legend(loc='best');"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"After the spin-up time, the three models show similar evolutions but markedly different volumes!\n",
"\n",
"Let's use different random series this time, keeping A constant:"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Reinitialize the model (important!)\n",
"fls = gdir_hef.read_pickle('model_flowlines')\n",
"random_today = RandomMassBalanceModel(gdir, y0=1985, seed=1)\n",
"random_model_1 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*3)\n",
"fls = gdir_hef.read_pickle('model_flowlines')\n",
"random_today = RandomMassBalanceModel(gdir, y0=1985, seed=2)\n",
"random_model_2 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*3)\n",
"fls = gdir_hef.read_pickle('model_flowlines')\n",
"random_today = RandomMassBalanceModel(gdir, y0=1985, seed=3)\n",
"random_model_3 = FluxBasedModel(fls, mb_model=random_today, glen_a=cfg.A*3)\n",
"# Run and store\n",
"years = np.arange(250) * 2\n",
"volume_1 = np.array([])\n",
"volume_2 = np.array([])\n",
"volume_3 = np.array([])\n",
"for y in years:\n",
" commit_model_08.run_until(y)\n",
" volume_08 = np.append(volume_08, commit_model_08.volume_m3)\n",
" commit_model_10.run_until(y)\n",
" volume_10 = np.append(volume_10, commit_model_10.volume_m3)\n",
" commit_model_12.run_until(y)\n",
" volume_12 = np.append(volume_12, commit_model_12.volume_m3)\n",
" random_model_1.run_until(y)\n",
" volume_1 = np.append(volume_1, random_model_1.volume_m3)\n",
" random_model_2.run_until(y)\n",
" volume_2 = np.append(volume_2, random_model_2.volume_m3)\n",
" random_model_3.run_until(y)\n",
" volume_3 = np.append(volume_3, random_model_3.volume_m3)\n",
"# Plot\n",
"plt.figure(figsize=(8, 5))\n",
"plt.plot(years, volume_08, label='1.0 A')\n",
"plt.plot(years, volume_10, label='2.0 A')\n",
"plt.plot(years, volume_12, label='3.0 A')\n",
"plt.plot(years, volume_1, label='Seed 1')\n",
"plt.plot(years, volume_2, label='Seed 2')\n",
"plt.plot(years, volume_3, label='Seed 3')\n",
"plt.ylabel('Volume (m3)');\n",
"plt.xlabel('Time (years)');\n",
"plt.legend(loc='best');"
Expand Down
20 changes: 13 additions & 7 deletions oggm/cfg.py
Expand Up @@ -115,9 +115,12 @@ def __setitem__(self, key, value):
_doc = 'The flowline catchments in the local projection.'
BASENAMES['flowline_catchments'] = ('flowline_catchments.shp', _doc)

_doc = 'The fcatchments interesctions in the local projection.'
_doc = 'The catchments intersections in the local projection.'
BASENAMES['catchments_intersects'] = ('catchments_intersects.shp', _doc)

_doc = 'The divides intersections in their lonlat projection.'
BASENAMES['divides_intersects'] = ('divides_intersects.shp', _doc)

_doc = 'A ``salem.Grid`` handling the georeferencing of the local grid.'
BASENAMES['glacier_grid'] = ('glacier_grid.json', _doc)

Expand All @@ -134,10 +137,9 @@ def __setitem__(self, key, value):
'will have their own area which is not obtained from the RGI file).'
BASENAMES['geometries'] = ('geometries.pkl', _doc)

_doc = 'A shapely.LineString of the coordinates of the downstream line ' \
'(flowing out of the glacier until the border of the domain) for ' \
'each divide.'
BASENAMES['downstream_line'] = ('downstream_line.pkl', _doc)
_doc = 'A geopandas dataframe containing all downstream lines, grouped per ' \
'intersection.'
BASENAMES['downstream_lines'] = ('downstream_lines.pkl', _doc)

_doc = 'A string with the source of the topo file (ASTER, SRTM, ...).'
BASENAMES['dem_source'] = ('dem_source.pkl', _doc)
Expand Down Expand Up @@ -196,6 +198,10 @@ def __setitem__(self, key, value):
_doc = 'List of flowlines ready to be run by the model.'
BASENAMES['model_flowlines'] = ('model_flowlines.pkl', _doc)

_doc = ('When using a linear mass-balance for the inversion, this dict stores '
'the optimal ela_h and grad.')
BASENAMES['linear_mb_params'] = ('linear_mb_params.pkl', _doc)

_doc = ''
BASENAMES['find_initial_glacier_params'] = ('find_initial_glacier_params.pkl',
_doc)
Expand All @@ -206,7 +212,8 @@ def __setitem__(self, key, value):
_doc = 'Calving output'
BASENAMES['calving_output'] = ('calving_output.pkl', _doc)

_doc = 'Array of the parabolic coefficents of the downstream lines'
_doc = 'Array of the parabolic coefficients for all centerlines, computed ' \
'from fitting a parabola to the topography.'
BASENAMES['downstream_bed'] = ('downstream_bed.pkl', _doc)


Expand Down Expand Up @@ -288,7 +295,6 @@ def initialize(file=None):
PARAMS[_k] = cp.as_bool(_k)

# Flowline model
PARAMS['bed_shape'] = cp['bed_shape']
_k = 'use_optimized_inversion_params'
PARAMS[_k] = cp.as_bool(_k)

Expand Down