Fix spelling mistakes in Jupyter notebooks
EwoutH committed Oct 25, 2022
1 parent a6fe412 commit 9140b45
Showing 4 changed files with 9 additions and 9 deletions.
2 changes: 1 addition & 1 deletion docs/source/indepth_tutorial/open-exploration.ipynb
@@ -5,7 +5,7 @@
"metadata": {},
"source": [
"# Open exploration\n",
"In this second tuturial, I will showcase how to use the ema_workbench for performing open exploration. This tuturial will continue with the same example as used in the previos tuturial.\n",
"In this second tuturial, I will showcase how to use the ema_workbench for performing open exploration. This tuturial will continue with the same example as used in the previous tuturial.\n",
"\n",
"## some background\n",
"In exploratory modeling, we are interested in understanding how regions in the uncertainty space and/or the decision space map to the whole outcome space, or partitions thereof. There are two general approaches for investigating this mapping. The first one is through systematic sampling of the uncertainty or decision space. This is sometimes also known as open exploration. The second one is to search through the space in a directed manner using some type of optimization approach. This is sometimes also known as directed search. \n",
@@ -11,7 +11,7 @@
"\n",
"The difference between `OutputSpaceExploration` and `AutoAdaptiveOutputSpaceExploration` is in the evolutionary operators. `AutoAdaptiveOutputSpaceExploration` uses auto adaptive operator selection as implemented in the BORG MOEA, while `OutputSpaceExploration` by default uses Simulated Binary crossover with polynomial mutation. Injection of new solutions is handled through auto adaptive population sizing and periodically starting with a new population if search is stalling. Below, examples are given of how to use both algorithms, as well as a a quick visualization of the convergence dynamics.\n",
"\n",
"For this example, we are using a stylized case study frequently used to develop and test decision making under deep uncertainty methods: the shallow lake problem. In this problem, a city has to decide on the ammount of polution they are going to put into a shallow lake per year. The city gets benefits from poluting the lake, but if an unknown threshold is crossed, the lake permenantly shifts to an undesirable poluted state. For further details on this case study, see for example [Quinn et al, 2017](https://doi.org/10.1016/j.envsoft.2017.02.017) and [Bartholomew et al, 2021](https://doi.org/10.1016/j.envsoft.2020.104699).\n"
"For this example, we are using a stylized case study frequently used to develop and test decision making under deep uncertainty methods: the shallow lake problem. In this problem, a city has to decide on the amount of pollution they are going to put into a shallow lake per year. The city gets benefits from polluting the lake, but if an unknown threshold is crossed, the lake permanently shifts to an undesirable polluted state. For further details on this case study, see for example [Quinn et al, 2017](https://doi.org/10.1016/j.envsoft.2017.02.017) and [Bartholomew et al, 2021](https://doi.org/10.1016/j.envsoft.2020.104699).\n"
]
},
{
@@ -476,4 +476,4 @@
},
"nbformat": 4,
"nbformat_minor": 5
}
}
8 changes: 4 additions & 4 deletions ema_workbench/examples/scenario_discovery.ipynb
@@ -30,7 +30,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"the exploratory modeling workbench comes with a seperate analysis package. This analysis package contains prim. So let's import prim. The workbench also has its own logging functionality. We can turn this on to get some more insight into prim while it is running."
"The exploratory modeling workbench comes with a separate analysis package. This analysis package contains prim. So let's import prim. The workbench also has its own logging functionality. We can turn this on to get some more insight into prim while it is running."
]
},
{
@@ -54,7 +54,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we need to instantiate the prim algorithm. To mimic the original work of Ben Bryant and Rob Lempert, we set the peeling alpha to 0.1. The peeling alpha determines how much data is peeled off in each iteration of the algorithm. The lower the value, the less data is removed in each iteration. The minimium coverage threshold that a box should meet is set to 0.8. Next, we can use the instantiated algorithm to find a first box. "
"Next, we need to instantiate the prim algorithm. To mimic the original work of Ben Bryant and Rob Lempert, we set the peeling alpha to 0.1. The peeling alpha determines how much data is peeled off in each iteration of the algorithm. The lower the value, the less data is removed in each iteration. The minimum coverage threshold that a box should meet is set to 0.8. Next, we can use the instantiated algorithm to find a first box."
]
},
{
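To make those settings concrete, here is a rough sketch of the PRIM workflow described in this notebook; the synthetic `x` and `y` below are placeholders standing in for the experiments and cases of interest that the notebook actually loads.

```python
import numpy as np
import pandas as pd

from ema_workbench import ema_logging
from ema_workbench.analysis import prim

# turn on the workbench's logging to follow PRIM while it runs
ema_logging.log_to_stderr(ema_logging.INFO)

# placeholder inputs: x holds the experiments, y flags the cases of interest
rng = np.random.default_rng(42)
x = pd.DataFrame(rng.uniform(size=(1000, 3)), columns=["a", "b", "c"])
y = ((x["a"] > 0.7) & (x["b"] < 0.4)).to_numpy()

# peeling alpha of 0.1 and a minimum coverage threshold of 0.8, as in the text
prim_alg = prim.Prim(x, y, threshold=0.8, peel_alpha=0.1)

# find a first box and inspect the coverage/density trade-off
box1 = prim_alg.find_box()
box1.show_tradeoff()
box1.inspect(style="table")
```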
@@ -344,7 +344,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"As we can see, we are unable to find a second box. The best coverage we can achieve is 0.35, which is well below the specified 0.8 threshold. Let's look at the final overal results from interactively fitting PRIM to the data. For this, we can use to convenience functions that transform the stats and boxes to pandas data frames."
"As we can see, we are unable to find a second box. The best coverage we can achieve is 0.35, which is well below the specified 0.8 threshold. Let's look at the final overall results from interactively fitting PRIM to the data. For this, we can use to convenience functions that transform the stats and boxes to pandas data frames."
]
},
{
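The convenience functions referred to above appear to be the dataframe helpers on the Prim object; assuming the `prim_alg` instance from the earlier sketch, the final results could be summarized as follows.

```python
# summarize the fitted boxes and their statistics as pandas data frames
print(prim_alg.boxes_to_dataframe())
print(prim_alg.stats_to_dataframe())
```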
@@ -908,4 +908,4 @@
},
"nbformat": 4,
"nbformat_minor": 4
}
}
4 changes: 2 additions & 2 deletions ema_workbench/examples/scenario_discovery_resampling.ipynb
@@ -32,7 +32,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"the exploratory modeling workbench comes with a seperate analysis package. This analysis package contains prim. So let's import prim. The workbench also has its own logging functionality. We can turn this on to get some more insight into prim while it is running."
"The exploratory modeling workbench comes with a separate analysis package. This analysis package contains prim. So let's import prim. The workbench also has its own logging functionality. We can turn this on to get some more insight into prim while it is running."
]
},
{
@@ -53,7 +53,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Next, we need to instantiate the prim algorithm. To mimic the original work of Ben Bryant and Rob Lempert, we set the peeling alpha to 0.1. The peeling alpha determines how much data is peeled off in each iteration of the algorithm. The lower the value, the less data is removed in each iteration. The minimium coverage threshold that a box should meet is set to 0.8. Next, we can use the instantiated algorithm to find a first box. "
"Next, we need to instantiate the prim algorithm. To mimic the original work of Ben Bryant and Rob Lempert, we set the peeling alpha to 0.1. The peeling alpha determines how much data is peeled off in each iteration of the algorithm. The lower the value, the less data is removed in each iteration. The minimum coverage threshold that a box should meet is set to 0.8. Next, we can use the instantiated algorithm to find a first box."
]
},
{