Commit
Fixed Typos (#971)
Corrected typos + minor other text changes
mkhalid1 authored and norvig committed Oct 4, 2018
1 parent 2d78c1f commit 1584933
Showing 1 changed file with 23 additions and 14 deletions.
37 changes: 23 additions & 14 deletions knowledge_current_best.ipynb
@@ -38,15 +38,15 @@
"source": [
"## OVERVIEW\n",
"\n",
"Like the [learning module](https://github.com/aimacode/aima-python/blob/master/learning.ipynb), this chapter focuses on methods for generating a model/hypothesis for a domain. Unlike though the learning chapter, here we use prior knowledge to help us learn from new experiences and find a proper hypothesis.\n",
"Like the [learning module](https://github.com/aimacode/aima-python/blob/master/learning.ipynb), this chapter focuses on methods for generating a model/hypothesis for a domain; however, unlike the learning chapter, here we use prior knowledge to help us learn from new experiences and find a proper hypothesis.\n",
"\n",
"### First-Order Logic\n",
"\n",
"Usually knowledge in this field is represented as **first-order logic**, a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with set values instead of variables. The goal is to assign a value to a special first-order logic predicate, called **goal predicate**, for new examples given a hypothesis. We learn this hypothesis by infering knowledge from some given examples.\n",
"Usually knowledge in this field is represented as **first-order logic**; a type of logic that uses variables and quantifiers in logical sentences. Hypotheses are represented by logical sentences with variables, while examples are logical sentences with set values instead of variables. The goal is to assign a value to a special first-order logic predicate, called **goal predicate**, for new examples given a hypothesis. We learn this hypothesis by infering knowledge from some given examples.\n",
"\n",
"### Representation\n",
"\n",
"In this module, we use dictionaries to represent examples, with keys the attribute names and values the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction. Inside these dictionaries/disjunctions we have conjunctions.\n",
"In this module, we use dictionaries to represent examples, with keys being the attribute names and values being the corresponding example values. Examples also have an extra boolean field, 'GOAL', for the goal predicate. A hypothesis is represented as a list of dictionaries. Each dictionary in that list represents a disjunction. Inside these dictionaries/disjunctions we have conjunctions.\n",
"\n",
"For example, say we want to predict if an animal (cat or dog) will take an umbrella given whether or not it rains or the animal wears a coat. The goal value is 'take an umbrella' and is denoted by the key 'GOAL'. An example:\n",
"\n",
@@ -73,15 +73,15 @@
"\n",
"### Overview\n",
"\n",
"In **Current-Best Learning**, we start with a hypothesis and we refine it as we iterate through the examples. For each example, there are three possible outcomes. The example is consistent with the hypothesis, the example is a **false positive** (real value is false but got predicted as true) and **false negative** (real value is true but got predicted as false). Depending on the outcome we refine the hypothesis accordingly:\n",
"In **Current-Best Learning**, we start with a hypothesis and we refine it as we iterate through the examples. For each example, there are three possible outcomes: the example is consistent with the hypothesis, the example is a **false positive** (real value is false but got predicted as true) and the example is a **false negative** (real value is true but got predicted as false). Depending on the outcome we refine the hypothesis accordingly:\n",
"\n",
"* Consistent: We do not change the hypothesis and we move on to the next example.\n",
"* Consistent: We do not change the hypothesis and move on to the next example.\n",
"\n",
"* False Positive: We **specialize** the hypothesis, which means we add a conjunction.\n",
"\n",
"* False Negative: We **generalize** the hypothesis, either by removing a conjunction or a disjunction, or by adding a disjunction.\n",
"\n",
"When specializing and generalizing, we should take care to not create inconsistencies with previous examples. To avoid that caveat, backtracking is needed. Thankfully, there is not just one specialization or generalization, so we have a lot to choose from. We will go through all the specialization/generalizations and we will refine our hypothesis as the first specialization/generalization consistent with all the examples seen up to that point."
"When specializing or generalizing, we should make sure to not create inconsistencies with previous examples. To avoid that caveat, backtracking is needed. Thankfully, there is not just one specialization or generalization, so we have a lot to choose from. We will go through all the specializations/generalizations and we will refine our hypothesis as the first specialization/generalization consistent with all the examples seen up to that point."
]
},
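As a rough illustration, the three outcomes can be written as checks over the representation sketched earlier. These reuse the assumed `predict` helper and are simplifications for this page, not the module's functions:

```python
# Classifying one example against a hypothesis (simplified sketch; assumes
# the predict() helper from the representation sketch above).

def consistent(e, h):
    """The hypothesis predicts the example's GOAL value correctly."""
    return predict(e, h) == e['GOAL']

def false_positive(e, h):
    """Predicted True, but the real GOAL value is False."""
    return predict(e, h) and not e['GOAL']

def false_negative(e, h):
    """Predicted False, but the real GOAL value is True."""
    return not predict(e, h) and e['GOAL']
```

A false positive then triggers specialization and a false negative triggers generalization, as sketched after the Implementation section below.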
{
@@ -138,7 +138,7 @@
"source": [
"### Implementation\n",
"\n",
"As mentioned previously, examples are dictionaries (with keys the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the *NOT* operation with an exclamation mark (!).\n",
"As mentioned earlier, examples are dictionaries (with keys being the attribute names) and hypotheses are lists of dictionaries (each dictionary is a disjunction). Also, in the hypothesis, we denote the *NOT* operation with an exclamation mark (!).\n",
"\n",
"We have functions to calculate the list of all specializations/generalizations, to check if an example is consistent/false positive/false negative with a hypothesis. We also have an auxiliary function to add a disjunction (or operation) to a hypothesis, and two other functions to check consistency of all (or just the negative) examples.\n",
"\n",
@@ -148,7 +148,9 @@
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"metadata": {
"collapsed": true
},
"outputs": [
{
"data": {
@@ -370,7 +372,7 @@
"\n",
"We will take a look at two examples. The first is a trivial one, while the second is a bit more complicated (you can also find it in the book).\n",
"\n",
"First we have the \"animals taking umbrellas\" example. Here we want to find a hypothesis to predict whether or not an animal will take an umbrella. The attributes are `Species`, `Rain` and `Coat`. The possible values are `[Cat, Dog]`, `[Yes, No]` and `[Yes, No]` respectively. Below we give seven examples (with `GOAL` we denote whether an animal will take an umbrella or not):"
"Earlier, we had the \"animals taking umbrellas\" example. Now we want to find a hypothesis to predict whether or not an animal will take an umbrella. The attributes are `Species`, `Rain` and `Coat`. The possible values are `[Cat, Dog]`, `[Yes, No]` and `[Yes, No]` respectively. Below we give seven examples (with `GOAL` we denote whether an animal will take an umbrella or not):"
]
},
{
@@ -427,7 +429,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"We got 5/7 correct. Not terribly bad, but we can do better. Let's run the algorithm and see how that performs."
"We got 5/7 correct. Not terribly bad, but we can do better. Lets now run the algorithm and see how that performs in comparison to our current result. "
]
},
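As a rough illustration of how such a score could be computed with the `predict` sketch from earlier (the two examples and the starting hypothesis below are hypothetical stand-ins, not the notebook's seven umbrella examples):

```python
# Hedged sketch of the scoring step: count how many examples a hypothesis
# predicts correctly. The data below is made up for illustration.
toy_examples = [
    {'Species': 'Cat', 'Rain': 'Yes', 'Coat': 'No', 'GOAL': True},
    {'Species': 'Dog', 'Rain': 'No', 'Coat': 'No', 'GOAL': False},
]
initial_h = [{'Rain': 'Yes'}]  # "take an umbrella whenever it rains"

correct = sum(predict(e, initial_h) == e['GOAL'] for e in toy_examples)
print('{}/{} correct'.format(correct, len(toy_examples)))  # 2/2 on this toy data
```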
{
@@ -472,7 +474,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[{'Rain': '!No', 'Species': 'Cat'}, {'Rain': 'Yes', 'Coat': 'Yes'}, {'Coat': 'Yes', 'Species': 'Cat'}]\n"
"[{'Species': 'Cat', 'Rain': '!No'}, {'Species': 'Dog', 'Coat': 'Yes'}, {'Coat': 'Yes'}]\n"
]
}
],
@@ -563,7 +565,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"Say our initial hypothesis is that there should be an alternative option and let's run the algorithm."
"Say our initial hypothesis is that there should be an alternative option and lets run the algorithm."
]
},
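In the representation used throughout this notebook, that starting hypothesis would look roughly like a single one-test disjunct (the name `initial_h` is an assumption for illustration):

```python
# Hedged sketch: "there should be an alternative option" as a hypothesis,
# i.e. a single disjunct testing only the Alt attribute.
initial_h = [{'Alt': 'Yes'}]
```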
{
@@ -613,7 +615,7 @@
"name": "stdout",
"output_type": "stream",
"text": [
"[{'Pat': '!Full', 'Alt': 'Yes'}, {'Hun': 'No', 'Res': 'No', 'Rain': 'No', 'Pat': '!None'}, {'Fri': 'Yes', 'Type': 'Thai', 'Bar': 'No'}, {'Fri': 'No', 'Type': 'Italian', 'Bar': 'Yes', 'Alt': 'No', 'Est': '0-10'}, {'Fri': 'No', 'Bar': 'No', 'Est': '0-10', 'Type': 'Thai', 'Rain': 'Yes', 'Alt': 'No'}, {'Fri': 'Yes', 'Bar': 'Yes', 'Est': '30-60', 'Hun': 'Yes', 'Rain': 'No', 'Alt': 'Yes', 'Price': '$'}]\n"
"[{'Alt': 'Yes', 'Type': '!Thai', 'Hun': '!No', 'Bar': '!Yes'}, {'Alt': 'No', 'Fri': 'No', 'Pat': 'Some', 'Price': '$', 'Type': 'Burger', 'Est': '0-10'}, {'Rain': 'Yes', 'Res': 'No', 'Type': '!Burger'}, {'Alt': 'No', 'Bar': 'Yes', 'Hun': 'Yes', 'Pat': 'Some', 'Price': '$$', 'Rain': 'Yes', 'Res': 'Yes', 'Est': '0-10'}, {'Alt': 'No', 'Bar': 'No', 'Pat': 'Some', 'Price': '$$', 'Est': '0-10'}, {'Alt': 'Yes', 'Hun': 'Yes', 'Pat': 'Full', 'Price': '$', 'Res': 'No', 'Type': 'Burger', 'Est': '30-60'}]\n"
]
}
],
@@ -627,6 +629,13 @@
"source": [
"It might be quite complicated, with many disjunctions if we are unlucky, but it will always be correct, as long as a correct hypothesis exists."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
@@ -645,7 +654,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.5.3"
"version": "3.6.5"
}
},
"nbformat": 4,