fix code block with mkdocs
weixuanfu committed Jun 21, 2018
1 parent af2f8d1 commit 643509e
Showing 5 changed files with 66 additions and 63 deletions.
4 changes: 2 additions & 2 deletions docs/api/index.html
@@ -255,7 +255,7 @@ <h1 id="classification">Classification</h1>
If not None, this setting will override the <em>generations</em> parameter and allow TPOT to run until <em>max_time_mins</em> minutes elapse.
</blockquote>

<strong>max_eval_time_mins</strong>: float, optional (default=5)
<strong>max_eval_time_mins</strong>: integer, optional (default=5)
<blockquote>
How many minutes TPOT has to evaluate a single pipeline.
<br /><br />
@@ -707,7 +707,7 @@ <h1 id="regression">Regression</h1>
If not None, this setting will override the <em>generations</em> parameter and allow TPOT to run until <em>max_time_mins</em> minutes elapse.
</blockquote>

<strong>max_eval_time_mins</strong>: float, optional (default=5)
<strong>max_eval_time_mins</strong>: integer, optional (default=5)
<blockquote>
How many minutes TPOT has to evaluate a single pipeline.
<br /><br />
2 changes: 1 addition & 1 deletion docs/index.html
@@ -213,5 +213,5 @@

<!--
MkDocs version : 0.17.4
Build Date UTC : 2018-06-21 11:42:22
Build Date UTC : 2018-06-21 13:18:32
-->
12 changes: 6 additions & 6 deletions docs/search/search_index.json

Large diffs are not rendered by default.

53 changes: 28 additions & 25 deletions docs/using/index.html
@@ -377,7 +377,7 @@ <h1 id="tpot-on-the-command-line">TPOT on the command line</h1>
<tr>
<td>-maxeval</td>
<td>MAX_EVAL_MINS</td>
<td>Any positive float</td>
<td>Any positive integer</td>
<td>How many minutes TPOT has to evaluate a single pipeline.
<br /><br />
Setting this parameter to higher values will allow TPOT to consider more complex pipelines but will also allow TPOT to run longer.</td>
@@ -472,43 +472,46 @@ <h1 id="tpot-on-the-command-line">TPOT on the command line</h1>

<h1 id="scoring-functions">Scoring functions</h1>
<p>TPOT makes use of <code>sklearn.model_selection.cross_val_score</code> for evaluating pipelines, and as such offers the same support for scoring functions. There are two ways to make use of scoring functions with TPOT:</p>
<ol>
<ul>
<li>
<p>You can pass in a string to the <code>scoring</code> parameter from the list above. Any other strings will cause TPOT to throw an exception.</p>
</li>
<li>
<p>You can pass a callable object/function with the signature <code>scorer(estimator, X, y)</code>, where <code>estimator</code> is a trained estimator to use for scoring, <code>X</code> are the features that will be passed to <code>estimator.predict</code> and <code>y</code> are the target values for <code>X</code>. To do this, you should implement your own function. See the example below for further explanation.</p>
</li>
</ol>
<p>```Python
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics.scorer import make_scorer</p>
<p>digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
train_size=0.75, test_size=0.25)
# Make a custom metric function
def my_custom_accuracy(y_true, y_pred):
return float(sum(y_pred == y_true)) / len(y_true)</p>
<p># Make a custom a scorer from the custom metric function
# Note: greater_is_better=False in make_scorer below would mean that the scoring function should be minimized.
my_custom_scorer = make_scorer(my_custom_accuracy, greater_is_better=True)</p>
<p>tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,
scoring=my_custom_scorer)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_mnist_pipeline.py')
```</p>
<ol>
</ul>
<pre><code class="Python">from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics.scorer import make_scorer

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
train_size=0.75, test_size=0.25)
# Make a custom metric function
def my_custom_accuracy(y_true, y_pred):
return float(sum(y_pred == y_true)) / len(y_true)

# Make a custom scorer from the custom metric function
# Note: greater_is_better=False in make_scorer below would mean that the scoring function should be minimized.
my_custom_scorer = make_scorer(my_custom_accuracy, greater_is_better=True)

tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,
scoring=my_custom_scorer)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_mnist_pipeline.py')
</code></pre>

<ul>
<li>
<p>You can pass a metric function with the signature <code>score_func(y_true, y_pred)</code> (e.g. <code>my_custom_accuracy</code> in the example above), where <code>y_true</code> are the true target values and <code>y_pred</code> are the predicted target values from an estimator. To do this, you should implement your own function. See the example above for further explanation. TPOT assumes that any function with "error" or "loss" in the function name is meant to be minimized (<code>greater_is_better=False</code> in <a href="http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html"><code>make_scorer</code></a>), whereas any other functions will be maximized. This scoring type was deprecated in version 0.9.1 and will be removed in version 0.11.</p>
</li>
<li>
<p><strong>my_module.scorer_name</strong>: You can also use a custom <code>score_func(y_true, y_pred)</code> or <code>scorer(estimator, X, y)</code> function through the command line by adding the argument <code>-scoring my_module.scorer</code> to your command-line call. TPOT will import your module and use the custom scoring function from there. TPOT will include your current working directory when importing the module, so you can place it in the same directory where you are going to run TPOT.
Example: <code>-scoring sklearn.metrics.auc</code> will use the function auc from sklearn.metrics module.</p>
</li>
</ol>
</ul>
<h1 id="built-in-tpot-configurations">Built-in TPOT configurations</h1>
<p>TPOT comes with a handful of default operators and parameter configurations that we believe work well for optimizing machine learning pipelines. Below is a list of the current built-in configurations that come with TPOT.</p>
<table>
58 changes: 29 additions & 29 deletions docs_sources/using.md
@@ -350,35 +350,35 @@ A setting of 2 or higher will add a progress bar during the optimization procedure.

TPOT makes use of `sklearn.model_selection.cross_val_score` for evaluating pipelines, and as such offers the same support for scoring functions. There are two ways to make use of scoring functions with TPOT:

1. You can pass in a string to the `scoring` parameter from the list above. Any other strings will cause TPOT to throw an exception.

2. You can pass the callable object/function with signature `scorer(estimator, X, y)`, where `estimator` is trained estimator to use for scoring, `X` are features that will be passed to `estimator.predict` and `y` are target values for `X`. To do this, you should implement your own function. See the example below for further explanation.

```Python
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics.scorer import make_scorer

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
train_size=0.75, test_size=0.25)
# Make a custom metric function
def my_custom_accuracy(y_true, y_pred):
return float(sum(y_pred == y_true)) / len(y_true)

# Make a custom a scorer from the custom metric function
# Note: greater_is_better=False in make_scorer below would mean that the scoring function should be minimized.
my_custom_scorer = make_scorer(my_custom_accuracy, greater_is_better=True)

tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,
scoring=my_custom_scorer)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_mnist_pipeline.py')
```

3. You can pass a metric function with the signature `score_func(y_true, y_pred)` (e.g. `my_custom_accuracy` in the example above), where `y_true` are the true target values and `y_pred` are the predicted target values from an estimator. To do this, you should implement your own function. See the example above for further explanation. TPOT assumes that any function with "error" or "loss" in the function name is meant to be minimized (`greater_is_better=False` in [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html)), whereas any other functions will be maximized. This scoring type was deprecated in version 0.9.1 and will be removed in version 0.11.
- You can pass in a string to the `scoring` parameter from the list above. Any other strings will cause TPOT to throw an exception.

- You can pass a callable object/function with the signature `scorer(estimator, X, y)`, where `estimator` is a trained estimator to use for scoring, `X` are the features that will be passed to `estimator.predict` and `y` are the target values for `X`. To do this, you should implement your own function. See the example below for further explanation.

```Python
from tpot import TPOTClassifier
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics.scorer import make_scorer

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
train_size=0.75, test_size=0.25)
# Make a custom metric function
def my_custom_accuracy(y_true, y_pred):
return float(sum(y_pred == y_true)) / len(y_true)

# Make a custom scorer from the custom metric function
# Note: greater_is_better=False in make_scorer below would mean that the scoring function should be minimized.
my_custom_scorer = make_scorer(my_custom_accuracy, greater_is_better=True)

tpot = TPOTClassifier(generations=5, population_size=20, verbosity=2,
scoring=my_custom_scorer)
tpot.fit(X_train, y_train)
print(tpot.score(X_test, y_test))
tpot.export('tpot_mnist_pipeline.py')
```
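
The `scorer(estimator, X, y)` form can also be written directly, without `make_scorer`. The sketch below is illustrative only: `ConstantEstimator` is a toy stand-in for a fitted estimator and is not part of TPOT or scikit-learn.

```Python
# A callable with the signature scorer(estimator, X, y): it calls
# estimator.predict itself and returns a float to be maximized.
def my_scorer(estimator, X, y):
    y_pred = estimator.predict(X)
    return sum(yp == yt for yp, yt in zip(y_pred, y)) / len(y)

# Toy stand-in for a fitted estimator, for demonstration only.
class ConstantEstimator:
    def __init__(self, label):
        self.label = label

    def predict(self, X):
        return [self.label] * len(X)

print(my_scorer(ConstantEstimator(1), [[0], [1], [2]], [1, 0, 1]))  # 0.6666666666666666
```

A callable like this could be passed as `scoring=my_scorer` to `TPOTClassifier`, the same way `my_custom_scorer` is passed above.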

- You can pass a metric function with the signature `score_func(y_true, y_pred)` (e.g. `my_custom_accuracy` in the example above), where `y_true` are the true target values and `y_pred` are the predicted target values from an estimator. To do this, you should implement your own function. See the example above for further explanation. TPOT assumes that any function with "error" or "loss" in the function name is meant to be minimized (`greater_is_better=False` in [`make_scorer`](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.make_scorer.html)), whereas any other functions will be maximized. This scoring type was deprecated in version 0.9.1 and will be removed in version 0.11.


* **my_module.scorer_name**: You can also use a custom `score_func(y_true, y_pred)` or `scorer(estimator, X, y)` function through the command line by adding the argument `-scoring my_module.scorer` to your command-line call. TPOT will import your module and use the custom scoring function from there. TPOT will include your current working directory when importing the module, so you can place it in the same directory where you are going to run TPOT.
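
To sketch the command-line route above (the file and function names here are hypothetical), a module holding the custom metric might look like the following, saved as `my_module.py` in the directory from which TPOT is run:

```Python
# my_module.py -- hypothetical module name.
# TPOT includes the current working directory when importing,
# so this file only needs to sit where TPOT is launched.

def my_custom_accuracy(y_true, y_pred):
    """score_func(y_true, y_pred): fraction of exactly matching labels.

    The name contains neither "error" nor "loss", so TPOT
    will treat it as a score to maximize.
    """
    matches = sum(yt == yp for yt, yp in zip(y_true, y_pred))
    return matches / len(y_true)
```

Adding `-scoring my_module.my_custom_accuracy` to the TPOT command-line call would then make TPOT import this module and use the function for scoring.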
