This repository has been archived by the owner on Jan 13, 2024. It is now read-only.

Commit

fix latex notebooks
sdpython committed Oct 17, 2016
1 parent af92a22 commit 1762332
Showing 1 changed file with 1 addition and 1 deletion.
@@ -768,7 +768,7 @@
"\n",
"*arbre*\n",
"\n",
"On \u00e9crit rarement le crit\u00e8re \u00e0 optimiser pour l'arbre dans sa totalit\u00e9 mais seulement pour une feuille de l'arbre qu'on cherche \u00e0 d\u00e9couper. On optimise une m\u00e9trique ([Gini](https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity), [entropie](https://en.wikipedia.org/wiki/Decision_tree_learning#Information_gain), [variance](https://en.wikipedia.org/wiki/Decision_tree_learning#Variance_reduction)). Si on note $f(k)$ la proportion d'\u00e9l\u00e9ments bien class\u00e9s par cette feuille. Le crit\u00e8re optimis\u00e9 est : $ - \\sum_k f(k)\\ln f(k)$."
"On \u00e9crit rarement le crit\u00e8re \u00e0 optimiser pour l'arbre dans sa totalit\u00e9 mais seulement pour une feuille de l'arbre qu'on cherche \u00e0 d\u00e9couper. On optimise une m\u00e9trique ([Gini](https://en.wikipedia.org/wiki/Decision_tree_learning#Gini_impurity), [entropie](https://en.wikipedia.org/wiki/Decision_tree_learning#Information_gain), [variance](https://en.wikipedia.org/wiki/Decision_tree_learning#Variance_reduction)). Si on note $f(k)$ la proportion d'\u00e9l\u00e9ments bien class\u00e9s par cette feuille. Le crit\u00e8re optimis\u00e9 est : $-\\sum_k f(k)\\ln f(k)$."
]
},
{
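The entropy criterion in the edited cell, $-\sum_k f(k)\ln f(k)$, can be sketched in a few lines of Python (the function name `leaf_entropy` is an illustrative assumption, not something from the notebook):

```python
import math

def leaf_entropy(proportions):
    # Entropy criterion from the notebook cell: -sum_k f(k) * ln f(k),
    # where f(k) are the class proportions within the leaf being split.
    # Terms with f(k) == 0 are skipped, since lim f->0 of f*ln(f) is 0.
    # (Function name is hypothetical, chosen for this sketch.)
    return -sum(f * math.log(f) for f in proportions if f > 0)

# A pure leaf scores 0; a perfectly mixed two-class leaf scores ln 2.
pure = leaf_entropy([1.0])
mixed = leaf_entropy([0.5, 0.5])
```

A split is chosen to reduce this quantity: lower entropy means the leaf's class distribution is more concentrated.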

