DOC Pilot of annotating parameters with category #17929
base: main
@@ -0,0 +1,9 @@
.. role:: raw-html(raw)
   :format: html

.. role:: raw-latex(raw)
   :format: latex

.. |ControlParameter| replace:: :raw-html:`<a class="paramtype" href="../../glossary.html#term-control-parameter" title="Glossary on Control Parameter"><span class="badge badge-primary">Control</span></a>` :raw-latex:`{\small\sc [Control Parameter]}`
.. |TuningParameter| replace:: :raw-html:`<a class="paramtype" href="../../glossary.html#term-control-parameter" title="Glossary on Tuning Parameter"><span class="badge badge-warning">Tuning</span></a>` :raw-latex:`{\small\sc [Tuning Parameter]}`
.. |ResourcesParameter| replace:: :raw-html:`<a class="paramtype" href="../../glossary.html#term-control-parameter" title="Glossary on Resources Parameter"><span class="badge badge-secondary">Resources</span></a>` :raw-latex:`{\small\sc [Resources Parameter]}`
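How these substitutions get pulled into the rendered docs is not shown in this hunk. Below is a minimal sketch of one possible wiring, assuming the snippet above lives in a standalone file that is exposed through Sphinx's `rst_prolog`; both the file name `param_types.rst` and the use of `rst_prolog` are assumptions, and the PR may instead rely on an `include` directive or another mechanism.

```python
# doc/conf.py -- hypothetical wiring, shown for illustration only; the PR may
# use a different file name or mechanism entirely.
import os

_doc_dir = os.path.dirname(os.path.abspath(__file__))

# Prepend the role and badge definitions to every parsed source file so that
# |ControlParameter|, |TuningParameter| and |ResourcesParameter| resolve
# everywhere, including inside autodoc-rendered docstrings.
with open(os.path.join(_doc_dir, "param_types.rst")) as f:
    rst_prolog = f.read()
```

Using a prolog rather than an epilog keeps the custom `raw-html`/`raw-latex` roles registered before any content that references them is parsed.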
@@ -1043,6 +1043,8 @@ class LogisticRegression(LinearClassifierMixin,
        only supported by the 'saga' solver. If 'none' (not supported by the
        liblinear solver), no regularization is applied.

        |ControlParameter|

        .. versionadded:: 0.19
            l1 penalty with SAGA solver (allowing 'multinomial' + L1)
@@ -1051,18 +1053,26 @@ class LogisticRegression(LinearClassifierMixin,
        l2 penalty with liblinear solver. Prefer dual=False when
        n_samples > n_features.

        |TuningParameter|

    tol : float, default=1e-4
        Tolerance for stopping criteria.

        |TuningParameter|

    C : float, default=1.0
        Inverse of regularization strength; must be a positive float.
        Like in support vector machines, smaller values specify stronger
        regularization.

        |TuningParameter|

    fit_intercept : bool, default=True
        Specifies if a constant (a.k.a. bias or intercept) should be
        added to the decision function.

        |ControlParameter|

Review comment: This as well. It does not influence the shape of the output, for example.

Review comment: And what of class_weight?

Reply: I would say that one is hard to tune; I have not heard of anyone tuning it with exact values for classes. But for … So in a way, in my mental model, I see tuning like "can I fiddle with this value without having to change the rest of the pipeline". :-) (There are some dependencies between parameters though, which I might have to fix as a consequence.)

    intercept_scaling : float, default=1
        Useful only when the solver 'liblinear' is used
        and self.fit_intercept is set to True. In this case, x becomes
@@ -1076,6 +1086,8 @@ class LogisticRegression(LinearClassifierMixin,
        To lessen the effect of regularization on synthetic feature weight
        (and therefore on the intercept) intercept_scaling has to be increased.

        |TuningParameter|

    class_weight : dict or 'balanced', default=None
        Weights associated with classes in the form ``{class_label: weight}``.
        If not given, all classes are supposed to have weight one.
@@ -1087,13 +1099,17 @@ class LogisticRegression(LinearClassifierMixin,
        Note that these weights will be multiplied with sample_weight (passed
        through the fit method) if sample_weight is specified.

        |ControlParameter|

        .. versionadded:: 0.17
            *class_weight='balanced'*
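Not part of the diff, but to make the Control-vs-Tuning question from the review thread on fit_intercept/class_weight above concrete, here is a small sketch of the two ways `class_weight` tends to be used: fixed up front as a domain-driven dict, or searched over a coarse grid such as `[None, 'balanced']` without touching the rest of the pipeline.

```python
# Illustrative only; not part of the diff. Assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=200, weights=[0.9, 0.1], random_state=0)

# Used as a "control"-style setting: a fixed, domain-driven choice.
clf = LogisticRegression(class_weight={0: 1, 1: 5}, max_iter=1000).fit(X, y)

# Used as a coarse tuning knob: the rest of the pipeline stays unchanged.
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"class_weight": [None, "balanced"]},
    cv=5,
).fit(X, y)
print(search.best_params_)
```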

    random_state : int, RandomState instance, default=None
        Used when ``solver`` == 'sag', 'saga' or 'liblinear' to shuffle the
        data. See :term:`Glossary <random_state>` for details.

        |ControlParameter|

Review comment: This is one of those which might end up better having a completely custom type of a parameter, like …

Reply: If we try to be too precise, we will make it tedious to contribute, and will not reach any users as they won't understand.

Review comment: Maybe …

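Again not part of the diff: a tiny illustration of why `random_state` is tagged Control here; it only affects the stochastic solvers' shuffling, and fixing it makes repeated fits reproducible without changing the model being learned.

```python
# Illustrative only; assumes scikit-learn is installed.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=300, random_state=0)

# With a fixed random_state, two fits with the 'saga' solver (which shuffles
# the data) produce identical coefficients.
a = LogisticRegression(solver="saga", max_iter=5000, random_state=0).fit(X, y)
b = LogisticRegression(solver="saga", max_iter=5000, random_state=0).fit(X, y)
assert np.allclose(a.coef_, b.coef_)
```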
    solver : {'newton-cg', 'lbfgs', 'liblinear', 'sag', 'saga'}, \
            default='lbfgs'
@@ -1113,6 +1129,8 @@ class LogisticRegression(LinearClassifierMixin,
        features with approximately the same scale. You can
        preprocess the data with a scaler from sklearn.preprocessing.

        |TuningParameter|

        .. versionadded:: 0.17
            Stochastic Average Gradient descent solver.
        .. versionadded:: 0.19
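The scaling advice in this hunk is the usual pattern; a minimal sketch (not part of the diff) pairing the 'saga' solver with `StandardScaler` in a pipeline:

```python
# Illustrative only; assumes scikit-learn is installed.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# 'sag'/'saga' converge quickly only when features share a comparable scale,
# so standardize before fitting.
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(solver="saga", max_iter=5000),
)
model.fit(X, y)
print(model.score(X, y))
```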
@@ -1123,6 +1141,8 @@ class LogisticRegression(LinearClassifierMixin,
    max_iter : int, default=100
        Maximum number of iterations taken for the solvers to converge.

        |TuningParameter|

    multi_class : {'auto', 'ovr', 'multinomial'}, default='auto'
        If the option chosen is 'ovr', then a binary problem is fit for each
        label. For 'multinomial' the loss minimised is the multinomial loss fit
@@ -1131,6 +1151,8 @@ class LogisticRegression(LinearClassifierMixin,
        'auto' selects 'ovr' if the data is binary, or if solver='liblinear',
        and otherwise selects 'multinomial'.

        |ControlParameter|

        .. versionadded:: 0.18
            Stochastic Average Gradient descent solver for 'multinomial' case.
        .. versionchanged:: 0.22
@@ -1140,11 +1162,15 @@ class LogisticRegression(LinearClassifierMixin,
        For the liblinear and lbfgs solvers set verbose to any positive
        number for verbosity.

        |ResourcesParameter|

Review comment: I think sklearn might need also …

    warm_start : bool, default=False
        When set to True, reuse the solution of the previous call to fit as
        initialization, otherwise, just erase the previous solution.
        Useless for liblinear solver. See :term:`the Glossary <warm_start>`.

        |ResourcesParameter|

Review comment: I think this should be double-tagged as control as well, since it drastically changes the behavior of the estimator if you are calling it multiple times.

        .. versionadded:: 0.17
            *warm_start* to support *lbfgs*, *newton-cg*, *sag*, *saga* solvers.

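To illustrate the review comment above about `warm_start` changing what repeated `fit` calls do, a short sketch (not part of the diff):

```python
# Illustrative only; assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, random_state=0)

# With warm_start=True, each call to fit() starts from the coefficients of
# the previous call instead of re-initializing them, so a later fit can
# continue an earlier, truncated optimization.
clf = LogisticRegression(warm_start=True, max_iter=5, solver="lbfgs")
clf.fit(X, y)               # stops early (a ConvergenceWarning is expected)
clf.set_params(max_iter=100)
clf.fit(X, y)               # resumes from the previous solution
```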
@@ -1156,13 +1182,17 @@ class LogisticRegression(LinearClassifierMixin,
        context. ``-1`` means using all processors.
        See :term:`Glossary <n_jobs>` for more details.

        |ResourcesParameter|

    l1_ratio : float, default=None
        The Elastic-Net mixing parameter, with ``0 <= l1_ratio <= 1``. Only
        used if ``penalty='elasticnet'``. Setting ``l1_ratio=0`` is equivalent
        to using ``penalty='l2'``, while setting ``l1_ratio=1`` is equivalent
        to using ``penalty='l1'``. For ``0 < l1_ratio <1``, the penalty is a
        combination of L1 and L2.

        |TuningParameter|

    Attributes
    ----------
Review comment: Hm, I would say this is an (even traditional) example of a tuning parameter?

Reply: Hmm... I suppose you're right. I suspect that I over-generated control parameters.
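As a closing illustration of the distinction under discussion, a sketch (not part of the PR) in which the parameters this diff tags as |TuningParameter| are searched while the |ControlParameter| entries stay fixed:

```python
# Illustrative only; assumes scikit-learn is installed.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, random_state=0)

# Control parameters (penalty, fit_intercept, multi_class, ...) are fixed up
# front; tuning parameters (C, l1_ratio) are searched without touching the
# rest of the pipeline.
base = LogisticRegression(penalty="elasticnet", solver="saga", max_iter=5000)
grid = {"C": [0.01, 0.1, 1.0, 10.0], "l1_ratio": [0.0, 0.5, 1.0]}
search = GridSearchCV(base, grid, cv=5).fit(X, y)
print(search.best_params_, search.best_score_)
```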