[docs] Python wrapper doesn't support params in form of list of pairs (#2078)

* fixed Python intro

* fixed typos

* scikit-learn added support for HTTPS
StrikerRUS committed Apr 10, 2019
1 parent 691b842 commit b3c31c4
Showing 3 changed files with 8 additions and 8 deletions.
8 changes: 4 additions & 4 deletions docs/Features.rst
@@ -69,7 +69,7 @@ Optimization in Network Communication
-------------------------------------

It only needs to use some collective communication algorithms, like "All reduce", "All gather" and "Reduce scatter", in parallel learning of LightGBM.
-LightGBM implement state-of-art algorithms\ `[9] <#references>`__.
+LightGBM implements state-of-art algorithms\ `[9] <#references>`__.
These collective communication algorithms can provide much better performance than point-to-point communication.
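
To make the distinction concrete, here is a toy, single-process sketch (illustration only, not LightGBM's actual network code) of what "Reduce Scatter" computes: each worker ends up holding the element-wise sum of one slice of every worker's array, instead of every worker receiving the full reduction as in "All reduce".

.. code:: python

   # Toy, single-process model of "Reduce Scatter" (illustration only).
   # With k workers, worker i ends up holding the element-wise sum of
   # slice i of every worker's array.
   import numpy as np

   n_workers, slice_len = 4, 3
   local = [np.random.randint(0, 10, n_workers * slice_len)
            for _ in range(n_workers)]

   total = np.sum(local, axis=0)  # what "All reduce" would give everyone
   scattered = [total[i * slice_len:(i + 1) * slice_len]
                for i in range(n_workers)]  # worker i keeps only slice i
   print(scattered)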

Optimization in Parallel Learning
@@ -147,7 +147,7 @@ Data Parallel in LightGBM

We reduce communication cost of data parallel in LightGBM:

-1. Instead of "Merge global histograms from all local histograms", LightGBM use "Reduce Scatter" to merge histograms of different (non-overlapping) features for different workers.
+1. Instead of "Merge global histograms from all local histograms", LightGBM uses "Reduce Scatter" to merge histograms of different (non-overlapping) features for different workers.
Then workers find the local best split on local merged histograms and sync up the global best split.

2. As aforementioned, LightGBM uses histogram subtraction to speed up training.
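
The subtraction trick is easy to see in miniature. A minimal sketch (illustration only, assuming per-bin gradient sums): after a split, the sibling's histogram equals the parent's histogram minus the histogram built for the smaller child, so only one child's histogram has to be constructed from raw data.

.. code:: python

   # Minimal sketch of histogram subtraction (illustration only).
   import numpy as np

   parent = np.array([10.0, 7.0, 5.0, 3.0])      # per-bin gradient sums at the parent
   small_child = np.array([4.0, 2.0, 1.0, 0.0])  # built from the smaller child's rows
   large_child = parent - small_child            # obtained without touching raw data
   print(large_child)                            # [6. 5. 4. 3.]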
@@ -244,9 +244,9 @@ Other Features

- Validation metric output during training

-- Multi validation data
+- Multiple validation data

-- Multi metrics
+- Multiple metrics

- Early stopping (both training and prediction)
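
A hedged sketch of how these features combine in the Python API (the dataset names and parameter values here are illustrative, not from the original docs): validation sets and metrics are passed as lists, and early stopping is enabled with a callback.

.. code:: python

   # Sketch only: several validation sets, several metrics, early stopping.
   import numpy as np
   import lightgbm as lgb

   X = np.random.rand(200, 5)
   y = np.random.randint(0, 2, 200)
   train = lgb.Dataset(X[:100], label=y[:100])
   valid1 = lgb.Dataset(X[100:150], label=y[100:150], reference=train)
   valid2 = lgb.Dataset(X[150:], label=y[150:], reference=train)

   bst = lgb.train(
       {'objective': 'binary', 'metric': ['auc', 'binary_logloss']},
       train,
       num_boost_round=50,
       valid_sets=[valid1, valid2],                        # multiple validation data
       callbacks=[lgb.early_stopping(stopping_rounds=5)],  # early stopping
   )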

6 changes: 3 additions & 3 deletions docs/Python-Intro.rst
@@ -134,14 +134,14 @@ If you are concerned about your memory consumption, you can save memory by:
Setting Parameters
------------------

-LightGBM can use either a list of pairs or a dictionary to set `Parameters <./Parameters.rst>`__.
+LightGBM can use a dictionary to set `Parameters <./Parameters.rst>`__.
For instance:

- Booster parameters:

.. code:: python
-param = {'num_leaves':31, 'num_trees':100, 'objective':'binary'}
+param = {'num_leaves': 31, 'num_trees': 100, 'objective': 'binary'}
param['metric'] = 'auc'
- You can also specify multiple eval metrics:
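
The snippet under this bullet is truncated in the diff view; a minimal sketch, consistent with the dictionary form above, would set the metric key to a list:

.. code:: python

   # Sketch: several evaluation metrics given as a list in the params dict
   param = {'num_leaves': 31, 'num_trees': 100, 'objective': 'binary'}
   param['metric'] = ['auc', 'binary_logloss']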
@@ -176,7 +176,7 @@ A saved model can be loaded:

.. code:: python
-bst = lgb.Booster(model_file='model.txt') #init model
+bst = lgb.Booster(model_file='model.txt') # init model
CV
--
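As a hedged usage sketch following the loading line above (it assumes a model was previously saved to ``model.txt`` with ``bst.save_model``):

.. code:: python

   # Sketch: load a previously saved model and run prediction.
   import numpy as np
   import lightgbm as lgb

   bst = lgb.Booster(model_file='model.txt')    # init model from file
   data = np.random.rand(7, bst.num_feature())  # rows with the expected feature count
   ypred = bst.predict(data)                    # scores from the loaded booster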
2 changes: 1 addition & 1 deletion python-package/lightgbm/engine.py
@@ -353,7 +353,7 @@ def cv(params, train_set, num_boost_round=100,
folds : generator or iterator of (train_idx, test_idx) tuples, scikit-learn splitter object or None, optional (default=None)
If generator or iterator, it should yield the train and test indices for each fold.
If object, it should be one of the scikit-learn splitter classes
-(http://scikit-learn.org/stable/modules/classes.html#splitter-classes)
+(https://scikit-learn.org/stable/modules/classes.html#splitter-classes)
and have ``split`` method.
This argument has highest priority over other data split arguments.
nfold : int, optional (default=5)
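The docstring above says ``folds`` may be a scikit-learn splitter object; a minimal sketch of that usage (the data and parameter values here are illustrative):

.. code:: python

   # Sketch: passing a scikit-learn splitter object as `folds`;
   # it takes priority over `nfold`, as the docstring notes.
   import numpy as np
   import lightgbm as lgb
   from sklearn.model_selection import StratifiedKFold

   X = np.random.rand(100, 5)
   y = np.random.randint(0, 2, 100)
   train_set = lgb.Dataset(X, label=y)

   splitter = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
   cv_results = lgb.cv({'objective': 'binary', 'metric': 'auc'},
                       train_set, num_boost_round=20, folds=splitter)
   print(sorted(cv_results))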
