[docs] Python wrapper doesn't support params in form of list of pairs #2078

Merged · 4 commits · Apr 10, 2019
8 changes: 4 additions & 4 deletions docs/Features.rst
@@ -69,7 +69,7 @@ Optimization in Network Communication
-------------------------------------

It only needs to use some collective communication algorithms, like "All reduce", "All gather" and "Reduce scatter", in parallel learning of LightGBM.
-LightGBM implement state-of-art algorithms\ `[9] <#references>`__.
+LightGBM implements state-of-art algorithms\ `[9] <#references>`__.
These collective communication algorithms can provide much better performance than point-to-point communication.

Optimization in Parallel Learning
@@ -147,7 +147,7 @@ Data Parallel in LightGBM

We reduce communication cost of data parallel in LightGBM:

-1. Instead of "Merge global histograms from all local histograms", LightGBM use "Reduce Scatter" to merge histograms of different (non-overlapping) features for different workers.
+1. Instead of "Merge global histograms from all local histograms", LightGBM uses "Reduce Scatter" to merge histograms of different (non-overlapping) features for different workers.
Then workers find the local best split on local merged histograms and sync up the global best split.

2. As aforementioned, LightGBM uses histogram subtraction to speed up training.
@@ -244,9 +244,9 @@ Other Features

- Validation metric output during training

-- Multi validation data
+- Multiple validation data

-- Multi metrics
+- Multiple metrics

- Early stopping (both training and prediction)

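The Features.rst hunks above describe the data-parallel trick in prose: instead of broadcasting every worker's full histograms, a "Reduce Scatter" leaves each worker holding the globally summed histograms for only its own non-overlapping subset of features, after which workers sync just the single best split. Below is a minimal single-process sketch of that idea; it is an illustration only, not LightGBM's implementation, and the function `reduce_scatter_histograms` and the toy data are invented for this example.

```python
import numpy as np

def reduce_scatter_histograms(local_hists, owner_of_feature):
    """Toy "Reduce Scatter" for data-parallel histogram merging.

    local_hists[w][f] holds worker w's bin counts for feature f, built
    from that worker's rows only. Afterwards, each worker owns the
    globally summed histograms for just its assigned features.
    """
    n_workers = len(local_hists)
    merged = [{} for _ in range(n_workers)]
    for f, owner in owner_of_feature.items():
        # Sum the partial histograms for feature f across all workers...
        total = sum(local_hists[w][f] for w in range(n_workers))
        # ...and deliver the result only to the worker that owns f.
        merged[owner][f] = total
    return merged

# Two workers, two features, 4-bin histograms built on disjoint rows.
local_hists = [
    {0: np.array([1, 0, 2, 1]), 1: np.array([0, 3, 0, 1])},  # worker 0
    {0: np.array([2, 1, 0, 0]), 1: np.array([1, 1, 1, 0])},  # worker 1
]
merged = reduce_scatter_histograms(local_hists, {0: 0, 1: 1})
print(merged[0])  # worker 0: global histogram for feature 0 only
print(merged[1])  # worker 1: global histogram for feature 1 only
```

Each worker then finds its local best split on the merged histograms it owns, and only the winning split is synced globally, which is where the communication saving comes from.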
6 changes: 3 additions & 3 deletions docs/Python-Intro.rst
@@ -134,14 +134,14 @@ If you are concerned about your memory consumption, you can save memory by:
Setting Parameters
------------------

-LightGBM can use either a list of pairs or a dictionary to set `Parameters <./Parameters.rst>`__.
+LightGBM can use a dictionary to set `Parameters <./Parameters.rst>`__.
For instance:

- Booster parameters:

.. code:: python

-param = {'num_leaves':31, 'num_trees':100, 'objective':'binary'}
+param = {'num_leaves': 31, 'num_trees': 100, 'objective': 'binary'}
param['metric'] = 'auc'

- You can also specify multiple eval metrics:
@@ -176,7 +176,7 @@ A saved model can be loaded:

.. code:: python

-bst = lgb.Booster(model_file='model.txt') #init model
+bst = lgb.Booster(model_file='model.txt') # init model

CV
--
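To see the two Python-Intro.rst hunks in context, here is a short round trip that sets parameters with a plain dict (the only form the corrected docs advertise), trains, saves, and reloads a model. The synthetic data, the `num_boost_round` value, and the `model.txt` file name are chosen for illustration.

```python
import lightgbm as lgb
import numpy as np

# Synthetic binary-classification data, purely for illustration.
rng = np.random.default_rng(0)
X = rng.random((500, 10))
y = (X[:, 0] > 0.5).astype(int)

train_data = lgb.Dataset(X, label=y)

# Parameters go in a plain dict, as the updated docs describe.
param = {'num_leaves': 31, 'objective': 'binary'}
param['metric'] = 'auc'  # a list such as ['auc', 'binary_logloss'] also works

bst = lgb.train(param, train_data, num_boost_round=10)
bst.save_model('model.txt')

# Reload the saved model, as in the snippet fixed above, and predict.
bst = lgb.Booster(model_file='model.txt')  # init model from file
preds = bst.predict(X)
```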
2 changes: 1 addition & 1 deletion python-package/lightgbm/engine.py
@@ -353,7 +353,7 @@ def cv(params, train_set, num_boost_round=100,
folds : generator or iterator of (train_idx, test_idx) tuples, scikit-learn splitter object or None, optional (default=None)
If generator or iterator, it should yield the train and test indices for each fold.
If object, it should be one of the scikit-learn splitter classes
-(http://scikit-learn.org/stable/modules/classes.html#splitter-classes)
+(https://scikit-learn.org/stable/modules/classes.html#splitter-classes)
and have ``split`` method.
This argument has highest priority over other data split arguments.
nfold : int, optional (default=5)
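As a usage note for the `folds` docstring touched above: any scikit-learn splitter (anything with a `split` method) can be passed, and it takes priority over `nfold`. A minimal sketch with synthetic data:

```python
import lightgbm as lgb
import numpy as np
from sklearn.model_selection import KFold

# Synthetic data for illustration only.
rng = np.random.default_rng(0)
X = rng.random((500, 10))
y = (X[:, 0] > 0.5).astype(int)

train_set = lgb.Dataset(X, label=y)
params = {'objective': 'binary', 'metric': 'auc'}

# A scikit-learn splitter passed via `folds` overrides `nfold`.
cv_results = lgb.cv(params, train_set, num_boost_round=20,
                    folds=KFold(n_splits=5, shuffle=True, random_state=42))
print(sorted(cv_results.keys()))  # e.g. ['auc-mean', 'auc-stdv']
```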