diff --git a/docs/Features.rst b/docs/Features.rst
index a389c584244..7afd3038b62 100644
--- a/docs/Features.rst
+++ b/docs/Features.rst
@@ -69,7 +69,7 @@ Optimization in Network Communication
 -------------------------------------
 
 It only needs to use some collective communication algorithms, like "All reduce", "All gather" and "Reduce scatter", in parallel learning of LightGBM.
-LightGBM implement state-of-art algorithms\ `[9] <#references>`__.
+LightGBM implements state-of-art algorithms\ `[9] <#references>`__.
 These collective communication algorithms can provide much better performance than point-to-point communication.
 
 Optimization in Parallel Learning
@@ -147,7 +147,7 @@ Data Parallel in LightGBM
 
 We reduce communication cost of data parallel in LightGBM:
 
-1. Instead of "Merge global histograms from all local histograms", LightGBM use "Reduce Scatter" to merge histograms of different (non-overlapping) features for different workers.
+1. Instead of "Merge global histograms from all local histograms", LightGBM uses "Reduce Scatter" to merge histograms of different (non-overlapping) features for different workers.
    Then workers find the local best split on local merged histograms and sync up the global best split.
 
 2. As aforementioned, LightGBM uses histogram subtraction to speed up training.
@@ -244,9 +244,9 @@ Other Features
 
 - Validation metric output during training
 
-- Multi validation data
+- Multiple validation data
 
-- Multi metrics
+- Multiple metrics
 
 - Early stopping (both training and prediction)
 
diff --git a/docs/Python-Intro.rst b/docs/Python-Intro.rst
index b97db27556b..f9a15e80b28 100644
--- a/docs/Python-Intro.rst
+++ b/docs/Python-Intro.rst
@@ -134,14 +134,14 @@ If you are concerned about your memory consumption, you can save memory by:
 Setting Parameters
 ------------------
 
-LightGBM can use either a list of pairs or a dictionary to set `Parameters <./Parameters.rst>`__.
+LightGBM can use a dictionary to set `Parameters <./Parameters.rst>`__.
 For instance:
 
 -  Booster parameters:
 
    .. code:: python
 
-       param = {'num_leaves':31, 'num_trees':100, 'objective':'binary'}
+       param = {'num_leaves': 31, 'num_trees': 100, 'objective': 'binary'}
        param['metric'] = 'auc'
 
 -  You can also specify multiple eval metrics:
@@ -176,7 +176,7 @@ A saved model can be loaded:
 
 .. code:: python
 
-    bst = lgb.Booster(model_file='model.txt')  #init model
+    bst = lgb.Booster(model_file='model.txt')  # init model
 
 CV
 --
diff --git a/python-package/lightgbm/engine.py b/python-package/lightgbm/engine.py
index 9792d1c9d82..490338b2702 100644
--- a/python-package/lightgbm/engine.py
+++ b/python-package/lightgbm/engine.py
@@ -353,7 +353,7 @@ def cv(params, train_set, num_boost_round=100,
     folds : generator or iterator of (train_idx, test_idx) tuples, scikit-learn splitter object or None, optional (default=None)
         If generator or iterator, it should yield the train and test indices for each fold.
         If object, it should be one of the scikit-learn splitter classes
-        (http://scikit-learn.org/stable/modules/classes.html#splitter-classes)
+        (https://scikit-learn.org/stable/modules/classes.html#splitter-classes)
         and have ``split`` method.
         This argument has highest priority over other data split arguments.
     nfold : int, optional (default=5)
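
Note (not part of the patch): the dictionary-based parameter style shown in the ``Python-Intro.rst`` hunks and the ``folds`` argument documented in the ``cv()`` docstring hunk fit together as in the minimal sketch below. Only the API calls visible in the edited documentation (``lgb.Dataset``, ``lgb.train``, ``lgb.cv`` with a scikit-learn splitter, ``lgb.Booster(model_file=...)``) come from the source; the synthetic data, the ``StratifiedKFold`` settings, the round counts, and the ``model.txt`` file name are illustrative assumptions.

.. code:: python

    import numpy as np
    import lightgbm as lgb
    from sklearn.model_selection import StratifiedKFold

    # Synthetic binary-classification data (illustrative assumption).
    data = np.random.rand(500, 10)
    label = np.random.randint(2, size=500)
    train_data = lgb.Dataset(data, label=label)

    # Parameters are passed as a dictionary, as in the updated Python-Intro.rst example.
    param = {'num_leaves': 31, 'objective': 'binary'}
    param['metric'] = 'auc'

    # Per the cv() docstring, `folds` may be a scikit-learn splitter object
    # (anything with a `split` method); it takes priority over the other
    # data-split arguments such as `nfold`.
    splitter = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)
    cv_results = lgb.cv(param, train_data, num_boost_round=50, folds=splitter)

    # Train, save, and reload a model, matching the "A saved model can be loaded" snippet above.
    bst = lgb.train(param, train_data, num_boost_round=50)
    bst.save_model('model.txt')
    bst = lgb.Booster(model_file='model.txt')  # init model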