From 4611e200c5a642327eff7ffb5acb3db27d8fc872 Mon Sep 17 00:00:00 2001
From: "Y. Yu" <54338793+PursuitOfDataScience@users.noreply.github.com>
Date: Thu, 2 Jun 2022 16:06:02 -0400
Subject: [PATCH] Update boost_tree.R

Adding an extra line to the `learn_rate` documentation, as some textbooks refer to it as the shrinkage parameter. You can view an explanation [here](https://medium.com/data-design/let-me-learn-the-learning-rate-eta-in-xgboost-d9ad6ec78363#:~:text=The%20learning%20rate%20is%20the,the%20step%20weight%20is%200.25).
---
 R/boost_tree.R | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/R/boost_tree.R b/R/boost_tree.R
index 005a9912c..bc7102373 100644
--- a/R/boost_tree.R
+++ b/R/boost_tree.R
@@ -29,7 +29,8 @@
 #' @param tree_depth An integer for the maximum depth of the tree (i.e. number
 #' of splits) (specific engines only).
 #' @param learn_rate A number for the rate at which the boosting algorithm adapts
-#' from iteration-to-iteration (specific engines only).
+#' from iteration-to-iteration (specific engines only). This is sometimes referred to
+#' as the shrinkage parameter.
 #' @param loss_reduction A number for the reduction in the loss function required
 #' to split further (specific engines only).
 #' @param sample_size A number for the number (or proportion) of data that is
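
For illustration, a minimal sketch of how `learn_rate` is set on a parsnip `boost_tree()` specification; the engine, values, and dataset here are arbitrary examples, not part of the patch:

```r
# Minimal sketch (assumed example): learn_rate (a.k.a. shrinkage) scales each
# boosting iteration's contribution; smaller values typically need more trees.
library(parsnip)

spec <- boost_tree(trees = 500, learn_rate = 0.05) |>
  set_engine("xgboost") |>
  set_mode("regression")

fit(spec, mpg ~ ., data = mtcars)
```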