fixed some typos (dmlc#1814)
kashif authored and terrytangyuan committed Nov 25, 2016
1 parent be2f28e commit da2556f
Showing 14 changed files with 32 additions and 38 deletions.
2 changes: 1 addition & 1 deletion demo/distributed-training/README.md
@@ -18,6 +18,6 @@ Checkout [this tutorial](https://xgboost.readthedocs.org/en/latest/tutorials/aws

Model Analysis
--------------
-XGBoost is exchangable across all bindings and platforms.
+XGBoost is exchangeable across all bindings and platforms.
This means you can use python or R to analyze the learnt model and do prediction.
For example, you can use the [plot_model.ipynb](plot_model.ipynb) to visualize the learnt model.
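As a quick illustration of this cross-binding analysis, here is a minimal Python sketch (the file names are placeholders, not files in this repository):

```python
import xgboost as xgb

# Load a model trained by any binding (R, Java, the CLI, ...);
# "model.bin" is a hypothetical file name.
bst = xgb.Booster(model_file="model.bin")

# Score new data and inspect which features the learnt model relies on.
dtest = xgb.DMatrix("test.svm.txt")
preds = bst.predict(dtest)
print(bst.get_fscore())  # feature -> split count, a rough importance measure
```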
7 changes: 3 additions & 4 deletions demo/kaggle-higgs/README.md
@@ -1,15 +1,15 @@
Highlights
=====
The Higgs challenge ended recently, and xgboost was used by many participants. This list highlights the xgboost solutions of players:
* Blogpost by phunther: [Winning solution of Kaggle Higgs competition: what a single model can do](http://no2147483647.wordpress.com/2014/09/17/winning-solution-of-kaggle-higgs-competition-what-a-single-model-can-do/)
* The solution by Tianqi Chen and Tong He [Link](https://github.com/hetong007/higgsml)

Guide for Kaggle Higgs Challenge
=====

This folder gives an example of how to use the XGBoost Python module to run the Kaggle Higgs competition.

-This script will achieve about 3.600 AMS score in public leadboard. To get start, you need do following step:
+This script achieves an AMS score of about 3.600 on the public leaderboard. To get started, follow these steps:

1. Compile the XGBoost python lib
```bash
@@ -28,5 +28,4 @@ speedtest.py compares xgboost's speed on this dataset with sklearn.GBM

Using R module
=====
* Alternatively, you can run the demo using R with higgs-train.R and higgs-pred.R.
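For orientation, the core training step of this demo looks roughly like the following Python sketch (parameter values and file names are illustrative assumptions, not the demo's exact settings):

```python
import xgboost as xgb

# Assume the Higgs training data has already been converted to DMatrix form.
dtrain = xgb.DMatrix("higgs.train.buffer")

params = {
    "objective": "binary:logitraw",  # raw margin output suits AMS thresholding
    "eta": 0.1,
    "max_depth": 6,
}
bst = xgb.train(params, dtrain, num_boost_round=120)
bst.save_model("higgs.model")
```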
10 changes: 5 additions & 5 deletions demo/kaggle-otto/understandingXGBoostModel.Rmd
@@ -152,9 +152,9 @@ Each group at each division level is called a branch and the deepest level is called a *leaf*.

In the final model, these *leafs* are supposed to be as pure as possible for each tree, meaning in our case that each *leaf* should be made of one class of **Otto** product only (of course it is not true, but that's what we try to achieve in a minimum of splits).

-**Not all *splits* are equally important**. Basically the first *split* of a tree will have more impact on the purity that, for instance, the deepest *split*. Intuitively, we understand that the first *split* makes most of the work, and the following *splits* focus on smaller parts of the dataset which have been missclassified by the first *tree*.
+**Not all *splits* are equally important**. Basically, the first *split* of a tree will have more impact on the purity than, for instance, the deepest *split*. Intuitively, we understand that the first *split* does most of the work, and the following *splits* focus on smaller parts of the dataset which have been misclassified by the first *tree*.

-In the same way, in Boosting we try to optimize the missclassification at each round (it is called the *loss*). So the first *tree* will do the big work and the following trees will focus on the remaining, on the parts not correctly learned by the previous *trees*.
+In the same way, in Boosting we try to optimize the misclassification at each round (it is called the *loss*). So the first *tree* will do the big work, and the following trees will focus on the remainder, on the parts not correctly learned by the previous *trees*.

The improvement brought by each *split* can be measured; it is the *gain*.

@@ -200,7 +200,7 @@ This function gives a color to each bar. These colors represent groups of features.

From here you can take several actions. For instance you can remove the less important feature (feature selection process), or go deeper in the interaction between the most important features and labels.

-Or you can just reason about why these features are so importat (in **Otto** challenge we can't go this way because there is not enough information).
+Or you can just reason about why these features are so important (in **Otto** challenge we can't go this way because there is not enough information).

Tree graph
----------
@@ -217,7 +217,7 @@ xgb.plot.tree(feature_names = names, model = bst, n_first_tree = 2)

We are just displaying the first two trees here.

-On simple models the first two trees may be enough. Here, it might not be the case. We can see from the size of the trees that the intersaction between features is complicated.
+On simple models the first two trees may be enough. Here, it might not be the case. We can see from the size of the trees that the interaction between features is complicated.
Besides, **XGBoost** generates `k` trees at each round for a `k`-class classification problem. Therefore the two trees illustrated here are trying to classify data into different classes.

Going deeper
@@ -226,6 +226,6 @@ Going deeper
There are 4 documents you may also be interested in:

* [xgboostPresentation.Rmd](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/xgboostPresentation.Rmd): general presentation
-* [discoverYourData.Rmd](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/discoverYourData.Rmd): explaining feature analysus
+* [discoverYourData.Rmd](https://github.com/dmlc/xgboost/blob/master/R-package/vignettes/discoverYourData.Rmd): explaining feature analysis
* [Feature Importance Analysis with XGBoost in Tax audit](http://fr.slideshare.net/MichaelBENESTY/feature-importance-analysis-with-xgboost-in-tax-audit): use case
* [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/): very good book to have a good understanding of the model
9 changes: 4 additions & 5 deletions demo/rank/README.md
@@ -1,22 +1,21 @@
Learning to rank
====
XGBoost supports ranking tasks. In the ranking scenario, data are often grouped, and we need the [group information file](../../doc/input_format.md#group-input-format) to specify ranking tasks. The model used in XGBoost for ranking is LambdaRank; this function is not yet complete. Currently, we provide pairwise rank.

### Parameters
-The configuration setting is similar to the regression and binary classification setting,except user need to specify the objectives:
+The configuration setting is similar to the regression and binary classification settings, except the user needs to specify the objective:

```
...
objective="rank:pairwise"
...
```
For more usage details please refer to the [binary classification demo](../binary_classification).

Instructions
====
The dataset for the ranking demo is from LETOR04 MQ2008 fold1.
You can use the following commands to run the example:

Get the data: ./wgetdata.sh
Run the example: ./runexp.sh
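In Python, the same pairwise-ranking setup looks roughly like this (a sketch; the file names follow the MQ2008 layout but are assumptions, not the demo's exact paths):

```python
import xgboost as xgb

dtrain = xgb.DMatrix("mq2008.train")

# The group file lists how many consecutive rows belong to each query,
# so the pairwise objective only compares documents within a query.
group_sizes = [int(line) for line in open("mq2008.train.group")]
dtrain.set_group(group_sizes)

params = {"objective": "rank:pairwise", "eta": 0.1, "max_depth": 6}
bst = xgb.train(params, dtrain, num_boost_round=4)
```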

2 changes: 1 addition & 1 deletion doc/faq.md
@@ -41,7 +41,7 @@ Most importantly, it pushes the limit of the computation resources we can use.

How can I port the model to my own system
-----------------------------------------
-The model and data format of XGBoost is exchangable,
+The model and data format of XGBoost is exchangeable,
which means the model trained by one language can be loaded in another.
This means you can train the model using R, while running prediction using
Java or C++, which are more common in production systems.
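A minimal sketch of this workflow from the Python side (file names are placeholders):

```python
import xgboost as xgb

# Train in Python ...
dtrain = xgb.DMatrix("train.svm.txt")
bst = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10)

# ... and save in the shared binary format, loadable from R, Java, C++, or the CLI.
bst.save_model("shared.model")
```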
1 change: 0 additions & 1 deletion doc/get_started/index.md
@@ -36,7 +36,6 @@ bst <- xgboost(data = train$data, label = train$label, max.depth = 2, eta = 1, n
nthread = 2, objective = "binary:logistic")
# predict
pred <- predict(bst, test$data)
```

## Julia
2 changes: 1 addition & 1 deletion doc/how_to/contribute.md
@@ -138,7 +138,7 @@ make the-markdown-to-make.md
- Add the generated figure to the ```dmlc/web-data``` repo.
- If you already cloned the repo to doc, this means a ```git add```
- Create PR for both the markdown and ```dmlc/web-data```
-- You can also build the document locally by typing the followig command at ```doc```
+- You can also build the document locally by typing the following command at ```doc```
```bash
make html
```
2 changes: 1 addition & 1 deletion doc/how_to/index.md
@@ -6,7 +6,7 @@ This page contains guidelines to use and develop XGBoost.
- [How to Install XGBoost](../build.md)

## Use XGBoost in Specific Ways
-- [Parameter tunning guide](param_tuning.md)
+- [Parameter tuning guide](param_tuning.md)
- [Use out of core computation for large dataset](external_memory.md)

## Develop and Hack XGBoost
5 changes: 2 additions & 3 deletions doc/input_format.md
@@ -12,8 +12,7 @@ train.txt
1 0:0.01 1:0.3
0 0:0.2 1:0.3
```
-Each line represent a single instance, and in the first line '1' is the instance label,'101' and '102' are feature indices, '1.2' and '0.03' are feature values. In the binary classification case, '1' is used to indicate positive samples, and '0' is used to indicate negative samples. We also support probability values in [0,1] as label, to indicate the probability of the instanc
-e being positive.
+Each line represents a single instance; in the first line, '1' is the instance label, '101' and '102' are feature indices, and '1.2' and '0.03' are feature values. In the binary classification case, '1' indicates positive samples and '0' indicates negative samples. We also support probability values in [0,1] as labels, to indicate the probability of the instance being positive.

Additional Information
----------------------
@@ -54,4 +53,4 @@ train.txt.base_margin
1.0
3.4
```
-XGBoost will take these values as intial margin prediction and boost from that. An important note about base_margin is that it should be margin prediction before transformation, so if you are doing logistic loss, you will need to put in value before logistic transformation. If you are using XGBoost predictor, use pred_margin=1 to output margin values.
+XGBoost will take these values as the initial margin prediction and boost from there. An important note about base_margin is that it should be the margin prediction before transformation, so if you are using logistic loss, you will need to put in the value before the logistic transformation. If you are using the XGBoost predictor, use pred_margin=1 to output margin values.
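As a sketch of how this works through the Python API (the file name is a placeholder, and the prior value is arbitrary):

```python
import math
import xgboost as xgb

dtrain = xgb.DMatrix("train.txt")

# For logistic loss the base margin must be pre-transformation,
# i.e. log-odds rather than probabilities.
prior = 0.9
margin = [math.log(prior / (1.0 - prior))] * dtrain.num_row()
dtrain.set_base_margin(margin)

bst = xgb.train({"objective": "binary:logistic"}, dtrain, num_boost_round=10)

# output_margin=True returns raw margins, analogous to pred_margin=1 in the CLI.
raw = bst.predict(dtrain, output_margin=True)
```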
4 changes: 2 additions & 2 deletions doc/jvm/index.md
@@ -17,7 +17,7 @@ To publish the artifacts to your local maven repository, run

mvn install

Or, if you would like to skip tests, run

mvn -DskipTests install

@@ -32,7 +32,7 @@ This command will publish the xgboost binaries, the compiled java classes as well



-After integrating with Dataframe/Dataset APIs of Spark 2.0, XGBoost4J-Spark only supports compile with Spark 2.x. You can build XGBoost4J-Spark as a component of XGBoost4J by running `mvn package`, and you can specify the version of spark with `mvn -Dspark.version=2.0.0 package`. (To continue working with Spark 1.x, the users are supposed to update pom.xml by modifying the properties like `spark.version`, `scala.version`, and `scala.binary.version`. Users also need to change the implemention by replacing SparkSession with SQLContext and the type of API parameters from Dataset[_] to Dataframe)
+After integrating with the Dataframe/Dataset APIs of Spark 2.0, XGBoost4J-Spark only supports compilation against Spark 2.x. You can build XGBoost4J-Spark as a component of XGBoost4J by running `mvn package`, and you can specify the version of Spark with `mvn -Dspark.version=2.0.0 package`. (To continue working with Spark 1.x, users should update pom.xml by modifying properties like `spark.version`, `scala.version`, and `scala.binary.version`. Users also need to change the implementation by replacing SparkSession with SQLContext and the type of API parameters from Dataset[_] to Dataframe.)

Contents
--------
2 changes: 1 addition & 1 deletion doc/jvm/java_intro.md
@@ -133,7 +133,7 @@ Booster booster = new Booster(param, "model.bin");
```

## Prediction
-after training and loading a model, you use it to predict other data, the predict results will be a two-dimension float array (nsample, nclass) ,for predict leaf, it would be (nsample, nclass*ntrees)
+After training or loading a model, you can use it to predict on other data. The prediction results are a two-dimensional float array of shape (nsample, nclass); for leaf prediction it is (nsample, nclass*ntrees).
```java
DMatrix dtest = new DMatrix("test.svm.txt");
// predict: one row per test sample, one column per class
float[][] predicts = booster.predict(dtest);
```
2 changes: 1 addition & 1 deletion doc/jvm/xgboost4j-intro.md
@@ -26,7 +26,7 @@ They are also often [much more efficient](http://arxiv.org/abs/1603.02754).

The gap between the implementation fundamentals of general data processing frameworks and the more specialized machine learning libraries/systems prevents a smooth connection between these two types of systems, and thus brings unnecessary inconvenience to the end user. The common workflow is to use systems like Spark/Flink to preprocess/clean data, pass the results to machine learning systems like [XGBoost](https://github.com/dmlc/xgboost)/[MxNet](https://github.com/dmlc/mxnet) via the file system, and then conduct the following machine learning phase. This jumping across two types of systems creates inconvenience for users and brings additional overhead to the operators of the infrastructure.

-We want best of both worlds, so we can use the data processing frameworks like Spark and Flink toghether with
+We want the best of both worlds, so we can use data processing frameworks like Spark and Flink together with
the best distributed machine learning solutions.
To resolve the situation, we introduce the newly-brewed [XGBoost4J](https://github.com/dmlc/xgboost/tree/master/jvm-packages),
<b>XGBoost</b> for <b>J</b>VM Platform. We aim to provide the clean Java/Scala APIs and the integration with the most popular data processing systems developed in JVM-based languages.
18 changes: 8 additions & 10 deletions doc/jvm/xgboost4j_full_integration.md
@@ -1,6 +1,6 @@
## Introduction

In March 2016, we released the first version of [XGBoost4J](http://dmlc.ml/2016/03/14/xgboost4j-portable-distributed-xgboost-in-spark-flink-and-dataflow.html), which is a set of packages providing Java/Scala interfaces of XGBoost and the integration with prevalent JVM-based distributed data processing platforms, like Spark/Flink.

The integrations with Spark/Flink, a.k.a. <b>XGBoost4J-Spark</b> and <b>XGBoost-Flink</b>, have received tremendously positive feedback from the community. They enable users to build a unified pipeline, embedding XGBoost into the data processing system based on widely-deployed frameworks like Spark. The following figure shows the general architecture of such a pipeline with the first version of <b>XGBoost4J-Spark</b>, where the data processing is based on the low-level [Resilient Distributed Dataset (RDD)](http://spark.apache.org/docs/latest/programming-guide.html#resilient-distributed-datasets-rdds) abstraction.

@@ -12,14 +12,14 @@ In the last months, we have a lot of communication with the users and gain the d

* While Spark is still the mainstream data processing tool in most of scenarios, more and more users are porting their RDD-based Spark programs to [DataFrame/Dataset APIs](http://spark.apache.org/docs/latest/sql-programming-guide.html) for the well-designed interfaces to manipulate structured data and the [significant performance improvement](https://databricks.com/blog/2016/07/26/introducing-apache-spark-2-0.html).

* Spark itself has presented a clear roadmap that DataFrame/Dataset would be the base of the latest and future features, e.g. latest version of [ML pipeline](http://spark.apache.org/docs/latest/ml-guide.html) and [Structured Streaming](http://spark.apache.org/docs/latest/structured-streaming-programming-guide.html).

Based on this feedback from the users, we observed a gap between the original RDD-based XGBoost4J-Spark and users' latest usage scenarios as well as the future direction of the Spark ecosystem. To fill this gap, we started working on the <b><i>integration of XGBoost and Spark's DataFrame/Dataset abstraction</i></b> in September. In this blog, we will introduce <b>the latest version of XGBoost4J-Spark</b>, which allows the user to work with DataFrame/Dataset directly and embed XGBoost into Spark's ML pipeline seamlessly.


## A Full Integration of XGBoost and DataFrame/Dataset

The following figure illustrates the new pipeline architecture with the latest XGBoost4J-Spark.

![XGBoost4J New Architecture](https://raw.githubusercontent.com/dmlc/web-data/master/xgboost/unified_pipeline_new.png)

@@ -49,7 +49,7 @@ import org.apache.spark.ml.feature.StringIndexer
// load sales records saved in json files
val salesDF = spark.read.json("sales.json")

-// transfrom the string-represented storeType feature to numeric storeTypeIndex
+// transform the string-represented storeType feature to numeric storeTypeIndex
val indexer = new StringIndexer()
.setInputCol("storeType")
.setOutputCol("storeTypeIndex")
@@ -71,7 +71,7 @@ import org.apache.spark.ml.feature.StringIndexer
// load sales records saved in json files
val salesDF = spark.read.json("sales.json")

-// transfrom the string-represented storeType feature to numeric storeTypeIndex
+// transform the string-represented storeType feature to numeric storeTypeIndex
val indexer = new StringIndexer()
.setInputCol("storeType")
.setOutputCol("storeTypeIndex")
@@ -99,7 +99,7 @@ val salesRecordsWithPred = xgboostModel.transform(salesTestDF)
The most critical operation to maximize the power of XGBoost is to select the optimal parameters for the model. Tuning parameters manually is a tedious and labor-intensive process. With the latest version of XGBoost4J-Spark, we can utilize the Spark model selection tools to automate this process. The following example shows a code snippet utilizing [TrainValidationSplit](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.tuning.TrainValidationSplit) and [RegressionEvaluator](http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.evaluation.RegressionEvaluator) to search the optimal combination of two XGBoost parameters, [max_depth and eta](https://github.com/dmlc/xgboost/blob/master/doc/parameter.md). The model producing the minimum cost function value defined by the RegressionEvaluator is selected and used to generate the prediction for the test set.

```scala
// create XGBoostEstimator
val xgbEstimator = new XGBoostEstimator(xgboostParam).setFeaturesCol("features").
setLabelCol("sales")
val paramGrid = new ParamGridBuilder()
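  // The snippet is truncated above; what follows is a hedged sketch of how
  // the search might continue. The parameter handles on xgbEstimator and the
  // name salesTrainDF are assumptions, not the blog's exact code; the tuning
  // calls themselves are the standard Spark ML API.
  .addGrid(xgbEstimator.eta, Array(0.1, 0.4))
  .build()
val tv = new TrainValidationSplit()
  .setEstimator(xgbEstimator)
  .setEvaluator(new RegressionEvaluator().setLabelCol("sales"))
  .setEstimatorParamMaps(paramGrid)
  .setTrainRatio(0.8)
val bestModel = tv.fit(salesTrainDF) // salesTrainDF: the training split
```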
@@ -137,5 +137,3 @@ If you are interested in knowing more about XGBoost, you can find rich resources
- [Tutorials for the R package](http://xgboost.readthedocs.org/en/latest/R-package/index.html)
- [Introduction of the Parameters](http://xgboost.readthedocs.org/en/latest/parameter.html)
- [Awesome XGBoost, a curated list of examples, tutorials, blogs about XGBoost usecases](https://github.com/dmlc/xgboost/tree/master/demo)