Commit

[docs] fixed capital letters and formating (#873)
* fixed capital letters and formating

* fixed CMake capital letters; added VS 2013 to the list of allowed versions
StrikerRUS authored and guolinke committed Sep 1, 2017
1 parent 30e06be commit 8caad8d
Showing 11 changed files with 89 additions and 72 deletions.
10 changes: 5 additions & 5 deletions R-package/README.md
@@ -6,15 +6,15 @@ Installation

### Preparation

You need to install git and [CMake](https://cmake.org/) first.

Note: 32-bit R/Rtools is not supported.

#### Windows Preparation

Installing [Rtools](https://cran.r-project.org/bin/windows/Rtools/) is mandatory, and only the 64-bit version is supported. The Rtools MinGW64 folder must be added to PATH if this was not done automatically during installation.

The default compiler is Visual Studio (or [MS Build](https://www.visualstudio.com/downloads/#build-tools-for-visual-studio-2017)) in Windows, with an automatic fallback to Rtools or any [MinGW64](https://sourceforge.net/projects/mingw-w64/files/Toolchains%20targetting%20Win64/Personal%20Builds/mingw-builds/) (x86_64-posix-seh) available (this means if you have only Rtools and CMake, it will compile fine).

To force the usage of Rtools / MinGW, you can set `use_mingw` to `TRUE` in `R-package/src/install.libs.R`.
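As a minimal sketch of that toggle (the stand-in file below is created here purely for illustration; in a real checkout you would edit `R-package/src/install.libs.R` directly):

```python
# Flip use_mingw from FALSE to TRUE in a stand-in install.libs.R.
from pathlib import Path

path = Path("install.libs.R")            # stand-in for R-package/src/install.libs.R
path.write_text("use_mingw <- FALSE\n")  # illustrative original content
text = path.read_text().replace("use_mingw <- FALSE", "use_mingw <- TRUE")
path.write_text(text)
print(path.read_text().strip())  # → use_mingw <- TRUE
```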

@@ -75,7 +75,7 @@ model <- lgb.cv(params, dtrain, 10, nfold=5, min_data=1, learning_rate=1, early_
```

Installation with precompiled dll/lib from R using GitHub
---------------------------------------------------------

You can install the LightGBM R-package from GitHub with devtools, thanks to a helper package for LightGBM.

@@ -86,7 +86,7 @@ You will need:
* Precompiled LightGBM dll/lib
* MinGW / Visual Studio / gcc (depending on your OS and your needs) with make in PATH environment variable
* git in PATH environment variable
* [CMake](https://cmake.org/) in PATH environment variable
* [lgbdl](https://github.com/Laurae2/lgbdl/) R-package, which can be installed using `devtools::install_github("Laurae2/lgbdl")`
* [Rtools](https://cran.r-project.org/bin/windows/Rtools/) if using Windows

@@ -125,7 +125,7 @@ lgb.dl(commit = "master",
For more details about options, please check [Laurae2/lgbdl](https://github.com/Laurae2/lgbdl/) R-package.

Examples
--------

Please visit [demo](demo):

9 changes: 6 additions & 3 deletions README.md
@@ -23,6 +23,7 @@ For more details, please refer to [Features](https://github.com/Microsoft/LightG

News
----

07/13/2017: [Gitter](https://gitter.im/Microsoft/LightGBM) is available.

06/20/2017: Python-package is on PyPI now.
@@ -52,8 +53,9 @@ Julia Package: https://github.com/Allardvm/LightGBM.jl
JPMML: https://github.com/jpmml/jpmml-lightgbm


Get Started and Documentation
-----------------------------

Install by following the guide for the [command line program](https://github.com/Microsoft/LightGBM/wiki/Installation-Guide), [Python package](https://github.com/Microsoft/LightGBM/tree/master/python-package) or [R-package](https://github.com/Microsoft/LightGBM/tree/master/R-package). Then please see the [Quick Start](https://github.com/Microsoft/LightGBM/wiki/Quick-Start) guide.

Our primary documentation is at https://lightgbm.readthedocs.io/ and is generated from this repository.
@@ -90,5 +92,6 @@ LightGBM has been developed and used by many active community members. Your help
- Open issue if you met problems during development.

Microsoft Open Source Code of Conduct
-------------------------------------

This project has adopted the [Microsoft Open Source Code of Conduct](https://opensource.microsoft.com/codeofconduct/). For more information see the [Code of Conduct FAQ](https://opensource.microsoft.com/codeofconduct/faq/) or contact [opencode@microsoft.com](mailto:opencode@microsoft.com) with any additional questions or comments.
13 changes: 6 additions & 7 deletions docs/GPU-Tutorial.md
@@ -1,15 +1,14 @@
LightGBM GPU Tutorial
=====================

The purpose of this document is to give you a quick step-by-step tutorial on GPU training.

For Windows, please see [GPU Windows Tutorial](./GPU-Windows.md).

We will use the GPU instance on [Microsoft Azure cloud computing platform](https://azure.microsoft.com/) for demonstration, but you can use any machine with modern AMD or NVIDIA GPUs.


GPU Setup
---------

You need to launch an `NV`-type instance on Azure (available in East US, North Central US, South Central US, West Europe and Southeast Asia zones) and select Ubuntu 16.04 LTS as the operating system.

@@ -34,9 +33,10 @@ After about 30 seconds, the server should be up again.
If you are using an AMD GPU, you should download and install the [AMDGPU-Pro](http://support.amd.com/en-us/download/linux) driver and also install the packages `ocl-icd-libopencl1` and `ocl-icd-opencl-dev`.

Build LightGBM
--------------

Now install the necessary build tools and dependencies:

```
sudo apt-get install --no-install-recommends git cmake build-essential libboost-dev libboost-system-dev libboost-filesystem-dev
```
@@ -82,7 +82,7 @@ You need to set an additional parameter `"device" : "gpu"` (along with your othe
You can read our [Python Guide](https://github.com/Microsoft/LightGBM/tree/master/examples/python-guide) for more information on how to use the Python interface.
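The parameter switch described above amounts to one extra entry in the parameter dict; a sketch, where every value other than `"device"` is illustrative (`max_bin = 63` is assumed here as a commonly suggested GPU setting, not a requirement):

```python
# Sketch: a parameter dict with GPU training enabled.
# "device": "gpu" is the switch described above; other entries are illustrative.
params = {
    "objective": "binary",
    "device": "gpu",       # enables GPU training
    "max_bin": 63,         # smaller bin counts are often suggested for GPU speed
    "learning_rate": 0.1,
}
print(params["device"])  # → gpu
```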

Dataset Preparation
-------------------

Use the following commands to prepare the Higgs dataset:

@@ -172,9 +172,8 @@ Further Reading
[GPU Windows Tutorial](./GPU-Windows.md)

Reference
---------

Please kindly cite the following article in your publications if you find the GPU acceleration useful:

Huan Zhang, Si Si and Cho-Jui Hsieh. [GPU Acceleration for Large-scale Tree Boosting](https://arxiv.org/abs/1706.08359). arXiv:1706.08359, 2017.

27 changes: 13 additions & 14 deletions docs/GPU-Windows.md
@@ -1,7 +1,7 @@
GPU Windows Compilation
=======================

This guide is for the MinGW build.

For the MSVC (Visual Studio) build with GPU, please refer to https://github.com/Microsoft/LightGBM/wiki/Installation-Guide#windows-2 (we recommend this since it is much easier).

@@ -15,7 +15,7 @@ Installation steps (depends on what you are going to do):
* Install MinGW
* Install Boost
* Install Git
* Install CMake
* Create LightGBM binaries
* Debugging LightGBM in CLI (if GPU is crashing or any other crash reason)

@@ -190,16 +190,16 @@ Keep Git Bash open.

---

## CMake Installation, Configuration, Generation

**CLI / Python users only**

Installing CMake requires one download first and then a lot of configuration for LightGBM:

![Downloading CMake](https://cloud.githubusercontent.com/assets/9083669/24919759/fe5f4d90-1ee4-11e7-992e-00f8d9bfe6dd.png)

* Download CMake 3.8.0 here: https://cmake.org/download/.
* Install CMake.
* Run cmake-gui.
* Select the folder where you put LightGBM for `Where is the source code`, default using our steps would be `C:/github_repos/LightGBM`.
* Copy the folder name, and add `/build` for "Where to build the binaries", default using our steps would be `C:/github_repos/LightGBM/build`.
@@ -237,7 +237,7 @@ Configuring done
Generating done
```

This is straightforward, as CMake provides a lot of help in locating the correct elements.

---

@@ -293,13 +293,13 @@ You will have to redo the compilation steps for LightGBM to add debugging mode.

![Files to remove](https://cloud.githubusercontent.com/assets/9083669/25051307/3b7dd084-214c-11e7-9758-c338c8cacb1e.png)

Once you have removed the file, go into CMake, and follow the usual steps. Before clicking "Generate", click on "Add Entry":

![Added manual entry in CMake](https://cloud.githubusercontent.com/assets/9083669/25051323/508969ca-214c-11e7-884a-20882cd3936a.png)

In addition, click on Configure and Generate:

![Configured and Generated CMake](https://cloud.githubusercontent.com/assets/9083669/25051236/e71237ce-214b-11e7-8faa-d885d7826fe1.png)

And then, follow the regular LightGBM CLI installation from there.

@@ -364,8 +364,7 @@ l-fast-relaxed-math") at C:/boost/boost-build/include/boost/compute/program.hpp:
Right-click the command prompt, click "Mark", and select all the text from the first line (with the command prompt containing gdb) to the last line printed, containing all the log, such as:

```
C:\LightGBM\examples\binary_classification>gdb --args "../../lightgbm.exe" config=train.conf data=binary.train valid=binary.test objective=binary device=gpu
GNU gdb (GDB) 7.10.1
Copyright (C) 2015 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later <http://gnu.org/licenses/gpl.html>
16 changes: 9 additions & 7 deletions examples/binary_classification/README.md
@@ -1,18 +1,20 @@
Binary Classification Example
=============================

Here is an example of using LightGBM to run a binary classification task.

***You should copy the executable file to this folder first.***

#### Training

For Windows, run the following command in this folder:

```
lightgbm.exe config=train.conf
```

For Linux, run the following command in this folder:

```
./lightgbm config=train.conf
```
@@ -21,14 +23,14 @@ For linux, by running following command in this folder:

You should finish training first.

For Windows, run the following command in this folder:

```
lightgbm.exe config=predict.conf
```

For Linux, run the following command in this folder:

```
./lightgbm config=predict.conf
```


15 changes: 9 additions & 6 deletions examples/lambdarank/README.md
@@ -1,18 +1,20 @@
LambdaRank Example
==================

Here is an example of using LightGBM to run a LambdaRank task.

***You should copy the executable file to this folder first.***

#### Training

For Windows, run the following command in this folder:

```
lightgbm.exe config=train.conf
```

For Linux, run the following command in this folder:

```
./lightgbm config=train.conf
```
@@ -21,13 +23,14 @@ For linux, by running following command in this folder:

You should finish training first.

For Windows, run the following command in this folder:

```
lightgbm.exe config=predict.conf
```

For Linux, run the following command in this folder:

```
./lightgbm config=predict.conf
```

15 changes: 9 additions & 6 deletions examples/multiclass_classification/README.md
@@ -1,18 +1,20 @@
Multiclass Classification Example
=================================

Here is an example of using LightGBM to run a multiclass classification task.

***You should copy the executable file to this folder first.***

#### Training

For Windows, run the following command in this folder:

```
lightgbm.exe config=train.conf
```

For Linux, run the following command in this folder:

```
./lightgbm config=train.conf
```
@@ -21,13 +23,14 @@ For linux, by running following command in this folder:

You should finish training first.

For Windows, run the following command in this folder:

```
lightgbm.exe config=predict.conf
```

For Linux, run the following command in this folder:

```
./lightgbm config=predict.conf
```

13 changes: 7 additions & 6 deletions examples/parallel_learning/README.md
@@ -1,20 +1,21 @@
Parallel Learning Example
=========================
Here is an example of using LightGBM to perform parallel learning with 2 machines.

1. Edit mlist.txt, and write the IPs of the 2 machines that you want to run the application on.

```
machine1_ip 12400
machine2_ip 12400
```

2. Copy this folder and the executable file to the 2 machines that you want to run the application on.

3. Run the following command in this folder on both machines:

For Windows: ```lightgbm.exe config=train.conf```

For Linux: ```./lightgbm config=train.conf```

This parallel learning example is based on sockets. LightGBM also supports parallel learning based on MPI.
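As an illustration of the `mlist.txt` format above (one `<ip> <port>` pair per line), here is a small stand-in parser; it is a hypothetical helper for illustration, not part of LightGBM:

```python
# Parse an mlist.txt-style machine list: "<ip> <port>" per line.
def parse_mlist(text):
    machines = []
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue
        ip, port = line.split()
        machines.append((ip, int(port)))
    return machines

sample = "machine1_ip 12400\nmachine2_ip 12400\n"
print(parse_mlist(sample))  # → [('machine1_ip', 12400), ('machine2_ip', 12400)]
```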

13 changes: 9 additions & 4 deletions examples/python-guide/README.md
@@ -1,21 +1,26 @@
Python Package Examples
=======================

Here are examples of how to use the LightGBM Python package.

***You should install LightGBM (both the C++ and Python versions) first.***

For the installation, check the wiki [here](https://github.com/Microsoft/LightGBM/wiki/Installation-Guide).

You also need scikit-learn, pandas and matplotlib (only for the plot example) to run the examples; they are not required for the package itself. You can install them with pip:

```
pip install scikit-learn pandas matplotlib -U
```

Now you can run examples in this folder, for example:

```
python simple_example.py
```

Examples include:

- [simple_example.py](https://github.com/Microsoft/LightGBM/blob/master/examples/python-guide/simple_example.py)
- Construct Dataset
- Basic train and predict
@@ -36,4 +41,4 @@ Examples including:
- Change learning rates during training
- Self-defined objective function
- Self-defined eval metric
- Callback function
