# Sign error in QuantileLossFunction #8087

Closed
opened this issue Dec 19, 2016 · 3 comments

### ahyoussef commented Dec 19, 2016

#### Description

The QuantileLossFunction in sklearn/ensemble/gradient_boosting.py has a sign error, making it take negative values for certain inputs.

#### Steps/Code to Reproduce

• You can consult formula (2) in http://avesbiodiv.mncn.csic.es/estadistica/curso2011/qr4.pdf for the correct implementation.

• The peculiar thing about the quantile loss function is that it is asymmetric, i.e. $Loss(y, pred)$ is different from $Loss(pred, y)$, but of course it has to be positive for all inputs. See the graph at the top of page 1005 of the previous article.

• The error has no dramatic consequence because the negative gradient is correctly implemented.

• This issue has been raised several times already, corrected, but never merged (#6133 and #6429).

• All that needs to be done is to replace ` + (1.0 - alpha) * diff[~mask]` with ` - (1.0 - alpha) * diff[~mask]` on lines 422 and 425.

Here is code reproducing the wrong behavior:

```python
from sklearn.ensemble.gradient_boosting import QuantileLossFunction
import numpy as np

x = np.array([-1])
print(QuantileLossFunction(1, 0.5)(x, np.array([0])))
```

#### Expected Results

0.5

#### Actual Results

-0.5
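For reference, here is a minimal standalone sketch of the quantile (pinball) loss with the sign fixed, following formula (2) of the linked article. The function name `quantile_loss` and the mean-over-samples convention are assumptions for illustration, not sklearn's actual internals:

```python
import numpy as np

def quantile_loss(y, pred, alpha):
    """Quantile (pinball) loss at level alpha; non-negative for all inputs."""
    diff = y - pred
    mask = y > pred
    # Under-predictions (y > pred) are weighted by alpha; over-predictions
    # by (1 - alpha). The minus sign on diff[~mask] is what makes the loss
    # positive, and is exactly what the buggy lines are missing.
    return (alpha * diff[mask].sum()
            - (1.0 - alpha) * diff[~mask].sum()) / y.shape[0]

print(quantile_loss(np.array([-1.0]), np.array([0.0]), 0.5))  # prints 0.5
```

With the sign error, the same inputs yield -0.5 instead.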

#### Versions

Darwin-16.3.0-x86_64-i386-64bit
Python 3.5.2 |Anaconda custom (x86_64)| (default, Jul 2 2016, 17:52:12)
[GCC 4.2.1 Compatible Apple LLVM 4.2 (clang-425.0.28)]
NumPy 1.11.2
SciPy 0.18.1
Scikit-Learn 0.19.dev0

### agramfort commented Dec 21, 2016

 It was not merged because the PRs did not contain any tests. Please finish the work and we'll be happy to merge.

### devanshdalal commented Dec 27, 2016

 This should be closed if the work is done now.

### ahyoussef commented Dec 27, 2016

 All looks fine indeed.
closed this Dec 27, 2016