qcut() should make sure the bin boundaries are unique before passing them to _bins_to_cuts #7751

Closed
yonil7 opened this issue Jul 14, 2014 · 21 comments
Labels
Error Reporting Incorrect or improved errors from pandas Reshaping Concat, Merge/Join, Stack/Unstack, Explode
Milestone

Comments

yonil7 commented Jul 14, 2014

xref #8309

for example:

pd.qcut([1,1,1,1,1,1,1,1,1,1,1,1,1,5,5,5], [0.00001, 0.5])

will raise the exception "ValueError: Bin edges must be unique: array([ 1.,  1.])"

Fix suggestion - add one new line (marked below):

def qcut(x, q, labels=None, retbins=False, precision=3):
    if com.is_integer(q):
        quantiles = np.linspace(0, 1, q + 1)
    else:
        quantiles = q
    bins = algos.quantile(x, quantiles)
    bins = np.unique(bins)  # <-- new line: drop duplicate bin edges
    return _bins_to_cuts(x, bins, labels=labels, retbins=retbins,
                         precision=precision, include_lowest=True)
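For illustration, the duplicate edges in the example above can be reproduced directly with NumPy (np.quantile is used here as a stand-in for the internal algos.quantile):

```python
import numpy as np

# 13 ones and 3 fives: both requested quantiles land on the value 1.0
data = [1] * 13 + [5, 5, 5]
quantiles = [0.00001, 0.5]

bins = np.quantile(data, quantiles)   # array([1., 1.]) -- duplicate edges
unique_bins = np.unique(bins)         # array([1.]) -- deduplicated, as proposed
```

This is exactly the situation that trips _bins_to_cuts: the quantile computation itself is fine, but the resulting edge array contains repeats.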
jreback (Contributor) commented Jul 14, 2014

Why would you not just catch the ValueError? Better to be informed of the issue, right?

yonil7 (Author) commented Jul 14, 2014

I think this should not throw an exception, as this is a legitimate use.
Maybe a better example:

pd.qcut([1,2,3,4,4], [0, .25, .5, .75, 1.])

also throws: ValueError: Bin edges must be unique: array([1, 2, 3, 4, 4], dtype=int64)

yonil7 (Author) commented Jul 14, 2014

INSTALLED VERSIONS

commit: None
python: 2.7.5.final.0
python-bits: 64
OS: Windows
OS-release: 7
machine: AMD64
processor: Intel64 Family 6 Model 60 Stepping 3, GenuineIntel
byteorder: little
LC_ALL: None
LANG: None

pandas: 0.14.0
nose: 1.3.0
Cython: 0.19.1
numpy: 1.8.1
scipy: 0.14.0
statsmodels: 0.5.0
IPython: 1.0.0
sphinx: 1.1.3
patsy: 0.2.1
scikits.timeseries: None
dateutil: 2.2
pytz: 2014.4
bottleneck: 0.8.0
tables: 3.1.1
numexpr: 2.3.1
matplotlib: 1.3.1
openpyxl: 1.8.5
xlrd: 0.9.3
xlwt: 0.7.5
xlsxwriter: 0.5.5
lxml: 3.3.5
bs4: 4.3.1
html5lib: None
bq: None
apiclient: None
rpy2: None
sqlalchemy: 0.9.6
pymysql: None
psycopg2: None

jreback (Contributor) commented Jul 14, 2014

Maybe you misunderstand me. This is a user error (in that you generally pass unique bins, and you CAN make them unique before passing them in). Yes, it could be done in the function, but wouldn't you as a user want to know that you are passing non-unique bins?

yonil7 (Author) commented Jul 14, 2014

The issue is regarding the pd.qcut() function (not pd.cut()). It should get an array of samples plus quantiles/ranks.
In the previous example, I passed the array of samples [1,2,3,4,4] and would like qcut to calculate, for each sample in that array, which of the 4 quantiles it belongs to (quantile 1, 2, 3, or 4).

jreback (Contributor) commented Jul 14, 2014

It's the same issue: if you uniquify in one, you would in the other. That's not the question, though. Why should this happen silently (automatically)? Maybe I am missing something.

@jseabold ?

jseabold (Contributor) commented:
I would expect this to raise an informative error to the user. "The quantiles you selected do not result in unique bins. Try selecting fewer quantiles." I.e., if your 25th quantile is the same as your median, then you should be told this and you shouldn't ask for both. Trying to get fancy under the hood is asking for confusion IMO.

jreback (Contributor) commented Jul 15, 2014

@jseabold ok, I can buy that this could produce a better message.

@yonil7 want to take this on with a pull-request?

edjoesu commented Dec 7, 2014

In addition to just raising a more informative error, I think that there should be an option to automatically handle duplicates in the following way:

Suppose we have a list with too many duplicates, say we want to split [1,2,3,3,3,3,3,3,4,5,6,7] into quartiles. Right now qcut fails, because the second-lowest quartile consists entirely of '3's, duplicating the bin edges. But there is a natural way to assign the quartiles if we allow duplicate values to be split among different bins: [1,2,3], [3,3,3], [3,3,4], [5,6,7].

Now this is not completely ideal -- the assignment is arbitrary, and there is no way of knowing what happened by looking at the bin edges alone. But in the real world it can be convenient to have an option that 'just works', and I think this warrants a 'split_dup' option that is False by default.

What do people think? I'll add this if people support it.
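For reference, the splitting behaviour described above can be approximated today without changing pandas by ranking first, so that tied values get distinct positions (a common workaround, not pandas' own mechanism):

```python
import pandas as pd

s = pd.Series([1, 2, 3, 3, 3, 3, 3, 3, 4, 5, 6, 7])

# rank(method='first') breaks ties in order of appearance, so every
# element gets a distinct rank and qcut can always form 4 equal bins.
labels = pd.qcut(s.rank(method='first'), 4, labels=False)
# quartile assignment: [0,0,0, 1,1,1, 2,2,2, 3,3,3]
```

This yields exactly the grouping sketched above -- [1,2,3], [3,3,3], [3,3,4], [5,6,7] -- with the same caveat that the assignment of tied values to bins is arbitrary (it depends on their order in the input).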

jreback (Contributor) commented Dec 7, 2014

@edjoesu start with the error message. If you want to propose a new issue for discussion, that is OK. I think if you show several test cases, some of which you CAN distinguish and some of which have to raise, then it's probably fine, but it needs discussion.

mritchie712 commented:
Is there any workaround for this until it's changed?

jreback (Contributor) commented Dec 11, 2014

This is not a bug; the change here is just a more informative error message.

mritchie712 commented:
Got it, I thought it was being changed to a warning instead of an error. I personally would like the option to allow non-unique bins, but I also understand the case against it.

I just commented out those lines in tile.py and it seems to have allowed me to use duplicates.
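A workaround that does not require editing tile.py is to compute the quantile edges yourself, deduplicate them, and hand them to pd.cut (a sketch; note the bin count shrinks when edges collide):

```python
import numpy as np
import pandas as pd

data = [1, 2, 3, 4, 4]

# Compute quartile edges manually, then drop the duplicate upper edge.
edges = np.unique(np.quantile(data, [0, .25, .5, .75, 1.]))
# edges is [1., 2., 3., 4.] -- 3 bins instead of the requested 4

binned = pd.cut(data, edges, include_lowest=True)
```

This keeps the library's own validation intact while getting the deduplicated behaviour proposed at the top of the thread.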

springcoil (Contributor) commented:
Does anyone own this change? Or has it already been done?

@jreback jreback modified the milestones: 0.16.0, Next Major Release Mar 6, 2015
@ghost ghost mentioned this issue Apr 17, 2015
dukebody (Contributor) commented:
See http://stackoverflow.com/questions/20158597/how-to-qcut-with-non-unique-bin-edges for a discussion about this.

@edjoesu I agree with your proposal, but I foresee that the implementation won't be trivial: given the way the current code assigns values to bins, I think some bins would become empty if duplicates are allowed.

dukebody (Contributor) commented:
I think we could add a "duplicate_edges" parameter, with the following options:

Does it look reasonable? What do you think?
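For readers arriving later: a parameter along these lines was eventually added to pandas itself. Newer pandas versions (0.20+) accept duplicates='raise' (the default) or duplicates='drop' on qcut:

```python
import pandas as pd

# With duplicates='drop', the colliding upper edge from the earlier
# example is silently removed, leaving 3 bins instead of the 4 requested.
result = pd.qcut([1, 2, 3, 4, 4], [0, .25, .5, .75, 1.], duplicates='drop')
```

Note that dropping edges changes the number of resulting bins, which is exactly the trade-off debated in this thread.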

rainjacket commented:
dukebody's suggestion looks good to me.

For my use case, I would prefer for qcut to just return the (non-unique) bin list, and let me handle it.

randomgambit commented:
Hi guys, let me jump in and push for that to happen as well. Solutions already exist in many other languages; it's just a matter of choosing one of them. See also https://stackoverflow.com/questions/38594277/how-to-compute-quantile-bins-in-pandas-when-there-are-ties-in-the-data/38596578?noredirect=1#comment64582414_38596578

randomgambit commented:
See, for instance, a possible solution adopted in SAS:

https://support.sas.com/documentation/cdl/en/proc/61895/HTML/default/viewer.htm#a000146840.htm

jreback (Contributor) commented Jul 26, 2016

@randomgambit it's not about a solution; this part is quite straightforward. It's about someone actually implementing it. There are many, many open issues. If someone submits a PR, that moves things WAY faster.

randomgambit commented Jul 26, 2016

I get it Jeff, no worries. Unfortunately I don't have the Python skills to help you get the job done. I can only give hints for the good Samaritan who wants to improve the current work in this direction.
