Allow merge to take advantage of pre-indexed columns #57

Closed

doobwa opened this issue Sep 13, 2012 · 8 comments

doobwa (Contributor) commented Sep 13, 2012

Currently, merge(a, b, bycol, jointype) always calls join_idx to create the left and right indices needed for joining two DataFrames. It would be nice if this could take advantage of columns that are Index types.
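
For illustration only, here is a minimal sketch of what the optimization could buy: if both key columns are already sorted (as an Index type would guarantee), the left and right row indices for an inner join can be produced in one merge-style pass, with no sorting or hashing. The helper name sorted_join_idx is hypothetical and is not the actual join_idx implementation.

# Hypothetical sketch: inner-join row indices for two pre-sorted key vectors.
# Returns (left_rows, right_rows) such that a[left_rows[k]] == b[right_rows[k]].
function sorted_join_idx(a::AbstractVector, b::AbstractVector)
    left, right = Int[], Int[]
    i, j = 1, 1
    while i <= length(a) && j <= length(b)
        if a[i] < b[j]
            i += 1
        elseif a[i] > b[j]
            j += 1
        else
            # emit the cross-product of the tied runs on both sides
            i2 = i
            while i2 <= length(a) && a[i2] == a[i]; i2 += 1; end
            j2 = j
            while j2 <= length(b) && b[j2] == b[j]; j2 += 1; end
            for p in i:i2-1, q in j:j2-1
                push!(left, p); push!(right, q)
            end
            i, j = i2, j2
        end
    end
    left, right
end

sorted_join_idx([1, 2, 2, 5], [2, 2, 3, 5])  # ([2, 2, 3, 3, 4], [1, 2, 1, 2, 4])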

tshort (Contributor) commented Sep 13, 2012

Chris, are you referring to the types in experimental_indexing.jl (see also issue #24)?

If so, joins on columns that are already sorted would indeed be faster. I looked over the code; it would take some refactoring, and I can't see an easy way to take advantage of that.

doobwa (Contributor, Author) commented Sep 13, 2012

Yes, that is what I am referring to. I looked at it briefly, and I agree it would take some refactoring. It could be pretty nice to have, though.

HarlanH (Contributor) commented Sep 23, 2012

I'm in favor of pre-indexed columns, but I tend to think that hash-based indexes will be more flexible than sorted-value indexes. In particular, you can have multiple independently indexed columns, à la a relational database. But regardless, yes, merging is one of the primary uses for indexes, along with split-apply-combine!
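
As a rough sketch of the hash-based idea, assuming nothing beyond Base Julia: each indexed column gets its own Dict mapping a key value to the rows containing it, so several columns can be indexed independently, as in a relational database. The name hash_index is hypothetical.

# Hypothetical sketch of a hash-based column index: one Dict per column,
# mapping each key value to the rows that contain it.
function hash_index(col::AbstractVector)
    idx = Dict{eltype(col), Vector{Int}}()
    for (row, key) in enumerate(col)
        push!(get!(idx, key, Int[]), row)
    end
    idx
end

# Independent indexes on two columns of the same table:
city = ["NYC", "SF", "NYC", "LA"]
year = [2011, 2012, 2012, 2012]
city_idx = hash_index(city)   # "NYC" => [1, 3], ...
year_idx = hash_index(year)   # 2012  => [2, 3, 4], ...
get(city_idx, "NYC", Int[])   # O(1) lookup of matching rows: [1, 3]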

tshort (Contributor) commented Sep 23, 2012

Harlan, could you expand upon the "hash-based indexes" idea a bit? Any pointers to links or R code would help.

StefanKarpinski (Member) commented Sep 28, 2012
Not to speak for Harlan, but I suspect he means keeping a hash mapping key tuples to row indices, so that you can look things up quickly by the hashed key values. I'm not sure hash-based indexing is more flexible, but it certainly would be faster where applicable. You can't do range queries or anything like that, which is limiting.
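
A minimal sketch of that description, assuming plain Base Julia: build a Dict from key tuples to row indices on one side, then probe it with the other side's keys to produce the join pairs (a hash join). The name hash_join_idx is hypothetical.

# Hypothetical sketch: hash join driven by a key => rows Dict.
function hash_join_idx(akeys::AbstractVector, bkeys::AbstractVector)
    index = Dict{eltype(akeys), Vector{Int}}()
    for (row, key) in enumerate(akeys)
        push!(get!(index, key, Int[]), row)
    end
    left, right = Int[], Int[]
    for (brow, key) in enumerate(bkeys)
        for arow in get(index, key, Int[])
            push!(left, arow); push!(right, brow)
        end
    end
    left, right
end

# Composite keys as tuples; equality lookups only -- no range queries.
a = [(1, "x"), (2, "y"), (2, "y")]
b = [(2, "y"), (3, "z")]
hash_join_idx(a, b)  # ([2, 3], [1, 1])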

HarlanH (Contributor) commented Sep 28, 2012

Oh, thanks, yes. There are some other alternatives that might make sense. I think B-trees and variants work better for range queries. There are some more compact options that work well with blocked data, if you're most worried about minimizing disk access in the memory-mapped case but care less about in-memory scans.

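
For contrast with hashing, a sorted-value index answers the range queries Stefan mentioned directly. A minimal sketch using only Base Julia, where a sort permutation stands in for the index:

# Hypothetical sketch: a sorted permutation as a range-queryable index.
# perm puts the column's rows in key order; range queries become binary searches.
col  = [30, 10, 50, 20, 40]
perm = sortperm(col)                 # rows in key order: [2, 4, 1, 5, 3]
sorted = col[perm]

# All rows with 15 <= key <= 45, found in O(log n) plus output size:
lo = searchsortedfirst(sorted, 15)
hi = searchsortedlast(sorted, 45)
perm[lo:hi]                          # rows [4, 1, 5] (keys 20, 30, 40)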

tshort (Contributor) commented Feb 20, 2013

I'm going to close this now. IndexedVectors have recently been improved to support this. PooledDataArrays were also updated to improve sorting, grouping, and merging. Here are some timings:

using DataFrames
N = 1_000_000
d = @DataFrame(
        dv1 => rand(1:10, N),
        dv2 => rand(1:1000, N),
        dv3 => rand(N),
        pdv1 => PooledDataArray(dv1),
        pdv2 => PooledDataArray(dv2),
        pdv3 => PooledDataArray(dv3),
        idv1 => IndexedVector(dv1),
        idv2 => IndexedVector(dv2),
        idv3 => IndexedVector(dv3))

@time sort(d["dv1"])     # elapsed time: 4.172357082366943 seconds
@time sort(d["dv2"])     # elapsed time: 4.258169889450073 seconds
@time sort(d["dv3"])     # elapsed time: 6.2575249671936035 seconds
@time sort(d["pdv1"])    # elapsed time: 0.04441094398498535 seconds
@time sort(d["pdv2"])    # elapsed time: 0.031256914138793945 seconds
@time sort(d["pdv3"])    # elapsed time: 0.07601308822631836 seconds
@time sort(d["idv1"])    # elapsed time: 0.014478921890258789 seconds
@time sort(d["idv2"])    # elapsed time: 0.049768924713134766 seconds
@time sort(d["idv3"])    # elapsed time: 0.02382802963256836 seconds

@time sortby(d, "dv1")   # elapsed time: 3.8030378818511963 seconds
@time sortby(d, "dv2")   # elapsed time: 5.8505167961120605 seconds
@time sortby(d, "dv3")   # elapsed time: 7.38699197769165 seconds
@time sortby(d, "pdv1")  # elapsed time: 1.1051669120788574 seconds
@time sortby(d, "pdv2")  # elapsed time: 1.275796890258789 seconds
@time sortby(d, "pdv3")  # elapsed time: 1.3765780925750732 seconds
@time sortby(d, "idv1")  # elapsed time: 1.076483964920044 seconds
@time sortby(d, "idv2")  # elapsed time: 1.249567985534668 seconds
@time sortby(d, "idv3")  # elapsed time: 1.1959848403930664 seconds

@time groupby(d, "dv1")  # elapsed time: 0.1824049949645996 seconds
@time groupby(d, "dv2")  # elapsed time: 0.1818699836730957 seconds
@time groupby(d, "pdv1") # elapsed time: 0.040914058685302734 seconds
@time groupby(d, "pdv2") # elapsed time: 0.02237415313720703 seconds
@time groupby(d, "idv1") # elapsed time: 0.033102989196777344 seconds
@time groupby(d, "idv2") # elapsed time: 0.08005595207214355 seconds

N1 = 50_000
d1 = @DataFrame(dv1 => rand(1:N1, N1), pdv1 => PooledDataArray(dv1), idv1 => IndexedVector(dv1), x => letters[rand(1:26, N1)])
N2 = 100_000
d2 = @DataFrame(dv1 => rand(5:N2, N2), pdv1 => PooledDataArray(dv1), idv1 => IndexedVector(dv1), y => LETTERS[rand(1:26, N2)])

@time merge(d1, d2, "dv1")    # elapsed time: 0.18854999542236328 seconds
@time merge(d1, d2, "pdv1")   # elapsed time: 0.13550090789794922 seconds
@time merge(d1, d2, "idv1")   # elapsed time: 0.11914300918579102 seconds

tshort closed this as completed Feb 20, 2013
milktrader commented
Thank you for that demonstration.
