
ENH: improve performance of df.to_csv GH3054 #3059

Merged
27 commits merged on Mar 19, 2013

Conversation

@ghost commented Mar 15, 2013

Needs more testing before merging.

Following the SO question mentioned in #3054:

In [7]: import pandas as pd
   ...: def df_to_csv(df,fname):
   ...:     fh=open(fname,'w')
   ...:     fh.write(','.join(df.columns) + '\n')
   ...:     for row in df.itertuples(index=False):
   ...:         slist = [str(x) for x in row]
   ...:         ss = ','.join(slist) + "\n"
   ...:         fh.write(ss)
   ...:     fh.close()
   ...: 
   ...: aa=pd.DataFrame({'A':range(100000)})
   ...: aa['B'] = aa.A + 1.0
   ...: aa['C'] = aa.A + 2.0
   ...: aa['D'] = aa.A + 3.0
   ...: 
   ...: %timeit -r10 aa.to_csv('/tmp/junk1',index=False)   
   ...: %timeit -r10 df_to_csv(aa,'/tmp/junk2') 
   ...: from hashlib import sha1
   ...: print sha1(open("/tmp/junk1").read()).hexdigest()
   ...: print sha1(open("/tmp/junk2").read()).hexdigest()

current pandas   with PR   example code
2.3 s            1.29 s    1.28 s

wins:

  • convert numpy numerics to native Python types to eliminate expensive numpy-specific
    stringify calls (see the sketch after this list).
  • if the number of columns is < 10000, precompute the cols loop range rather
    than creating and walking a generator at each iteration of the inner loop.
  • some cargo-cult stuff that's probably in the noise.
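
A minimal sketch (my own illustration, assuming a list of equal-length numpy columns; not the PR's actual code) of what the first two wins amount to:

import numpy as np

ncols, nrows = 4, 100000
cols = [np.arange(nrows, dtype=np.float64) for _ in range(ncols)]

# win 1: tolist() yields native Python floats, which stringify much faster
# than numpy float64 scalars
native_cols = [c.tolist() for c in cols]

# win 2: compute the column index range once, outside the per-row loop
col_range = range(ncols)

out = []
row = [None] * ncols
for i in range(nrows):
    for j in col_range:
        row[j] = str(native_cols[j][i])
    out.append(",".join(row))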
@jreback (Contributor) commented Mar 15, 2013

Case 2

aa=pd.DataFrame({'A':range(100000),'B' : pd.Timestamp('20010101')})
aa.ix[100:2000,'A'] = np.nan

Current pandas

In [4]: %timeit aa.to_csv('test.csv')
1 loops, best of 3: 1.57 s per loop

case 2

In [7]: %timeit aa.to_csv('test.csv')
1 loops, best of 3: 2.66 s per loop

With change (starting with yours + my patch)

In [3]: %timeit aa.to_csv('test.csv')
1 loops, best of 3: 825 ms per loop

case 2

In [5]: %timeit aa.to_csv('test.csv')
1 loops, best of 3: 1.96 s per loop
@ghost (Author) commented Mar 15, 2013

Thanks Jeff. The patch was garbled by GH. Can you pull this branch, add your patch
as a new commit, and push to your fork? I'll pick up the commit and add it to the PR.

@jreback (Contributor) commented Mar 15, 2013

@ghost (Author) commented Mar 15, 2013

After Jeff's commit:

current pandas   with PR   example code
2.3 s            0.58 s    1.28 s
@ghost (Author) commented Mar 15, 2013

L1389 sure looks like one loop too many.

@jreback (Contributor) commented Mar 15, 2013

I played around more with this, but all of the pre-allocation and array indexing seems to be working (even the map around asscalar makes a big difference), so unless we write this loop in Cython, I'm not sure how much better we can do.

@ghost (Author) commented Mar 16, 2013

Another 25%:

current pandas   with PR   example code
2.3 s            0.44 s    1.28 s
@jreback (Contributor) commented Mar 16, 2013

I get about 20% more with Cython (on my test case that has lots of dates):
jreback@5335a80
jreback@fa12e63

@jreback (Contributor) commented Mar 16, 2013

I think you win! I get 0.47s with the same test (and 2.3s with current pandas) :(
Hmm... though if I were to pre-allocate the rows... I am currently using a single row and overwriting it (and copying to the writer).

@ghost (Author) commented Mar 16, 2013

Wait, I'm following your lead and Cythonizing the latest Python version. Let's see...

@ghost (Author) commented Mar 16, 2013

Remember we're benchmarking on different machines, so timings differ. I get 0.407s with your Cython branch.

@ghost (Author) commented Mar 16, 2013

310ms

@jreback (Contributor) commented Mar 16, 2013

Whoosh.
It's the preallocation of rows.
Pretty good, an 8x speedup.
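
A rough sketch (my own reconstruction, not the actual patch) of the preallocated-row idea: one row buffer allocated up front and overwritten for every row, rather than building a fresh list per row.

import csv

def write_block(writer, columns):
    # columns: list of equal-length Python lists, one per frame column
    nrows = len(columns[0])
    ncols = len(columns)
    row = [None] * ncols              # allocated once, overwritten for every row
    for i in range(nrows):
        for j in range(ncols):
            row[j] = columns[j][i]
        writer.writerow(row)          # csv.writer copies the values out immediately

with open("out.csv", "w") as fh:
    w = csv.writer(fh)
    w.writerow(["A", "B"])
    write_block(w, [[1, 2, 3], [4.0, 5.0, 6.0]])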

@ghost (Author) commented Mar 16, 2013

Hang on, I'm working on a Linux kernel patch...

@ghost (Author) commented Mar 16, 2013

The list() cast in the call into Cython eliminates some test failures that
occur when passing in the index ndarray; I have no idea why.

Another 20% (relative):

current pandas   with PR   example code
2.3 s            0.35 s    1.28 s

Any more tricks? Probably heavily IO-bound at this point.
Edit: not really.

@jreback (Contributor) commented Mar 16, 2013

Could try doing a tolist() on the numeric arrays (in the helper, when creating the series)
so that we can eliminate the np.asscalar test and conversions.
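
A quick illustration (my own example, not from the PR) of what tolist() buys: the resulting list already holds native Python objects, so no per-value scalar conversion or type check is needed when formatting.

import numpy as np

arr = np.arange(3, dtype=np.float64)
print(type(arr[0]))            # numpy.float64 scalar
print(type(arr.tolist()[0]))   # built-in Python float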

@ghost (Author) commented Mar 16, 2013

If it works, post the commit hash and I'll update.

@jreback (Contributor) commented Mar 16, 2013

Try this out:

jreback@d78f4f6

@ghost (Author) commented Mar 16, 2013

240ms. Very nice.
But this doubles the memory footprint, doesn't it? Definitely add it as an option.

@jreback (Contributor) commented Mar 16, 2013

Your wish is my command :) I had the same issue in HDFStore, so I chunked it:

jreback@55adfb7
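
Roughly, the chunking idea looks like this (a sketch under my own assumptions, not the linked commit): convert and write a bounded number of rows at a time, so the native-Python copy never holds the whole frame.

import numpy as np

def write_chunked(fh, columns, chunksize=10000):
    # columns: list of 1-d numpy arrays, one per frame column
    nrows = len(columns[0])
    for start in range(0, nrows, chunksize):
        end = min(start + chunksize, nrows)
        # only this slice is converted to native Python objects at a time
        chunk = [col[start:end].tolist() for col in columns]
        for i in range(end - start):
            fh.write(",".join(str(col[i]) for col in chunk) + "\n")

with open("out.csv", "w") as fh:
    fh.write("A,B\n")
    write_chunked(fh, [np.arange(5), np.arange(5) + 0.5], chunksize=2)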

@ghost (Author) commented Mar 16, 2013

Very good, and that also bounds the memory used by the list() copy in the call into Cython.

I hereby declare these games over. 10x will have to do...

👍

@jreback (Contributor) commented Mar 19, 2013

AWESOME!

@ghost ghost deleted the GH3054/to_csv_perf branch Mar 19, 2013

@jreback (Contributor) commented Mar 19, 2013

Typos in RELEASE.rst: the 3059 link is pointing to 3039, and it says dv.to_csv instead of df.to_csv:

  • Improved performance of dv.to_csv() by up to 10x in some cases. (GH3059)
@ghost (Author) commented Mar 19, 2013

fixed.

@jreback (Contributor) commented Mar 19, 2013

FYI, vs 0.10.1:

Results:
                                            t_head  t_baseline      ratio
name                                                                     
frame_get_dtype_counts                      0.0988    217.2718     0.0005
frame_wide_repr                             0.5526    216.5370     0.0026
groupby_first_float32                       3.0029    341.3520     0.0088
groupby_last_float32                        3.1525    339.0419     0.0093
frame_to_csv2                             190.5260   2244.4260     0.0849
indexing_dataframe_boolean                 13.6755    126.9212     0.1077
write_csv_standard                         38.1940    234.2570     0.1630
frame_reindex_axis0                         0.3215      1.1042     0.2911
frame_to_csv_mixed                        369.0670   1123.0412     0.3286
frame_to_csv                              112.2720    226.7549     0.4951
frame_mult                                 22.6785     42.7152     0.5309
frame_add                                  24.3593     41.8012     0.5827
frame_reindex_upcast                       11.8235     17.0124     0.6950
frame_fancy_lookup_all                     15.0496     19.4497     0.7738
@ghost (Author) commented Mar 19, 2013

We should jigger the code so the first column of the test names spells "AwesomeRelease".

If I had to pick one thing, I'd say iloc/loc is the best addition in 0.11, though.
ix and I have been playing "20 questions" for far too long.

@jreback (Contributor) commented Mar 19, 2013

Hahah... FYI, I put up a commit on the centered moving-window stuff when you have a chance. All fixed except for rolling_count, for which I am not quite sure what the results should be...

@ghost (Author) commented Mar 19, 2013

BTW, the final perf tweak was all about avoiding iteritems and its reliance
on icol. There might be a fast path hiding there which would get you another
0.0 entry on test_perf.sh.

@jreback (Contributor) commented Mar 19, 2013

I saw your data pre-allocation; did you change iteritems somewhere?

FYI, as a side issue, that's the problem with the duplicate col names: essentially the col label maps to the column in the block, but there isn't a simple map. In theory there should be a positional map, but it's very tricky.

@ghost (Author) commented Mar 19, 2013

@jreback, can you follow up on the SO question and make a pandas user happy?

@ghost (Author) commented Mar 19, 2013

Your refactor calculated series using iteritems, and that was the cause of the O(ncols)
behaviour I noted. Eliminating that in favor of yanking the data out directly from the blocks
turned that into ~O(1), but it breaks encapsulation. It would be nice to use a proper interface
rather than rifle the bowels of the underlying block manager.

Come to think of it, iteritems is col-oriented. I wonder if all that could have
been avoided... with iterrows. Oh boy.
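
For context, a tiny reminder (my own illustration) of the two iteration orientations: iteritems yields one (name, Series) pair per column, while iterrows yields one per row. In current pandas iteritems is spelled items().

import pandas as pd

df = pd.DataFrame({"A": [1, 2], "B": [3.0, 4.0]})

for name, col in df.items():      # column-oriented: one Series per column
    print(name, col.tolist())

for idx, row in df.iterrows():    # row-oriented: one Series per row
    print(idx, row.tolist())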

@jreback (Contributor) commented Mar 19, 2013

iterrows does a lot of stuff... and the block-based approach is better anyhow. I will answer the SO question.

@ghost (Author) commented Mar 19, 2013

The tests I added for duplicate columns were buggy, and didn't catch the fact
that dupe columns are disabled for to_csv.

v0.10.1 to_csv() mangles dupe names into "dupe.1", "dupe.2", etc. That's an OK workaround,
but what's the fundamental reason we can't just do it straight? Is there one?

Correction: it's read_csv that does the mangling.
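
For reference, a small demo of the mangling described above (my own example; current pandas behaviour matches the description): read_csv de-duplicates repeated header names by appending .1, .2, and so on.

import io
import pandas as pd

csv_data = "dupe,dupe,other\n1,2,3\n"
df = pd.read_csv(io.StringIO(csv_data))
print(df.columns.tolist())   # ['dupe', 'dupe.1', 'other']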

@jreback (Contributor) commented Mar 19, 2013

The issue is that the blocks hold a 2-d array of the items in a particular block, and the mapping between where an item is in the block and where it is in the frame is dependent on the column names being unique (in the case of a MultiIndex this is fine of course).

There isn't a positional map (from columns in the frame to columns in the block). Keeping one is pretty tricky.

Wes has a fairly complicated routine to find the correct column even with the duplicates, and it succeeds unless they are across blocks (which is the case that I am testing here).

I suppose you could temporarily do a rename on the frame (no idea if that works), then iterate on the blocks, which would solve the problem. As I said, 0.10.1 actually prints the data out twice. I think raising is OK here; it's very unlikely to happen, and if it does you can just use a MultiIndex in the first place.

@ghost (Author) commented Mar 19, 2013

I don't follow. If you can display a repr for a frame with dupe columns, you can write it out to CSV.

@jreback (Contributor) commented Mar 19, 2013

Can't display a repr either... I will create an issue for this.

@jreback (Contributor) commented Mar 19, 2013

@ghost (Author) commented Mar 19, 2013

We're probably thinking of different things. What do you mean you can't repr a frame
with dupe columns?

In [19]: pd.DataFrame([[1,1]],columns=['a','a'])
Out[19]: 
   a  a
0  1  1

on the other hand

In [4]: df.to_csv("/tmp/a")
    807         if len(set(self.cols)) != len(self.cols):
--> 808             raise Exception("duplicate columns are not permitted in to_csv")
    809 
    810         self.colname_map = dict((k,i) for i,k in  enumerate(obj.columns))

Exception: duplicate columns are not permitted in to_csv

That's completely messed up.

@ghost (Author) commented Mar 19, 2013

I think I can rejig CSVWriter to do this properly, or at least discover what it is that I'm
failing to understand.

@jreback (Contributor) commented Mar 19, 2013

A bit contrived, but this is the case that's the issue (yours we should allow);
the problem is that detecting this case is hard:

In [32]: df1 = pd.DataFrame([[1]],columns=['a'],dtype='float64')

In [33]: df2 = pd.DataFrame([[1]],columns=['a'],dtype='int64')

In [34]: df3 = pd.concat([df1,df2],axis=1)

In [35]: df3.columns
Out[35]: Index([a, a], dtype=object)

In [36]: df3.index
Out[36]: Int64Index([0], dtype=int64)

In [37]: df3
Out[37]: ----------------------------------------------
Exception: ('Cannot have duplicate column names split across dtypes', u'occurred at index a')
@ghost (Author) commented Mar 19, 2013

So:

  • the test that raises an exception should be finer-grained, and fail only if dupes
    are across blocks.
  • I should fix up the way the data object is constructed to handle dupe columns.

Then we can think about the general case.

@ghost ghost restored the GH3054/to_csv_perf branch Mar 19, 2013

@ghost ghost referenced this pull request Mar 19, 2013

@ghost ghost deleted the GH3054/to_csv_perf branch Mar 19, 2013

@ghost (Author) commented Mar 19, 2013

continued in #3095.

Thank you #3059, it's been fun.

@ghost (Author) commented Mar 19, 2013

dupe columns common case fixed in master via 1f138a4.

@ghost (Author) commented Mar 26, 2013

Changed the 'legacy' keyword to engine='python', to be consistent with c_parser,
in case it sticks around.

@dragoljub commented Apr 6, 2013

Guys, this is truly amazing work. With this performance for reading/writing CSVs, pandas truly has enterprise-leading I/O performance. Recently I was reading some zipped CSV files into a DataFrame at ~40MB/sec, thinking to myself how much faster this is than many distributed 'Hadoop' solutions I've seen... :)

This issue was closed.