
Minor performance exploration (clojure side) and some considerations #1

joinr opened this issue Jun 20, 2019 · 1 comment

joinr commented Jun 20, 2019

I didn't want to pop a pull request, since I think your baseline code is well written. I got interested in profiling performance and exploring your implementation (currently only really focusing on reader.fast). My excursion is here for reference.

The primary strategy was to speed up cell and row creation / processing where possible. I created an iterator reducer that implements CollFold (which fits in nicely with your transducer use of into).
That eliminates some overhead from iterator-seq and is effectively seamless for the existing purpose.
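
Roughly along these lines (just a sketch of the idea; the real version also extends clojure.core.reducers/CollFold so fold works, and parse-cell below is a made-up name):

    ;; wraps an Iterable's iterator as something directly reducible,
    ;; skipping the intermediate iterator-seq
    (defn iter-reducer [^Iterable src]
      (reify clojure.lang.IReduceInit
        (reduce [_ f init]
          (let [it (.iterator src)]
            (loop [acc init]
              (if (.hasNext it)
                (let [acc (f acc (.next it))]
                  (if (reduced? acc) @acc (recur acc)))
                acc))))))

    ;; plugs straight into `into` + transducers, e.g.
    ;; (into [] (map parse-cell) (iter-reducer row))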

I also moved toward simple functions instead of a protocol-by-extension based implementation. The extended protocols incur a minor cache lookup every time they're invoked, unless they're part of an inlined definition (like reify, defrecord, deftype). I initially started reifying wrappers for each of the classes, then realized you can just write functions and type-hint away. This closes off the extension possibilities, but captures some (minor) performance gains.
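
To make the tradeoff concrete, here's a toy illustration (not the actual reader code) of the two styles:

    ;; protocol by extension: every call goes through the protocol's
    ;; polymorphic method cache
    (defprotocol PName
      (pname [x]))

    (extend-protocol PName
      java.io.File
      (pname [f] (.getName f)))

    ;; plain type-hinted function: a direct interop call, no dispatch,
    ;; but closed to extension
    (defn file-name ^String [^java.io.File f]
      (.getName f))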

The next bit was exchanging the maps used for the cell and row entries for records, since record/type creation is faster than even persistent array map creation. The tradeoff here is uglier results (record printing), but some more minor performance gains.
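
For illustration (field names made up):

    ;; record: positional constructor, fields compile to direct field access
    (defrecord CellEntry [sheet row column value])

    ;; record creation               vs. building a fresh array map per cell
    ;; (->CellEntry "s" 3 2 42.0)        {:sheet "s" :row 3 :column 2 :value 42.0}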

I noticed the reader.generics/blank? function (which is called for every cell) uses =, despite only comparing against a keyword. In this case, identical? is around 6-9x faster in microbenchmarks; it doesn't end up being a huge bottleneck, but it helps with some minor gains. In general, if you're doing keyword comparisons in a hot loop, identical? is a useful optimization.
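
For example (the real blank? may compare against a different keyword):

    ;; keywords are interned, so reference equality is enough here
    (defn blank? [cell-type]
      (identical? :blank cell-type))   ; vs (= :blank cell-type)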

Finally, I played around with eschewing the transducing chain of calls from into, and just inlined the stuff into a reduce call (since both filtering and mapping could be done there, albeit uglier). I also messed with different mutation strategies (array list), and started messing with parallelization (fold and pmap). Parallel stuff didn't pan out for now (meager improvements with fold, still mostly on par with the single-threaded case). I'm thinking that's due to the streaming nature of fastexcel though.
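
The inlined version looks roughly like this (keep-cell? and parse-cell stand in for the real predicate and transform):

    ;; instead of (into [] (comp (filter keep-cell?) (map parse-cell)) cells)
    (reduce (fn [acc cell]
              (if (keep-cell? cell)
                (conj acc (parse-cell cell))
                acc))
            []
            cells)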

In all, on my high-end laptop, reading a 1.1 MB xlsx file, I go from about 500-520 ms (after warmup) to 450-475 ms (with some faster outliers I can't explain). So... minor gains. If I knew more about fastexcel, I'd look to optimize there (visualvm profiling suggests some potential overhead, but I'm ignorant of whether that's already optimized; guessing it is!).

I then ran the same code on a test machine on AWS (a t2.large running Ubuntu), and the "faster" code ended up losing most of its edge: about 1350-1370 ms for fast and 1313-1330 ms for faster, with similar outliers. So, much less improvement (possibly due to ssd, memory, processors, who knows).

Technical Consideration (float vs. double):

In cell-value for CellType/NUMBER, you coerce to float. I'd recommend using double, since float can lead to some funky, silent comparison problems down the road. I ran into this gem years ago, and have stayed away from float unless it's for performance or interop purposes:

    user> (def l (hash-set (float 1.2)))
    #'user/l
    user> (def r (hash-set (double 1.2)))
    #'user/r
    user> (= l r)
    false

From the outside, that makes perfect sense. But if you never saw how the two sets were constructed, and you're only looking at the REPL output, it's maddening (particularly in a big debugging session).

alanmarazzi (Owner) commented

> The primary strategy was to speed up cell and row creation / processing where possible. I created an iterator reducer that implements CollFold (which fits in nicely with your transducer use of into). That eliminates some overhead from iterator-seq and is effectively seamless for the existing purpose.
> I also moved toward simple functions instead of a protocol-by-extension based implementation. The extended protocols incur a minor cache lookup every time they're invoked, unless they're part of an inlined definition (like reify, defrecord, deftype). I initially started reifying wrappers for each of the classes, then realized you can just write functions and type-hint away. This closes off the extension possibilities, but captures some (minor) performance gains.

This is interesting; have you recorded benchmarks to measure the performance gain from these changes alone?

> The next bit was exchanging the maps used for the cell and row entries for records, since record/type creation is faster than even persistent array map creation. The tradeoff here is uglier results (record printing), but some more minor performance gains.

This can be solved by defining a specialized printer. It is surely slower, but I guess it would mostly be used at a REPL for dev purposes, so it might be a good solution.
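
For instance, assuming a cell record like the CellEntry sketch above, printing it as a plain map is a one-liner:

    ;; print CellEntry records as ordinary maps at the REPL
    (defmethod print-method CellEntry [c ^java.io.Writer w]
      (print-method (into {} c) w))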

> I noticed the reader.generics/blank? function (which is called for every cell) uses =, despite only comparing against a keyword. In this case, identical? is around 6-9x faster in microbenchmarks; it doesn't end up being a huge bottleneck, but it helps with some minor gains. In general, if you're doing keyword comparisons in a hot loop, identical? is a useful optimization.

I missed this, good catch! 😄

> Finally, I played around with eschewing the transducing chain of calls from into, and just inlined the stuff into a reduce call (since both filtering and mapping could be done there, albeit uglier). I also messed with different mutation strategies (array list), and started messing with parallelization (fold and pmap). Parallel stuff didn't pan out for now (meager improvements with fold, still mostly on par with the single-threaded case). I'm thinking that's due to the streaming nature of fastexcel though.

I was starting to think about parallelization, and yes, streaming makes things harder. The only issue is that parallelization should be turned on and off case by case. I could analyze either the file size or the workbook size in memory (the latter is surely harder to get right) and turn it on past some cutoff, but that means that if the workbook is very large while the interesting sheet is pretty small, there's useless overhead going on.

I was thinking core.async as well, but what "scares" me a bit is backpressure: I'd be spawning X threads according to a heuristic, but if the workbook is very large there's the risk of blowing up channels. This is one of the most interesting developments, but I guess some trial and error is needed.
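
A bounded pipeline might side-step the backpressure worry, since puts park once the buffers fill up (just a sketch; buffer sizes, worker count and names are made up):

    (require '[clojure.core.async :as a])

    (defn parse-rows-async [rows parse-row]
      (let [in  (a/chan 64)                    ; bounded buffers => backpressure
            out (a/chan 64)]
        (a/pipeline 4 out (map parse-row) in)  ; 4 parallel workers
        (a/onto-chan in rows)                  ; async puts park when `in` is full; closes `in` when done
        (a/<!! (a/into [] out))))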

> If I knew more about fastexcel, I'd look to optimize there (visualvm profiling suggests some potential overhead, but I'm ignorant of whether that's already optimized; guessing it is!).

I think there is some room for improvement here: fastexcel uses some of POI under the hood, so that's something that can be optimized. On the other hand, the real solution is probably to parse the XML from scratch, which is not the most fun thing around, and it would likely have to be done in Java to really get some performance improvements.

> with some faster outliers I can't explain

If I have to guess, this is likely the culprit (from fastexcel):

    /**
     * Note: will load the whole xlsx file into memory,
     * (but will not uncompress it in memory)
     */
    public ReadableWorkbook(InputStream inputStream) throws IOException {
        this(open(inputStream));
    }

> Technical Consideration (float vs. double):

Absolutely right, I'll take care of it right away. The only thing is that fastexcel returns BigDecimal (you'll find 'em in the table ns). On one hand there's a performance price for using them, but on the other Excel is often used for financial stuff, so keeping BigDecimals to avoid precision errors might be another way to go.
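
A quick REPL illustration of the precision argument:

    (reduce + (repeat 10 0.1))    ;; => 0.9999999999999999  (doubles round in binary)
    (reduce + (repeat 10 0.1M))   ;; => 1.0M                (exact decimals, but slower)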

Oh, and by the way, this is already MUCH faster than a corresponding solution with Python and openpyxl. I still have to try pandas, though.
