Optimize display of large DynamicMap parameter space #2646

Merged
jlstevens merged 9 commits into master from large_dmap_opt on May 4, 2018

Conversation

@philippjfr
Member

philippjfr commented May 4, 2018

As described in #2645, a DynamicMap with a large parameter space can be very slow to initialize. This is because we started processing its keys more consistently but did not account for the fact that DynamicMap parameter spaces can be much larger. This PR avoids a number of computations that are not necessary and optimizes some others:

import numpy as np
import holoviews as hv

def test(t, z, cmap, vmin, vmax):
    return hv.Image(np.random.rand(100, 100))

dmap = hv.DynamicMap(test, kdims=['t', 'z', 'cmap', 'vmin', 'vmax']).redim.values(
    t=range(10), z=range(10), cmap=['viridis', 'Greys'], vmin=range(100), vmax=range(100, 200))

This DynamicMap defines a parameter space with 2 million keys. Previously it took 52 seconds to display; with this PR that drops to 0.5 seconds.

[Screenshot: bokeh_plot]
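
For reference, one rough way to check the initialization cost locally is to time how long the first plot takes to build, reusing the dmap defined above (a minimal sketch, not the benchmark used for the numbers above; it assumes the bokeh backend is installed):

import time
import holoviews as hv
hv.extension('bokeh')

start = time.time()
hv.renderer('bokeh').get_plot(dmap)  # building the plot triggers the key processing described above
print('Initial display took %.2fs' % (time.time() - start))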

@philippjfr added this to the v1.10.3 milestone on May 4, 2018

@philippjfr

Member

philippjfr commented May 4, 2018

@jlstevens Another good reason to get a 1.10.3 release out.

@zbarry referenced this pull request on May 4, 2018

Speedy image rendering #2648 (Open)

@@ -1327,7 +1328,9 @@ def drop_streams(streams, kdims, keys):
     stream_params = stream_parameters(streams)
     inds, dims = zip(*[(ind, kdim) for ind, kdim in enumerate(kdims)
                        if kdim not in stream_params])
-    return dims, [tuple(wrap_tuple(key)[ind] for ind in inds) for key in keys]
+    get = itemgetter(*inds)

@jlstevens

jlstevens May 4, 2018

Member

I don't often see a use for itemgetter, and I'm trying to think whether you really need it here. I don't immediately have a better suggestion, so it's probably ok...

@philippjfr

philippjfr May 4, 2018

Member

itemgetter is about 100x faster than the previous tuple comprehension.
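
An illustrative micro-benchmark of the two approaches (a minimal sketch, not code from the PR; the exact speedup will depend on how many keys there are and how long each key is):

from operator import itemgetter
import timeit

# A small sample of key tuples standing in for a DynamicMap parameter space
keys = [(t, z, c, vmin, vmax) for t in range(10) for z in range(10)
        for c in range(2) for vmin in range(10) for vmax in range(5)]
inds = (0, 1, 3)  # positions to extract from each key

get = itemgetter(*inds)
t_itemgetter = timeit.timeit(lambda: [get(k) for k in keys], number=100)
t_comprehension = timeit.timeit(
    lambda: [tuple(k[i] for i in inds) for k in keys], number=100)
print('itemgetter: %.3fs  tuple comprehension: %.3fs' % (t_itemgetter, t_comprehension))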

@jlstevens

jlstevens May 4, 2018

Member

That seems to be a good reason. Could you put in a comment to that effect?

@philippjfr

philippjfr May 4, 2018

Member

Sure. That said, itemgetter is used frequently throughout utils, so I'm not sure this one deserves a special note.

@jlstevens

Member

jlstevens commented May 4, 2018

I've made one comment and I'm not sure whether you are done with it, but at a glance this PR seems fine.

@philippjfr

Member

philippjfr commented May 4, 2018

Ready to merge.

@jlstevens

Member

jlstevens commented May 4, 2018

Looks good. Merging.

@jlstevens merged commit 9cdad85 into master on May 4, 2018

4 checks passed

continuous-integration/travis-ci/pr: The Travis CI build passed
continuous-integration/travis-ci/push: The Travis CI build passed
coverage/coveralls: Coverage decreased (-0.0007%) to 83.27%
s3-reference-data-cache: Test data is cached.

@philippjfr deleted the large_dmap_opt branch on Jul 4, 2018
