
Scalable and Sustainable Deep Learning via Randomized Hashing #185

Description

@harold

From this mailing list post:
https://groups.google.com/forum/#!topic/clojure-cortex/YKpWDMsSU5s

Comes this popular summary:
https://www.sciencedaily.com/releases/2017/06/170601135633.htm

Of this preprint with the same title as this issue:
https://arxiv.org/pdf/1602.08194.pdf


My takeaways:

  • One can reduce the amount of matrix multiplication computation during nn training and inference by limiting computation to weights/connections that are likely to lead to strong activations (adaptive dropout).
  • Finding those weights/connections naively with a brute-force search is wasteful, and in general offsets the potential savings from the aforementioned sparse multiplies.
  • LSH can be used to quickly identify the weights/connections likely to produce strong activations. 💥 (A rough sketch of this is below.)
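
To make the third point concrete, here is a minimal sketch of the idea in Python/NumPy. It assumes SimHash (signed random projections) as the LSH family and a single fully connected ReLU layer; the function names (`build_tables`, `active_neurons`, `sparse_forward`) and every parameter value are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_tables(W, n_tables=4, n_bits=8):
    """Hash each neuron's weight vector (a row of W) into several SimHash tables."""
    d = W.shape[1]
    planes = rng.standard_normal((n_tables, n_bits, d))  # random hyperplanes per table
    tables = []
    for t in range(n_tables):
        codes = (W @ planes[t].T > 0).astype(np.uint8)  # (n_neurons, n_bits) sign bits
        keys = codes @ (1 << np.arange(n_bits))         # pack the bits into integer keys
        buckets = {}
        for neuron, key in enumerate(keys):
            buckets.setdefault(int(key), []).append(neuron)
        tables.append(buckets)
    return planes, tables

def active_neurons(x, planes, tables):
    """Hash the layer input with the same hyperplanes; the union of the matching
    buckets is the candidate set of neurons expected to have strong activations."""
    candidates = set()
    for t, buckets in enumerate(tables):
        code = (planes[t] @ x > 0).astype(np.uint8)
        key = int(code @ (1 << np.arange(code.size)))
        candidates.update(buckets.get(key, []))
    return sorted(candidates)

def sparse_forward(x, W, b, planes, tables):
    """Compute only the candidate rows of the matrix multiply; everything else stays zero."""
    idx = active_neurons(x, planes, tables)
    out = np.zeros(W.shape[0])
    if idx:
        out[idx] = np.maximum(W[idx] @ x + b[idx], 0.0)  # ReLU over the sparse slice only
    return out

# Toy usage: a 512-unit layer over a 256-dim input.
W = rng.standard_normal((512, 256))
b = np.zeros(512)
planes, tables = build_tables(W)
x = rng.standard_normal(256)
y = sparse_forward(x, W, b, planes, tables)
```

The second bullet shows up here too: if you had to score every neuron to find the strong ones, you would have done the full dense multiply anyway, so the cheap hash lookup is what makes the sparsity pay off.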

The practical speedups reported in the paper are on the same order (tens of ×) as what we currently get from GPU computation. No practical GPU version of these ideas exists yet (and when one does, we should be able to leverage it fairly easily, e.g., if it lands in cuDNN).

These techniques, as reported in the paper, stand to benefit low-power (read: mobile phone) and/or distributed (read: Google datacenter) nn systems. That is a different, and so-far non-overlapping, niche from the one Cortex addresses. When we do move in that direction, and/or when a practical GPU implementation appears, we should definitely hop on board.


@rosejn - This is a cute intersection of a lot of your interests; the paper is perhaps a fun read for you.
