Custom OP #335

Open
yzh119 opened this Issue Jan 3, 2019 · 0 comments


yzh119 commented Jan 3, 2019

🚀 Feature

Custom op.

  • Kernel for masked_matrix_multiplication (both forward and backward)
  • Kernel for sparse_softmax (both forward and backward)
  • Kernel for vector-shape spmm (both forward and backward)
  • PyTorch wrapper (a rough autograd wrapper is sketched after this list).
  • Multi-Head
  • CPU
  • MXNet
  • ...
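
A minimal sketch of what the PyTorch wrapper around the masked_mm kernel could look like, assuming a COO-style edge list (`src_idx`, `dst_idx`) and node feature matrices `Q`, `K` of shape (N, d). The class name and calling convention are placeholders of mine, not an agreed interface, and the forward body is a pure-PyTorch stand-in for the eventual CUDA kernel:

```python
import torch

class MaskedMM(torch.autograd.Function):
    """Hypothetical autograd wrapper around a custom masked_mm kernel."""

    @staticmethod
    def forward(ctx, src_idx, dst_idx, Q, K):
        # src_idx, dst_idx: (E,) edge endpoints; Q, K: (N, d) node features.
        ctx.save_for_backward(src_idx, dst_idx, Q, K)
        # One score per edge (src -> dst), without materializing the dense N x N matrix.
        return (Q[dst_idx] * K[src_idx]).sum(dim=-1)          # (E,)

    @staticmethod
    def backward(ctx, grad_scores):
        src_idx, dst_idx, Q, K = ctx.saved_tensors
        g = grad_scores.unsqueeze(-1)                         # (E, 1)
        # dL/dQ[i] accumulates over incoming edges of node i,
        # dL/dK[j] accumulates over outgoing edges of node j.
        grad_Q = torch.zeros_like(Q).index_add_(0, dst_idx, g * K[src_idx])
        grad_K = torch.zeros_like(K).index_add_(0, src_idx, g * Q[dst_idx])
        return None, None, grad_Q, grad_K

# Usage: scores = MaskedMM.apply(src_idx, dst_idx, Q, K)
```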

Motivation

The current self-attention implementation in DGL is inefficient and uses too much GPU memory.
Custom op support is needed to accelerate the graph operations used in the self-attention module, such as masked_mm and sparse_softmax (reference semantics are sketched below).
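
For reference, a pure-PyTorch sketch of what the sparse_softmax and vector-shape spmm kernels would compute; the function names and the (`src_idx`, `dst_idx`) edge layout are my assumptions. Everything stays edge-wise at size (E,) instead of a dense (N, N) attention matrix, which is where the memory saving comes from:

```python
import torch

def sparse_softmax(dst_idx, scores, num_nodes):
    # Softmax over the incoming edges of each destination node.
    exp = (scores - scores.max()).exp()                       # global shift for stability
    denom = torch.zeros(num_nodes, dtype=scores.dtype,
                        device=scores.device).index_add_(0, dst_idx, exp)
    return exp / denom[dst_idx]

def vector_spmm(src_idx, dst_idx, attn, V, num_nodes):
    # Aggregate source-node features into destination nodes, weighted by edge attention.
    out = torch.zeros(num_nodes, V.size(1), dtype=V.dtype, device=V.device)
    return out.index_add_(0, dst_idx, attn.unsqueeze(-1) * V[src_idx])
```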

Alternatives

More elegant solutions may appear in the future, but for now we write custom ops for these operations ourselves.

Additional context

You may find my preliminary custom op implementations here (private repo). Note that I have not covered MXNet yet, and I hope team members familiar with MXNet can help.
