Projects that are related to K2 #212
Yeah, cool. I heard of gtn a couple of days ago from several people. It looks
interesting, but it seems to target the CPU.
Re nestedtensor: the design seems quite different from TensorFlow's
RaggedTensor. It is much more general, but also much less similar
to what we are doing. (TF has our row_ids and row_splits, which is a
design I came up with independently.)
…On Sat, Oct 3, 2020 at 6:46 PM Fangjun Kuang ***@***.***> wrote:
I find that there are two projects, both from facebook, that have some
overlaps with K2:
- gtn <https://github.com/facebookresearch/gtn>, Dan mentioned this a
few days ago
- nestedtensor <https://github.com/pytorch/nestedtensor/>, for tensors
with irregular shapes, like Ragged<T>
Perhaps we can spend some time to find whether we can learn something from
them.
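For readers who haven't seen the row_splits/row_ids layout mentioned above: a ragged array can be stored as a single flat values array plus a row_splits array of offsets, with row_ids as the redundant per-element row index derivable from row_splits. A minimal plain-Python sketch (function names here are illustrative, not k2's actual API):

```python
# Ragged array [[1, 2], [], [3, 4, 5]] stored as a flat list plus offsets.
values = [1, 2, 3, 4, 5]
row_splits = [0, 2, 2, 5]  # row i is values[row_splits[i]:row_splits[i + 1]]


def row_splits_to_row_ids(row_splits):
    """Expand offsets into one row index per element of the values array."""
    row_ids = []
    for row in range(len(row_splits) - 1):
        row_ids.extend([row] * (row_splits[row + 1] - row_splits[row]))
    return row_ids


def get_row(values, row_splits, i):
    """Return sublist i without materializing any nested structure."""
    return values[row_splits[i]:row_splits[i + 1]]


row_ids = row_splits_to_row_ids(row_splits)  # [0, 0, 2, 2, 2]
```

row_splits answers "where does row i start and end?" in O(1), while row_ids answers "which row does element j belong to?" in O(1); each is derivable from the other, which is why keeping both around can be convenient, especially for flat parallel processing on a GPU.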
Hi, (as Dan knows) I am one of the authors of GTN. I also started hearing about K2 a week or two ago from a few people :). GTN is indeed currently CPU-only, with some vague plans at the moment to GPU-accelerate some of the bottlenecks (namely compose). I find the GPU approach of K2 interesting. We are hoping to learn a bit more about your approach and how we can leverage K2 to do better work with differentiable WFSTs.
For reference, here is a preprint <https://arxiv.org/abs/2010.01003> that came out on arXiv today which describes the kinds of things we have started exploring with GTN, and in general are interested in doing with a fast but easy-to-use differentiable WFST framework. Maybe you will find it useful when designing your API.
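To make "differentiable WFST" concrete: a core quantity is the forward score, the log-sum-exp over the weights of all paths through the graph, which is a smooth function of the arc weights and can therefore serve as a training loss. A toy sketch in plain Python over a small acyclic graph (illustrative only; this is neither the GTN nor the k2 API):

```python
import math


def logaddexp(a, b):
    """Numerically stable log(exp(a) + exp(b)), the log-semiring 'plus'."""
    if a == -math.inf:
        return b
    if b == -math.inf:
        return a
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))


def forward_score(arcs, num_states):
    """Total log weight over all paths from state 0 to the last state.

    arcs: list of (src, dst, weight) tuples; states are assumed to be
    numbered in topological order, so processing arcs sorted by source
    state visits each state only after all its predecessors.
    """
    alpha = [-math.inf] * num_states
    alpha[0] = 0.0
    for src, dst, w in sorted(arcs):
        alpha[dst] = logaddexp(alpha[dst], alpha[src] + w)
    return alpha[num_states - 1]


# Two parallel paths, 0->1->3 and 0->2->3, with total weights 0.4 and 0.6.
arcs = [(0, 1, 0.1), (1, 3, 0.3), (0, 2, 0.2), (2, 3, 0.4)]
score = forward_score(arcs, 4)  # log(exp(0.4) + exp(0.6))
```

In a real differentiable-WFST framework the arc weights would be autograd tensors (e.g. outputs of a neural network), so the gradient of the score with respect to each arc weight — which works out to that arc's posterior probability — flows back automatically.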
Great, thanks! Will have a look.