
Add option for Convolutions to operate over NTC and NHWC tensors #50962

Open
alanhdu opened this issue Jan 22, 2021 · 5 comments
Labels
feature (A request for a proper, new feature.) · module: memory format (Memory format/layout related issues/changes (channels_last, nhwc)) · triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)

Comments

@alanhdu (Contributor) commented Jan 22, 2021

🚀 Feature

nn.Conv1d (and the other convolutions) should accept a channels_last: bool = False argument: if true, they operate over (N, L, C) tensors; otherwise they operate over (N, C, L) tensors (the current default).

Motivation

We work mainly with time series, where we want to mix convolutions with RNNs. Right now this is irritating: most of the RNNs operate channels-last (e.g. batch, seq_len, channels), while Conv1d operates channels-first (batch, channels, seq_len), so we have to add a bunch of transposes to swap dimensions at every boundary between convolutions and RNNs.

Ideally, there would be a way to tell Conv1d to operate over the same batch, seq_len, channels ordering as the RNNs, to avoid all of these unnecessary transposes.
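For concreteness, here is a minimal sketch of the workaround this motivates (the layer sizes and shapes below are made up for illustration): every boundary between an RNN and a Conv1d needs a transpose in each direction.

```python
import torch
import torch.nn as nn

N, L, C = 8, 100, 32          # batch, sequence length, channels
x = torch.randn(N, L, C)      # (N, L, C): the channels-last ordering the RNNs use

rnn = nn.LSTM(input_size=C, hidden_size=C, batch_first=True)
conv = nn.Conv1d(in_channels=C, out_channels=C, kernel_size=3, padding=1)

y, _ = rnn(x)                 # (N, L, C)
y = conv(y.transpose(1, 2))   # transpose to (N, C, L) just for the convolution
y = y.transpose(1, 2)         # transpose back to (N, L, C) for the next RNN
```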

Pitch

nn.Conv1d (and the other convolutions) should accept a channels_last: bool = False argument: if true, they operate over (N, L, C) tensors; otherwise they operate over (N, C, L) tensors (the current default). There is precedent for this kind of flag: the RNNs, for instance, have a batch_first flag that switches between (seq_len, batch, channels) and (batch, seq_len, channels) operation.
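As a rough sketch of what the proposed flag could feel like in user code: no such channels_last argument exists on nn.Conv1d today, so this hypothetical wrapper only emulates the semantics by transposing internally (a native implementation would presumably avoid the extra copies).

```python
import torch
import torch.nn as nn

class ChannelsLastConv1d(nn.Conv1d):
    """Hypothetical stand-in for a Conv1d(..., channels_last=True): takes and returns (N, L, C)."""
    def forward(self, x):
        # (N, L, C) -> (N, C, L), run the ordinary convolution, then transpose back.
        return super().forward(x.transpose(1, 2)).transpose(1, 2)

conv = ChannelsLastConv1d(in_channels=32, out_channels=64, kernel_size=3, padding=1)
out = conv(torch.randn(8, 100, 32))  # -> shape (8, 100, 64), no transposes at the call site
```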

This would change the semantics of the convolutions: the dimensions here are the ones reported by tensor.shape, not necessarily the internal memory layout described in https://pytorch.org/tutorials/intermediate/memory_format_tutorial.html. Of course, the implementation of these convolutions could also exploit those memory-layout optimizations under the hood.
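To illustrate the distinction, using the existing 2d channels_last support from the tutorial linked above: torch.channels_last changes only the stride order in memory, while tensor.shape still reports the logical (N, C, H, W) dimensions.

```python
import torch
import torch.nn as nn

x = torch.randn(8, 3, 64, 64).to(memory_format=torch.channels_last)
conv = nn.Conv2d(3, 16, kernel_size=3, padding=1).to(memory_format=torch.channels_last)

y = conv(x)
print(y.shape)                                              # torch.Size([8, 16, 64, 64]) -- logical NCHW
print(y.is_contiguous(memory_format=torch.channels_last))   # True -- NHWC stride order under the hood
```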

cc @VitalyFedyunin @jamesr66a @ppwwyyxx

@glaringlee added the feature, module: memory format, and triaged labels on Jan 25, 2021
@vadimkantorov (Contributor)

Related issue about 1d conv_tbc: #34306

@grudloff

Any update on this? I am interested in using Conv1d with (N, L, C) format to see possible performance improvements, as suggested for Conv2d with channels_last in the "Channels Last Memory Format in PyTorch" tutorial.

@VitalyFedyunin (Contributor) commented Sep 15, 2021

Right now we have no NLC kernels implemented for 1d tensors. If someone is willing to work on this (a big piece of work), I would happily provide guidance. (edited)

@vadimkantorov (Contributor)

@vitaly-fedyunin you mean NLC, right?

@VitalyFedyunin (Contributor)

> @vitaly-fedyunin you mean NLC, right?

Totally. NLC
