Pointwise Conv1D with code generation for "Latency" strategy (update of #811) #881
base: main
Conversation
# attrs.append(ConfigurableAttribute('conv_implementation', value_type=str, default='LineBuffer'))
attrs.append(ChoiceAttribute('conv_implementation', choices=['LineBuffer', 'Encoded'], default='LineBuffer'))
attrs.append(
    ChoiceAttribute('conv_implementation', choices=['LineBuffer', 'Encoded', 'Pointwise'], default='LineBuffer')
So this basically means that I can set Pointwise as the implementation for layers that are not pointwise? That's not user-friendly: if it causes the pointwise HLS functions to be used, they will fail an assert, which is invisible when calling from Python.
I agree this is bad, but I'm not sure of the best way to fix it. Is it as simple as checking whether a user tries to do this and exiting gracefully with a warning message?
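One way the graceful check could look, as a minimal sketch: validate the attribute combination on the Python side before code generation and raise a clear error. The names below (`validate_conv_implementation`, `InvalidConfigError`) are hypothetical and not the actual hls4ml API.

```python
class InvalidConfigError(Exception):
    """Raised when a layer attribute combination cannot be synthesized."""


def validate_conv_implementation(layer_name, filt_width, implementation):
    # Fail fast in Python instead of tripping an assert that is
    # invisible to the user during HLS code generation.
    if implementation == 'Pointwise' and filt_width != 1:
        raise InvalidConfigError(
            f"Layer '{layer_name}': conv_implementation='Pointwise' "
            f"requires filt_width == 1, but filt_width is {filt_width}."
        )


# A 1-wide kernel passes; a 3-wide kernel fails with a clear message.
validate_conv_implementation('conv1', 1, 'Pointwise')
try:
    validate_conv_implementation('conv2', 3, 'Pointwise')
except InvalidConfigError as exc:
    print(exc)
```

Such a check could run in an optimizer pass over the model graph, so a misconfigured layer is rejected before any HLS project is written.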
Perhaps I'm forgetting other cases here, but wasn't this shown to have better performance than the existing implementation? One could therefore argue we don't really need a switch to choose algorithms we know are worse. I'd make this the default rather than hiding it behind a (currently undocumented) switch.
@@ -24,6 +24,7 @@ namespace nnet {
// Common type definitions
enum io_type { io_parallel = 0, io_stream };
enum strategy { latency, resource };
enum class conv_implementation { linebuffer = 0, encoded = 1, pointwise = 2 };
Same question as before
if (CONFIG_T::implementation == conv_implementation::pointwise) {
    // Use pointwise unrolled implementation
    if (CONFIG_T::reuse_factor > 1) {
        CONFIG_T::template pointwise_conv<data_T, res_T, CONFIG_T>::pointwise_conv(data, res, weights, biases);
So for a model with kernel width > 1 that sets conv_implementation to pointwise, the code will be generated, and since it doesn't have an assert it will proceed to do an incorrect computation? It's nonsense to us, but I fear that a misunderstanding of the docs, or just bugs in conversion scripts, may trigger this.
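To make the failure mode concrete, here is a plain-Python illustration (not the HLS templates themselves) of why an unguarded pointwise path silently computes the wrong thing when handed a wider kernel:

```python
def conv1d_valid(data, kernel):
    """Reference 1D 'valid' convolution (cross-correlation form)."""
    k = len(kernel)
    return [sum(data[i + j] * kernel[j] for j in range(k))
            for i in range(len(data) - k + 1)]


def pointwise_conv1d(data, kernel):
    """Pointwise path: assumes a 1-wide kernel, so it only reads kernel[0].

    Without an assert(len(kernel) == 1), taps 1..k-1 are silently ignored.
    """
    return [x * kernel[0] for x in data]


data = [1, 2, 3, 4]
kernel = [1, 1, 1]  # width-3 kernel, wrongly routed to the pointwise path
print(conv1d_valid(data, kernel))      # correct result: [6, 9]
print(pointwise_conv1d(data, kernel))  # wrong length and values: [1, 2, 3, 4]
```

No error is raised; the output simply has the wrong shape and values, which is exactly why a Python-side check or an assert at the dispatch point matters.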
So there is an assert here:
hls4ml/hls4ml/templates/vivado/nnet_utils/nnet_conv1d_latency.h
Lines 88 to 92 in 1dd2603
void pointwise_conv_1d_latency_cl(data_T data[CONFIG_T::in_width * CONFIG_T::n_chan / CONFIG_T::reuse_factor],
                                  res_T res[CONFIG_T::out_width * CONFIG_T::n_filt / CONFIG_T::reuse_factor],
                                  typename CONFIG_T::weight_t weights[CONFIG_T::n_chan * CONFIG_T::n_filt],
                                  typename CONFIG_T::bias_t biases[CONFIG_T::n_filt]) {
    assert(CONFIG_T::filt_width == 1);
Where else should we add it?
@@ -100,6 +105,7 @@ def test_pointwiseconv2d(chans, padds, strides, backend, io_type, strategy):
    kernel_initializer='normal',
    use_bias=False,
    data_format=chans,
    name='pointwise2d',
General comment, not specific to this line: why don't we do this for 2D as well?
Yes, we can easily add 2D... working on it.
Testing update of #811