
Pointwise Conv1D with code generation for "Latency" strategy (update of #811) #881

Open · wants to merge 23 commits into main
Conversation

jmduarte (Member) commented Oct 8, 2023

Testing update of #811

@jmduarte added the "please test" (Trigger testing by creating local PR branch) label Oct 8, 2023
@jmduarte added and removed the "please test" label Oct 8, 2023
@jmduarte added and removed the "please test" label Oct 11, 2023
hls4ml/backends/fpga/fpga_backend.py — outdated, resolved
hls4ml/backends/fpga/passes/codegen.py — outdated, resolved
hls4ml/backends/vivado/passes/convolution_templates.py — outdated, resolved
# attrs.append(ConfigurableAttribute('conv_implementation', value_type=str, default='LineBuffer'))
attrs.append(ChoiceAttribute('conv_implementation', choices=['LineBuffer', 'Encoded'], default='LineBuffer'))
attrs.append(
    ChoiceAttribute('conv_implementation', choices=['LineBuffer', 'Encoded', 'Pointwise'], default='LineBuffer')
)
Contributor:
So this basically means I can set Pointwise as the implementation for layers that are not pointwise? That's not user-friendly: if it causes the pointwise HLS functions to be used, they will fail an assert, which is invisible when calling from Python.

Member Author:

I agree this is bad, but I'm not sure of the best way to fix it. Is it as simple as checking whether a user tries to do this and exiting gracefully with a warning message?
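One way such a graceful check could look, as a minimal sketch: validate the requested implementation against the layer's kernel width on the Python side, warn, and fall back to a safe choice. The function name and signature here are illustrative, not hls4ml's actual API.

```python
import warnings


def validate_conv_implementation(layer_name: str, filt_width: int, implementation: str) -> str:
    """Illustrative guard (not hls4ml's real API): 'Pointwise' only makes
    sense for a kernel of width 1, so warn and fall back to 'LineBuffer'
    instead of letting the HLS-side assert fail invisibly later."""
    if implementation == 'Pointwise' and filt_width != 1:
        warnings.warn(
            f"Layer '{layer_name}' has filt_width={filt_width}, but "
            "'Pointwise' requires filt_width == 1; falling back to 'LineBuffer'."
        )
        return 'LineBuffer'
    return implementation
```

A check like this would run at configuration time, so the user sees the warning in their Python session rather than a failed assert during C simulation.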

Contributor:

Perhaps I'm forgetting other cases here, but wasn't this shown to have better performance than the existing implementation? If so, one could argue we don't really need a switch to choose algorithms we know are worse. I'd make this the default rather than hide it behind a (currently undocumented) switch.

hls4ml/templates/vivado/build_prj.tcl — outdated, resolved
hls4ml/templates/vivado/nnet_utils/nnet_code_gen.h — outdated, resolved
@@ -24,6 +24,7 @@ namespace nnet {
// Common type definitions
enum io_type { io_parallel = 0, io_stream };
enum strategy { latency, resource };
enum class conv_implementation { linebuffer = 0, encoded = 1, pointwise = 2 };
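The string-valued `ChoiceAttribute` on the Python side presumably gets translated into this C++ enum when the config header is generated. A minimal sketch of such a mapping (the dictionary and function names are hypothetical, not hls4ml's actual template code):

```python
# Hypothetical mapping from the user-facing conv_implementation string
# to the C++ enumerator emitted into the generated config header.
CONV_IMPLEMENTATION_MAP = {
    'LineBuffer': 'nnet::conv_implementation::linebuffer',
    'Encoded': 'nnet::conv_implementation::encoded',
    'Pointwise': 'nnet::conv_implementation::pointwise',
}


def render_implementation(choice: str) -> str:
    """Translate the Python-side choice into the C++ enumerator string,
    rejecting anything outside the known set."""
    try:
        return CONV_IMPLEMENTATION_MAP[choice]
    except KeyError:
        raise ValueError(f'Unknown conv_implementation: {choice!r}')
```

Rejecting unknown strings at this stage keeps invalid configurations from ever reaching code generation.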
Contributor:

Same question as before

if (CONFIG_T::implementation == conv_implementation::pointwise) {
    // Use pointwise unrolled implementation
    if (CONFIG_T::reuse_factor > 1) {
        CONFIG_T::template pointwise_conv<data_T, res_T, CONFIG_T>::pointwise_conv(data, res, weights, biases);
Contributor:

So for a model with kernel width > 1 but conv_implementation set to pointwise, the code will be generated, and since it doesn't have an assert it will proceed to do an incorrect computation? That combination is nonsense to us, but I fear that a misunderstanding of the docs, or just bugs in conversion scripts, may trigger it.

Member Author:

So there is an assert here:

void pointwise_conv_1d_latency_cl(data_T data[CONFIG_T::in_width * CONFIG_T::n_chan / CONFIG_T::reuse_factor],
                                  res_T res[CONFIG_T::out_width * CONFIG_T::n_filt / CONFIG_T::reuse_factor],
                                  typename CONFIG_T::weight_t weights[CONFIG_T::n_chan * CONFIG_T::n_filt],
                                  typename CONFIG_T::bias_t biases[CONFIG_T::n_filt]) {
    assert(CONFIG_T::filt_width == 1);

Where else should we add it?
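One candidate place, sketched here as an assumption rather than a statement of where the assert should live: a front-end check that mirrors the HLS-side `assert(CONFIG_T::filt_width == 1)` at conversion time, so the failure surfaces as a Python exception before C simulation ever runs. The function and the dict-shaped layer config are illustrative only.

```python
def check_pointwise_config(layer: dict) -> None:
    """Hypothetical front-end guard mirroring the HLS assert: raise in
    Python at conversion time instead of failing silently during synthesis.
    `layer` is an illustrative dict, not hls4ml's real layer object."""
    if layer.get('conv_implementation') == 'Pointwise' and layer.get('filt_width', 1) != 1:
        raise ValueError(
            f"conv_implementation='Pointwise' set on layer '{layer.get('name')}' "
            f"with filt_width={layer['filt_width']}; pointwise requires filt_width == 1"
        )
```

Because the C-level assert only fires inside the generated project, a Python-level raise like this is the earliest point where the user would actually see the error.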

@@ -100,6 +105,7 @@ def test_pointwiseconv2d(chans, padds, strides, backend, io_type, strategy):
kernel_initializer='normal',
use_bias=False,
data_format=chans,
name='pointwise2d',
Contributor:

General comment, not specific to this line: why don't we do this for 2D as well?

Member Author:

Yes, we can easily add 2D; working on it.

@jmduarte added and removed the "please test" label Oct 12, 2023
@jmduarte added and removed the "please test" label Oct 15, 2023
@jmduarte changed the title from "Testing update of #811" to "Pointwise Conv1D/2D with code generation for 'Latency' strategy (update of #811)" Oct 20, 2023
@jmitrevs added this to the v1.0.0 milestone Oct 20, 2023
@jmduarte added and removed the "please test" label Dec 19, 2023
@jmduarte changed the title to "Pointwise Conv1D with code generation for 'Latency' strategy (update of #811)" Apr 23, 2024
@jmduarte added and removed the "please test" label Jun 10, 2024
Labels: please test (Trigger testing by creating local PR branch)
Projects: None yet
Development: None yet
3 participants