Dear experts, this PR adds feature support for sparse CNN models built with the sparsepixels library, which is the code implementation of the paper arXiv:2512.06208. An issue is open at #<1467>. Please see the description below. Thanks.
Description
Add end-to-end conversion of sparse CNN models, built with the sparsepixels Keras package (HGQ2-based), to HLS firmware via the Vivado and Vitis backends. Sparse CNNs propagate only active pixels (those above a threshold) plus their spatial hash coordinates through the network instead of operating on the full image grid, drastically reducing FPGA latency and resource usage compared to dense convolution on naturally sparse inputs (e.g., particle detector data, neutrino events, sparse imaging).
What's added:
Five new layer classes in hls4ml/model/layers.py: SparseInputReduce, SparseConv2D, SparseActivation, SparsePooling2D, SparseFlatten
Keras v3 handlers for the sparse-pixels layers (InputReduce, QConv2DSparse, AveragePooling2DSparse)
Vivado backend: graph optimizer, input-precision fix pass, and config/function templates for each sparse layer
New HLS kernel header nnet_sparsepixels.h
Bit-exact precision dispatches for all five sparse layer types
The patch is fully self-contained: every new pass and dispatch is gated on the new layer types via isinstance, so no existing model conversion path is touched.
Supported: Vivado and Vitis backends, io_parallel only. Other backends and io_stream are out of scope at the moment.
Dependency: sparsepixels (optional, needed for pytest only).
Type of change
New feature (non-breaking change which adds functionality)
A new research paper code implementation
Tests
test/pytest/test_sparsepixels.py builds a small sparse CNN, converts it via hls4ml, compiles, and compares HLS vs Keras outputs over random inputs on both Vivado and Vitis backends.
Every sparse layer operates on two parallel arrays that travel together through the network:
sparse_arr_feat[N_sparse * C]: the feature values, one per active pixel per channel
sparse_arr_hash[N_sparse * 2]: the (h, w) spatial coordinates (1-based) of each active pixel
Keeping the hash alongside the features is what makes sparse convolution possible without materializing the full H * W grid. Layer walkthrough:
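Before the per-layer details, here is a toy NumPy sketch of that layout (values, slot count, and the empty-slot convention are illustrative; only the array names come from the description above):

```python
import numpy as np

N_SPARSE, C = 3, 2  # up to 3 active pixels, 2 channels (toy sizes)

# Features: one value per active pixel per channel, flattened pixel-major.
sparse_arr_feat = np.array([0.9, 0.1,   # pixel 0, channels 0..1
                            0.7, 0.4,   # pixel 1
                            0.0, 0.0])  # unused slot (padding)

# Hashes: 1-based (h, w) per pixel; (0, 0) marks an empty slot here.
sparse_arr_hash = np.array([1, 2,   # pixel 0 at row 1, col 2
                            3, 4,   # pixel 1 at row 3, col 4
                            0, 0])  # empty slot

# Channel c of pixel i lives at feat[i * C + c]:
assert sparse_arr_feat[1 * C + 1] == 0.4  # pixel 1, channel 1
```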
SparseInputReduce: takes the dense input x_in[H * W * C], scans the first channel for pixels above threshold, and emits the top N_sparse active pixels (leftmost-first, via a binary-tree reduction). Outputs the two arrays above for the rest of the network. All channels of each selected pixel are preserved.
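A minimal sketch of that selection logic (not the hls4ml kernel: a sequential raster scan stands in for the binary-tree reduction, which selects the same leftmost-first pixels; the function name is hypothetical):

```python
import numpy as np

def sparse_input_reduce(x_in, H, W, C, n_sparse, threshold):
    """Keep the first n_sparse pixels whose FIRST channel exceeds threshold,
    preserving all C channels of each selected pixel."""
    feat = np.zeros(n_sparse * C)
    hashes = np.zeros(n_sparse * 2, dtype=int)  # (0, 0) = empty slot
    slot = 0
    for h in range(H):          # raster order = leftmost-first selection
        for w in range(W):
            if slot == n_sparse:
                return feat, hashes
            base = (h * W + w) * C
            if x_in[base] > threshold:          # first channel decides
                feat[slot * C:(slot + 1) * C] = x_in[base:base + C]
                hashes[2 * slot:2 * slot + 2] = (h + 1, w + 1)  # 1-based
                slot += 1
    return feat, hashes
```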
SparseConv2D: for every output pixel, iterates over all N_sparse input pixels and uses the difference of their hash coordinates to index into the kernel. Pairs whose offset lies outside the kernel radius contribute 0 and are skipped. Each output pixel's hash is identical to its input hash, so sparsity pattern is preserved across the conv layer.
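The hash-difference indexing can be sketched as follows (assumptions: same padding, stride 1, square K x K kernel, (0, 0) hashes marking empty slots; this illustrates the indexing scheme, not the HLS implementation):

```python
import numpy as np

def sparse_conv2d(feat, hashes, kernel, n_sparse, c_in, c_out):
    """kernel has shape (K, K, c_in, c_out). For each output pixel, loop over
    all input pixels; the hash difference selects the kernel tap, and pairs
    outside the kernel radius are skipped (contribute 0)."""
    K = kernel.shape[0]
    r = K // 2                                     # kernel radius
    out = np.zeros(n_sparse * c_out)
    for i in range(n_sparse):                      # output pixel
        hi, wi = hashes[2 * i], hashes[2 * i + 1]
        if hi == 0:
            continue                               # empty slot
        for j in range(n_sparse):                  # input pixel
            hj, wj = hashes[2 * j], hashes[2 * j + 1]
            if hj == 0:
                continue
            dh, dw = hj - hi, wj - wi              # coordinate offset
            if abs(dh) > r or abs(dw) > r:
                continue                           # outside kernel window
            tap = kernel[dh + r, dw + r]           # (c_in, c_out) slice
            out[i * c_out:(i + 1) * c_out] += feat[j * c_in:(j + 1) * c_in] @ tap
    return out   # output hashes are identical to the input hashes
```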
SparseActivation: element-wise activation on the sparse feature array. The hash array is passed through untouched.
SparsePooling2D: pooling implemented on the sparse representation. Each input hash is mapped to its new pooled cell; pixels mapping to the same pooled cell have the pooling operation applied to them, and the duplicates are zeroed. A new hash array is emitted for the pooled layout.
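A sketch of the duplicate-folding step for average pooling (as in AveragePooling2DSparse). Assumptions not stated in the description: non-overlapping square pool windows, and averaging over the full window area as in dense average pooling; the function name is hypothetical:

```python
import numpy as np

def sparse_avg_pool2d(feat, hashes, pool, n_sparse, C):
    new_feat = feat.astype(float).copy()
    new_hash = np.zeros_like(hashes)
    for i in range(n_sparse):
        h, w = hashes[2 * i], hashes[2 * i + 1]
        if h == 0:
            continue                               # empty slot
        # pooled cell of a 1-based coordinate
        new_hash[2 * i] = (h - 1) // pool + 1
        new_hash[2 * i + 1] = (w - 1) // pool + 1
    seen = {}
    for i in range(n_sparse):
        cell = (new_hash[2 * i], new_hash[2 * i + 1])
        if cell == (0, 0):
            continue
        if cell in seen:                           # duplicate cell:
            k = seen[cell]                         # fold into first pixel,
            new_feat[k * C:(k + 1) * C] += new_feat[i * C:(i + 1) * C]
            new_feat[i * C:(i + 1) * C] = 0        # then zero the duplicate
            new_hash[2 * i:2 * i + 2] = 0
        else:
            seen[cell] = i
    for cell, k in seen.items():                   # average over window area
        new_feat[k * C:(k + 1) * C] /= pool * pool
    return new_feat, new_hash
```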
SparseFlatten: the sparse->dense transition. The sparse representation is scattered back to a dense H_out * W_out * C_out grid using the final hash coordinates, then flattened. From this point on, the network runs on a normal dense tensor, so any standard Dense/Activation/Softmax layer can follow.
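The scatter-then-flatten step above can be sketched as (illustrative function name; (0, 0) hashes again mark empty slots):

```python
import numpy as np

def sparse_flatten(feat, hashes, n_sparse, H_out, W_out, C_out):
    """Scatter the sparse representation back to a dense grid using the
    final hash coordinates, then flatten for the following dense layers."""
    dense = np.zeros((H_out, W_out, C_out))
    for i in range(n_sparse):
        h, w = hashes[2 * i], hashes[2 * i + 1]
        if h == 0:
            continue                              # skip empty slots
        # hashes are 1-based, array indices 0-based
        dense[h - 1, w - 1] = feat[i * C_out:(i + 1) * C_out]
    return dense.reshape(-1)
```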
Checklist
I have run pre-commit on the files I edited or added.
Happy to add to the docs as a separate PR.