
Fix optional bias and batch handling in cadence::fully_connected (#19194)

Open
hsharma35 wants to merge 2 commits into pytorch:main from hsharma35:export-D102821213

Conversation

@hsharma35
Contributor

@hsharma35 hsharma35 commented Apr 28, 2026

Summary:

Fixes two bugs in the generic and HiFi cadence::fully_connected implementations. First, the optional bias was dereferenced without a has_value() guard, causing a crash for bias-free inputs. Second, only the first input row was computed because the batch loop was missing; a loop over leading_dims (the product of all non-channel input dimensions) is now added to correctly process batched and multi-sequence inputs.

Reviewed By: mcremon-meta

Differential Revision: D102821213
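The two fixes described in the summary can be illustrated with a minimal sketch. This is not the actual ExecuTorch kernel; the signature, the `std::optional` bias representation, and the name `leading_dims` are illustrative, following the wording of the PR summary.

```cpp
#include <cassert>
#include <cstddef>
#include <optional>
#include <vector>

// Sketch of a fully_connected reference loop with both fixes applied.
// in:     [leading_dims, in_features]
// weight: [out_features, in_features]
// bias:   [out_features], optional
// out:    [leading_dims, out_features]
void fully_connected(
    const float* in,
    const float* weight,
    const std::optional<std::vector<float>>& bias,
    float* out,
    size_t leading_dims,
    size_t in_features,
    size_t out_features) {
  // Fix 2: loop over leading_dims (the product of all non-channel input
  // dimensions) so every input row is computed, not just the first.
  for (size_t b = 0; b < leading_dims; ++b) {
    for (size_t o = 0; o < out_features; ++o) {
      float acc = 0.f;
      for (size_t i = 0; i < in_features; ++i) {
        acc += in[b * in_features + i] * weight[o * in_features + i];
      }
      // Fix 1: guard with has_value() before dereferencing the optional
      // bias, so bias-free inputs no longer crash.
      if (bias.has_value()) {
        acc += (*bias)[o];
      }
      out[b * out_features + o] = acc;
    }
  }
}
```

With the batch loop missing, only `out[0..out_features)` would be written; with the guard missing, calling this with `std::nullopt` would dereference an empty optional.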

@pytorch-bot

pytorch-bot Bot commented Apr 28, 2026

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/19194

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

❌ 1 New Failure, 2 Unrelated Failures

As of commit a1d8229 with merge base 5a206ab:

NEW FAILURE - The following job has failed:

BROKEN TRUNK - The following jobs failed but were present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla Bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Apr 28, 2026
@meta-codesync
Contributor

meta-codesync Bot commented Apr 28, 2026

@hsharma35 has exported this pull request. If you are a Meta employee, you can view the originating Diff in D102821213.

@github-actions

This PR needs a release notes: label

If your change should be included in the release notes (i.e. would users of this library care about this change?), please use a label starting with release notes:. This helps us keep track and include your important work in the next release notes.

To add a label, you can comment to pytorchbot, for example
@pytorchbot label "release notes: none"

For more information, see
https://github.com/pytorch/pytorch/wiki/PyTorch-AutoLabel-Bot#why-categorize-for-release-notes-and-how-does-it-work.

@meta-codesync meta-codesync Bot changed the title Fix optional bias and batch handling in cadence::fully_connected Fix optional bias and batch handling in cadence::fully_connected (#19194) Apr 28, 2026
hsharma35 added a commit to hsharma35/executorch that referenced this pull request Apr 28, 2026
…orch#19194)

Summary:

Fixes two bugs in the generic and HiFi cadence::fully_connected implementations. First, the optional bias was dereferenced without a has_value() guard, causing a crash for bias-free inputs. Second, only the first input row was computed because the batch loop was missing; a loop over leading_dims (the product of all non-channel input dimensions) is now added to correctly process batched and multi-sequence inputs.

Reviewed By: mcremon-meta

Differential Revision: D102821213
…kernels (pytorch#19193)

Summary:

PR pytorch#19193
Fixes two correctness bugs in the HiFi kernels for cadence::quantized_conv1d_ncl.out and cadence::quantized_conv1d_nlc.out. The int8 path (xa_nn_conv2d_per_chan_sym8sxasym8s) produces incorrect results with stride > 1 on some backends (e.g., Artemis HiFi4) and is now redirected to the generic fallback for that case. The uint8 path overflowed WORD32 when computing out_multiplier32 if eff_scale >= 1.0 (i.e., output_scale > bias_scale), which is now clamped to INT32_MAX.

Reviewed By: zonglinpeng

Differential Revision: D102821209
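The uint8 overflow fix from the #19193 summary can be sketched as follows. This is a simplified illustration, not the HiFi kernel itself: the function name and the use of `double` for the effective scale are assumptions; the point is that for `eff_scale >= 1.0` the Q31 product `eff_scale * 2^31` exceeds `WORD32` range and must be clamped to `INT32_MAX`.

```cpp
#include <cassert>
#include <cmath>
#include <cstdint>
#include <limits>

// Illustrative computation of a Q31 fixed-point output multiplier from an
// effective requantization scale (e.g. eff_scale = bias_scale / output_scale).
int32_t compute_out_multiplier32(double eff_scale) {
  const double q31 = eff_scale * static_cast<double>(1u << 31);
  // When eff_scale >= 1.0 (i.e., output_scale > bias_scale in the inverse
  // formulation), q31 no longer fits in int32_t; clamp instead of letting
  // the cast overflow into undefined behavior.
  if (q31 >= static_cast<double>(std::numeric_limits<int32_t>::max())) {
    return std::numeric_limits<int32_t>::max();
  }
  return static_cast<int32_t>(std::lround(q31));
}
```

For `eff_scale < 1.0` the value round-trips as before; only the previously overflowing range is redirected to the saturated value.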

Labels

CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. fb-exported meta-exported
