
[Cherry-Pick][Optimization] merge matmul and add (#6986)#7191

Merged
zoooo0820 merged 11 commits into PaddlePaddle:release/2.6 from BingooYang:linear_opt
Apr 9, 2026

Conversation

@BingooYang (Contributor)

Motivation

Performance optimization.

Modifications

Replace the separate matmul and add calls in UnquantizedLinearMethod with a single linear call.
With bias, this is generally faster; without bias, small shapes show a slight regression (mainly Python-level dispatch overhead such as the if branch, since linear is itself implemented with matmul internally).
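The fusion above relies on the identity that a single linear call computes the same result as a matmul followed by an add. A minimal NumPy sketch of that equivalence (illustrative only; the actual PR uses paddle.nn.functional.linear, and the function names here are hypothetical):

```python
import numpy as np

def matmul_then_add(x, weight, bias):
    # Original path: two separate ops, matmul then elementwise add.
    return np.matmul(x, weight) + bias

def fused_linear(x, weight, bias=None):
    # Fused path, mirroring the semantics of
    # paddle.nn.functional.linear(x, weight, bias):
    # one call computes x @ weight, adding bias only when present.
    out = np.matmul(x, weight)
    if bias is not None:
        out = out + bias
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8)).astype("float32")
w = rng.standard_normal((8, 16)).astype("float32")  # [input_size, output_size]
b = rng.standard_normal(16).astype("float32")
```

The bias-free branch is why small shapes can regress: the fused path still runs a matmul internally, so the only difference there is dispatch overhead.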

Usage or Command

Accuracy Tests

Accuracy remains consistent.

Checklist

  • [ x ] Add at least a tag in the PR title.
    • Tag list: [[FDConfig],[APIServer],[Engine], [Scheduler], [PD Disaggregation], [Executor], [Graph Optimization], [Speculative Decoding], [RL], [Models], [Quantization], [Loader], [OP], [KVCache], [DataProcessor], [BugFix], [Docs], [CI], [Optimization], [Feature], [Benchmark], [Others], [XPU], [HPU], [GCU], [DCU], [Iluvatar], [Metax]]
    • You can add new tags based on the PR content, but the semantics must be clear.
  • [ x ] Format your code, run pre-commit before commit.
  • [ x ] Add unit tests. Please write the reason in this PR if no unit tests.
  • [ x ] Provide accuracy results.
  • [ x ] If the current PR is submitting to the release branch, make sure the PR has been submitted to the develop branch, then cherry-pick it to the release branch with the [Cherry-Pick] PR tag.

@paddle-bot

paddle-bot bot commented Apr 3, 2026

Thanks for your contribution!

@BingooYang BingooYang changed the title [Optimization] merge matmul and add [Cherry-Pick][Optimization] merge matmul and add Apr 8, 2026
@zoooo0820 zoooo0820 changed the title [Cherry-Pick][Optimization] merge matmul and add [Cherry-Pick][Optimization] merge matmul and add (#6986) Apr 9, 2026
Collaborator

@zoooo0820 zoooo0820 left a comment


LGTM

@zoooo0820 zoooo0820 merged commit 849eb3d into PaddlePaddle:release/2.6 Apr 9, 2026
74 of 79 checks passed

@fastdeploy-bot fastdeploy-bot left a comment


🤖 AI Code Review | 2026-04-09

📋 Review summary

PR overview: merge the matmul and add operations in UnquantizedLinearMethod into a single linear call to improve performance.

Scope of changes: model_executor/layers/linear.py, tests/e2e/utils/

Impact tags: [Optimization] [OP]

📝 PR convention check

✅ PR title contains the [Optimization] tag
✅ PR description contains Motivation, Modifications, Usage, Accuracy Tests, and Checklist
✅ PR meets the contribution conventions

Issues

| Level | File | Summary |
| --- | --- | --- |
| 🔴 Bug | linear.py:91 | For paddle-format models, the weight shape may not match what paddle.nn.functional.linear expects |

Overall assessment

The performance-optimization intent of the PR is clear, but there is one potential shape-compatibility issue that needs to be verified or handled.

```python
    f"bias must be 1D with size equal to the last dim of weight, "
    f"but got bias.shape={bias.shape}, weight.shape[-1]={layer.weight.shape[-1]}"
)
out = paddle.nn.functional.linear(x, layer.weight, bias)
```


🔴 Bug For paddle-format models, layer.weight has shape [input_size, output_size], while paddle.nn.functional.linear expects a weight of shape (output_size, input_size).

Based on code analysis:

  • torch format: layer.weight has shape [output_size, input_size] (transposed in create_weights)
  • paddle format: layer.weight has shape [input_size, output_size] (not transposed)
  • process_weights_after_loading of UnquantizedLinearMethod is skipped

For paddle-format models, calling paddle.nn.functional.linear(x, layer.weight, bias) directly may therefore raise a shape-mismatch error.

Suggestions:

  1. Verify compatibility with paddle-format models
  2. If they are not supported, add a conditional check or a clarifying comment
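The conditional check suggested in point 2 could be sketched as follows in NumPy (illustrative only; linear_with_layout_check and the weight_is_transposed flag are hypothetical names, not part of the actual FastDeploy code):

```python
import numpy as np

def linear_with_layout_check(x, weight, bias=None, weight_is_transposed=False):
    # Hypothetical guard: if the checkpoint stores weight as
    # [output_size, input_size] (torch-style layout), transpose it first
    # so the x @ weight contraction dimensions line up.
    if weight_is_transposed:
        weight = weight.T
    if x.shape[-1] != weight.shape[0]:
        raise ValueError(
            f"shape mismatch: x.shape[-1]={x.shape[-1]} "
            f"vs weight.shape[0]={weight.shape[0]}"
        )
    out = np.matmul(x, weight)
    if bias is not None:
        out = out + bias
    return out
```

With a guard like this, a weight in either layout produces the same output, and a genuinely incompatible shape fails loudly instead of deep inside the matmul kernel.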
