
Conversation

@algorithm1832
Contributor

@algorithm1832 algorithm1832 commented Dec 21, 2025

PR Category

User Experience

PR Types

Improvements

Description

API & Unittest & EN Doc

  • Use decorator to add arg alias input and dim
  • Add arg out and correction
  • Add arg alias notes in EN doc
  • Add arg description for new args in EN doc
  • Add more examples in EN doc
  • Add compatibility test for std
  • Add arg correction test for std
  • Fix typo in std tests

Used AI Studio
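
For context, a minimal sketch of the call forms this change is meant to enable, assuming the merged behavior described in this PR (the exact semantics of correction and out follow the merged docs, not this sketch):

import paddle

x = paddle.to_tensor([[1.0, 2.0, 3.0], [1.0, 4.0, 5.0]])

# Existing Paddle-style call
out1 = paddle.std(x, axis=1)

# PyTorch-style aliases added by this PR: input -> x, dim -> axis
out2 = paddle.std(input=x, dim=1)

# New keyword arguments added by this PR (semantics per the merged EN doc)
out3 = paddle.std(x, axis=1, correction=0)
result = paddle.empty([2])                 # hypothetical pre-allocated output tensor
out4 = paddle.std(x, axis=1, out=result)   # writes the result into `result`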

@paddle-bot

paddle-bot bot commented Dec 21, 2025

Your PR has been submitted. Thanks for your contribution!
Please wait for the CI results first. See the Paddle CI Manual for details.

@paddle-bot paddle-bot bot added the contributor (External developers) label Dec 21, 2025
@algorithm1832
Contributor Author

/re-run all-failed

@codecov-commenter

codecov-commenter commented Dec 21, 2025

Codecov Report

❌ Patch coverage is 93.75000% with 1 line in your changes missing coverage. Please review.
⚠️ Please upload report for BASE (develop@ec01e8a). Learn more about missing BASE report.

Files with missing lines | Patch % | Lines
python/paddle/utils/decorator_utils.py | 88.88% | 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             develop   #77006   +/-   ##
==========================================
  Coverage           ?   93.75%           
==========================================
  Files              ?        2           
  Lines              ?       16           
  Branches           ?        0           
==========================================
  Hits               ?       15           
  Misses             ?        1           
  Partials           ?        0           

☔ View full report in Codecov by Sentry.

Contributor

Copilot AI left a comment


Pull request overview

This PR enhances the paddle.std function to support PyTorch-style parameter aliases and adds new functionality including the correction parameter and out parameter support. The changes aim to improve API compatibility with PyTorch.

Key Changes

  • Refactored std function implementation to use *args, **kwargs forwarding to the var function
  • Added comprehensive test coverage for parameter aliases (input for x, dim for axis), the new correction parameter, and the out parameter
  • Fixed typos in existing tests (expact_out → expect_out)

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 3 comments.

File | Description
test/legacy_test/test_std_layer.py | Added new test classes for compatibility testing (aliases) and correction parameter testing; fixed typos in variable names
python/paddle/tensor/stat.py | Refactored std function to forward arguments to var and handle the out parameter for the sqrt operation
python/paddle/_paddle_docs.py | Added comprehensive documentation for std including alias notes, new parameter descriptions, and usage examples


Comment on lines 288 to 291
def std(*args, **kwargs) -> Tensor:
variance = var(*args, **kwargs)
if 'out' in kwargs:
return paddle.sqrt(variance, out=kwargs['out'])

Copilot AI Dec 26, 2025


The refactored std function forwards all arguments to var using *args and **kwargs, but this approach has a problem: when the decorator is added to enable parameter aliases, the function needs a proper signature to work correctly with the decorator. The decorator needs to know which parameters to alias, but with *args, **kwargs, there's no explicit signature to decorate. The function should have an explicit signature matching the documented parameters (x, axis, unbiased, keepdim, name, correction, out) similar to the var function, and then be decorated with @ParamAliasDecorator({"x": ["input"], "axis": ["dim"]}) or @param_two_alias(["x", "input"], ["axis", "dim"]).

Suggested change
def std(*args, **kwargs) -> Tensor:
variance = var(*args, **kwargs)
if 'out' in kwargs:
return paddle.sqrt(variance, out=kwargs['out'])
@ParamAliasDecorator({"x": ["input"], "axis": ["dim"]})
def std(
x,
axis=None,
unbiased: bool = True,
keepdim: bool = False,
name: str | None = None,
correction=None,
out=None,
) -> Tensor:
variance = var(
x,
axis=axis,
unbiased=unbiased,
keepdim=keepdim,
name=name,
correction=correction,
out=out,
)
if out is not None:
return paddle.sqrt(variance, out=out)

Comment on lines +289 to +291
variance = var(*args, **kwargs)
if 'out' in kwargs:
return paddle.sqrt(variance, out=kwargs['out'])

Copilot AI Dec 26, 2025


The handling of the out parameter needs careful consideration. The current implementation extracts out from kwargs and passes it to paddle.sqrt. However, the var function is also called with all kwargs including out, which means out would first receive the variance result from var, and then be overwritten with the sqrt result. This could lead to incorrect behavior if the var function tries to assign to out. The implementation should either remove out from kwargs before calling var, or handle this more explicitly in the proper signature-based implementation.

Suggested change
variance = var(*args, **kwargs)
if 'out' in kwargs:
return paddle.sqrt(variance, out=kwargs['out'])
out = kwargs.pop('out', None)
variance = var(*args, **kwargs)
if out is not None:
return paddle.sqrt(variance, out=out)

>>> print(out4.numpy())
[[1.721326]]
>>> out5 = paddle.std(input=x, dim=[0, 1]) # type: ignore[call-arg]

Copilot AI Dec 26, 2025


The documentation example on line 3361 contains a type ignore comment that indicates the signature doesn't properly support the alias parameters (input and dim). This is correct given that the implementation is missing the required decorator to enable these aliases. Once the decorator is added to the std function in stat.py, this type ignore comment should be removed as the call will be type-safe.

Suggested change
>>> out5 = paddle.std(input=x, dim=[0, 1]) # type: ignore[call-arg]
>>> out5 = paddle.std(input=x, dim=[0, 1])

Contributor

@zhwesky2010 zhwesky2010 left a comment


The documentation is in the wrong place.

Contributor

@zhwesky2010 zhwesky2010 left a comment


These two APIs differ in more than just parameter names; more parameter differences are involved.

The cases are a bit complex. Run PaConvert first; once you have verified locally that PaConvert passes, then submit to Paddle.

)
out = var(**locals())
return paddle.sqrt(out)
variance = var(*args, **kwargs)
Contributor


It looks like std and var are not a simple alias substitution.

A set of parameter-dispatch logic is needed. Refer to how group_norm and gather are written: args/kwargs have to be inspected here to determine which of the two overloaded signatures is being used.

var is the underlying interface. Get var right first; then std only needs to be written like this:

return sqrt(var(*args, **kwargs), out=out)

The default handling for out=None already exists at the bottom of each API (and after the logic is pushed down), so there is no need to repeat that check.
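
For illustration, the kind of args/kwargs alias dispatch being referred to might look roughly like this (the helper name and details are hypothetical; the actual group_norm/gather code may differ):

def _map_torch_aliases(kwargs):
    # Hypothetical helper: rewrite PyTorch-style keyword names onto Paddle's,
    # rejecting calls that pass both an alias and its canonical name.
    alias_map = {'input': 'x', 'dim': 'axis'}
    for alias, canonical in alias_map.items():
        if alias in kwargs:
            if canonical in kwargs:
                raise TypeError(f"received both '{alias}' and '{canonical}'")
            kwargs[canonical] = kwargs.pop(alias)
    return kwargs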

Contributor Author


That if branch probably cannot be removed: out has to be taken from kwargs, and indexing it when it is absent raises a KeyError. The current implementation also does not define a new wrapper the way gather does.
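
Roughly, the shape being defended is the following (an illustrative sketch mirroring the snippet quoted above; var is the same module-level function, not a new wrapper):

def std(*args, **kwargs):
    variance = var(*args, **kwargs)
    # kwargs['out'] raises KeyError when the caller did not pass `out`,
    # hence the membership check; kwargs.get('out') would avoid it as well.
    if 'out' in kwargs:
        return paddle.sqrt(variance, out=kwargs['out'])
    return paddle.sqrt(variance)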

@algorithm1832
Contributor Author

/re-run all-failed

@algorithm1832
Contributor Author

/re-run all-failed

@algorithm1832
Contributor Author

algorithm1832 commented Jan 12, 2026

The PaConvert self-test passed.
[PaConvert test screenshot]

zhwesky2010 previously approved these changes Jan 12, 2026
Contributor

@zhwesky2010 zhwesky2010 left a comment


LGTM

@zhwesky2010 zhwesky2010 requested a review from SigureMo January 12, 2026 09:08
*,
correction: float = 1,
out: Tensor | None = None,
) -> Tensor: ...
Member


Only one signature? What is the point of using overload here?

) -> Tensor: ...


def std(*args: Any, **kwargs: Any) -> Tensor:
Member



Written this way, the signature gets lost.

@algorithm1832
Contributor Author

What is the point of using overload here?

Because with the def std(*args: Any, **kwargs: Any) form the original signature disappears, I added the overload, following the implementation of paddle.nn.group_norm.

That said, if the signature is still missing even with the overload added, I may need to look again at how to restore it.

@SigureMo
Member

Because with the def std(*args: Any, **kwargs: Any) form the original signature disappears, I added the overload, following the implementation of paddle.nn.group_norm.

overload has no effect at runtime; it is used for static checking and is the solution for providing multiple type signatures. For a function with only one signature it is pointless, and it is also not valid.


If there is only one signature, why does this API use *args, **kwargs? Why can't each parameter be written out explicitly?

@algorithm1832
Contributor Author

If there is only one signature, why does this API use *args, **kwargs? Why can't each parameter be written out explicitly?

The conventional approach is to add a decorator layer to implement the parameter aliases, but that has a fairly serious performance impact, so the current implementation is the decorator-free approach.

The std API is actually implemented by calling var first and then sqrt. Both var and sqrt already support the aliases, so forwarding the arguments std receives straight to var saves the decorator.

However, if std's original signature were left unchanged, passing an alias argument (such as input) would raise a TypeError, and every parameter that has an alias (such as x and input) would have to be made optional or it would also error. That is how it ended up in the *args, **kwargs form.
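
A minimal illustration of the TypeError mentioned above, using a hypothetical old-style signature (not the actual previous code):

def std_old(x, axis=None, unbiased=True, keepdim=False, name=None):
    ...

# std_old(input=t, dim=0) fails with:
# TypeError: std_old() got an unexpected keyword argument 'input'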

@SigureMo
Member

The std API is actually implemented by calling var first and then sqrt. Both var and sqrt already support the aliases, so forwarding the arguments std receives straight to var saves the decorator.

Since there are aliases, why is there only one signature? That doesn't seem right; the paddle.nn.group_norm you referenced doesn't have only one signature either…

@algorithm1832
Contributor Author

That doesn't seem right

Indeed it isn't right. In theory the best approach would be one signature for the Paddle usage and one for the torch usage, which would cover all the parameter possibilities.

But because std and var go together, and var uses the decorator and therefore has only one signature, std was written with a single signature for consistency. Also, apart from the aliases, the torch usage and the Paddle usage appear to be the same, and the functionality they implement is identical.

How about adding a signature for the torch usage (the alias usage)?

@SigureMo
Member

SigureMo commented Jan 12, 2026

How about adding a signature for the torch usage (the alias usage)?

Yes, add it; that is the proper way to write it.

The runtime signature needs to be handled separately. You could use the first overload's signature; a simple decorator can achieve that:

from typing import overload, Any, get_overloads, TypeVar
from typing_extensions import ParamSpec
from collections.abc import Callable
import inspect

P = ParamSpec("P")
R = TypeVar("R")


def use_first_signature(fn: Callable[P, R]) -> Callable[P, R]:
    overloads = get_overloads(fn)
    if not overloads:
        return fn
    first_overload = overloads[0]
    sig = inspect.signature(first_overload)
    fn.__signature__ = sig
    return fn


@overload
def fn(x: int) -> int: ...


@overload
def fn(x: str) -> str: ...


@use_first_signature
def fn(*args: Any, **kwargs: Any) -> Any: ...


print(inspect.getfullargspec(fn))

This decorator adds no extra runtime overhead to the decorated function, so there is no need to worry about that.

For now the Paddle signature is primary, so the Paddle signature is written first.
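
Applied to std, the overall shape could look roughly like this (a sketch only; the parameter lists are abbreviated and may differ from the merged code):

from __future__ import annotations
from typing import Any, overload

@overload  # Paddle-style signature, listed first so it becomes the runtime signature
def std(x: Any, axis: Any = None, unbiased: bool = True,
        keepdim: bool = False, name: str | None = None) -> Any: ...


@overload  # PyTorch-style alias signature
def std(input: Any, dim: Any = None, *, correction: float = 1,
        keepdim: bool = False, out: Any = None) -> Any: ...


@use_first_signature  # decorator defined in the snippet above
def std(*args: Any, **kwargs: Any) -> Any:
    ...  # runtime body: forward to var, then sqrt, as discussed earlier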

@algorithm1832
Contributor Author

algorithm1832 commented Jan 12, 2026

The signature and overloads have been added. Although the keepdim parameter has a different kind in Paddle and torch (in torch it cannot be passed positionally), migrating from torch by changing torch.std → paddle.std directly will not cause any problems.
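
A minimal illustration of the keepdim difference mentioned here (signatures abbreviated and hypothetical):

def paddle_style(x, axis=None, unbiased=True, keepdim=False): ...

def torch_style(input, dim=None, *, correction=1, keepdim=False): ...

paddle_style(1.0, 0, True, True)      # keepdim may be passed positionally
torch_style(1.0, 0, keepdim=True)     # keepdim must be passed by keyword
# torch_style(1.0, 0, 1, True)        # TypeError: too many positional arguments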

@algorithm1832
Contributor Author

/re-run all-failed

Contributor

@zhwesky2010 zhwesky2010 left a comment


LGTM

@zhwesky2010 zhwesky2010 merged commit d99f03c into PaddlePaddle:develop Jan 13, 2026
138 of 146 checks passed