Update autotest framework #5520
Conversation
Signed-off-by: daquexian <daquexian566@gmail.com>
return flow.tensor(torch_tensor.cpu().numpy())

def convert_torch_object_to_flow(x):
While I was at it, I changed the old behavior where each generator produced one set of oneflow data and one set of pytorch data. Now each generator produces only pytorch data, which is then converted into oneflow data via convert_torch_object_to_flow.
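As a rough sketch of that single-source flow (hypothetical names; the real converter maps torch.Tensor to flow.tensor, replaced here by stand-in types so the example has no framework dependency):

```python
# Sketch of a type-dispatching converter in the spirit of
# convert_torch_object_to_flow: generators produce only "torch-side"
# values, and one converter maps them to the "flow" side.

def convert_torch_object_to_flow(x, converters):
    """Look up a converter by type; unknown objects pass through."""
    for typ, conv in converters.items():
        if isinstance(x, typ):
            return conv(x)
    return x

# Hypothetical converter table (the real table maps torch.Tensor).
converters = {int: float, list: tuple}

assert convert_torch_object_to_flow(3, converters) == 3.0
assert convert_torch_object_to_flow([1, 2], converters) == (1, 2)
assert convert_torch_object_to_flow("s", converters) == "s"
```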
def value(self):
    if self._value is None:
        self._value = self._calc_value()
    return self._value
Each generator object caches the value it generated in the current round. In a case like x0 = random(1, 6); x1 = x0 + 1; x2 = x0 + 1, x0 is computed only once when x1 and x2 are evaluated, so x1 and x2 end up equal.
    return self._value

def size(self):
    return 1
This size is used to handle chained | expressions such as a | b | c, where the expected behavior is that a, b, and c are each picked with probability 1/3.
def random(low, high):
def generator(annotation):
class oneof(generator):
    def __init__(self, *args, possibility=None):
Note that this possibility can only be passed as a keyword argument.
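This caveat matters because a positional value is silently absorbed by *args rather than rejected, as this small sketch shows:

```python
class oneof:
    # possibility sits after *args, so it is keyword-only.
    def __init__(self, *args, possibility=None):
        self.args = args
        self.possibility = possibility

a = oneof(1, 2, possibility=0.5)
assert a.possibility == 0.5

b = oneof(1, 2, 0.5)          # positional 0.5 is swallowed into *args...
assert b.possibility is None  # ...so possibility silently stays unset
assert b.args == (1, 2, 0.5)
```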
t = [generator(x) for x in annotation.__args__]
return zip(*t)
return self._generate(x)
if annotation.__origin__ is Tuple or annotation.__origin__ is py_tuple:
tuple has been hijacked as an operation in the generator DSL (currently not exposed outside this file), so Python's builtin tuple is now referred to as py_tuple.
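The shadowing pattern, in miniature (the DSL body here is a stand-in; only the aliasing trick is the point):

```python
# Keep a handle on the builtin before the DSL shadows the name.
py_tuple = tuple

def tuple(*gens):
    """DSL 'tuple' operation; stand-in body that just tags its args."""
    return ("dsl_tuple", gens)

assert isinstance((1, 2), py_tuple)   # the builtin is still reachable
assert tuple(1, 2)[0] == "dsl_tuple"  # the bare name now means the DSL op
```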
continue
flow_data, torch_data = generate(name)

generator_tuple = tuple(
Here a new generator is created to hold all the generators, so that in cases like x0 = random(1, 6); x1 = x0 + 1; x2 = x0 + 1, x0 is computed only once when x1 and x2 are evaluated.

Could these explanations be written down in a README somewhere, with error messages pointing people to it?
Signed-off-by: daquexian <daquexian566@gmail.com>
counter = 0

def GetDualObject(name, pytorch, oneflow):
I didn't quite follow the DualObject handling here. Could you briefly explain it?
It's probably to guard against the args of an earlier method call being mutated through a shared reference, which would make a later call receive incorrect args. So a method is generated dynamically in the current context, giving each call its own args reference.
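A minimal sketch of that guard, assuming the fix amounts to binding a fresh copy of args per call (make_method and appends are hypothetical names):

```python
import copy

def make_method(method, args):
    """Generate a bound method that deep-copies args on every call,
    so a callee mutating its arguments cannot leak into later calls."""
    def bound():
        return method(copy.deepcopy(args))
    return bound

def appends(xs):
    xs.append(0)  # a callee that mutates its argument in place
    return xs

args = [1, 2]
m = make_method(appends, args)
assert m() == [1, 2, 0]
assert m() == [1, 2, 0]  # not [1, 2, 0, 0]: each call gets a fresh copy
assert args == [1, 2]    # the original args are untouched
```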
return np.allclose(torch_tensor.detach().cpu().numpy(), flow_tensor.numpy())

def autotest(n=20, auto_backward=True):
rtol and atol still need to be added.
Added.
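For reference, np.allclose-style tolerances combine as |a - b| <= atol + rtol * |b|; a scalar stand-in (check_equality and the default tolerances are illustrative, not the PR's values):

```python
def check_equality(a, b, rtol=1e-4, atol=1e-5):
    """Scalar stand-in for np.allclose(a, b, rtol=rtol, atol=atol):
    the relative term scales with |b|, the absolute term is a floor."""
    return abs(a - b) <= atol + rtol * abs(b)

assert check_equality(1.0, 1.00005)     # within combined tolerance
assert not check_equality(1.0, 1.1)     # clearly outside
```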
def check_tensor_equality(torch_tensor, flow_tensor):
    # TODO: check dtype
    if torch_tensor.grad is not None:
        assert flow_tensor.grad is not None, "OneFlow tensor doesn't have grad while PyTorch tensor has one"
Could a clearer error message be added here, e.g. printing the inputs and attrs of the failing randomized test case, so that CI failures can be located quickly?
That's more complicated than it sounds; let's add it later.
CI failed, removing label automerge
def register_flow_to_flow_converter(func):
    annotation2torch_to_flow_converter[annotation] = func
    return func

return register_flow_to_flow_converter
This name register_flow_to_flow_converter is wrong.
I don't quite get it. Is it a spelling issue?
It should be torch_to_flow.
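The decorator-factory shape of that registration, sketched with stand-in types (register_torch_to_flow_converter is the corrected name under discussion; the int/float converter is purely illustrative):

```python
annotation2torch_to_flow_converter = {}

def register_torch_to_flow_converter(annotation):
    """Decorator factory: registers func under the given annotation
    and returns it unchanged, so the decorated name stays usable."""
    def register(func):
        annotation2torch_to_flow_converter[annotation] = func
        return func
    return register

@register_torch_to_flow_converter(int)
def int_converter(x):
    return float(x)

assert annotation2torch_to_flow_converter[int] is int_converter
assert annotation2torch_to_flow_converter[int](3) == 3.0
```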
def torch_tensor_to_flow(x):
    return flow.tensor(x.cpu().numpy())
This doesn't use the tensor_converter defined in generators.py.
    oneflow_args,
    oneflow_kwargs,
) = get_args(pytorch_method, *args, **kwargs)
pytorch_res = pytorch_method(*pytorch_args, **pytorch_kwargs)
This line is redundant.
OK, removed.
CI failed, removing label automerge
Speed stats:
    oneflow_res = torch_tensor_to_flow(pytorch_res)
else:
    oneflow_res = oneflow(*oneflow_args, **oneflow_kwargs)
return GetDualObject("unused", pytorch_res, oneflow_res)
Shouldn't this use the name passed in as GetDualObject's argument here?
@data_generator(torch.Tensor)
class random_tensor(generator):
    def __init__(self, ndim=None, dim0=1, dim1=None, dim2=None, dim3=None, dim4=None):
Is dim0=1 here a mistake?
Refactors the autotest framework so that generators can be composed. The old behavior of skipping an argument that has a default value with 1/3 probability must now be triggered explicitly, e.g. by setting the generator to random_or_nothing, oneof(x, nothing(), possibility=2/3), or x | nothing(); when the generator is random, constant, etc., the API argument's default value is no longer considered. To skip generating a particular argument entirely (such as the device and dtype of bn in newer PyTorch), set the generator to nothing(). Adds torch_flow_dual_object.py: after from automated_test_util import *, operators can be autotested by writing pure PyTorch-style code.