
[PTen] Update all forward argument mapping fns #39252

Merged: 4 commits into PaddlePaddle:develop on Jan 28, 2022

Conversation

@chenwhql (Contributor) commented Jan 26, 2022

PR types

Function optimization

PR changes

Others

Describe

[PTen] Update all forward argument mapping fns

To meet infrt's requirements, the functions used to match Op arguments to Kernel arguments are lowered into pten as compatibility components that do not depend on fluid. This PR migrates the matching functions for the forward kernels.

infrt not only needs to call these argument mapping functions dynamically at runtime; it also needs, without any runtime information, to statically declare all possible results of each function ahead of time, as in the following definitions:

// Generate one rule description for the mapping from the op to each kernel
def PDKEL_Reshape_to_CPU : Pat<
          (PD_ReshapeOp $x, $shape_tensor, $shape_attr), // OpMaker argument list
          (PDKEL_ReshapeKernelAttr $x, fn($shape_attr, (INFRT_createBoolAttr<"false">)))>; // argument list required by the kernel
def PDKEL_Reshape_to_CPU : Pat<
          (PD_ReshapeOp $x, $shape_tensor, $shape_attr),
          (PDKEL_ReshapeKernelAttr $x, fn($shape_tensor, (INFRT_createBoolAttr<"false">)))>; // use_mkldnn = false

Therefore, every possible result of an argument mapping function has to be written out explicitly. infrt uses regular expressions to extract statements such as return KernelSignature("full", {}, {"ShapeTensor", "value"}, {"Out"});, collecting all possible results before runtime to generate the static definitions above. As a consequence, some of these functions become rather verbose; complex ones may run to several hundred lines, which has some negative impact on the programming experience. The fill_constant_op example below illustrates this.
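To make the extraction step concrete, here is a minimal sketch of how such return statements could be scraped ahead of runtime. The file name and the regular expression are illustrative assumptions; this PR does not include infrt's actual extraction tooling.

// Minimal sketch only: scan a mapping-function source file and print the
// argument list of every literal `return KernelSignature(...)` statement.
// "fill_constant_sig.cc" is a hypothetical input file name.
#include <fstream>
#include <iostream>
#include <regex>
#include <sstream>
#include <string>

int main() {
  std::ifstream in("fill_constant_sig.cc");
  std::stringstream buffer;
  buffer << in.rdbuf();
  const std::string src = buffer.str();

  // Matches e.g. `return KernelSignature("full", {}, {"shape", "value"}, {"Out"});`
  const std::regex sig_re(R"(return\s+KernelSignature\s*\(([^;]*)\)\s*;)");
  for (std::sregex_iterator it(src.begin(), src.end(), sig_re), end;
       it != end; ++it) {
    std::cout << (*it)[1].str() << "\n";  // one possible mapping result
  }
  return 0;
}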

Original form of the fill_constant_op mapping function:

framework::KernelSignature GetExpectedPtenKernelArgs(
      const framework::ExecutionContext& ctx) const override {
    // Pick the shape argument by priority: ShapeTensor input >
    // ShapeTensorList input > "shape" attribute.
    std::string shape;
    if (ctx.HasInput("ShapeTensor")) {
      shape = "ShapeTensor";
    } else if (ctx.MultiInput<framework::Tensor>("ShapeTensorList").size()) {
      shape = "ShapeTensorList";
    } else {
      shape = "shape";
    }
    // Pick the value argument: ValueTensor input > non-empty "str_value"
    // attribute > "value" attribute.
    std::string value;
    if (ctx.HasInput("ValueTensor")) {
      value = "ValueTensor";
    } else {
      const auto& str_value = ctx.Attr<std::string>("str_value");
      value = str_value.empty() ? "value" : "str_value";
    }
    // Only non-SelectedRows outputs map to the pten "full" kernel; otherwise
    // fall back to an unregistered signature.
    if (!ctx.OutputVar("Out")->IsType<pten::SelectedRows>()) {
      return framework::KernelSignature("full", {}, {shape, value}, {"Out"});
    }
    return framework::KernelSignature("fill_constant.unregistered", {}, {}, {});
  }
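Note that in this original form the signature is assembled from the local variables shape and value, so the full set of possible results never appears as literal return statements, and the regex-based extraction described above has nothing to match.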

After the change to adapt to infrt, the fill_constant_op mapping function reads:

KernelSignature FillConstantOpArgumentMapping(
    const ArgumentMappingContext& ctx) {
  // Every reachable signature below is a literal return statement so that
  // infrt can extract the complete set statically.
  if (ctx.IsDenseTensorOutput("Out")) {
    if (ctx.HasInput("ShapeTensor")) {
      if (ctx.HasInput("ValueTensor")) {
        return KernelSignature(
            "full", {}, {"ShapeTensor", "ValueTensor"}, {"Out"});
      } else {
        const auto& str_value =
            paddle::any_cast<std::string>(ctx.Attr("str_value"));
        if (str_value.empty()) {
          return KernelSignature("full", {}, {"ShapeTensor", "value"}, {"Out"});
        } else {
          return KernelSignature(
              "full", {}, {"ShapeTensor", "str_value"}, {"Out"});
        }
      }
    } else if (ctx.InputSize("ShapeTensorList") > 0) {
      if (ctx.HasInput("ValueTensor")) {
        return KernelSignature(
            "full", {}, {"ShapeTensorList", "ValueTensor"}, {"Out"});
      } else {
        const auto& str_value =
            paddle::any_cast<std::string>(ctx.Attr("str_value"));
        if (str_value.empty()) {
          return KernelSignature(
              "full", {}, {"ShapeTensorList", "value"}, {"Out"});
        } else {
          return KernelSignature(
              "full", {}, {"ShapeTensorList", "str_value"}, {"Out"});
        }
      }
    } else {
      if (ctx.HasInput("ValueTensor")) {
        return KernelSignature("full", {}, {"shape", "ValueTensor"}, {"Out"});
      } else {
        const auto& str_value =
            paddle::any_cast<std::string>(ctx.Attr("str_value"));
        if (str_value.empty()) {
          return KernelSignature("full", {}, {"shape", "value"}, {"Out"});
        } else {
          return KernelSignature("full", {}, {"shape", "str_value"}, {"Out"});
        }
      }
    }
  }
  return KernelSignature("unregistered", {}, {}, {});
}
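For reference, flattening the branches above yields ten statically enumerable results (nine "full" signatures plus the fallback), which is exactly the set the regex-based extraction collects:

KernelSignature("full", {}, {"ShapeTensor", "ValueTensor"}, {"Out"})
KernelSignature("full", {}, {"ShapeTensor", "value"}, {"Out"})
KernelSignature("full", {}, {"ShapeTensor", "str_value"}, {"Out"})
KernelSignature("full", {}, {"ShapeTensorList", "ValueTensor"}, {"Out"})
KernelSignature("full", {}, {"ShapeTensorList", "value"}, {"Out"})
KernelSignature("full", {}, {"ShapeTensorList", "str_value"}, {"Out"})
KernelSignature("full", {}, {"shape", "ValueTensor"}, {"Out"})
KernelSignature("full", {}, {"shape", "value"}, {"Out"})
KernelSignature("full", {}, {"shape", "str_value"}, {"Out"})
KernelSignature("unregistered", {}, {}, {})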

@paddle-bot-old

Thanks for your contribution!
Please wait for the CI result first. See the Paddle CI Manual for details.

@XiaoguangHu01 (Contributor) left a comment:

LGTM

@chenwhql chenwhql merged commit 75923a3 into PaddlePaddle:develop Jan 28, 2022