Description
Hi ONNXRT team,
I implemented a custom op in ONNXRT and it runs correctly and produces the expected results.
However, I implemented multiple versions of the kernel for different input shapes (currently 4 versions for 4 different input heights), so each version has to be run separately. When I want to run a model that contains several of these ops at once, I have difficulty making the custom op dynamic. Is there any way I can make it dynamic?
I currently select between the versions with if-else conditions in this function:
struct CustomOp : Ort::CustomOpBase<CustomOp, Kernel<int64_t>>
{
 private:
  std::string implem;
  unsigned int ih;

 public:
  CustomOp(std::string implem, unsigned int ih) : implem(implem), ih(ih) {}

  void* CreateKernel(OrtApi api, const OrtKernelInfo* info) const
  {
    if (ih == 54)
    {
      return new Kernel_1<int64_t>(api, info);
    }
    else if (ih == 50)
    {
      return new Kernel_2<int64_t>(api, info);
    }
    // ..... (same pattern for the other two heights)
  }
  // (GetName, input/output type methods, etc. omitted here)
};
So, whenever I want to run the op for particular input dims, I construct it like this: CustomOp custom_op(implem, ih).
implem is fully under my control, so that is not a problem, but ih depends on the height of the input tensor.
So, the main thing I want to do here is to execute the custom op dynamically based on the height of the input tensor.
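To make clearer what I mean by "dynamic", here is a rough sketch of the direction I was considering (the DispatchKernel name and the NCHW / height-at-index-2 assumption are mine, and I am not sure this is the recommended approach): a single wrapper kernel that reads the input shape inside Compute, using the CustomOpApi from the tutorial, and forwards the call to the right specialized kernel.

#include <onnxruntime_cxx_api.h>
#include <cstdint>
#include <vector>

// Hypothetical wrapper kernel: reads the input's shape at Compute time
// and forwards to the specialized kernel for that height, so a single
// registered op could cover all four heights.
template <typename T>
struct DispatchKernel
{
  DispatchKernel(OrtApi api, const OrtKernelInfo* info)
      : api_(api), ort_(api_), k54_(api, info), k50_(api, info) {}

  void Compute(OrtKernelContext* context)
  {
    const OrtValue* input = ort_.KernelContext_GetInput(context, 0);
    OrtTensorTypeAndShapeInfo* shape_info = ort_.GetTensorTypeAndShape(input);
    std::vector<int64_t> dims = ort_.GetTensorShape(shape_info);
    ort_.ReleaseTensorTypeAndShapeInfo(shape_info);

    int64_t ih = dims[2];  // assumption: NCHW layout, height at index 2
    if (ih == 54)
    {
      k54_.Compute(context);
    }
    else if (ih == 50)
    {
      k50_.Compute(context);
    }
    // ..... (same pattern for the other two heights)
  }

 private:
  OrtApi api_;
  Ort::CustomOpApi ort_;
  Kernel_1<T> k54_;  // my existing height-54 kernel
  Kernel_2<T> k50_;  // my existing height-50 kernel
};

The idea would be that CustomOp::CreateKernel always returns a new DispatchKernel<int64_t>(api, info), so I no longer need to know ih when constructing the op. But I am not sure whether forwarding one kernel's Compute to another like this is supported, or whether there is a better built-in way to do shape-based dispatch.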
I have referred to this tutorial for adding the custom op in ONNXRT: https://github.com/onnx/tutorials/tree/master/PyTorchCustomOperator
Looking forward to your reply.
Thanks!