Closed
Labels
module: kernels — Issues related to kernel libraries and utilities, and code under kernels/
need-user-input — The issue needs more information from the reporter before moving forward
triaged — This issue has been looked at by a team member, and triaged and prioritized into an appropriate module
Description
Hi,
I have studied the codegen section of https://pytorch.org/executorch/stable/concepts.html#codegen, but I still don't understand it well.
In the concepts diagram, exporting the model produces a model.pte file, which is a binary.
Can I directly select the kernel ops and run the model with the ExecuTorch runtime library?
There is also another branch in the diagram where the model.pte file goes through codegen to generate the Kernel Registration Library, and I don't understand that part.
My question is: if I can already run the model.pte file with the kernel op runtime library, why is codegen needed as well?
And what is the actual output of codegen in the real flow? Is it C code describing the model graph, with the ops and the weights?
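For context, my current understanding (which may be wrong) is that codegen consumes a kernel YAML along these lines, mapping each op to a kernel implementation; the specific op and kernel names below are just illustrative examples:

```yaml
# Illustrative entry, assuming the functions.yaml-style format used for
# ExecuTorch kernel registration; op and kernel names are examples only.
- op: aten::add.out
  kernels:
    - arg_meta: null
      kernel_name: torch::executor::add_out
```

Is it correct that the generated Kernel Registration Library is just the glue that registers these kernels with the runtime, rather than code generated from the model graph itself?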