Integrate TFJS kernels into TVM #4
One item that would be helpful is to provide a nodejs program which outputs the shader; then we can likely take over from there and start some initial integrations.
Follow-up comment: one thing to note is that the kernel can be shape-dependent. So something that could be helpful instead is a variant where we pass in the input shape spec as well; that way the tfjs side will be able to return the related kernels.
@qjia7 please let me know if the new additional shape config would be sufficient for shader dumping.
Thank you @qjia7! This seems to be a great step. It would be super nice to avoid the creation of the tfjs tensor and directly pass in the shape spec; that would enable quite natural integration, as the command above shows.
@tqchen, for the webgpu backend, printing shaders is now behind a flag, WEBGPU_PRINT_SHADER (tensorflow/tfjs#7523). Here are examples.

Print shader in non-model mode: open the page below with URLs like index.html?WEBGPU_PRINT_SHADER=all, index.html?WEBGPU_PRINT_SHADER=binary, or index.html?WEBGPU_PRINT_SHADER=binary,depth:

Print shader in model mode: if you want to try this on a model, you can put this and this under tfjs\e2e\benchmarks\local-benchmark. Set up a web server, then use a URL like:
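To make the flag syntax concrete, the URL-driven selection can be sketched as a small query-string parser. This is a simplified stand-in for tfjs's own flag handling, not its actual implementation; only the flag name WEBGPU_PRINT_SHADER and its values ("all", or a comma-separated list of kernel-name substrings) come from the comment above:

```typescript
// Simplified sketch: extract the WEBGPU_PRINT_SHADER selection from a URL
// query such as index.html?WEBGPU_PRINT_SHADER=binary,depth.
// (tfjs parses flags from the URL in its Environment; this is a stand-in.)
function parsePrintShaderFlag(url: string): string[] {
  const query = url.split("?")[1] ?? "";
  for (const pair of query.split("&")) {
    const [name, value] = pair.split("=");
    if (name === "WEBGPU_PRINT_SHADER" && value !== undefined) {
      // "all" prints every shader; otherwise each entry selects
      // shaders whose kernel name contains that substring.
      return value.split(",");
    }
  }
  return []; // flag absent: print nothing
}

parsePrintShaderFlag("index.html?WEBGPU_PRINT_SHADER=binary,depth");
// returns ["binary", "depth"]
```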
Thank you! Is it possible to install tfjs as a nodejs dependency and print using nodejs? That would allow some native integration for python packages that leverage this.
@tqchen I will try to make this work on node, and will update when there is any progress.
cc @Hzfengsy
@tqchen, if you want a quick try at printing shaders on webgpu-nodejs, I drafted a document here: Please note: currently some webgpu APIs are not fully supported in dawn, so this can only be used to dump shaders; the predict results are invalid. BTW, I will try to see if there is any opportunity to upstream these changes and make the usage simpler.
Actually we don't need to see the prediction result. Instead it would be great to simply get the shaders without running the prediction, or even calling the webgpu API, since we are on the compilation and packaging side.
Hi @tqchen @Hzfengsy, I drafted a design doc about dumping shaders here:
Thank you @axinging, what we want is the ability to get the WGSL shader code without executing it, so effectively a lookup feature. My understanding is that most of the execution contains two parts of logic (that may be coupled together).
Let me use the following code to show some of the intent of the logic:

```typescript
interface InputSpec {
  shapes: Array<Array<number>>;
}

// Get the shader string based on the key and the input shapes (in spec).
function getShader(key: string, spec: InputSpec): string {
  if (/* spec.shapes[0] matches some pattern */) {
    return shader0;
  } else {
    ...
  }
}

function matmul(input: Tensor, w: Tensor) {
  const shader = getShader("matmul", {shapes: [input.shape, w.shape]});
  const output = allocOutput(...);
  // abstract code for compile
  const pipeline = compile(shader);
  ...
  submit(pipeline, inputs, output);
}
```

What we need is the ability to directly call getShader, and to be able to call this function programmatically from nodejs.
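A minimal runnable sketch of this lookup-style API for nodejs follows. All names here (getShader, registerShader, shapeSignature, the registry) are hypothetical illustrations of the proposed interface, not existing tfjs APIs, and the registered WGSL source is a placeholder:

```typescript
// Hypothetical shape-keyed shader lookup, mirroring the interface above:
// shaders are fetched by (kernel key, input shape spec) without ever
// compiling or executing them.
interface InputSpec {
  shapes: number[][];
}

// Registry mapping "key|shapeSignature" to WGSL source. In a real flow
// this would be populated by a dump step; the entry below is a placeholder.
const shaderRegistry = new Map<string, string>();

function shapeSignature(spec: InputSpec): string {
  return spec.shapes.map((s) => s.join("x")).join(";");
}

function registerShader(key: string, spec: InputSpec, wgsl: string): void {
  shaderRegistry.set(`${key}|${shapeSignature(spec)}`, wgsl);
}

// Pure lookup, no prediction and no webgpu API calls.
function getShader(key: string, spec: InputSpec): string {
  const id = `${key}|${shapeSignature(spec)}`;
  const shader = shaderRegistry.get(id);
  if (shader === undefined) {
    throw new Error(`no shader registered for ${id}`);
  }
  return shader;
}

// Example: register a placeholder matmul shader for 2x3 * 3x4 inputs.
registerShader("matmul", { shapes: [[2, 3], [3, 4]] }, "/* WGSL source */");
console.log(getShader("matmul", { shapes: [[2, 3], [3, 4]] }));
```

Because the lookup is shape-keyed, the same kernel name can map to different WGSL variants for different input shapes, which matches the shape-dependence noted earlier in the thread.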
@tqchen is there a way to incorporate the TFJS backend into the TVM runtime instead of relying on the AOT shader copy?
Thanks @pyu10055, the main goal is that we would like to be able to do development through a python env and recompose the solutions. This would be an orthogonal path from the tvmjs backend runtime integration that uses tfjs as a graph exec provider, which I think would also be valuable.
Some kernels in TFJS may further improve the performance of TVM for the time being, and Intel may provide them.