Description
Testing the new experimental feature from PR #5826 (Add functions for parallel compilation), which was recently merged into the main branch.
I'm loading a number of small models and attempting to run pre-compile, and I'm getting errors on all attempts.
I've documented three different failures here:
- compile fails on some models with a seemingly random message (and works fine on other models), such as:
Uncaught (in promise) Error: Pass at least one tensor to tf.stack
- compile completes without errors, but later actual code execution in JS fails (the same code works just fine if there is no pre-compile):
Uncaught (in promise) TypeError: Cannot read properties of null (reading 'A')
    at tfjs.esm.js:47772:27
    at Array.forEach (<anonymous>)
    at runProgram (tfjs.esm.js:47770:10)
    at _MathBackendWebGL.runWebGLProgram (tfjs.esm.js:49796:7)
    at _MathBackendWebGL.uploadToGPU (tfjs.esm.js:49916:40)
This happens in a trivial function that runs tf.image.resizeBilinear followed by tf.div to normalize the input tensor (a sketch of such a function follows this list).
- compile completes without errors, but later model inference fails with the same error as above (the same model executes without issues if there is no pre-compile). The actual backtrace shows that it happens during the execute call, and the kernel op in the model that triggers the error is a simple sub.
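For reference, a minimal sketch of the kind of pre-processing function described in the second failure; the function name, input typing, and the 255 normalization constant are assumptions rather than details from the original report:

// Hypothetical pre-processing helper: resize to the model's expected
// resolution, then divide to normalize pixel values to [0, 1].
function normalizeInput(input: tf.Tensor4D, size: [number, number]): tf.Tensor4D {
  return tf.tidy(() => {
    const resized = tf.image.resizeBilinear(input, size);
    return tf.div(resized, 255) as tf.Tensor4D; // fails after a compile-only pass, per the report
  });
}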
My function that runs pre-compile on all models is:
import * as tf from '@tensorflow/tfjs';
import type { GraphModel } from '@tensorflow/tfjs';

const log = console.log; // stand-in for the app's own logging helper

type Models = Record<string, GraphModel>;

async function runCompile(allModels: Models) {
  const backendType = tf.getBackend();
  // Cast loosely: the parallel-compile methods are experimental and not on the KernelBackend type.
  const webGLBackend = tf.backend() as any;
  if (backendType !== 'webgl' || !webGLBackend || !webGLBackend.checkCompileCompletionAsync) {
    log('compile pass: skip');
    return;
  }
  const models = Object.values(allModels).filter((m) => m !== null) as GraphModel[];
  tf.env().set('ENGINE_COMPILE_ONLY', true);
  const numTensorsStart = tf.engine().state.numTensors;
  for (const model of models) {
    const shape = (model.inputs && model.inputs[0] && model.inputs[0].shape) ? [...model.inputs[0].shape] : [1, 64, 64, 3];
    const dtype = (model.inputs && model.inputs[0] && model.inputs[0].dtype) ? model.inputs[0].dtype : 'float32';
    for (let dim = 0; dim < shape.length; dim++) {
      if (shape[dim] === -1) shape[dim] = dim === 0 ? 1 : 64; // override batch size and any dynamic dimensions
    }
    const tensor = tf.zeros(shape, dtype);
    const res = await model.executeAsync(tensor);
    if (Array.isArray(res)) res.forEach((t) => tf.dispose(t));
    else tf.dispose(res);
    tf.dispose(tensor);
  }
  const kernels = await webGLBackend.checkCompileCompletionAsync(); // same errors if check is moved inside the per-model loop
  webGLBackend.getUniformLocations();
  log('compile pass kernels:', kernels.length); // getting a reasonable value here
  tf.env().set('ENGINE_COMPILE_ONLY', false);
  const numTensorsEnd = tf.engine().state.numTensors;
  if ((numTensorsEnd - numTensorsStart) > 0) log('tensor leak:', numTensorsEnd - numTensorsStart); // no leaks
}
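For context, a hypothetical call site; the model names and URLs below are illustrative, not taken from the original report:

async function main() {
  await tf.setBackend('webgl');
  await tf.ready();
  const allModels: Models = {
    detector: await tf.loadGraphModel('/models/detector/model.json'),
    landmarks: await tf.loadGraphModel('/models/landmarks/model.json'),
  };
  await runCompile(allModels); // warm up shader compilation before the first real inference
}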
Activity
lina128 commented on Apr 14, 2022
Hi @vladmandic, thank you for reporting this. For the parallel compilation experimental feature, we only test with one model; with multiple models, the state may get messed up because of the async call (this line: const res = await model.executeAsync(tensor)). Maybe try using model.execute(); we'd like to know whether it works. Anyway, we are working on some infra improvements that will allow us to track state for each model; when that improvement is done, we will be able to support multiple models.
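A minimal sketch of the suggested change, assuming the rest of runCompile stays as posted above: the awaited executeAsync call in the per-model loop is replaced with the synchronous execute, wrapped in try...catch because models with dynamic or control-flow ops cannot execute synchronously:

// Inside the per-model loop of runCompile, in place of the executeAsync call:
const tensor = tf.zeros(shape, dtype);
try {
  const res = model.execute(tensor); // synchronous: avoids interleaving engine state across models
  if (Array.isArray(res)) res.forEach((t) => tf.dispose(t));
  else tf.dispose(res);
} catch (err) {
  // Models with dynamic/control-flow ops throw here and would still need executeAsync.
  log('compile pass: model requires async execution, skipping', err);
}
tf.dispose(tensor);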
Yup, that does the trick!
And compile definitely speeds up time to first inference: roughly 30% in my tests using simple models. That is VERY useful for web apps where time-to-interactive is critical.
I do wish there was a way to determine ahead of time whether a model can be executed synchronously, instead of wrapping the block in try...catch (I do have an open feature request for that). For reference:
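Absent such an API, a hypothetical probe helper along the lines of the try...catch workaround described above might look like this; the name canExecuteSync and the dummy-shape handling are assumptions:

// Returns true if the model can run synchronously (no dynamic/control-flow ops).
function canExecuteSync(model: GraphModel): boolean {
  const shape = model.inputs?.[0]?.shape ? [...model.inputs[0].shape] : [1, 64, 64, 3];
  for (let i = 0; i < shape.length; i++) {
    if (shape[i] === -1) shape[i] = i === 0 ? 1 : 64; // resolve dynamic dimensions
  }
  const tensor = tf.zeros(shape);
  try {
    const res = model.execute(tensor); // throws for models that require executeAsync
    if (Array.isArray(res)) res.forEach((t) => tf.dispose(t));
    else tf.dispose(res);
    return true;
  } catch {
    return false;
  } finally {
    tf.dispose(tensor);
  }
}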
vladmandic commented on Sep 29, 2022
Any update on supporting models that require async execution?
Or on how to detect in advance whether a model requires async execution in the first place?
SangbumChoi commented on Jan 19, 2023
Any progress update on this parallel compilation?
gaikwadrahul8 commented on May 30, 2023
Hi, @vladmandic
Apologies for the delayed response. We're revisiting our older issues and checking whether they have been resolved, so may I ask whether you're still looking for a solution, or has your issue been resolved?
If the issue still persists with the latest version of TFJS, please let us know, with the error log and a code snippet so we can replicate the issue on our end.
Could you please confirm whether this issue is resolved for you? Please feel free to close the issue if it is. Thank you!
vladmandic commented on May 30, 2023
Yes, this issue is still valid, and there have been no updates from the TFJS team.