torch.compile always re-compiles a function from scratch in a new Python session, which takes a lot of time. I'm wondering if there's a way to cache the compilation result on the file system (the way gcc/clang do) to speed up the development and debugging loop. (See gpt-fast/generate.py, lines 16 to 18 at db7b273.) @Chillee
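For concreteness, here is a minimal sketch of the behavior being described, assuming any PyTorch 2.x build (the function and shapes are invented for illustration): in a fresh Python session the first call always pays the full compile cost again.

```python
import time
import torch

def f(x):
    return torch.sin(x) + torch.cos(x)

compiled_f = torch.compile(f)
x = torch.randn(1024, 1024)

# First call triggers compilation. Re-running this script in a new
# Python session pays this cost again, even though nothing changed.
t0 = time.perf_counter()
compiled_f(x)
print(f"first call (includes compile): {time.perf_counter() - t0:.2f}s")

# Subsequent calls in the same session hit the in-memory compiled code.
t0 = time.perf_counter()
compiled_f(x)
print(f"second call (already compiled): {time.perf_counter() - t0:.4f}s")
```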
This is currently an issue we're aware of, unfortunately. In theory, it's possible to use AOTInductor (https://www.youtube.com/watch?v=w7d4oWzwZ0c) to completely AOT-compile everything, although it's somewhat finicky to use.
We also have some plans to offer an easier way to cache compilation results.
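For anyone who wants to try the AOTInductor route, a rough sketch is below. Caveat: these are private APIs whose names have moved between releases, so treat torch._export.aot_compile and torch._export.aot_load as assumptions that roughly match PyTorch 2.2-2.3; the toy MLP module is invented for illustration.

```python
import torch

class MLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(64, 64)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = MLP().eval()
example_inputs = (torch.randn(8, 64),)

with torch.no_grad():
    # Ahead-of-time compile to a shared library on disk; the returned
    # path can be reused by later sessions without recompiling.
    so_path = torch._export.aot_compile(model, example_inputs)

# In a later session, load the compiled artifact instead of recompiling.
# (aot_load is only available on newer builds.)
runner = torch._export.aot_load(so_path, device="cpu")
out = runner(*example_inputs)
```

The compiled shared library is also loadable from C++, which is the main deployment use case AOTInductor targets.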
To be clear, a number of components are already cached across sessions - Triton autotuning decisions, Inductor compilation results, etc. A warm recompile typically takes me on the order of 30-40 seconds, although we should certainly try to drive that down even further.
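A practical consequence: you can make the warm path more reliable by pointing the on-disk caches somewhere persistent. A small sketch, assuming the TORCHINDUCTOR_CACHE_DIR and TORCHINDUCTOR_FX_GRAPH_CACHE knobs available on recent builds (check your version; the FX graph cache in particular is newer):

```python
import os

# Persist the Inductor/Triton cache; the default lives under
# /tmp/torchinductor_<user>, which the OS may clear between reboots.
os.environ["TORCHINDUCTOR_CACHE_DIR"] = os.path.expanduser("~/.cache/torchinductor")

# On newer builds, also cache whole FX-graph compilations across
# processes, not just individual Triton kernels.
os.environ["TORCHINDUCTOR_FX_GRAPH_CACHE"] = "1"

import torch  # import after setting the env vars so they take effect

f = torch.compile(lambda x: torch.relu(x) * 2)
f(torch.randn(32, 32))  # cold the first time; warmer in later sessions
```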