Cache the compiled result #1186
Comments
Thanks for proposing this! I think it's a trade-off of embedding Taichi into Python -- Taichi JIT-compiles and doesn't generate executable files the way C++ does. We could cache the compiled result in some files. (Nevertheless, maybe we can add it to #677? Ideally, I want to make Taichi compile fast every time, so that caching becomes unnecessary.) Because we added many advanced optimizations recently and #1059 is not done yet, compilation can take a long time. If you'd like to do some benchmarking of compilation time or write some code to contribute to Taichi, that would be great!
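For anyone interested in the benchmarking suggested above, a minimal timing harness could look like the sketch below. Note that `compile_kernel` here is only a placeholder: in real Taichi code, the first invocation of a `@ti.kernel` function is what triggers JIT compilation, so that call is the thing to time.

```python
import time

def compile_kernel():
    # Placeholder workload standing in for the first call to a @ti.kernel
    # function, which is what actually triggers Taichi's JIT compilation.
    return sum(i * i for i in range(100_000))

start = time.perf_counter()
compile_kernel()
first_run = time.perf_counter() - start

start = time.perf_counter()
compile_kernel()
second_run = time.perf_counter() - start

print(f"first run: {first_run:.4f}s, second run: {second_run:.4f}s")
```

With a real kernel, the gap between the first and second run isolates compilation cost from execution cost, since the second call reuses the already-compiled kernel.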
Thanks for all the discussion here. Here's my thought: caching is definitely doable (LLVM module + options -> binary), but let's first finish the performance improvements to the IR passes and then revisit this item. Maybe by that time caching will no longer be necessary, since the IR passes will be fast enough.
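The "LLVM module + options -> binary" mapping mentioned above could be sketched roughly as follows. Everything here is an illustrative assumption, not Taichi's actual implementation: `cache_key`, `BinaryCache`, and the on-disk layout are hypothetical names, and a real cache would hash Taichi's serialized IR/LLVM module rather than a plain string.

```python
import hashlib
import os

def cache_key(module_ir: str, options: dict) -> str:
    # Derive a stable key from the serialized module plus the compile
    # options; changing either one yields a different key.
    h = hashlib.sha256()
    h.update(module_ir.encode("utf-8"))
    h.update(repr(sorted(options.items())).encode("utf-8"))
    return h.hexdigest()

class BinaryCache:
    """Toy on-disk cache mapping a key to a compiled binary blob."""

    def __init__(self, cache_dir: str):
        self.cache_dir = cache_dir
        os.makedirs(cache_dir, exist_ok=True)

    def _path(self, key: str) -> str:
        return os.path.join(self.cache_dir, key + ".bin")

    def get(self, key: str):
        # Return the cached binary, or None on a cache miss.
        try:
            with open(self._path(key), "rb") as f:
                return f.read()
        except FileNotFoundError:
            return None

    def put(self, key: str, binary: bytes) -> None:
        with open(self._path(key), "wb") as f:
            f.write(binary)
```

On a hit, the saved binary would be loaded instead of re-running the IR passes and codegen; since any change to the module or the options produces a different key, a stale binary is never reused.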
Concisely describe the proposed feature
I have tried writing some short snippets with Taichi, and I have to wait about 30 s to 2 min for the code to recompile (depending on the number of lines). It seems that even when the code has not been modified, the Taichi compiler still recompiles it.
I have not yet tried splitting the functions from a single file into multiple source files. Would that help alleviate the problem, the way a C++ compiler caches unmodified sources, or will Taichi recompile all sources regardless?
I'm writing this issue while waiting for compilation :). Is this intended behaviour, or a trade-off of embedding Taichi into Python?