Performance ideas

Sam McCall edited this page Dec 18, 2020 · 1 revision
  • persistent/shared preamble cache
  • overlap IO with parsing by #include-scanning ahead of the parser (and parallelize the IO itself, for VFSes that support it?)
  • module support/inference
  • cache the Sema code-completion result set rather than reparsing on each keystroke (replay the index query, etc.)
  • improve allocation/memory usage: https://reviews.llvm.org/D93452