
Saving and loading cache #2

Closed
aboeglin opened this issue May 9, 2022 · 5 comments
@aboeglin

aboeglin commented May 9, 2022

Hi, I found out about your library by coming across your post here: https://ollef.github.io/blog/posts/query-based-compilers.html.

I'm currently implementing the kind of ad-hoc caches you talk about, so I think it's a good time to evaluate other options. I already have quite a lot of in-memory caches for various steps. What I'm looking at now is having a persistent cache across runs. Is that something that would be viable?

In the section on reverse dependency tracking you basically describe what I'm after. I'm just not certain how that's achieved in practice. Would you mind elaborating a bit, or pointing to code that achieves this in Sixten?

@ollef
Owner

ollef commented May 9, 2022

I don't do reverse dependency tracking across runs, only within a single run, e.g. for the watch mode and the language server, where a long-running process keeps type checking the code when there are changes (and reports back errors etc.).

The relevant code in Sixty is around here.
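For what it's worth, the within-run invalidation idea can be sketched with a plain reverse dependency map. This is only an illustration in stdlib Haskell, not Sixty's actual code; the `Key` type and the `invalidated` function are made up for the sketch:

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

type Key = String

-- For each key, the set of keys that depend on it.
type ReverseDeps = Map.Map Key (Set.Set Key)

-- Transitively collect everything invalidated by a change to the given keys:
-- start from the changed keys and keep following reverse edges until no new
-- keys are reached.
invalidated :: ReverseDeps -> Set.Set Key -> Set.Set Key
invalidated revDeps changed = go changed changed
  where
    go acc frontier
      | Set.null frontier = acc
      | otherwise =
          let next = Set.unions
                [ Map.findWithDefault Set.empty k revDeps
                | k <- Set.toList frontier
                ]
              new = next `Set.difference` acc
          in go (acc `Set.union` new) new
```

When a file changes, everything in `invalidated revDeps changedKeys` gets its cached result dropped (or re-verified), and the rest of the cache can be kept.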

I haven't experimented much with persisting the state to disk, though I do have some code that should allow that in Sixty using the persist library (see instances for DHashMap etc and instances for Query).

So the first idea would be to persist the traces (for reuse in trace verification) and the reverse dependencies to disk (note that this would also require saving the state of the input files at the time of persisting), and to load them from disk the next time the compiler is run. But I have yet to actually do this, so I don't know how large these files would become or how costly they would be to read and write. Please report back if you try this out.
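As a minimal sketch of that persistence idea (using `show`/`read` serialization and a toy fingerprint function in place of the persist library and a real hash; `Entry`, `fingerprint` and `validEntries` are all made-up names for illustration):

```haskell
import qualified Data.Map.Strict as Map

-- A cached result together with the fingerprints of the input files it was
-- computed from, taken at the time the cache was persisted.
data Entry = Entry
  { inputFingerprints :: Map.Map FilePath Int
  , cachedValue :: String
  } deriving (Show, Read)

type Cache = Map.Map String Entry

-- A crude content fingerprint (djb2-style); stands in for a proper hash.
fingerprint :: String -> Int
fingerprint = foldl (\h c -> h * 33 + fromEnum c) 5381

-- Keep only the entries whose recorded input fingerprints still match the
-- current fingerprints of the files on disk.
validEntries :: Map.Map FilePath Int -> Cache -> Cache
validEntries current =
  Map.filter $ \entry ->
    and
      [ Map.lookup file current == Just fp
      | (file, fp) <- Map.toList (inputFingerprints entry)
      ]

saveCache :: FilePath -> Cache -> IO ()
saveCache path = writeFile path . show

-- On startup, load the persisted cache and drop anything whose inputs changed.
loadCache :: FilePath -> Map.Map FilePath Int -> IO Cache
loadCache path current = validEntries current . read <$> readFile path
```

The interesting part is `validEntries`: it is what makes persisting safe across runs, since any entry whose recorded input state no longer matches the files on disk is discarded rather than trusted.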

@aboeglin
Author

aboeglin commented May 9, 2022

Well, that's exactly what I've been playing with for the past week. Right now I only save typed ASTs to disk, but the loading and deserialization are pretty expensive. Obviously, the higher in the tree the rebuild happens, the fewer cached ASTs I need to load and the faster it is. It's still faster than doing type checking and codegen again, though.

@aboeglin
Author

aboeglin commented May 11, 2022

@ollef So I'm in the process of rewriting things to integrate Rock. That'll most likely take a little while until I get it right and can play with persisting the cache. Happy to close the ticket for now.

Also, what would be the best way to ask specific questions about Rock, which might result in documentation contributions?

I have a specific one right now. I temporarily removed import cycle detection, since imports are now handled differently and fetched through Rock. I found memoiseWithCycleDetection in the docs. Is that something that could help there, or would I just need to keep doing it the old way? Except that this time I'd fetch imported modules instead of keeping a self-made cache of already parsed modules.
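In case it helps frame the question: the "old way" of detecting import cycles can be sketched as a plain depth-first search over the import graph that tracks the modules on the current path (this is a generic illustration, separate from whatever memoiseWithCycleDetection does internally; `Module` and `findImportCycle` are made-up names):

```haskell
import qualified Data.Map.Strict as Map
import qualified Data.Set as Set

type Module = String

-- Find an import cycle reachable from the given module, if any, by DFS.
-- `onPath` holds the modules on the current search path; hitting one again
-- means we closed a cycle. (No "done" set, so modules can be revisited;
-- that's fine for a sketch but worth adding for large graphs.)
findImportCycle :: Map.Map Module [Module] -> Module -> Maybe [Module]
findImportCycle imports = go Set.empty []
  where
    go onPath path m
      | m `Set.member` onPath = Just (reverse (m : path))
      | otherwise =
          case [ c
               | Just c <-
                   [ go (Set.insert m onPath) (m : path) d
                   | d <- Map.findWithDefault [] m imports
                   ]
               ] of
            c : _ -> Just c
            [] -> Nothing
```

The returned list spells out the cycle (e.g. `["A", "B", "A"]`), which is handy for error messages.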

@ollef
Owner

ollef commented May 11, 2022

Okay, thanks. It would be interesting to hear about your progress later. 😊

ollef closed this as completed May 11, 2022
@aboeglin
Author

@ollef Will do! I'm very much down that path right now.
