Incremental caching #553
Conversation
Could we use Axum instead of Actix?
Could you share some context about why?
It is faster. I would be willing to get this first implementation merged, and then I can later open a PR to use Axum, along with other Tokio features.
Could you post some screenshots of the speed improvements?
Will do!
Hate to be that guy, but any benchmarks? Sorry!
@Milo123459 I've added numbers for two test cases. For the incremental cache alone (without the layers cache), the goal is to keep cold-build times almost as they are, with minimal overhead, given that we're doing more IO work (starting a web server, sending files to it, and creating a "system file" docker image with them). Note: the incremental-cache overhead will be even lower for builds that take more time to download and install dependencies, since the incremental cache downloads everything in one shot instead of making multiple HTTP calls to multiple sources. Thanks for following up!
Great to see these performance improvements! I think the outstanding review comments need to be addressed and the conflicts fixed before we can merge this.
NOTE: This is an early PoC, feedback is welcome!
When building a docker image, there are two types of cache content that Docker maintains between builds:

- Layer cache
- Incremental cache content (like `/root/.npm`)

Both types of cache content are served/handled by the local Docker daemon. This means that an image built on a specific Docker host needs to be rebuilt on that same host each time to take full advantage of caching.

This PR enables Nixpacks to work more smoothly in a cluster setup (multiple Docker nodes accessing the cache remotely).
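The core mechanism described below boils down to archiving cache directories after a build and restoring them elsewhere. As a minimal shell sketch of that round trip (all paths and the upload endpoint here are illustrative, not the actual Nixpacks implementation):

```shell
set -e
# Stand-in for a cache directory such as /root/.npm (illustrative path)
demo=/tmp/incremental-cache-demo
rm -rf "$demo"
mkdir -p "$demo/root-npm"
echo "cached package data" > "$demo/root-npm/pkg.txt"

# After the build command runs, archive the directory to cache with tar
tar -czf "$demo/root-npm.tar.gz" -C "$demo" root-npm

# In the real flow, the archive is sent to the small HTTP server that
# Nixpacks starts, e.g. (hypothetical endpoint):
#   curl --data-binary @"$demo/root-npm.tar.gz" http://localhost:8000/cache/root-npm

# On a later build, the tar file is copied back and extracted to the right
# location, so build tools find the files without re-downloading them
mkdir -p "$demo/restore"
tar -xzf "$demo/root-npm.tar.gz" -C "$demo/restore"
cat "$demo/restore/root-npm/pkg.txt"   # prints "cached package data"
```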
After the build command runs (for example `npm run build`), the directories that need to be cached (like `/root/.npm`) are archived with `tar` and sent to a small HTTP server that Nixpacks starts to receive these files.

These `tar` files, if available, will be copied back into any newer image build and extracted to the right location inside the image, so build tools can find the files needed for the build without having to re-download them.

From a high level, these are the steps:
- Copy the cached `tar` files, if available, back into the image build (`COPY --from=...`)
- Run `docker import` against the uploaded cache archives to create a "system file" image (an image that won't be started as a container)

Numbers for sample tests:
- Clean build (no remote cache): 2m44s
- Clean build with remote cache (both layer cache + new incremental cache): 1m26s
- Clean build with incremental cache only: 2m46s
Another test case:

- Clean build (no remote cache): 1m23s
- Clean build with remote cache (both layer cache + incremental cache): 0m55s
- Clean build with incremental cache only: 1m29s
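For illustration, the copy-back step (`COPY --from=...`) from the steps above might look roughly like this in a generated Dockerfile. The image name, base image, and build commands here are illustrative assumptions, not the PR's actual output:

```dockerfile
# "System file" image created earlier via `docker import` from the cached
# tar archives; it only carries files and is never started as a container.
# (image name is hypothetical)
FROM example.registry/incremental-cache:latest AS incremental-cache

FROM node:18 AS build
# Restore the cached directory so npm can reuse previously downloaded packages
COPY --from=incremental-cache /root/.npm /root/.npm
WORKDIR /app
COPY . .
RUN npm ci && npm run build
```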