
Conversation

@avallete avallete (Member) commented Nov 1, 2025

What kind of change does this PR introduce?

  • Currently, when running supabase start, each image is pulled only when its service is starting, so each service has to wait until the previous one has started before the next image gets pulled. This changes the logic into two steps (see the sketch after this list):
  1. List all needed services/images and pull them concurrently (with a 1s delay between each to reduce the likelihood of hitting AWS ECR rate limit issues), prioritizing the largest images first.
  2. Actually start the services with docker (now that the images are already there).
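
A minimal sketch of this two-step flow, for illustration only; imageRef, pullImage, and startService are hypothetical placeholders, not the CLI's actual types or helpers:

```go
package pullsketch

import (
	"context"
	"sort"
	"sync"
	"time"
)

// imageRef pairs an image name with a rough size estimate so the largest
// downloads can be started first.
type imageRef struct {
	name       string
	approxSize int64 // bytes, hard-coded estimate per image
}

// pullAllThenStart pulls every image up front (concurrently, largest first,
// staggered by 1s) and only then starts the services one by one.
func pullAllThenStart(ctx context.Context, images []imageRef,
	pullImage, startService func(ctx context.Context, name string) error) error {
	// Phase 1: kick off all pulls, largest image first.
	sort.Slice(images, func(i, j int) bool { return images[i].approxSize > images[j].approxSize })

	var wg sync.WaitGroup
	errs := make([]error, len(images))
	for i, img := range images {
		wg.Add(1)
		go func(i int, name string) {
			defer wg.Done()
			errs[i] = pullImage(ctx, name)
		}(i, img.name)
		time.Sleep(time.Second) // spread out pull starts to dodge registry rate limits
	}
	wg.Wait()
	for _, err := range errs {
		if err != nil {
			return err
		}
	}

	// Phase 2: start the services; every image is already local, so only the
	// container startup cost remains.
	for _, img := range images {
		if err := startService(ctx, img.name); err != nil {
			return err
		}
	}
	return nil
}
```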

On my machine/network, this reduces the "cold start" (no images at all) of supabase start from ~2:43.19 total to ~1:30.33 total.

Note that even with all images already present there is a ~30s incompressible delay to start all the containers. So this only brings the cold start download overhead down from ~2:13 to ~1:00.

@avallete avallete requested a review from a team as a code owner November 1, 2025 10:42
@sweatybridge sweatybridge (Contributor) left a comment

Can we check how docker compose downloads images and import that as a library instead? https://github.com/docker/compose/blob/main/pkg/compose/images.go

@avallete avallete (Member, Author) commented Nov 3, 2025

Can we check how docker compose downloads images and import that as a library instead? https://github.com/docker/compose/blob/main/pkg/compose/images.go

Didn't find anything pull-related in the linked file, but there's this: https://github.com/docker/compose/blob/b80bb0586e42dbb6adc2190cb84881138b61eb53/pkg/compose/pull.go#L312-L331

It seems quite similar to what I've done. I can see they introduce a maxConcurrency limit, but they don't prioritize pulling the largest images first (which makes sense: we know which images we pull and their approximate sizes, they don't).

I'll see if we can re-use this directly rather than the custom implementation.
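
For reference, the bounded-concurrency pattern described above (an errgroup with a worker limit) looks roughly like this; this is a paraphrased sketch, not compose's actual code, and maxConcurrency/pull are placeholder names:

```go
package pullsketch

import (
	"context"

	"golang.org/x/sync/errgroup"
)

// pullWithLimit pulls images with at most maxConcurrency pulls in flight,
// mirroring the bounded-concurrency approach described above.
func pullWithLimit(ctx context.Context, images []string, maxConcurrency int,
	pull func(context.Context, string) error) error {
	g, ctx := errgroup.WithContext(ctx)
	g.SetLimit(maxConcurrency) // cap the number of concurrent pulls
	for _, img := range images {
		img := img // capture loop variable for the closure
		g.Go(func() error {
			return pull(ctx, img)
		})
	}
	// Wait returns the first non-nil error and cancels the shared context.
	return g.Wait()
}
```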

@sweatybridge sweatybridge (Contributor) commented

What about the console logs when pulling images concurrently? Docker compose has a nice display for each image and clears up when downloads complete.

@avallete avallete (Member, Author) commented Nov 3, 2025

What about the console logs when pulling images concurrently? Docker compose has a nice display for each image and clears up when downloads complete.

Handled this with a single spinner showing progress (number of remaining images to pull) and a checkmark at the end to show which images failed/succeeded.
[Screenshots: spinner showing remaining image pulls in progress, and the final per-image success/failure checkmarks]

I've looked at the docker-compose repo; it's not built as a library, and the pull logic seems purely internal, so I think we need to come up with our own implementation here.

@sweatybridge sweatybridge (Contributor) commented

Hmm ok, I will explore a bit more if you don't mind. Their progress logs are something I've always wanted to try.

@avallete avallete (Member, Author) commented Nov 3, 2025

Hmm ok, I will explore a bit more if you don't mind. Their progress logs are something I've always wanted to try.

If it's only the progress bar you're interested in, that may be doable; it seems to be handled in its own separate package in their codebase.

@avallete avallete (Member, Author) commented Nov 3, 2025

Made a new version keeping the concurrency logic in our code (since it's not exposed by compose), but improving the implementation by looking at how compose does it (errgroup, channels).

Also added a 1s delay between the start of each image pull; this greatly reduces the number of "API rate limit" errors encountered, which overall reduces the total pulling time.
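
A hedged sketch of what such a staggered, errgroup-based pull could look like with the Docker SDK (assuming github.com/docker/docker/client and golang.org/x/sync/errgroup; pullStagger and pullImages are illustrative names, not the CLI's actual code):

```go
package pullsketch

import (
	"context"
	"io"
	"time"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
	"golang.org/x/sync/errgroup"
)

const pullStagger = time.Second // delay between the start of each pull

// pullImages starts one goroutine per image, spacing out the pull starts by
// pullStagger to reduce registry rate-limit errors.
func pullImages(ctx context.Context, images []string) error {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return err
	}
	defer cli.Close()

	g, ctx := errgroup.WithContext(ctx)
	for i, ref := range images {
		delay := time.Duration(i) * pullStagger
		ref := ref
		g.Go(func() error {
			// Stagger the start of this pull.
			select {
			case <-time.After(delay):
			case <-ctx.Done():
				return ctx.Err()
			}
			rc, err := cli.ImagePull(ctx, ref, types.ImagePullOptions{})
			if err != nil {
				return err
			}
			defer rc.Close()
			// The pull only completes once the response stream is drained;
			// this is also where progress events could be decoded for logging.
			_, err = io.Copy(io.Discard, rc)
			return err
		})
	}
	return g.Wait()
}
```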

And re-used parts of compose for better logging (docker-compose style).

Also, thanks to this new, more detailed log (time for each image), we can see the largest image (postgres) takes ~1min to download, which matches our total time: container startup from cached images (~30s) + network speed (~1min to download the largest image).

@avallete avallete requested a review from sweatybridge November 3, 2025 10:20
@coveralls commented

Pull Request Test Coverage Report for Build 19032133102

Details

  • 213 of 384 (55.47%) changed or added relevant lines in 3 files are covered.
  • 16 unchanged lines in 2 files lost coverage.
  • Overall coverage decreased (-0.1%) to 54.585%

Changes Missing Coverage       Covered Lines   Changed/Added Lines   %
internal/utils/retry.go        20              24                    83.33%
internal/utils/docker.go       34              116                   29.31%
internal/start/start.go        159             244                   65.16%

Files with Coverage Reduction  New Missed Lines   %
internal/gen/keys/keys.go      5                  12.9%
internal/utils/docker.go       11                 62.55%

Totals Coverage Status
Change from base Build 19030603935: -0.1%
Covered Lines: 6584
Relevant Lines: 12062

💛 - Coveralls
