Pre-fetch API requests #2274

Merged
merged 7 commits into from May 16, 2023
Conversation


@haileys haileys commented May 12, 2023

This PR makes some commands faster by pre-fetching API requests in the background before their results are needed. The commands this PR speeds up are: volumes create (saves 2× RTT), postgres create and import (1× RTT), redis create (1× RTT), and migrate_to_v2 (1× RTT).

I've taken an interesting approach to achieve this without interrupting the existing control flow too much - there is now a new Future type under internal.

There are two ways to spawn a future: future.Spawn(func() { ... }), which spawns a new goroutine to call the passed function, which resolves the future on return; and future.Ready(val), which creates a new future that is immediately resolved with the passed value.
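As a rough sketch, the Future API described here might look like the following (a hypothetical implementation for illustration; the actual internal/future package may differ):

```go
package main

import "fmt"

// Future is a sketch of a value that may not be ready yet.
type Future[T any] struct {
	done chan struct{}
	val  T
	err  error
}

// Spawn runs fn in a new goroutine and resolves the future when fn returns.
func Spawn[T any](fn func() (T, error)) *Future[T] {
	f := &Future[T]{done: make(chan struct{})}
	go func() {
		defer close(f.done)
		f.val, f.err = fn()
	}()
	return f
}

// Ready creates a future that is immediately resolved with val.
func Ready[T any](val T) *Future[T] {
	f := &Future[T]{done: make(chan struct{}), val: val}
	close(f.done)
	return f
}

// Get blocks until the future resolves, then returns its value and error.
func (f *Future[T]) Get() (T, error) {
	<-f.done
	return f.val, f.err
}

func main() {
	a := Spawn(func() (int, error) { return 40 + 2, nil })
	b := Ready("hello")
	v, _ := a.Get()
	s, _ := b.Get()
	fmt.Println(v, s) // prints "42 hello"
}
```

Because any number of callers can invoke Get, the resolved value can be shared freely, unlike a channel whose value is consumed by its first receiver.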

There are two new uses of futures in this PR:

  • prompt.PlatformRegions() - this new function fetches the list of regions from the API and returns a future. This function is memoised, so the HTTP request is only made on the first call, and subsequent calls reuse the same future.

    I added an extra call to this function at the beginning of execution for all commands that eventually call prompt.Region. This means that by the time the command calls prompt.Region, the list of regions has most likely already been fetched. If the request is slow and the future has not yet completed for some reason, prompt.Region simply blocks on the future, just as it previously blocked on the inline request.

  • In the fly volumes create subcommand, AppBasic is now fetched in the background while waiting for the user to confirm the action. This one's pretty straightforward.
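The memoisation described in the first bullet could be sketched roughly like this (all names and the hard-coded region list are stand-ins for illustration, not the real flyctl code):

```go
package main

import (
	"fmt"
	"sync"
)

// regionsFuture is a minimal future holding the regions list.
type regionsFuture struct {
	done    chan struct{}
	regions []string
}

// Get blocks until the fetch completes, then returns the regions.
func (f *regionsFuture) Get() []string {
	<-f.done
	return f.regions
}

var (
	once sync.Once
	fut  *regionsFuture
)

// PlatformRegions starts the fetch on the first call and returns the
// same future on every subsequent call, so the request happens once.
func PlatformRegions() *regionsFuture {
	once.Do(func() {
		fut = &regionsFuture{done: make(chan struct{})}
		go func() {
			defer close(fut.done)
			// Stand-in for the real API request.
			fut.regions = []string{"iad", "syd", "ams"}
		}()
	})
	return fut
}

func main() {
	a := PlatformRegions() // first call kicks off the fetch
	b := PlatformRegions() // subsequent calls reuse the same future
	fmt.Println(a == b, len(a.Get())) // prints "true 3"
}
```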

Would love to hear everyone's thoughts on this approach!! I think there's quite a bit of potential to make more parts of flyctl faster too, this PR is just where I've started off on this :)

@rugwirobaker
Contributor

This is so nice, I can see doing some prefetching in one of these command preparers:

```go
var commonPreparers = []Preparer{
	// ...
}
```

api/http.go
Comment on lines +84 to +86
```go
appFuture := future.Spawn(func() (*api.AppBasic, error) {
	return client.GetAppBasic(ctx, appName)
})
```
Member

hey, the improvements are nice but I have mixed feelings about using the Future/Promise pattern in a Go codebase. Somehow I tend to think it can be solved with Go primitives like a goroutine and a channel.

@haileys haileys (Contributor Author) May 12, 2023

Using a goroutine and a channel was my first thought, but I ran into a few problems with that approach:

  • A channel would only allow for a single receiver of the value. This would necessitate a different approach when pre-loading the list of regions in platform.Region, which could be called any number of times. A future neatly encapsulates the concept of a value that may or may not be ready yet, which can be passed around just like any other Go value.

  • Error handling is clumsy with a single channel. We'd either need to wrap both success type and error type up in a single value (interface{}?) and then pull them out on the other end, or use two channels and select on the other end. Either way, it would be a lot of code that could be error-prone to write out each time. Future's API is difficult to use wrong and lets the logic, not the logistics, shine through.

  • A panicking goroutine would dump a stack trace to the console, but would otherwise leave the waiting goroutines hanging without further special care. There are a few lines of code to deal with this case in internal/future/future.go, and my hope is that a small, neat abstraction can do the right thing for us in one place so we don't have to think about carefully handling this everywhere we'd like to repeat this pattern.
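The panic handling that last bullet describes could look roughly like this (a sketch, not the actual internal/future/future.go code): a recover in the spawned goroutine converts a panic into an error, so waiters are never left hanging.

```go
package main

import "fmt"

// Future is a minimal sketch; only Spawn's panic handling matters here.
type Future[T any] struct {
	done chan struct{}
	val  T
	err  error
}

func Spawn[T any](fn func() (T, error)) *Future[T] {
	f := &Future[T]{done: make(chan struct{})}
	go func() {
		defer close(f.done) // always resolve, even after a panic
		defer func() {
			if r := recover(); r != nil {
				f.err = fmt.Errorf("future panicked: %v", r)
			}
		}()
		f.val, f.err = fn()
	}()
	return f
}

// Get blocks until the future resolves, then returns its value and error.
func (f *Future[T]) Get() (T, error) {
	<-f.done
	return f.val, f.err
}

func main() {
	f := Spawn(func() (int, error) { panic("boom") })
	_, err := f.Get()
	fmt.Println(err) // prints "future panicked: boom"
}
```

Note the defer ordering: the recover runs first (deferred functions run last-in, first-out), then the channel is closed, so Get observes the error.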

Goroutines and channels are useful, but they often aren't the whole picture. For example, goroutines and channels pair very well with WaitGroup when writing code that fans out to multiple worker goroutines. You could implement the logic to keep count of how many goroutines are still running yourself, but doing so in every instance would be error-prone and this is a common enough pattern that the Go designers encapsulated this pattern into a general purpose abstraction.

My hope is that Future can serve as a useful concurrency primitive for us in a way similar to other Go stdlib primitives like WaitGroup or Mutex (in fact, the Future implementation here is essentially just a wrapper over RWMutex). There are cases where Future as a primitive is the best suited tool for the job, and I hope this PR does a good job of demonstrating a few of those cases.

Member

Those are good arguments even if I don't like them :) Ideally I would spend time on a counter proposal but honestly I think the story that matters at this point is to improve user experience and our time should be focused on it. I'm sure we can find a more idiomatic way to express the solution to this problem but for now, I'm fine to let it be until next iteration or forever, who knows. Thanks for the detailed reply!

@kzys kzys (Member) left a comment

"The future is already here, it's just not evenly distributed."

So, I like;

  • Being faster
  • Doing something while waiting for humans is clever. Humans are slow.

I don't like

  • Futures/promises make functions colored. Go didn't take that approach; instead it made the whole language runtime aware of async network I/O.
  • I'm not really sure about the latency wins we could get from this. This won't be the slowest part. How much faster can we go with this pattern?
  • Explicitness/discoverability: "Hey you should actually prefetch this data a few lines above!" could become a frequent ask from reviewers.

@haileys

haileys commented May 13, 2023

Futures/promises make functions colored. Go didn't take that approach; instead it made the whole language runtime aware of async network I/O.

I don't know if I necessarily follow this argument? Here's an example from this PR of calling a function that returns a future and immediately waiting on its value:

```go
regionInfo, err := PlatformRegions(ctx).Get()
```

There's an extra call to .Get() there, which is a consequence of splitting the "start doing a thing" and "wait for the thing" into separate parts, but I think this remains pretty ergonomic?

I'm not really sure about the latency wins we could get from this. This won't be the slowest part. How much faster can we go with this pattern?

Latency to our GraphQL API for me is about 300ms. Every request that we're able to run in the background without the user having to wait for it is a significant improvement to responsiveness.

Explicitness/discoverability: "Hey you should actually prefetch this data a few lines above!" could become a frequent ask from reviewers.

I would love for this to be the case! Being more deliberate about how we make API requests is, I think, the most impactful thing we can do in flyctl to make flyctl faster. A snappier, more responsive flyctl makes for a great user experience.

@kzys

kzys commented May 15, 2023

We are building both flyctl and the Fly.io platform. We could carefully call APIs in the background and make flyctl faster, but this solution only works for us, not for customers who use the platform directly without flyctl. Instead we should make our API faster, make batch calls a first-class citizen (which GraphQL did, but which we might lose as we make fewer calls to web), or something completely different. We should aim for the most impactful thing we can do in our products, not just flyctl.

For the regions call, how about just caching the result instead of introducing futures? The abstraction isn't really used, since sortedRegions immediately resolves the future.

For the app call, you could use errgroup:

```go
var (
	app     *api.AppBasic
	confirm bool
)
eg, ctx := errgroup.WithContext(ctx)
eg.Go(func() error {
	var err error
	app, err = client.GetAppBasic(ctx, appName)
	return err
})
eg.Go(func() error {
	var err error
	confirm, err = confirmVolumeCreate(ctx, appName)
	return err
})
if err := eg.Wait(); err != nil {
	return err
}
if !confirm {
	return nil
}
```

@haileys

haileys commented May 16, 2023

We had a chat about this in Slack and I think I'll just ship it for now, and we can see how it goes. If it turns out to be problematic in practice I'm happy to spend some time figuring out an alternative approach :)

@haileys haileys merged commit 7790a5b into master May 16, 2023
9 checks passed
@haileys haileys deleted the faster-volumes-create branch May 16, 2023 02:32