There are a lot of interesting things we could do with Cloud Run for the Go build system.
The make.bash step would be great, but at Cloud Run's 1 vCPU it takes 4.5 minutes. If they add support for cranking that up, the build time plateaus pretty flatly around 4 vCPUs (1m45s); even 96 CPUs doesn't get it much below 1m15s. So 4 vCPUs would be great. But that's not available.
But we could also wrap each make.bash tool invocation (compile, link, etc.) with a toolexec wrapper and run each step remotely as its own Cloud Run call. We'd need to inject a VFS into the remote call with some private syscall and/or os package hooks, since Cloud Run doesn't permit FUSE or NFS or anything in the kernel. (Its sandbox is gVisor.) Such a wrapper might look like the sketch below.
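For illustration only, here's a minimal sketch of that wrapper, assuming a hypothetical `/exec` endpoint on a Cloud Run service; the URL and JSON request shape are invented, and it punts entirely on the VFS problem (without which the remote side couldn't see the tool's inputs or return its outputs):

```go
// A hypothetical -toolexec wrapper: instead of running the tool locally,
// forward the invocation to a remote Cloud Run service. The endpoint URL
// and request shape here are made up for illustration.
package main

import (
	"bytes"
	"encoding/json"
	"io"
	"log"
	"net/http"
	"os"
)

const execURL = "https://go-buildlet-xyzzy.a.run.app/exec" // hypothetical service

func main() {
	if len(os.Args) < 2 {
		log.Fatal("usage: wrapper <tool> [args...]")
	}
	// Under "go build -toolexec=/path/to/wrapper", os.Args[1] is the tool
	// binary (compile, link, asm, ...) and os.Args[2:] are its arguments.
	payload, err := json.Marshal(struct {
		Tool string   `json:"tool"`
		Args []string `json:"args"`
	}{os.Args[1], os.Args[2:]})
	if err != nil {
		log.Fatal(err)
	}
	resp, err := http.Post(execURL, "application/json", bytes.NewReader(payload))
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	// Relay the tool's output and translate an HTTP failure into a
	// failed build step.
	io.Copy(os.Stdout, resp.Body)
	if resp.StatusCode != http.StatusOK {
		os.Exit(1)
	}
}
```

The go command's real `-toolexec` flag would then run this wrapper with the actual tool binary as its first argument, once per compile/link step.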
So Cloud Run for make.bash is naively too slow today with its 1 vCPU (4.5 minutes), and doing make.bash quickly on Cloud Run requires some time-consuming (but straightforward) work on the VFS and wrappers.
But tests today are easy...
We can do make.bash elsewhere, and then use Cloud Run to massively shard the test execution.
With a make.bash snapshot, Cloud Run's default quota (before asking for it to be raised) permits 1,000 instances. We'd probably need to lock concurrency down to 1 request per container instance, but 1,000 test executions in parallel SGTM. The fan-out side could look something like the sketch below.
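A rough sketch of that fan-out, assuming a hypothetical Cloud Run service (deployed with something like `gcloud run deploy --concurrency=1 --max-instances=1000`) that runs one dist test against a named make.bash snapshot; the URL, form fields, snapshot ID, and test names are all made up:

```go
// Fan test execution out to Cloud Run, one test per request, with a
// semaphore sized to the default 1,000-instance quota. Everything about
// the remote service here is hypothetical.
package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"net/url"
	"sync"
)

const serviceURL = "https://go-test-shard-xyzzy.a.run.app/run" // hypothetical

func runShard(snapshot, test string) error {
	resp, err := http.PostForm(serviceURL, url.Values{
		"snapshot": {snapshot},
		"test":     {test},
	})
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	out, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s failed:\n%s", test, out)
	}
	return nil
}

func main() {
	// Stand-ins for the real dist test names.
	tests := []string{"go_test:archive/tar", "go_test:bytes", "test:0_10"}
	sem := make(chan struct{}, 1000) // match the default 1,000-instance quota
	var wg sync.WaitGroup
	for _, t := range tests {
		t := t // capture loop variable (pre-Go 1.22 semantics)
		wg.Add(1)
		sem <- struct{}{}
		go func() {
			defer wg.Done()
			defer func() { <-sem }()
			if err := runShard("make-snapshot-1234", t); err != nil {
				log.Print(err)
			}
		}()
	}
	wg.Wait()
}
```

With concurrency locked to 1, each request holds its instance for the duration of one test, so the instance quota caps the effective parallelism; the semaphore just keeps us from queueing far past it.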
I hacked up a super rough prototype today that's promising.
> …arding
>
> Even with 10 shards on builders, it still takes about ~2.5 minutes per shard (and getting slower all the time as the test/ directory grows). I'm currently experimenting with massively sharding out testing on Cloud Run (each dist test & normal TestFoo func all running in parallel), and in such a setup, 2.5 minutes is an eternity. I'd like to increase that dist test's sharding from 10 to more like 1,000.
>
> Updates #31834
>
> Change-Id: I8b02989727793b5b5b2013d67e1eb01ef4786e28
> Reviewed-on: https://go-review.googlesource.com/c/go/+/175297
> Reviewed-by: Dmitri Shuralyov <firstname.lastname@example.org>
> Reviewed-by: Ian Lance Taylor <email@example.com>