fix(cloudrun): tail logs from all services concurrently #10058
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Code Review
This pull request adds support for the `--set` flag to the `skaffold delete` command and updates Cloud Run log tailing to operate concurrently, enabling simultaneous streaming for multiple services. The review identifies a critical issue: the pipe writer must be closed on command failure or termination to prevent goroutine leaks. Additionally, it recommends a synchronized writer to prevent log-line interleaving from concurrent output.
Force-pushed e66533f to 90aa39c
Force-pushed 90aa39c to 951325d
streamLog() was called inline (blocking) inside the for-loop iterating over Cloud Run resources, so only the first service yielded by Go's randomized map iteration would get its logs tailed. Move streamLog() into its own goroutine per resource so the loop continues immediately. Also fixes variable shadowing where r, w := io.Pipe() shadowed the method receiver r *runLogTailer (renamed to pr, pw).

Fixes GoogleContainerTools#10057

Signed-off-by: Bogdan Nazarenko <bogdan.nazarenko@outlook.com>
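The goroutine-per-resource pattern the commit describes can be sketched as follows. All names here (`tailAll`, the string-slice services map) are hypothetical stand-ins, not the actual skaffold code; the sketch only illustrates moving the blocking read into a goroutine and the `pr, pw` rename that avoids shadowing a receiver named `r`.

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"sync"
)

// tailAll starts one reader goroutine per service so every service's
// logs stream concurrently, instead of blocking on the first one.
func tailAll(services map[string][]string) []string {
	out := make(chan string)
	var wg sync.WaitGroup
	for name, lines := range services {
		// pr, pw rather than r, w: avoids shadowing a method
		// receiver named r, as in the fix.
		pr, pw := io.Pipe()
		wg.Add(1)
		go func(name string, pr *io.PipeReader) { // goroutine per resource
			defer wg.Done()
			sc := bufio.NewScanner(pr)
			for sc.Scan() { // blocks until the writer side closes
				out <- fmt.Sprintf("[%s] %s", name, sc.Text())
			}
		}(name, pr)
		go func(pw *io.PipeWriter, lines []string) {
			defer pw.Close() // closing the writer ends the reader's loop
			for _, l := range lines {
				fmt.Fprintln(pw, l)
			}
		}(pw, lines)
	}
	go func() { wg.Wait(); close(out) }()
	var got []string
	for line := range out {
		got = append(got, line)
	}
	return got
}

func main() {
	for _, line := range tailAll(map[string][]string{
		"svc-a": {"starting", "ready"},
		"svc-b": {"listening"},
	}) {
		fmt.Println(line)
	}
}
```

Calling the reader inline instead of via `go` reproduces the bug: the scanner loop never returns, so the range over the map never reaches the second service.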
Force-pushed 951325d to c966c16
generally lgtm. waiting for workflows to finish.
Can you take a look at the integration test failures? I ran it twice now and it seems to have failed with the same error.
@Darien-Lin same as with the other PR failures. |
Summary
Fix Cloud Run `--tail` log streaming to show logs from all deployed services concurrently, instead of only one randomly selected service.

Fixes #10057
Problem
When running `skaffold run --tail` with multiple Cloud Run services, only one service's logs were displayed. The service appeared to be chosen randomly on each run.

Root cause: `streamLog()` was called inline (blocking) inside the for-loop that iterates over resources. Since `streamLog()` runs an infinite read loop, the loop never advanced past the first resource. Go's randomized map iteration order made it appear random.

Kubernetes deployments are not affected: the K8s `LogAggregator` already uses `go a.streamContainerLogs(...)` per container.

Changes
- `pkg/skaffold/deploy/cloudrun/log.go`: Move `streamLog()` into its own goroutine per resource so the for-loop continues immediately and starts tailing for all services. Also renamed `r, w := io.Pipe()` to `pr, pw` to fix variable shadowing of the method receiver.
- `pkg/skaffold/deploy/cloudrun/log_test.go`: Added a `TestStreamLogMultipleServicesConcurrently` regression test that verifies multiple concurrent `streamLog` calls both produce output.

Testing
- `cloudrun` package tests pass
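The property the regression test checks, that every concurrently tailed service produces output, can be sketched in a self-contained form. This is not the actual `log_test.go`; `bothServicesLogged` and the two service names are hypothetical, and pipes stand in for the Cloud Run log streams.

```go
package main

import (
	"bufio"
	"fmt"
	"io"
	"sync"
)

// bothServicesLogged starts one reader goroutine per simulated service,
// feeds each pipe a line, and reports whether output from every service
// was observed, the property the regression test needs to hold.
func bothServicesLogged() bool {
	services := []string{"svc-a", "svc-b"}
	seen := make(map[string]bool)
	var mu sync.Mutex
	var wg sync.WaitGroup
	for _, name := range services {
		pr, pw := io.Pipe()
		wg.Add(1)
		go func(name string, pr *io.PipeReader) { // concurrent tailer stand-in
			defer wg.Done()
			sc := bufio.NewScanner(pr)
			for sc.Scan() {
				mu.Lock()
				seen[name] = true
				mu.Unlock()
			}
		}(name, pr)
		go func(name string, pw *io.PipeWriter) {
			defer pw.Close() // close unblocks the scanner goroutine
			fmt.Fprintf(pw, "log line from %s\n", name)
		}(name, pw)
	}
	wg.Wait()
	return seen["svc-a"] && seen["svc-b"]
}

func main() {
	fmt.Println(bothServicesLogged())
}
```

With the pre-fix blocking behavior, the first reader would never return and the second service would never be observed, so a test of this shape fails (or times out) on the old code and passes on the fixed code.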