
Allow combining the `-f` and `-l` flags in `kubectl logs` #67573

Merged
1 commit merged into kubernetes:master on Feb 26, 2019

Conversation

@m1kola
Member

m1kola commented Aug 19, 2018

What this PR does / why we need it:

This PR makes it possible to combine the `-f` and `-l` flags in `kubectl logs` by reading logs from multiple sources simultaneously. It is part of what was requested in #52218 (without the `-c` part).

Example:

kubectl logs -l app=logging_test --all-containers -f

Release note:

It is now possible to combine the `-f` and `-l` flags in `kubectl logs`

/sig cli
/kind feature

@neolit123

Member

neolit123 commented Aug 20, 2018

/ok-to-test

@m1kola

Member Author

m1kola commented Aug 21, 2018

/assign @liggitt

```go
}
go func(request *rest.Request) {
	if err := o.ConsumeRequestFn(request, pipew); err != nil {
		pipew.CloseWithError(err)
```


@liggitt

liggitt Aug 21, 2018

Member

this closes all streams on first error. is that what we want?


@m1kola

m1kola Aug 21, 2018

Author Member

Yes, I think this is the closest to what we currently have. At the moment, it is possible to read from multiple log sources using a command like `kubectl logs -l app=logging_test --all-containers`, but this code will fail on the first error:

```go
for _, request := range requests {
	if err := o.ConsumeRequestFn(request, o.Out); err != nil {
		return err
	}
}
```

So I think the concurrent version should behave in a similar way, at least for now: fail on the first error. We can add something like retries as a separate improvement later. But I'm open to suggestions.

@m1kola m1kola force-pushed the m1kola:52218_watching_selectors branch from 4588280 to 4dd5751 Aug 21, 2018

@m1kola
Member Author

m1kola left a comment

@liggitt, thanks for the review. I've updated the PR (see the fixup commits since your review) and replied to your comments.

Please take another look.

@m1kola

Member Author

m1kola commented Aug 25, 2018

@liggitt tests are passing now. Could you please take another look after the weekend?

Changes since your previous review: ba7cb09...e554912

In these commits:

  • Replaced a channel with a WaitGroup, which makes it possible to close the PipeWriter once all log sources are exhausted
  • Modified DefaultConsumeRequest to make sure it does not interleave sub-line content when running concurrently
  • Covered DefaultConsumeRequest with unit tests

I'll squash the commits before merging.

@liggitt
Member

liggitt left a comment

One comment on error detection/handling for failed writes.

I'm still on the fence about whether we want to support fan-out of long-running requests like this. @kubernetes/sig-cli-pr-reviews @kubernetes/sig-scalability-pr-reviews, any thoughts on that?

@m1kola

Member Author

m1kola commented Sep 3, 2018

/retest

@m1kola

Member Author

m1kola commented Sep 25, 2018

Hi @liggitt, is there anything I can do to move this forward? I'm open to further feedback and ready to discuss alternative approaches to implementing this.

@liggitt

Member

liggitt commented Sep 26, 2018

I'd recommend raising the question with the CLI and scalability teams (in Slack, on their mailing lists, or in one of their SIG meetings) and seeing what feedback you get.

@pwittrock

Member

pwittrock commented Sep 28, 2018

@m1kola please put this on the agenda for the next sig-cli meeting (one week from Wednesday).

@soltysh
Contributor

soltysh left a comment

Let's discuss this in depth at the next SIG meeting on Oct 10th.


@m1kola m1kola force-pushed the m1kola:52218_watching_selectors branch from 1e830ad to 2255ba8 Oct 10, 2018

@m1kola

Member Author

m1kola commented Oct 11, 2018

Hi. Yesterday, after the sig-cli meeting I:

  • e77d1c1 - rebased the PR on top of the master branch
  • 2255ba8 - added a flag that controls the number of concurrent log streams, with a (low) default limit

We also discussed the need for a --prefix flag that will add the source name to the log output. Everyone seemed to agree on handling this in a separate PR (I'll look into it once we are happy with this one).

@m1kola m1kola referenced this pull request Oct 16, 2018

Closed

REQUEST: New membership for m1kola #170


@m1kola m1kola force-pushed the m1kola:52218_watching_selectors branch from 2255ba8 to 028c852 Nov 17, 2018

@m1kola

Member Author

m1kola commented Nov 17, 2018

/retest

@m1kola m1kola force-pushed the m1kola:52218_watching_selectors branch from 028c852 to 745f409 Nov 18, 2018

@m1kola

Member Author

m1kola commented Nov 20, 2018

@soltysh and @seans3 I think I've addressed the feedback you gave during the meeting. Could you please take another look at this PR and let me know if you have any further comments.

I'm now looking into prefixing log lines with source (pod & container name) as we discussed, but it will be a separate PR to keep things simpler for reviewers and for me.

/assign @soltysh
/assign @seans3

@soltysh
Contributor

soltysh left a comment

I left you some comments.

@@ -151,6 +158,7 @@ func NewCmdLogs(f cmdutil.Factory, streams genericclioptions.IOStreams) *cobra.C

```go
cmd.Flags().StringVarP(&o.Container, "container", "c", o.Container, "Print the logs of this container")
cmdutil.AddPodRunningTimeoutFlag(cmd, defaultPodLogsTimeout)
cmd.Flags().StringVarP(&o.Selector, "selector", "l", o.Selector, "Selector (label query) to filter on.")
cmd.Flags().IntVar(&o.MaxFollowConcurency, "max-follow-concurency", o.MaxFollowConcurency, "Specify maximum number of concurrent logs to follow when using by a selector. Defaults to 5.")
```


@soltysh

soltysh Jan 17, 2019

Contributor

I'd propose naming this `max-requests`


@soltysh

soltysh Jan 17, 2019

Contributor

Maximum number of parallel log requests when using selector. Defaults to 5.


@juanvallejo

juanvallejo Jan 17, 2019

Member

We could settle on a name in the middle, maybe something like `max-log-requests`


@soltysh

soltysh Jan 17, 2019

Contributor

sgtm


@m1kola

m1kola Jan 17, 2019

Author Member

@soltysh @juanvallejo I'm not sure about `max-log-requests`: it sounds too vague to me. When you do something like `kubectl logs deployment/some-name`, it will basically create N requests, but kubectl will read them sequentially. (EDIT: fixed the call example. Also, a clarification: N is the number of sources from the deployment.)

Probably it's OK for end users, because they don't care about the number of sequential requests, right?

```go
if o.Follow && len(requests) > 1 {
	if len(requests) > o.MaxFollowConcurency {
		return fmt.Errorf(
			"you are attempting to follow %d log streams, but maximum allowed concurency is %d. Use --max-follow-concurency to increase the limit",
```


@soltysh

soltysh Jan 17, 2019

Contributor

Update when renaming the flag and use comma instead of a dot in that sentence.

```go
}

func (o LogsOptions) concurrentConsumeRequest(requests []rest.ResponseWrapper) error {
	piper, pipew := io.Pipe()
```


@soltysh

soltysh Jan 17, 2019

Contributor

reader and writer are better variable names

```go
	return o.sequentialConsumeRequest(requests)
}

func (o LogsOptions) concurrentConsumeRequest(requests []rest.ResponseWrapper) error {
```


@soltysh

soltysh Jan 17, 2019

Contributor

parallelConsumeRequest

```go
	}(request)
}

go func() {
```


@soltysh

soltysh Jan 17, 2019

Contributor

I think you want to block the main flow until you hear back from the wait group; in other words, this code should not be part of any goroutine.


@m1kola

m1kola Jan 17, 2019

Author Member

`io.Copy` below blocks the main flow. If we do not close the pipe writer (currently `pipew`), it will block the main flow forever (even if the server closed the connection). So we wait for all requests to finish in a separate goroutine and then close the writer; as a result, `io.Copy` stops blocking the main flow.

Or am I missing something?

```go
	return err
}

if err != nil {
```


@soltysh

soltysh Jan 17, 2019

Contributor
```go
if err != nil && err != io.EOF {
    return err
}
return nil
```


@m1kola

m1kola Jan 17, 2019

Author Member

@soltysh `return nil` will stop the for loop after the first iteration. We want to return from the function when the first error appears, but we don't treat `io.EOF` as an error (because it is what we are waiting for). When there is no error, we want to continue the loop.

@juanvallejo

Member

juanvallejo commented Jan 17, 2019

@soltysh @m1kola how about naming the flag something like `max-log-requests`? At least it would be a bit more descriptive about its intention.

@m1kola

Member Author

m1kola commented Jan 20, 2019

/test pull-kubernetes-integration

Allow reading from multiple logs simultaneously
This makes it possible to combine the `-f` and `-l` flags in kubectl logs

@m1kola m1kola force-pushed the m1kola:52218_watching_selectors branch from 77e3c0d to 2a230cc Jan 20, 2019

@m1kola

Member Author

m1kola commented Jan 20, 2019

/retest

@m1kola

Member Author

m1kola commented Jan 20, 2019

@soltysh, please take another look. I've addressed your feedback in the latest changes and in the comments.

And thanks for the review!

@soltysh
Contributor

soltysh left a comment

/lgtm
/approve
@m1kola thanks for your patience 👍

@k8s-ci-robot k8s-ci-robot added the lgtm label Feb 25, 2019

@k8s-ci-robot

Contributor

k8s-ci-robot commented Feb 25, 2019

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: m1kola, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@m1kola

Member Author

m1kola commented Feb 25, 2019

Test failures seem to be unrelated, so let's try rerunning them.
/retest

@k8s-ci-robot k8s-ci-robot merged commit 1ddfd8f into kubernetes:master Feb 26, 2019

19 checks passed

cla/linuxfoundation: m1kola authorized
pull-kubernetes-bazel-build: Job succeeded.
pull-kubernetes-bazel-test: Job succeeded.
pull-kubernetes-cross: Skipped
pull-kubernetes-e2e-gce: Job succeeded.
pull-kubernetes-e2e-gce-100-performance: Job succeeded.
pull-kubernetes-e2e-gce-device-plugin-gpu: Job succeeded.
pull-kubernetes-e2e-kops-aws: Context retired without replacement.
pull-kubernetes-e2e-kubeadm-gce: Skipped
pull-kubernetes-godeps: Skipped
pull-kubernetes-integration: Job succeeded.
pull-kubernetes-kubemark-e2e-gce-big: Job succeeded.
pull-kubernetes-local-e2e: Skipped
pull-kubernetes-local-e2e-containerized: Context retired without replacement.
pull-kubernetes-node-e2e: Job succeeded.
pull-kubernetes-typecheck: Job succeeded.
pull-kubernetes-verify: Job succeeded.
pull-publishing-bot-validate: Skipped.
tide: In merge pool.

@php-coder php-coder referenced this pull request Mar 5, 2019

Open

Episode #25-26 #22

jpetazzo added a commit to jpetazzo/container.training that referenced this pull request Mar 6, 2019
