
Allows to combine the -f and -l flags in kubectl logs #67573

Merged: 1 commit merged into kubernetes:master on Feb 26, 2019

Conversation

@m1kola (Member) commented Aug 19, 2018

What this PR does / why we need it:

This PR allows combining the -f and -l flags in kubectl logs by reading logs from multiple sources simultaneously. It implements part of what was requested in #52218 (everything except the -c part).

Example:

kubectl logs -l app=logging_test --all-containers -f

Release note:

It is now possible to combine the `-f` and `-l` flags in `kubectl logs`

/sig cli
/kind feature

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. sig/cli Categorizes an issue or PR as relevant to SIG CLI. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. kind/feature Categorizes issue or PR as related to a new feature. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Aug 19, 2018
@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Aug 19, 2018
@neolit123 (Member) commented:

/ok-to-test

@k8s-ci-robot k8s-ci-robot removed the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Aug 20, 2018
@m1kola (Member, author) commented Aug 21, 2018

/assign @liggitt

}
go func(request *rest.Request) {
	if err := o.ConsumeRequestFn(request, pipew); err != nil {
		pipew.CloseWithError(err)
@liggitt (Member) commented:

This closes all streams on first error. Is that what we want?

@m1kola (Member, author) replied:

Yes, I think this is the closest to what we currently have. At the moment, it's possible to get multiple log sources using a command like kubectl logs -l app=logging_test --all-containers, but this code will fail on the first error:

for _, request := range requests {
	if err := o.ConsumeRequestFn(request, o.Out); err != nil {
		return err
	}
}

So I think the concurrent version should behave in a similar way: fail on the first error, at least for now. We can add something like retries as a separate improvement, but I'm open to suggestions.
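
For readers skimming the thread, here is a minimal, self-contained sketch (not the PR's code; the fake sources and the error are invented for illustration) of how CloseWithError gives the concurrent version the same fail-fast behaviour: the first goroutine that hits an error closes the shared pipe with that error, and the reading side's io.Copy returns it.

package main

import (
	"errors"
	"fmt"
	"io"
	"strings"
)

func main() {
	reader, writer := io.Pipe()

	// Two fake log sources; the second one fails immediately.
	sources := []func(io.Writer) error{
		func(w io.Writer) error { _, err := io.Copy(w, strings.NewReader("line from pod-a\n")); return err },
		func(io.Writer) error { return errors.New("stream broken") },
	}

	for _, consume := range sources {
		go func(consume func(io.Writer) error) {
			if err := consume(writer); err != nil {
				// CloseWithError makes subsequent Reads on the reader side
				// return this error; the first error wins because later
				// CloseWithError calls do not overwrite it.
				writer.CloseWithError(err)
			}
		}(consume)
	}

	// Fails on the first error, mirroring the sequential loop above.
	if _, err := io.Copy(io.Discard, reader); err != nil {
		fmt.Println("copy failed:", err) // copy failed: stream broken
	}
}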

@m1kola (Member, author) left a comment:

@liggitt, thanks for the review. I've updated the PR with changes (see the fixup commits since your review) and replied to your comments.

Please take another look.

@m1kola (Member, author) commented Aug 25, 2018

@liggitt tests are passing now. Could you please take another look after the weekend?

Changes since your previous review: ba7cb09bcc36c1eb6acf398e9c65d4afefd1df40...e5549129c0db54659711e71317b3b3ed480ae65d

In these commits:

  • A channel was replaced with a WaitGroup, which closes the PipeWriter once all log sources are exhausted
  • DefaultConsumeRequest was modified to make sure it doesn't interleave partial lines when running concurrently (see the sketch below)
  • DefaultConsumeRequest was covered with unit tests

I'll squash the commits before merging this.
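
A standalone sketch of the second bullet (an illustration of the idea, not the PR's DefaultConsumeRequest): each source reads up to a newline and emits the whole line with a single Write. io.Pipe gates parallel Writes sequentially, so concurrent sources can interleave whole lines with each other, but never fragments of a line.

package main

import (
	"bufio"
	"fmt"
	"io"
	"strings"
	"sync"
)

// copyByLine forwards src to dst one complete line per Write call.
func copyByLine(src io.Reader, dst io.Writer) error {
	r := bufio.NewReader(src)
	for {
		line, err := r.ReadBytes('\n')
		if len(line) > 0 {
			// One Write per line: on an io.Pipe this is delivered
			// without another writer's bytes landing in the middle.
			if _, werr := dst.Write(line); werr != nil {
				return werr
			}
		}
		if err == io.EOF {
			return nil // stream exhausted; not a failure
		}
		if err != nil {
			return err
		}
	}
}

func main() {
	reader, writer := io.Pipe()

	var wg sync.WaitGroup
	for _, src := range []string{"a1\na2\n", "b1\nb2\n"} {
		wg.Add(1)
		go func(s string) {
			defer wg.Done()
			copyByLine(strings.NewReader(s), writer)
		}(src)
	}

	// Close the writer once all sources are exhausted (first bullet).
	go func() {
		wg.Wait()
		writer.Close()
	}()

	out, _ := io.ReadAll(reader)
	fmt.Print(string(out)) // whole lines only, in some interleaved order
}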

@liggitt (Member) left a comment:

One comment on error detection/handling on failed writes.

I'm still on the fence about whether we want to support fan-out of long-running requests like this. @kubernetes/sig-cli-pr-reviews @kubernetes/sig-scalability-pr-reviews, any thoughts on that?

@k8s-ci-robot k8s-ci-robot added the sig/scalability Categorizes an issue or PR as relevant to SIG Scalability. label Aug 30, 2018
@m1kola (Member, author) commented Sep 3, 2018

/retest

@m1kola (Member, author) commented Sep 25, 2018

Hi @liggitt, is there anything I can do to move this forward? I'm open to further feedback and ready to discuss alternative approaches to implementing this.

@liggitt (Member) commented Sep 26, 2018

I'd recommend raising the question with the CLI and scalability teams (in Slack, on their mailing lists, or in one of their sig meetings) and seeing what feedback you get.

@pwittrock (Member) commented:

@m1kola please put this on the agenda for the next sig-cli meeting (1 week from Wednesday).

@soltysh (Contributor) left a comment:

Let's discuss this in depth at the next sig meeting on Oct 10th.

@k8s-ci-robot k8s-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Oct 6, 2018
@m1kola (Member, author) commented Oct 11, 2018

Hi. Yesterday, after the sig-cli meeting, I:

  • e77d1c1556aff5d27beae7b6f06cd8bbc156f58b - rebased the PR on top of the master branch
  • 2255ba878e3c3f414ca0a323c2428d7a4e6de1c7 - added a flag that controls the number of concurrent log streams, with a (low) default limit

We also discussed the need for a --prefix flag that will add the source name to the log output. Everyone seemed to agree to handle this in a separate PR (I'll look into it once we are happy with this one).

@m1kola (Member, author) commented Nov 17, 2018

/retest

@m1kola (Member, author) commented Nov 20, 2018

@soltysh and @seans3, I think I've addressed the feedback you gave during the meeting. Could you please take another look at this PR and let me know if you have any further comments?

I'm now looking into prefixing log lines with the source (pod & container name) as we discussed, but it will be a separate PR to keep things simpler for reviewers and for me.

/assign @soltysh
/assign @seans3

@soltysh (Contributor) left a comment:

I left you some comments.

@@ -151,6 +158,7 @@ func NewCmdLogs(f cmdutil.Factory, streams genericclioptions.IOStreams) *cobra.C
cmd.Flags().StringVarP(&o.Container, "container", "c", o.Container, "Print the logs of this container")
cmdutil.AddPodRunningTimeoutFlag(cmd, defaultPodLogsTimeout)
cmd.Flags().StringVarP(&o.Selector, "selector", "l", o.Selector, "Selector (label query) to filter on.")
cmd.Flags().IntVar(&o.MaxFollowConcurency, "max-follow-concurency", o.MaxFollowConcurency, "Specify maximum number of concurrent logs to follow when using by a selector. Defaults to 5.")
Contributor: I'd propose naming this max-requests

Contributor (suggesting help text): "Maximum number of parallel log requests when using selector. Defaults to 5."

Contributor: We could settle for a name in the middle, maybe something like max-log-requests

Contributor: sgtm

@m1kola (Member, author) commented Jan 17, 2019:

@soltysh @juanvallejo I'm not sure about max-log-requests: it sounds too vague to me. When you do something like kubectl logs deployment/some-name, it will basically create N requests, but kubectl will read them sequentially. (EDIT: fixed the call example. Also a clarification: N is the number of sources from the deployment.)

Probably it's OK for end-users, because they don't care about the number of sequential requests, right?
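
For illustration only, assuming the rename discussed in this thread lands and reusing the label from the PR description, raising the limit would look like:

kubectl logs -f -l app=logging_test --max-log-requests 10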

if o.Follow && len(requests) > 1 {
	if len(requests) > o.MaxFollowConcurency {
		return fmt.Errorf(
			"you are attempting to follow %d log streams, but maximum allowed concurency is %d. Use --max-follow-concurency to increase the limit",
Contributor: Update when renaming the flag, and use a comma instead of a dot in that sentence.

}

func (o LogsOptions) concurrentConsumeRequest(requests []rest.ResponseWrapper) error {
	piper, pipew := io.Pipe()
Contributor: reader and writer are better variable names.

	return o.sequentialConsumeRequest(requests)
}

func (o LogsOptions) concurrentConsumeRequest(requests []rest.ResponseWrapper) error {
Contributor: parallelConsumeRequest (suggested name).

		}(request)
	}

	go func() {
Contributor: I think you want to block the main flow until you hear back from the wait group; in other words, this code should not be part of any goroutine.

@m1kola (Member, author) replied:

io.Copy below blocks the main flow. If we do not close the pipe writer (pipew currently), it will block the main flow forever (even if the server closed the connection). So we wait for all requests to finish in a separate goroutine and then close the writer (as a result, io.Copy stops blocking the main flow).

Or am I missing something?
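
To make that argument concrete, here is a minimal runnable sketch of the control flow being described (an assumed structure; names differ from the PR's code):

package main

import (
	"fmt"
	"io"
	"os"
	"strings"
	"sync"
)

func main() {
	reader, writer := io.Pipe()

	var wg sync.WaitGroup
	for _, src := range []string{"pod-a: hello\n", "pod-b: hi\n"} {
		wg.Add(1)
		go func(s string) {
			defer wg.Done()
			// Each source streams into the shared writer.
			io.Copy(writer, strings.NewReader(s))
		}(src)
	}

	// Calling wg.Wait() here on the main goroutine instead would
	// deadlock: io.Pipe is unbuffered, so the writers block until
	// someone reads, and the reader (io.Copy below) never starts.
	go func() {
		wg.Wait()
		writer.Close() // unblocks io.Copy below, which then returns nil
	}()

	// Blocks the main flow until the writer side is closed.
	if _, err := io.Copy(os.Stdout, reader); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}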

return err
}

if err != nil {
Contributor suggested:
if err != nil && err != io.EOF {
    return err
}
return nil

@m1kola (Member, author) replied:

@soltysh return nil will stop the for loop after the first iteration. We want to return from the function when the first error appears, but we don't treat io.EOF as an error (because it is what we are waiting for). In case of no error, we want to continue the loop.
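
A small runnable demo of the subtlety behind that ordering (this shows generic bufio behaviour, not the PR's code): ReadBytes can return data and io.EOF from the same call when the stream ends without a trailing newline, which is why the write has to happen before the error check in the loop under discussion.

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	r := bufio.NewReader(strings.NewReader("final line without newline"))
	line, err := r.ReadBytes('\n')
	// Prints: line="final line without newline" err=EOF
	// Checking err before writing line would drop this last fragment.
	fmt.Printf("line=%q err=%v\n", line, err)
}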

@juanvallejo (Contributor) commented:

@soltysh @m1kola how about naming the flag something like max-log-requests? At least to be a bit more descriptive about its intention

@m1kola (Member, author) commented Jan 20, 2019

/test pull-kubernetes-integration

Which makes it possible to combine the `-f` and `-l` flags in kubectl logs
@m1kola (Member, author) commented Jan 20, 2019

/retest

@m1kola (Member, author) commented Jan 20, 2019

@soltysh please take another look. I addressed your feedback in the latest changes and in comments.

And thanks for the review!

@soltysh (Contributor) left a comment:

/lgtm
/approve
@m1kola thanks for your patience 👍

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Feb 25, 2019
@k8s-ci-robot (Contributor) commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: m1kola, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Feb 25, 2019
@m1kola (Member, author) commented Feb 25, 2019

Test failures seem to be unrelated, so let's try to rerun them.
/retest

@k8s-ci-robot k8s-ci-robot merged commit 1ddfd8f into kubernetes:master Feb 26, 2019
jpetazzo added a commit to jpetazzo/container.training that referenced this pull request Mar 6, 2019
@m1kola m1kola deleted the 52218_watching_selectors branch April 6, 2019 23:13
Labels
  • approved - Indicates a PR has been approved by an approver from all required OWNERS files.
  • area/kubectl
  • cncf-cla: yes - Indicates the PR's author has signed the CNCF CLA.
  • kind/feature - Categorizes issue or PR as related to a new feature.
  • lgtm - "Looks good to me", indicates that a PR is ready to be merged.
  • release-note - Denotes a PR that will be considered when it comes time to generate release notes.
  • sig/cli - Categorizes an issue or PR as relevant to SIG CLI.
  • sig/scalability - Categorizes an issue or PR as relevant to SIG Scalability.
  • size/L - Denotes a PR that changes 100-499 lines, ignoring generated files.