kubectl trace describe #10
Open

fntlnz opened this issue Nov 28, 2018 · 3 comments
Labels: good first issue (Good for newcomers)

fntlnz (Member) commented Nov 28, 2018

Similar to the describe command for normal Kubernetes resources, but for traces.
Nota bene: kubectl trace does not create a custom resource; it leverages existing Kubernetes resources to inspect the target cluster with bpftrace programs, so this command may differ slightly from kubectl describe even though it pursues the same goals.

It should aggregate the events for the resources we create to run a trace.

The status information we will already extract to implement this will be usable to replace the <missing> fields in the get commands here.
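For illustration, a rough client-go sketch of that event aggregation, not the real implementation: the job name kubectl-trace-xyz and the default namespace are hypothetical placeholders for whatever a trace run actually creates.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig, as a kubectl plugin would.
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// List the events whose involved object is the job created for the trace.
	// "kubectl-trace-xyz" and the "default" namespace are hypothetical.
	events, err := client.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.name=kubectl-trace-xyz",
	})
	if err != nil {
		panic(err)
	}
	for _, e := range events.Items {
		fmt.Printf("%s\t%s\t%s\n", e.LastTimestamp, e.Reason, e.Message)
	}
}
```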

fntlnz added the good first issue label on Nov 28, 2018
dalehamel (Member) commented Jan 28, 2019

> does not create a custom resource

Would there be any benefit to doing this? This is a serious question (I don't know enough about what the solution here should be). The paradigm of using custom resources seems to be all the rage right now, though, and in our environment I can think of a few benefits to having traces be a custom resource.

> The status information we will already extract to implement this will be usable to replace the <missing> fields in the get commands here

Yeah, this would be great. Right now, when jobs fail to run, I want to know what their status is and why they failed, and ultimately what's going on with the pods they spawned.
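As a sketch of where that information already lives in the API, assuming client-go: a Job's status conditions carry the failure reason and message, and the spawned pods are reachable through the Job's label selector. The function and job name below are made up for illustration.

```go
package describe

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// reportTraceJobFailure surfaces why a trace job failed: the Job's
// conditions carry reason/message, and its pods carry their phase.
// The name is hypothetical; this is not part of kubectl-trace today.
func reportTraceJobFailure(ctx context.Context, c kubernetes.Interface, ns, jobName string) error {
	job, err := c.BatchV1().Jobs(ns).Get(ctx, jobName, metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, cond := range job.Status.Conditions {
		// A condition of type "Failed" with status "True" explains the failure.
		fmt.Printf("condition %s=%s: %s (%s)\n", cond.Type, cond.Status, cond.Reason, cond.Message)
	}

	// The pods spawned by the job are found through its label selector.
	pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
		LabelSelector: metav1.FormatLabelSelector(job.Spec.Selector),
	})
	if err != nil {
		return err
	}
	for _, p := range pods.Items {
		fmt.Printf("pod %s: phase=%s\n", p.Name, p.Status.Phase)
	}
	return nil
}
```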

fntlnz (Member, Author) commented Jan 28, 2019

@dalehamel I think we can do the status thing very easily without a custom resource. I was just saying that having a custom resource would make it easier to manage the lifecycle of a trace. However, I don't like the idea of forcing users to run some long-lived process solely for kubectl trace, besides the trace they are running. The philosophy of this tool is to just run your traces and give you the results, and it would be cool to avoid any additional complexity for the user; that's why we haven't done any server-side logic yet.

I'm just not 100% sure that can be avoided as this project develops.

dalehamel (Member) commented

> The philosophy of this tool is to just run your traces and give you the results, and it would be cool to avoid any additional complexity for the user; that's why we haven't done any server-side logic yet.

Good to stick to the philosophy of keeping it simple 👍 I think this should be documented somewhere: this mental framework will rule out (or lower the priority of) solutions that would introduce DaemonSets, ConfigMaps, or other server-side resources, in favor of alternatives that can be done client-side.

> I'm just not 100% sure that can be avoided as this project develops.

Better not to introduce it until it's needed / there's a compelling case.

In the meantime, for this issue, I think that narrows the solution space down to walking through the API and constructing a data model to be displayed for the objects one wants described. I believe this is basically what the describe command does elsewhere; for example, describe node includes data from a variety of sources.
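A minimal sketch of that walk with client-go, under the same assumptions as above; TraceDescription and DescribeTrace are hypothetical names, but the traversal (Job, then Pods via the job's label selector, then Events via a field selector on the involved object) uses only standard API calls.

```go
package describe

import (
	"context"

	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// TraceDescription is the client-side data model: everything a
// hypothetical `kubectl trace describe` would show already lives
// in the API server, so no server-side component is needed.
type TraceDescription struct {
	Job    *batchv1.Job
	Pods   []corev1.Pod
	Events []corev1.Event
}

// DescribeTrace walks from the trace's Job to its Pods and their Events,
// mirroring what `kubectl describe` does for built-in resources.
func DescribeTrace(ctx context.Context, c kubernetes.Interface, ns, jobName string) (*TraceDescription, error) {
	job, err := c.BatchV1().Jobs(ns).Get(ctx, jobName, metav1.GetOptions{})
	if err != nil {
		return nil, err
	}

	pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{
		LabelSelector: metav1.FormatLabelSelector(job.Spec.Selector),
	})
	if err != nil {
		return nil, err
	}

	// Gather the events attached to the job and to each of its pods.
	var events []corev1.Event
	names := []string{job.Name}
	for _, p := range pods.Items {
		names = append(names, p.Name)
	}
	for _, name := range names {
		evs, err := c.CoreV1().Events(ns).List(ctx, metav1.ListOptions{
			FieldSelector: "involvedObject.name=" + name,
		})
		if err != nil {
			return nil, err
		}
		events = append(events, evs.Items...)
	}

	return &TraceDescription{Job: job, Pods: pods.Items, Events: events}, nil
}
```

A printer over this model could then mirror the name/status/events layout kubectl describe uses for built-in resources.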
