streaming: support JSON flag #492
Conversation
- client OnExit -> OnDone
- better docstrings, textDecoder
cmd/src/search_stream.go

```go
		panic(err)
	}
}
_, err := w.Write([]byte("{\"Results\":[\n"))
```
why can't we just do line-terminated JSON? I.e. we don't have a single JSON item, we have a JSON item per line. Then it is possible to pipe the output to jq. I.e. it's more like a JSON-encoded log file.
Sure, we could do that, but this would offload parsing work to the user, because they would have to distinguish between match events and metadata events like progress and alerts. If we stream out a single object, it is easier to work with.
EDIT: I guess we just need to decide what the output format is. 1 big JSON object or a log-like file containing possibly thousands of objects, 1 per line.
EDIT: Discussed offline with @keegancsmith; we go with log-like output, one result JSON object per line.
cmd/src/search_stream.go

```go
OnError: func(eventError *streaming.EventError) {
	logError(eventError.Message)
},
OnDone: func() {
```
as I mentioned in the other PR, why don't you just do this after streaming.Decoder returns, rather than creating a streaming.Decoder that writes out JSON? I.e. I would suspect the interface here is that you take in an io.Reader and return an io.Writer. The io.Reader is the search response body, and the io.Writer is os.Stderr (or piped to it) and will write out JSON?
> why don't you just do this after streaming.Decoder returns
Do you mean when ReadAll returns?
IMO the "done" event is an event like any other, and we should be able to hook into it. For example, in this PR I call ReadAll in a totally different place, and it would be strange to close the JSON object there.
Sure, if we decide not to stream out one object but one object per result, then we don't need the hook.
This reverts commit e91a111.
```go
if err != nil {
	return err
}
d = textDecoder(query, t, w)
```
this is nice!
This adds support for `-json` to streaming search. Events are split into individual results and written as JSON to stdout. Each line contains one result with possibly many matches.