
Output items sequence as original bytes #56

Closed
skeggse opened this issue Aug 16, 2021 · 2 comments


skeggse commented Aug 16, 2021

Is your feature request related to a problem? Please describe.

It'd be great if there were a way to rapidly break a large stream of multiple JSON values (i.e. the multiple_values option) into its constituent values. This is pretty useful for use cases where you just need to know, e.g., the number of JSON values in a stream, need to multiplex an incoming stream across threads, or simply want to substring-match against the entire raw JSON value without first interpreting it. As a point of reference, some JSON libraries, like Golang's, support this out of the box: there, you can decode a JSON-containing byte array into a json.RawMessage, which just copies the byte array.

Describe the solution you'd like

I'd like some equivalent to ijson.items that simply produces the original bytes (possibly copied) instead of parsing the items themselves.

Describe alternatives you've considered

If I had full control over the production of these JSON streams, I could require that the output be newline-delimited. At present, this is not the case.

I think the current workaround is to run jq -cM in a subprocess and pipe the stream into jq, which forces sequences like {}{} to be emitted as {}\n{}\n. I could instead try to reserialize the original items, but that doesn't always produce the desired output (and would probably be slower than the jq equivalent). Either way, this is an imperfect solution because it mangles the original bytes, which may not be the desired behavior when searching for item-level substring matches.
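For illustration, a minimal sketch of the jq-based workaround (this assumes jq is on the PATH, and it buffers the whole stream in memory rather than truly streaming it):

import subprocess

# Pipe the raw multi-value stream through `jq -cM .`, which re-emits each
# top-level JSON value compactly on its own line.
proc = subprocess.run(
    ['jq', '-cM', '.'],
    input=b'{"a": 1}{"b": 2}',
    stdout=subprocess.PIPE,
    check=True,
)
for line in proc.stdout.splitlines():
    print(line)  # b'{"a":1}', then b'{"b":2}'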


rtobar commented Aug 19, 2021

Thanks @skeggse for the interesting proposal, and sorry for the delay in this initial response; busy days.

First things first, let me rephrase your idea to make sure I'm understanding correctly. You basically want something like this:

data = b'{}{}'
for raw_json in ijson.new_method_you_want(data):
    ...  # raw_json is b'{}' each time

Is that a fair depiction of what you're looking for?

As you mentioned, a way to achieve this currently is to do something like:

import json
import ijson

data = b'{}{}'
for raw_json in map(json.dumps, ijson.items(data, '', multiple_values=True)):
    ...  # raw_json is '{}' each time

The drawback is that this indeed builds each document fully as a Python object just to dump it back into its string form. In the process you might also lose some information (but not necessarily).
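For example, a round trip through json.loads/json.dumps normalizes the text, so the original bytes are generally not preserved:

import json

original = '{"a": 1.50, "b": 1e3}'
print(json.dumps(json.loads(original)))  # prints: {"a": 1.5, "b": 1000.0}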

From the point of view of ijson and its inner workings, here are some thoughts:

  • In the example above, and the one you mention in your original comment, the top-level documents in the single stream consist of JSON objects. Note however that in general they could be any JSON value; e.g. {} [], [] [], 1 2, true {"a": 2}, etc.
  • The above means that ijson cannot simply look for an opening/closing brace, bracket or the like. Instead, to ensure correct behavior, the original document must be fully parsed; in other words, there are no shortcuts. In particular, also note that although it might work most of the time, using newlines to determine the end of a JSON value is not fully reliable (e.g., {\n} is a valid, single JSON value).
  • To produce individual documents consisting of verbatim copies of the original bytes, we then need to fully parse the document while keeping track of the bytes the parser consumed in the process (this is the key). To begin with, none of the ijson routines is "low-level" enough to offer this information (see the illustration after this list) -- we need to go down to the parser technologies that power our backends.
  • Of those we currently have a few: our own pure-Python parser, the yajl library (versions 1 and 2), and a not-yet-on-the-master-branch Boost.JSON parser.
    • We could change our own pure-Python parser to keep track of input bytes.
    • From memory, the Boost.JSON library might already keep track of this information.
    • But neither version of the yajl parser does, so for most of our backends it would simply be impossible to provide this information.
  • Moreover, even if all underlying parsers exposed this information, it would still require a non-trivial amount of work to add your desired functionality on top of it.
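To illustrate the "low-level" point above: the events that ijson.parse yields carry no byte offsets, so the original spans cannot be recovered from them:

import io
import ijson

# Each event is a (prefix, event_name, value) tuple; nothing records which
# input bytes produced it.
for prefix, event, value in ijson.parse(io.BytesIO(b'{} []'), multiple_values=True):
    print(prefix, event, value)
# start_map, end_map, start_array, end_array -- but never a byte offset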

In summary, I think this is simply not possible given the restrictions imposed by the underlying parsing technologies we use; and even if it were possible, at least for some of the backends, it would be too much effort for little gain.


skeggse commented Nov 10, 2021

Okay, I might just pull in a parser backend. It seems like a relatively simple parser atop a tokenizer would be sufficient. Thanks for your consideration!
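As a sketch of that direction: the standard library's json.JSONDecoder.raw_decode already reports how far each parse consumed, which is enough to slice out the original text of each value. This only works on an in-memory str (a streaming version over bytes would need more work), and iter_raw_values is a hypothetical helper, not part of ijson:

import json

def iter_raw_values(text):
    # Yield each top-level JSON value in `text` as its original substring.
    decoder = json.JSONDecoder()
    idx, end = 0, len(text)
    while idx < end:
        # raw_decode rejects leading whitespace, so skip it manually.
        while idx < end and text[idx] in ' \t\r\n':
            idx += 1
        if idx == end:
            break
        _, stop = decoder.raw_decode(text, idx)
        yield text[idx:stop]
        idx = stop

print(list(iter_raw_values('{}  {\n} [1, 2]')))  # ['{}', '{\n}', '[1, 2]']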

skeggse closed this as completed Nov 10, 2021