Python Lexer Is Excessively Greedy #15

Closed
pydsigner opened this issue Nov 20, 2019 · 7 comments
Labels
enhancement New feature or request

Comments

@pydsigner

ijson.backends.python.Lexer() has a main loop which looks for a number or a single-character lexeme and enters a simple decision tree. If the lexeme starts a string, the rest of the string is read in, with buffer updates as necessary, and then yielded out. If it does not start a string, the Lexer always attempts to extend the lexeme. In general this isn't an issue, but if the file stream is wrapped around a socket, it can lead to significant parser lag and handshake stalemates as both parties wait for the other to transmit another chunk of data:

if lexeme == '"':
    pos = match.start()
    start = pos + 1
    while True:
        try:
            end = buf.index('"', start)
            # Count the backslashes before this quote to see whether
            # it is escaped.
            escpos = end - 1
            while buf[escpos] == '\\':
                escpos -= 1
            if (end - escpos) % 2 == 0:
                start = end + 1  # escaped quote, keep scanning
            else:
                break  # found the closing quote
        except ValueError:
            # No closing quote in the buffer yet: read more data.
            data = f.read(buf_size)
            if not data:
                raise common.IncompleteJSONError('Incomplete string lexeme')
            buf += data
    yield discarded + pos, buf[pos:end + 1]
    pos = end + 1
else:
    # Non-string lexeme: as long as the match runs up to the end of the
    # buffer, keep reading in case the lexeme continues -- this is the
    # greedy behaviour described above.
    while match.end() == len(buf):
        data = f.read(buf_size)
        if not data:
            break
        buf += data
        match = LEXEME_RE.search(buf, pos)
        lexeme = match.group()
    yield discarded + match.start(), lexeme
    pos = match.end()

My guess is that the yajl backends do not share this issue, but I've not pulled up my Linux machine to check.


rtobar commented Nov 24, 2019

Hi @pydsigner,

First of all, thanks for your interest in contributing to ijson!

I finally got around to looking at this issue, and its related PR, in more detail. After looking at it, I have to admit it's still not clear to me what the actual problem is.

In your comment above you say that the Lexer might cause a stalemate because "both parties wait for the other to transmit another chunk of data", but I can't see how this leads to a deadlock, as ijson's python backend (and all backends in general) reads unconditionally until the file object is exhausted. If the file object is a socket, then the other end, which is sending data, will continue to be able to send data, as ijson will continue consuming it.

I was actually expecting a test in your PR to highlight this stalemate scenario somehow, but what I found instead was a test case that checks how many times ijson reads data off a file (which is a different thing). In general it's difficult to map high-level ijson item iterations directly onto file reading operations. ijson reads 16k of data at a time by default, and for a non-empty file it will need to read at least twice: once to read data off the file, and a second time (returning an empty string) to deduce that the input has been consumed. On top of that, the ErroringFile class doesn't obey the general contract of the read() method, which should return an empty string when no more data can be read (instead it raises an error). So, overall, I find the test case bogus, and not really addressing the problem suggested in this issue.
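For reference, a minimal sketch of a file-like object that does honour that contract (the class name and call counting are illustrative, not the actual test code):

class CountingReader:
    """File-like object obeying the read() contract: it returns an empty
    string once the data is exhausted, and counts read() calls."""
    def __init__(self, data):
        self._data = data
        self._pos = 0
        self.read_calls = 0

    def read(self, size=-1):
        self.read_calls += 1
        if size < 0:
            size = len(self._data) - self._pos
        chunk = self._data[self._pos:self._pos + size]
        self._pos += len(chunk)
        return chunk  # '' once everything has been consumed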

> Another note on these tests: they've uncovered something else really oddly ugly; suppressed UnexpectedSymbol errors that only come up if I iterate over items less.
>
> '[1, "abc"' produces said suppressed error if next() is called on backend.python.items() once, but not if it's called twice.

I couldn't reproduce this (I checked out your branch and changed the expected number of iterations to 1 for that case), but I guess it was connected to the bogus implementation of ErroringFile.

With all this said, I don't think I'm going to pull your changes in, as I don't see how the lexer is being a problem. I will leave both the issue and the PR open for now, but with the intention of closing them soon if no good reason is given to keep them alive.


pydsigner commented Nov 24, 2019 via email


rtobar commented Nov 24, 2019

@pydsigner, I think I now understand your issue a bit better.

From your description, it seems you are trying to have ijson consume data (the "credential request" coming from the server?) directly from the client-side socket, but the socket carries more data than just the JSON content over its lifetime. On the other hand, ijson's iteration protocol is based on the idea of hitting the end of the input stream, which finishes all iterations. In other words, from ijson's point of view there is no way to know that bytes read from the stream belong to different message-exchange phases of your client-side protocol.

I assume you have some way of telling how much data is being sent by the server to the client during the initial credential request (if not, you probably have bigger problems!). If that's the case, you could wrap the original client-side socket in a file-like object that reads only up to N bytes off the socket, and then returns an empty bytes string on later read attempts; you can then give this size-aware, file-like object to ijson for it to read from.
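A minimal sketch of what such a wrapper could look like (SizeLimitedReader is an illustrative name, not an ijson class):

class SizeLimitedReader:
    """Reads at most `limit` bytes off a socket, then reports EOF by
    returning an empty bytes string, as the read() contract requires."""
    def __init__(self, sock, limit):
        self._sock = sock
        self._remaining = limit

    def read(self, size=-1):
        if self._remaining <= 0:
            return b''  # pretend EOF so ijson finishes its iteration
        if size < 0 or size > self._remaining:
            size = self._remaining
        data = self._sock.recv(size)
        self._remaining -= len(data)
        return data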

@pydsigner

@rtobar Unfortunately that's still not quite it. There's only one stream of data each way on the connection: an open-ended array containing an indefinite series of objects, each containing a single message. The problem is that unpatched ijson does not emit the end-of-object event until further data is sent, which means in this context that the recipient side is always at least one message behind.

In more concrete form, if I establish a connection and send [{"type": "username_request", "confirmation": "abc123"} from the server, I expect to be able to immediately fetch {"type": "username_request", "confirmation": "abc123"} by iterating over items(..., 'item'), but this is not the case.
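For anyone following along, a sketch that reproduces this stall (SocketReader is an illustrative wrapper; the timeout is only there so the demonstration fails instead of hanging forever):

import socket

import ijson.backends.python as pybackend

class SocketReader:
    """File-like wrapper whose read() returns whatever a single recv()
    yields, decoded to text -- the python backend lexes str data, and
    decoding per chunk is fine for this ASCII-only demo."""
    def __init__(self, sock):
        self._sock = sock

    def read(self, size):
        return self._sock.recv(size).decode('utf-8')

server, client = socket.socketpair()
client.settimeout(2)  # fail loudly instead of blocking indefinitely
server.sendall(b'[{"type": "username_request", "confirmation": "abc123"}')

# With the greedy lexer this times out: after lexing the final '}' the
# Lexer tries to extend the lexeme and waits for bytes that never come.
# With the eager fix it yields the dict immediately.
print(next(pybackend.items(SocketReader(client), 'item')))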

rtobar pushed a commit that referenced this issue Nov 26, 2019
Originally written by Daniel Foerster, adjusted by Rodrigo Tobar to
reflect current situation of object building, and help clarifying what
is causing issues in #15.
@pydsigner

The problem with those tests is that they will pass: once the Lexer gets a zero-length string from the file, it stops. The issue is that socket reads will instead block indefinitely.

rtobar pushed a commit that referenced this issue Nov 26, 2019

rtobar commented Nov 26, 2019

@pydsigner thanks for the extra details in #15 (comment). I finally understood your problem, and your PR makes much more sense now. In summary, you are basically trying to avoid further reads from the input stream as much as possible. I hadn't seen that intent, or the need for it, at first, but with your latest example it became very clear that this is a problem in your use case.

In that context, your change makes complete sense. I already took your new unit tests and adjusted them for eager item construction (see eb63625). They all work as you expect when data is readily available, which is something I wanted to double-check before fully understanding your problem. In the meantime I also found out what was causing the suppressed error messages, which I've now fixed on the master branch. I also took your changes to eagerly yield the single-character lexemes out of the Lexer, and finally added yet another test with a class similar to ErroringFile, which I named SingleReadFile to better reflect what it's meant to do.
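For the record, a sketch of what such a test double might look like (illustrative; the actual SingleReadFile in the test suite may differ):

class SingleReadFile:
    """Returns all its data on the first read(), and fails on any further
    read, asserting that all events for data already read can be pulled
    out of the parser without extra read() calls."""
    def __init__(self, data):
        self._data = data

    def read(self, size=-1):
        if self._data is None:
            raise AssertionError('read() called after data was exhausted')
        data, self._data = self._data, None
        return data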

A couple of things to be noted:

  • The fix makes sense when reading string data, not bytes -- in the latter case, as you noted, the decoder will take over
  • That also means that this fix has no effect in python 2.7 -- which I made more explicit in the tests
  • You are in a sense "lucky" that the first read doesn't actually block -- given the default buffer size of 16k, it could well happen that even the first read blocks until either 16k of data are read or the remote end is closed.
  • This is because, more fundamentally, ijson is designed to exhaust the input stream, which doesn't play nicely with your use case. There is some work in progress to invert this logic, decoupling the I/O from the iterative JSON parsing itself (see the sketch below), but it's not in a publishable state yet.
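Purely as an illustration of that inversion (a toy sketch, not ijson's API): the lexing side becomes a coroutine that is sent chunks as they arrive, instead of a loop that read()s them.

def string_lexer(target):
    """Toy push-style lexer: receives str chunks via send() and pushes
    complete double-quoted string lexemes to `target`. Grossly
    simplified: strings only, no escape handling."""
    buf = ''
    while True:
        buf += (yield)
        while buf.count('"') >= 2:
            start = buf.index('"')
            end = buf.index('"', start + 1)
            target.send(buf[start:end + 1])
            buf = buf[end + 1:]

def printer():
    while True:
        print('lexeme:', (yield))

p = printer()
next(p)  # prime the coroutines
lx = string_lexer(p)
next(lx)
lx.send('["ab')      # incomplete: nothing is emitted yet
lx.send('c", "d"]')  # emits lexeme "abc", then lexeme "d"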

So yes, this is a welcome change, but it doesn't mean you will be fully free of this type of problem.

rtobar added the enhancement label Nov 26, 2019
@rtobar
Copy link

rtobar commented Nov 26, 2019

I see some thumbs up, so I'm closing the issue now, and #16 along with it (although in the end all commits made it into the repository).

rtobar closed this as completed Nov 26, 2019