rethinkdb.errors.RqlClientError: RqlClientError: Token 1 not in stream cache. #2337
Comments
@chrisguidry, thanks for putting together an example. @AtnNn, @Tryneus, could you take a look at this?
I was able to reproduce and I'm testing a possible solution.
Wow, outstanding response time. I'm not near the code right now, but I'll patch my Python driver in the morning and test with our real data.
That patch breaks other tests. I am working on a better fix.
Moving to 1.12.x.
Hi @AtnNn, I see you made that "better fix". How did it work out? We can try applying this patch to our driver if you think it's working. Also, if this is moving to 1.12.x, does that mean we shouldn't expect the fix to be backported to the 1.11 driver?
@chrisguidry The branch is based on the 1.11 driver. I will release a new version of it today.
I have released a new version of the Python driver, 1.11.0-3 (https://pypi.python.org/pypi/rethinkdb). The only change from the previous version is a fix for this issue. The fix causes the driver to ignore "Token X not in stream cache" errors when it has tried to read past the end of a cursor.
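The behavior described here can be sketched roughly as follows. This is a minimal illustration with hypothetical names (`fetch_next_chunk`, `exhausted`), not the actual driver internals: the specific error is swallowed only once the cursor has already been read to the end, and re-raised otherwise.

```python
import re

class RqlClientError(Exception):
    """Stand-in for rethinkdb.errors.RqlClientError."""

def read_next_chunk(cursor):
    """Fetch the next batch, treating a late 'Token X not in stream
    cache' error as end-of-stream when the cursor is already exhausted."""
    try:
        return cursor.fetch_next_chunk()
    except RqlClientError as err:
        # Reading past the end of a finished cursor can hit a token the
        # server has already dropped from its stream cache; per the fix
        # described above, that specific error is treated as EOF.
        if cursor.exhausted and re.search(r"Token \d+ not in stream cache", str(err)):
            return None
        raise
```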
@AtnNn I just upgraded our production system with the new driver and our problem just evaporated. You guys are amazing! Just one more outstanding issue before you can close this ticket: where do we send the beer? Shoot us back an address and your favorite style of beer, and @iloveagent57 has agreed to brew a custom batch. Should be ready in 3 weeks.
Heh, that's awesome, thank you! You really don't have to do that, but in case you do, we'll be delighted to enjoy it responsibly. 156 E Dana St,
@coffeemug --

> Shoot us back an address and your favorite style of beer

Can't close this ticket quite yet <.<
I vote for a stout : )
Chocolate stout or amber gets my vote, but really @AtnNn should choose.
+1
How many folks at the RethinkDB office?
We are 12.
@chrisguidry What a nice offer! I also enjoy stout. I was planning on closing the ticket when these changes get merged into v1.12.x, which theoretically also needs to be fixed (although I have not been able to reproduce the bug in v1.12.x). I am also worried that this bug might be present in the other drivers.
As far as I can tell, this error is not possible in the Ruby and JavaScript drivers.
OK, there's a recipe for Chocolate Coffee Stout I've been meaning to try, so a stout it will be. We'll give you guys a holler when it's ready to go. And as @chrisguidry already said, thanks!
Heh, you guys are awesome, thanks!
We just upgraded a development machine to server version 1.12.4 and Python driver 1.12.0-1. We recreated the "Token X not in stream cache" exception as soon as we finished deployment. It looks like you're working on patching the driver in #2364. Do you think the patch you made for the 1.11 driver would fix this issue for the 1.12 driver?
We just manually applied your "better fix" patch for 1.11 to the 1.12.0-1 driver and this seemed to fix the problem. Any chance you can push a new 1.12 driver package, @AtnNn?
@iloveagent57 While testing the patch I ran into the issues described in #2364 and decided to fix the issue in a different way. I will try to release a new version of the 1.12 driver today.
Alright, thank you (once again) for being amazingly responsive.
@iloveagent57 A new version of the Python driver is available, v1.12.0-2. This new version only keeps one outstanding request to the server, avoiding the "Token X not in stream cache" error altogether.
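The "one outstanding request" strategy can be sketched like this. It is a minimal illustration, not the actual v1.12.0-2 driver code; the connection object and its `continue_query` method are hypothetical. The key point is that the cursor asks for the next batch only after the buffered one is consumed, so there is never a read-ahead request whose token could be evicted mid-stream.

```python
class Cursor:
    """Sketch of a cursor that keeps at most one request in flight."""

    def __init__(self, conn, token, first_batch, done):
        self.conn = conn
        self.token = token
        self.buffer = list(first_batch)
        self.done = done

    def __iter__(self):
        while True:
            while self.buffer:
                yield self.buffer.pop(0)
            if self.done:
                return
            # Only now ask the server for more; with no read-ahead, the
            # token cannot fall out of the server's stream cache while a
            # second request for it is pending.
            batch, self.done = self.conn.continue_query(self.token)
            self.buffer = list(batch)
```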
Following up on our IRC conversation, @iloveagent57 and I were able to reproduce the "Token X not in stream cache..." error.
We're on RethinkDB 1.11.3 using the Python 1.11.0-1 driver.
To reproduce, run this script once with the argument 'create_data'. It will build two tables:
- lookup: a very small table with 10 documents and numerical ids
- big: a table with 1000 documents, each of which has 10 string fields of about 39,000 bytes, a field referring to one of the 10 integer ids from lookup, and an auto-generated id
The run_query function simulates some of our production code, where we are lazy-instantiating lookup values as we iterate queries (that's what all the first=True business is about). For brevity, you can run the script without recreating the data each time.
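Since the script itself isn't attached in this excerpt, here is a condensed sketch of what the description implies. Table names and sizes follow the description above; the field names, the chunked insert, the connection details, and the lazy-lookup shape standing in for the `first=True` logic are all assumptions, not the original code.

```python
import string
import sys

SIZE_OF_FIELD = 750  # copies of string.ascii_letters per field (~39,000 bytes)

def create_data(conn):
    # Imported lazily so the sketch is readable without the driver installed.
    import rethinkdb as r
    r.table_create('lookup').run(conn)
    r.table_create('big').run(conn)
    # lookup: 10 documents with numerical ids
    r.table('lookup').insert([{'id': i} for i in range(10)]).run(conn)
    # big: 1000 documents, each with 10 large string fields, a reference
    # to a lookup id, and an auto-generated id. Inserted one at a time to
    # keep individual request sizes manageable.
    payload = string.ascii_letters * SIZE_OF_FIELD
    for i in range(1000):
        doc = {'text%d' % n: payload for n in range(10)}
        doc['lookup_id'] = i % 10
        r.table('big').insert(doc).run(conn)

def run_query(conn):
    import rethinkdb as r
    lookups = {}
    # Lazily instantiate lookup values while iterating the big cursor.
    for doc in r.table('big').run(conn):
        key = doc['lookup_id']
        if key not in lookups:
            lookups[key] = r.table('lookup').get(key).run(conn)

def main():
    import rethinkdb as r
    conn = r.connect('localhost', 28015)
    if 'create_data' in sys.argv:
        create_data(conn)
    run_query(conn)

# Usage: python repro.py create_data   (first run, builds the tables)
#        python repro.py               (subsequent runs reuse the data)
```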
In our tests, if we do size_of_field=500 (which means 500 copies of the ASCII letters, about 26,000 bytes) the queries run just fine.
When we up that to 750 copies of ASCII (39,000 bytes), it fails after the first loop iteration with this exception: `rethinkdb.errors.RqlClientError: RqlClientError: Token 1 not in stream cache.`
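The byte counts above line up if `size_of_field` multiplies `string.ascii_letters`, which is 52 characters long (an assumption about the script's internals, but consistent with both figures quoted):

```python
import string

# 52 ASCII letters per copy (a-z plus A-Z)
assert len(string.ascii_letters) == 52
assert 500 * len(string.ascii_letters) == 26000  # the size that works
assert 750 * len(string.ascii_letters) == 39000  # the size that fails
```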
We know these document sizes are somewhat pathological, but we have lots of documents storing HTML text content of variable length. It seems that we only see this error for larger streams of larger documents.