cache decrypted content based on record buffer #349
Conversation
Well, all indexes in db2 share the same stream on index update. So you'll only try to decrypt something once and give that to all indexes. It's true that jitdb has its own stream, so there could be something here. This was one of the big differences compared to flume and db1, where each flume index was completely separate.
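To make the shared-stream point concrete, here is a minimal, hypothetical sketch in plain Node.js (not the actual ssb-db2 code): one stream callback decrypts each record once and fans the result out to every index, whereas a separate stream such as jitdb's own scan would have to decrypt the same record again unless the result is cached somewhere shared.

```js
// Hypothetical sketch (plain Node.js, not actual ssb-db2 code): one shared
// stream callback decrypts each record once and hands the result to every
// index, so adding indexes does not add decrypt calls.
function makeSharedIndexUpdater(decrypt, indexes) {
  return function onRecord(record) {
    const decrypted = decrypt(record) // runs once per record, for all indexes
    for (const index of indexes) index.processRecord(decrypted)
  }
}

// Stand-in decrypt that counts how often it runs:
let decryptCalls = 0
const decrypt = (record) => {
  decryptCalls += 1
  return { ...record, content: 'plaintext' }
}

const indexes = [
  { processRecord: () => {} }, // e.g. a keys index
  { processRecord: () => {} }, // e.g. an author index
]

const onRecord = makeSharedIndexUpdater(decrypt, indexes)
onRecord({ offset: 0, content: 'ciphertext' })
console.log(decryptCalls) // => 1, even though two indexes consumed the record
// A separate stream would call decrypt again for the same record unless the
// result were cached.
```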
Well, that's the unsolved mystery: why do I get 4 calls to decrypt? If you want to try it yourself, put a console.log in decrypt.
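For anyone reproducing this, one generic way to instrument the decrypt path (a sketch, not tied to ssb-db2's actual file layout) is to wrap the function with a counter and a stack trace, so each call reports who triggered it:

```js
// Generic instrumentation sketch (names here are placeholders): wrap the
// decrypt function so each call logs a running count and a stack trace
// showing who triggered it.
function countCalls(name, fn) {
  let calls = 0
  return function (...args) {
    calls += 1
    console.log(`${name} call #${calls}`)
    console.trace() // shows whether the caller is the db2 index stream, jitdb, or log.get
    return fn.apply(this, args)
  }
}

// e.g. inside the module under test:
// decrypt = countCalls('decrypt', decrypt)
```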
Strange, I'll have a look.
It's actually jitdb that is doing the extra calls:
The last 2 calls are !streaming, because they are log.get calls.
Ah right, I was instrumenting this and it does publish and then 2 queries for that author, so 3 decrypts is the correct number.
Have you tried
Ahh :) It's because we are posting the same message 4 times. So by the time you get to
How so? If I run only the first test in that file, it has just 1
This seems to be a bug in AAOL related to multiple streams and a half-empty database. I'm going to reproduce this in AAOL and fix it there.
That sounds like a good hypothesis! It does feel to me like there are non-shared streams going on.
Mystery solved. The 4 calls to decrypt were:
Thanks for figuring this out! Indeed, that's the truth. I think we can close this PR, there's nothing else here we can make use of.
This is a just-FYI PR. I ran unit tests (e.g. `test/private.js`) and confirmed that the cache causes decryption to be skipped if it has already been done. In the big picture, though, caching unfortunately doesn't seem to make any difference to performance. In fact, in some cases performance seems to have gotten worse. I don't understand why. :(
I'm leaving this PR here in case I made a dumb mistake somewhere and there is still hope for this optimization technique.
Numbers below
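For context, here is a minimal sketch of the general technique this PR explores, assuming a cache keyed by a hash of the raw record buffer (hypothetical code, not the PR's actual implementation): a record that has already been decrypted by one consumer is served from the cache instead of being decrypted again.

```js
const crypto = require('crypto')

// Memoize a decrypt function, keyed by a hash of the raw record buffer.
// (The keying and eviction details here are assumptions, not the PR's code.)
function makeCachedDecrypt(decrypt, maxEntries = 1000) {
  const cache = new Map() // sha256(recordBuffer) -> decrypted result

  return function cachedDecrypt(recordBuffer) {
    const key = crypto.createHash('sha256').update(recordBuffer).digest('base64')
    if (cache.has(key)) return cache.get(key)

    const result = decrypt(recordBuffer)
    if (cache.size >= maxEntries) {
      // Crude FIFO eviction: Maps preserve insertion order.
      cache.delete(cache.keys().next().value)
    }
    cache.set(key, result)
    return result
  }
}
```

One general caveat with buffer-keyed caches is that computing and looking up the key is itself not free, so the net savings depend on how expensive the underlying decryption is relative to the lookup.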