
Chaining custom commands in cluster mode #876

Closed

Conversation

@r3nat commented May 23, 2019

@luin probably a fix for #536, please take a look

@luin (Collaborator) commented May 23, 2019

It seems the scripts are sent with eval directly, without the bandwidth optimization that evalsha provides?

@r3nat (Author) commented May 23, 2019

Yep, but it seems to work.

@r3nat (Author) commented May 24, 2019

@luin so what do you think? Is it good or bad? Should I change something or add some tests?

@luin (Collaborator) commented May 25, 2019

@r3nat Thank you for the pull request, but I think there are some issues still under discussion.

Above all, sending commands within a pipeline is aimed at improving the network performance. However, if we send custom commands with eval instead of evalsha just to make custom commands work in a cluster pipeline, we incur a nontrivial performance tradeoff, which I don't think is worth it, especially given that Node.js sends commands in a way quite similar to a pipeline (it does not wait for the previous command to be handled before sending the next one).

@r3nat (Author) commented May 27, 2019

@luin

> Above all, sending commands within a pipeline is aimed at improving the network performance.

Pipelines are indeed aimed at that, but transactions are all about atomicity.

Why is it possible to run eval inside a transaction, but not possible to do literally the same thing with custom commands? It should fall back to eval if the evalsha optimization cannot be applied.
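The fallback r3nat describes is the standard evalsha pattern: try EVALSHA first and, on a NOSCRIPT error (the script is not cached on the server), resend the full source with EVAL. A minimal sketch with a stand-in client; the client object and its behavior here are hypothetical, not the ioredis API:

```javascript
// Try EVALSHA first; fall back to EVAL only when the server replies
// with a NOSCRIPT error, meaning the script isn't in its cache yet.
async function runScript(client, sha, source, numKeys, ...args) {
  try {
    return await client.evalsha(sha, numKeys, ...args);
  } catch (err) {
    if (err.message && err.message.startsWith("NOSCRIPT")) {
      // Server doesn't have the script cached; send the full body.
      return client.eval(source, numKeys, ...args);
    }
    throw err;
  }
}

// Stand-in client simulating a server with an empty script cache.
function makeFakeClient() {
  const cache = new Set();
  return {
    async evalsha(sha) {
      if (!cache.has(sha)) throw new Error("NOSCRIPT No matching script");
      return "cached-result";
    },
    async eval(source) {
      return "eval-result";
    },
  };
}
```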

BTW, I've run benchmarks. The evalsha optimization wins slightly for really big scripts (32 KB) or a large number of calls (100) inside a single pipeline. In the other cases it is faster to just run evals. And I tested with Redis running locally; if Redis is on another machine (adding 0.1–0.5 ms of RTT), then bare evals may become better in all cases.

After all, I think custom commands are a great way of thinking about scripts: they become simple Redis commands.

@r3nat (Author) commented Jun 3, 2019

@luin please take a look at the message above.

@luin (Collaborator) commented Jun 4, 2019

Thank you for the benchmark script. It makes a lot of sense.

I updated the script and added cases that send commands without a pipeline (https://gist.github.com/luin/c8fd968f50c0a7abcdc79f0947274729). According to the results, not only should we send commands in a pipeline with eval in cluster mode, but we may also do that when connecting to a standalone Redis server for better performance. However, as the results suggest, sending Lua scripts without a pipeline seems better in terms of performance, so only atomicity makes the pipeline worthwhile.

For this pull request, I'm wondering if there's a better way to handle constructor detection, since there's a known issue with the constructor.name approach. I also think it may be better to have an option like useEvalsha: false to tell Script to use eval directly, to make things more explicit. What do you think? In any case, tests are required for these changes :-).
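The useEvalsha option luin proposes does not exist in ioredis; as a sketch of the intended semantics only, a script wrapper could gate the optimization on the flag. All names below (the Script class, commandFor, inClusterPipeline) are hypothetical illustrations, not the library's actual internals:

```javascript
// Hypothetical sketch: a Script wrapper that skips the EVALSHA
// optimization entirely when useEvalsha is false, making the
// eval-only behavior an explicit opt-in rather than implicit.
class Script {
  constructor(source, { useEvalsha = true } = {}) {
    this.source = source;
    this.useEvalsha = useEvalsha;
  }

  // Decide which command to send for a given execution context.
  commandFor(inClusterPipeline) {
    // In a cluster pipeline, or when explicitly disabled, the
    // EVALSHA optimization is skipped and plain EVAL is sent.
    if (!this.useEvalsha || inClusterPipeline) return "eval";
    return "evalsha";
  }
}
```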

stale bot commented Jul 4, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 7 days if no further activity occurs, but feel free to re-open a closed issue if needed.

@stale stale bot added the wontfix label Jul 4, 2019
@stale stale bot closed this Jul 11, 2019
@RPallas92 commented Jul 29, 2019

Any update on this? @luin, are we going to merge?
