v.2.0.1 proposal #5


Merged 6 commits into master on Jun 5, 2016

Conversation

BridgeAR
Member

@BridgeAR BridgeAR commented Jun 4, 2016

The bufferPool is not working correctly at the moment. The issue is that slice does not copy the buffer but only returns a reference into it.
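A minimal sketch of the aliasing problem being described (not the actual parser code): a `Buffer#slice` is a view onto the same memory, so any leftover data a parser keeps as a slice of the shared pool is silently overwritten when the pool is reused.

```javascript
// Hypothetical illustration of Buffer#slice returning a view, not a copy.
const pool = Buffer.alloc(8, 'A');   // shared bufferPool, filled with 'A'
const leftover = pool.slice(0, 4);   // a parser keeps a reference into the pool
pool.fill('B');                      // another parser reuses the same pool
console.log(leftover.toString());    // 'BBBB', the "leftover" data changed
```

This is exactly why two parsers sharing one pool can interfere with each other.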

Using an individual bufferPool per parser is not an option, as it would significantly increase memory usage if someone uses many clients (there are people who reach the maximum client limit, and in such a case this would mean huge memory usage).

Instead I'm thinking of copying the part that we need for the next chunk. @Salakar can you think of any other solution?
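The proposed approach could look roughly like this (a hypothetical sketch, `keepRemainder` is an illustrative name, not the real function): copy the incomplete tail into a freshly allocated buffer the parser owns, instead of keeping a slice into the shared pool.

```javascript
// Hypothetical sketch: copy the needed part so it is detached from the pool.
function keepRemainder(pool, start, end) {
  const remainder = Buffer.alloc(end - start);
  pool.copy(remainder, 0, start, end); // real copy, not a view
  return remainder;
}

const pool = Buffer.from('partial-reply');
const saved = keepRemainder(pool, 0, 7); // keep 'partial' for the next chunk
pool.fill(0);                            // the shared pool gets reused
console.log(saved.toString());           // still 'partial'
```

The copy costs an allocation, but only for the (usually small) incomplete tail.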

@BridgeAR
Member Author

BridgeAR commented Jun 4, 2016

@Salakar would you mind looking at the code again? I don't like my fix (and the code is ugly), but I have the feeling it's pretty much the best solution in this case. Maybe you can come up with a nicer way to write it though?

@BridgeAR BridgeAR changed the title Add failing test case for interfering parsers v.2.0.1 proposal Jun 4, 2016
@BridgeAR BridgeAR merged commit 8b77169 into master Jun 5, 2016
@jbt

jbt commented Jun 5, 2016

This definitely seems to fix up the issues I was seeing over on gosquared/redis-clustr#10, nice one 👍

@Salakar
Member

Salakar commented Jun 5, 2016

@BridgeAR I can't think of a better way at the moment. A per-parser pool would probably have been the best solution performance-wise, but as you say, the memory usage would be high for people using a high number of clients.

In fairness, no one should really need to max out the number of clients; at most it should be 3-4: one client for commands, one for pub, one for sub, and a couple of blocking clients. I've seen a lot of repos with high client counts going to the same Redis connection in the hopes of "increasing performance", but considering that most Redis ops are atomic and that the parser is synchronous and blocking, this makes no difference in performance, if it doesn't make it worse. But that's not our issue... rant over haha 😅

Does this hit the performance hard then? I imagine so.

@BridgeAR
Member Author

BridgeAR commented Jun 5, 2016

@Salakar in real-world code this does not have a huge impact, as incomplete data is not the default case and we only copy a minimal amount of data. There are ways to optimize this penalty away in almost all cases, but it's more complex and I currently don't have the time to improve this further.
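To make the cost argument concrete, here is a hypothetical sketch (the names `handleChunk` and `parseReplies` and the newline-delimited "protocol" are illustrative, not the real parser): the copy only runs on the rare incomplete-chunk path, while complete replies are parsed in place with no extra allocation.

```javascript
// Dummy "parser": consumes complete newline-terminated replies only.
function parseReplies(buf) {
  const i = buf.lastIndexOf(0x0a); // '\n'
  return i === -1 ? 0 : i + 1;     // number of bytes consumed
}

function handleChunk(state, chunk) {
  const data = state.remainder
    ? Buffer.concat([state.remainder, chunk]) // rare path: stitch leftover + chunk
    : chunk;                                  // common path: parse in place
  const consumed = parseReplies(data);
  // Copy (not slice) the incomplete tail so it survives pool reuse.
  state.remainder = consumed < data.length
    ? Buffer.from(data.slice(consumed))       // Buffer.from(buffer) copies memory
    : null;
}

const state = { remainder: null };
handleChunk(state, Buffer.from('ok\npar'));   // 'ok\n' consumed, 'par' kept
handleChunk(state, Buffer.from('tial\n'));    // completes 'partial\n'
console.log(state.remainder);                 // null, everything consumed
```

Since most chunks contain only complete replies, the extra copy is rarely taken and covers only the small leftover tail.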
