
Optimized autoMemoryFreed loop #3244

Merged
merged 2 commits into redis:unstable from dvirsky:optimize_autoMemoryFreed on Jun 23, 2016

Conversation

@dvirsky (Contributor) commented May 19, 2016

This fixes two issues:

  1. If we have a lot of objects allocated and we're freeing one, it's a better heuristic to start the search from the most recently added elements rather than from the oldest ones, since the element being freed is usually a recent allocation.
  2. The loop didn't break after marking the element as freed, so it kept scanning the rest of the queue and got slower as the element list grew.

In a huge SCAN loop, adding this made it run about 100x faster.
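For illustration, here is a minimal, self-contained sketch of what the two fixes amount to. The field names (amqueue, amqueue_used, REDISMODULE_AM_FREED) are borrowed from the snippet that appears later in this thread; the struct definitions and the function name are simplified stand-ins, not the actual module.c code.

    /* Toy types standing in for the real auto-memory queue fields; only the
     * parts relevant to this discussion are included. */
    typedef struct AutoMemEntry {
        int type;   /* kind of tracked object, or REDISMODULE_AM_FREED */
        void *ptr;  /* the tracked allocation */
    } AutoMemEntry;

    typedef struct Ctx {
        AutoMemEntry *amqueue;
        int amqueue_used;
    } Ctx;

    #define REDISMODULE_AM_FREED 0  /* placeholder value for this sketch */

    /* Mark ptr as freed in the auto-memory queue. */
    static void autoMemoryFreedSketch(Ctx *ctx, int type, void *ptr) {
        /* Fix 1: scan backwards, because the pointer being freed is most
         * often one of the most recently tracked allocations. */
        for (int j = ctx->amqueue_used - 1; j >= 0; j--) {
            if (ctx->amqueue[j].type == type && ctx->amqueue[j].ptr == ptr) {
                ctx->amqueue[j].type = REDISMODULE_AM_FREED;
                /* Fix 2: stop as soon as the entry is found instead of
                 * walking the rest of the queue. */
                return;
            }
        }
    }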

@antirez (Contributor) commented May 19, 2016

Good work Dvir! It's much better to optimize for releasing the last element, not to mention how bad it was to keep traversing the whole queue after the element was already found :-) Will review and merge soon.

@charsyam (Contributor)

+1

@dvirsky (Contributor, Author) commented May 19, 2016

@antirez Following our IRC talk, I'll optimize it a bit further: check the head and tail first, then loop backwards.

@dvirsky (Contributor, Author) commented May 19, 2016

An alternative and more complex approach - #3245

@antirez merged commit 2fe9b79 into redis:unstable on Jun 23, 2016
@antirez (Contributor) commented Jun 23, 2016

Thanks @dvirsky, I merged this simpler implementation because the other looked too complex (it attempts to track the bottom element). However zig-zag scanning, as you suggested, or just checking the last element without bothering to track the bottom one, looks like a good idea and can be implemented in a few lines of code. I'll try to add it and report here in a minute.

@antirez (Contributor) commented Jun 23, 2016

Possible implementation here: f2dbc02
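Roughly, the zig-zag scan described above could look like the sketch below, reusing the toy Ctx type from the earlier sketch. This is only an illustration of the idea, not a claim about what commit f2dbc02 actually does.

    /* Return the queue index of (type, ptr), scanning alternately from the
     * tail and the head so that freeing either the most recent or the oldest
     * tracked allocation is found almost immediately. Returns -1 if the
     * pointer is not tracked. */
    static int autoMemoryZigZagFind(Ctx *ctx, int type, void *ptr) {
        int head = 0, tail = ctx->amqueue_used - 1;
        while (head <= tail) {
            if (ctx->amqueue[tail].type == type && ctx->amqueue[tail].ptr == ptr)
                return tail;
            if (head != tail &&
                ctx->amqueue[head].type == type && ctx->amqueue[head].ptr == ptr)
                return head;
            head++;
            tail--;
        }
        return -1;
    }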

@antirez (Contributor) commented Jun 23, 2016

Oh, I only now see that you switch the elements inside the queue; that's probably no longer needed. I'll check more carefully in a moment.

@dvirsky (Contributor, Author) commented Jun 23, 2016

@antirez looks good to me

@dvirsky deleted the optimize_autoMemoryFreed branch on June 23, 2016 at 07:17
@dvirsky (Contributor, Author) commented Jun 23, 2016

Why is it not needed? You'd still grow the queue unnecessarily if the user allocates and frees constantly.

@antirez (Contributor) commented Jun 23, 2016

Yes, that makes sense, but it can leave holes (not just after your patch, it used to be like that), so I was thinking about adding this:

                 /* Reduce the size of the queue because we either moved the top
                  * element elsewhere or freed it */
                 ctx->amqueue_used--;
+
+                /* Remove empty elements at the end if created by random
+                 * order deallocations. */
+                while(ctx->amqueue_used &&
+                      ctx->amqueue[ctx->amqueue_used-1].type ==
+                      REDISMODULE_AM_FREED)
+                {
+                    ctx->amqueue_used--;
+                }
                 return;
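In other words, the added while loop pops any trailing entries already marked REDISMODULE_AM_FREED, so that amqueue_used always ends just past a live entry and later backward scans don't begin on dead tail slots.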

@antirez (Contributor) commented Jun 23, 2016

My general fear is that the switch reshuffles the array in ways that work against the access pattern.

Example: we usually free the last element allocated, but at some point we free an element in the middle.
When we free the element in the middle we pay the cost of an inner scan of the array. But because we switch the last element with the freed one, we'll pay that price again when the last element is eventually freed, even though it would otherwise have been near the end of the array.

@antirez (Contributor) commented Jun 23, 2016

However, it looks like a tradeoff, because with random deallocations in the middle, swapping instead reduces the queue length and allows for faster scanning.

@dvirsky (Contributor, Author) commented Jun 23, 2016

My approach would never leave holes. If we removed the top element, we just decrease the queue size; otherwise we fill the hole with the top element.

The switching is indeed a tradeoff, but I think in real-world use cases you'll remove either from the top or the bottom, most likely from the top, so the zig-zag scanning takes care of this.
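A tiny sketch of that invariant, again using the toy Ctx type from the first sketch above (illustrative only): after the freed slot j has been located, either it is the top of the queue and the queue simply shrinks, or the current top entry is moved into the hole.

    /* Compact the queue after slot j has been marked freed: move the top
     * entry into the hole (unless j is the top itself), then shrink. The
     * queue therefore never contains holes, but its order gets shuffled. */
    static void amqueueCompactSketch(Ctx *ctx, int j) {
        int top = ctx->amqueue_used - 1;
        if (j != top)
            ctx->amqueue[j] = ctx->amqueue[top];
        ctx->amqueue_used--;
    }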

@antirez (Contributor) commented Jun 23, 2016

Yes, you are right, sorry: basically swapping never leaves holes, but it shuffles the array. However, as you suggest, it only shuffles the array when the access pattern is random to start with, so we would be in O(N) territory anyway. OK, I think we are good, thank you for clarifying.

JackieXie168 pushed a commit to JackieXie168/redis that referenced this pull request Aug 29, 2016
pulllock pushed a commit to pulllock/redis that referenced this pull request Jun 28, 2023