quicklist adjacent small nodes #12856

Open
imchuncai opened this issue Dec 12, 2023 · 0 comments

The following code creates a quicklist with 5 adjacent small nodes, which is a waste of memory:

    TEST("adjacent small nodes") {
        unsigned char *big_string = zmalloc(packed_threshold);
        randstring(big_string, packed_threshold);

        quicklist *ql = quicklistNew(-2, 0);
        quicklistPushHead(ql, "1", 1);
        quicklistPushHead(ql, big_string, packed_threshold);
        quicklistPushHead(ql, "1", 1);
        quicklistPushHead(ql, big_string, packed_threshold);
        quicklistPushHead(ql, "1", 1);
        quicklistPushHead(ql, big_string, packed_threshold);
        quicklistPushHead(ql, "1", 1);
        quicklistPushHead(ql, big_string, packed_threshold);
        quicklistPushHead(ql, "1", 1);
        quicklistDelRange(ql, 1, 1);
        quicklistDelRange(ql, 2, 1);
        quicklistDelRange(ql, 3, 1);
        quicklistDelRange(ql, 4, 1);
        assert(ql->count == 5);
        quicklistIter *iter = quicklistGetIterator(ql, AL_START_HEAD);
        quicklistEntry entry;
        while (quicklistNext(iter, &entry)) {
            assert(entry.longval == 1);
        }
        zfree(big_string);
        quicklistReleaseIterator(iter);
        quicklistRelease(ql);
    }
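
For illustration, here is a minimal, self-contained sketch of the kind of post-delete merge pass that would collapse these nodes. This is not Redis code: the node struct and the MERGE_LIMIT budget are simplified stand-ins for quicklistNode and the fill limit.

    #include <stdio.h>
    #include <stdlib.h>

    #define MERGE_LIMIT 128 /* stand-in for the per-node fill limit */

    typedef struct node {
        struct node *prev, *next;
        int count; /* entries stored in this node */
    } node;

    /* Fold b into a when their combined size fits the budget. */
    static void merge_if_small(node *a, node *b) {
        if (!a || !b || a->count + b->count > MERGE_LIMIT) return;
        a->count += b->count;
        a->next = b->next;
        if (b->next) b->next->prev = a;
        free(b);
    }

    int main(void) {
        /* Build five adjacent single-entry nodes, as in the test above. */
        node *head = NULL, *tail = NULL;
        for (int i = 0; i < 5; i++) {
            node *n = calloc(1, sizeof(*n));
            n->count = 1;
            if (tail) { tail->next = n; n->prev = tail; } else head = n;
            tail = n;
        }
        /* One merge pass collapses them into a single node. */
        while (head->next && head->count + head->next->count <= MERGE_LIMIT)
            merge_if_small(head, head->next);
        printf("%d entries in one node\n", head->count); /* prints 5 */
        free(head);
        return 0;
    }

quicklist already merges on the insert path; the waste here comes from the delete path never attempting an equivalent pass, so the five single-entry nodes survive.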
imchuncai pushed a commit to imchuncai/redis that referenced this issue Dec 12, 2023
Why:
I tried to fix the issues I found earlier, but found myself stuck in a
quagmire because new issues kept coming up while I was fixing the old
ones, so I finally decided to rewrite it.

Issues with the old implementation:
- A node which should be compressed stays raw
  This is due to the poor design of quicklist->recompress: the design
  overlooked the case where a node stays uncompressed because it cannot
  compress small enough. If we later change such a node, we should try
  to compress it again (see the sketch after this list).
  See issue redis#12563.
- The iterator doesn't behave like an iterator
  The iterator is reset and unavailable for further use after a replace
  or insert; see the macro resetIterator(). The only operation that does
  not reset the iterator is quicklistDelEntry(), and it carries a comment
  about this, but the comment is wrong: the iterator may not behave as
  the comment says.
  See issue redis#12614.
- A packed node can violate the size limit
  Certain calls to quicklistReplaceEntry(), quicklistInsertBefore() and
  quicklistInsertAfter() can cause a packed node to violate the size
  limit.
  See issue redis#12548.
- Merging is only performed on insert
  There is no merging on delete or replace, which can leave the
  quicklist with adjacent small nodes.
  See issue redis#12856.
- The algorithm that maintains the compress depth is inefficient
  After an add or delete, the nodes on both sides of the list are
  re-checked, so the time complexity is O(n), where n is the
  uncompressed depth on both sides of the list.
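
A minimal sketch of the first failure mode, using a simplified stand-in struct rather than the real quicklistNode (the attempted_compress member itself is real; the helpers here are illustrative only):

    #include <stdbool.h>

    typedef struct qnode {
        bool compressed;
        bool attempted_compress; /* set when compression didn't pay off */
        int raw_sz;
    } qnode;

    /* Stand-in for the LZF output size: pretend ~half the raw size. */
    static int lzf_size(const qnode *n) { return n->raw_sz / 2; }

    static void try_compress(qnode *n) {
        if (n->attempted_compress) return; /* skip: a past attempt failed */
        if (lzf_size(n) + 8 >= n->raw_sz) { /* require a minimal saving */
            n->attempted_compress = true;
            return;
        }
        n->compressed = true;
    }

    static void node_grew(qnode *n, int bytes) {
        n->raw_sz += bytes;
        /* The flaw: attempted_compress is not cleared here, so
         * try_compress() keeps skipping a node that would now
         * compress fine, and the node stays raw forever. */
    }

Once a small node sets the flag, growing it (node_grew) never clears it, so a node that should now be compressed stays raw.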

All the changes:
- Partition the nodes
  Divide the nodes into three partitions: head, middle and tail. The
  head and tail partitions hold uncompressed nodes, and the middle
  partition holds compressed nodes. The time complexity of maintaining
  the compress depth after adding or deleting a node therefore drops to
  O(1): at most one node moves from one partition to another (see the
  sketch after this list).
- Removed the annoying members recompress, attempted_compress and
  dont_compress from the quicklist node structure
- Merged the quicklistIter and quicklistEntry structures
- Removed the historical parameter packed_threshold
  This was mentioned by @sundb and @oranagra in pull request redis#12568.
- Added a merge strategy
  That is, no two adjacent nodes in the quicklist can be merged.
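
A minimal sketch of the partition idea, with hypothetical names (pnode, plist and push_head() are illustrations, not the patch's actual code). Keeping a boundary pointer means a head push compresses at most the one node that crosses from the head partition into the middle:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct pnode {
        struct pnode *prev, *next;
        bool compressed;
    } pnode;

    typedef struct plist {
        pnode *head, *tail;
        pnode *head_last;  /* deepest raw node of the head partition */
        size_t len, depth; /* assumes depth >= 1 */
    } plist;

    static void compress_node(pnode *n) { n->compressed = true; }

    /* Link n as the new head; at most one node (the old boundary node)
     * crosses into the compressed middle partition: O(1) work. */
    static void push_head(plist *l, pnode *n) {
        n->prev = NULL;
        n->next = l->head;
        if (l->head) l->head->prev = n; else l->tail = n;
        l->head = n;
        l->len++;

        if (l->len <= l->depth) {       /* no boundary yet: all raw */
            l->head_last = l->tail;
            return;
        }
        pnode *crossing = l->head_last;  /* was at index depth-1, now depth */
        l->head_last = crossing->prev;   /* boundary slides toward the head */
        if (l->len > 2 * l->depth)       /* crossing leaves both raw zones */
            compress_node(crossing);
    }

The tail partition would be maintained symmetrically with its own boundary pointer, and deletion moves at most one node the other way (decompressing it), so compress-depth maintenance stays O(1) per operation.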