
Free tuples on space drop/truncate in background fiber #3408

Closed
locker opened this issue May 18, 2018 · 1 comment
Assignees: locker
Labels: ddl, feature (A new functionality), memtx
Milestone: 1.10.1

Comments

locker (Member) commented May 18, 2018

Deleting all tuples stored in a memtx space on drop/truncate can block the tx thread for seconds. We can avoid that by delegating this work to a background fiber.

@locker locker added feature A new functionality ddl memtx labels May 18, 2018
@kyukhin kyukhin added this to the 1.10.1 milestone May 18, 2018
locker added a commit that referenced this issue May 20, 2018
When a memtx space is dropped or truncated, we have to unreference all
tuples stored in it. Currently, we do it synchronously, thus blocking
the tx thread. If a space is big, the tx thread may remain blocked for
several seconds, which is unacceptable. This patch makes drop/truncate
hand the actual work off to a background fiber.

Before this patch, drop of a space with 10M 64-byte records took more
than 0.5 seconds. After this patch, it takes less than 1 millisecond.

Closes #3408
@locker locker self-assigned this May 20, 2018
locker added a commit that referenced this issue May 21, 2018
locker (Member, Author) commented May 21, 2018

Fixed by 2a1482f

@locker locker closed this as completed May 21, 2018
locker added a commit that referenced this issue May 21, 2018
Currently, space_vtab::destroy() only frees the engine-specific space
struct, while the base space is destroyed by space_delete(). In order to
free tuples asynchronously from space_vtab::destroy() (see the next
patch), we need to let the engine decide when the base space is destroyed.

Follow-up #3408
locker added a commit that referenced this issue May 21, 2018
The primary goal of this is to simplify the merge to 2.0, where we have
ephemeral spaces. For ephemeral spaces we don't want to use a background
fiber for freeing tuples, as this may deplete available memory in the
case of a series of complex SQL SELECT statements. In index_vtab::destroy,
where the async code resides now, we can't differentiate between ephemeral
and normal spaces, so let's move it to space_vtab::destroy.

Follow-up #3408
locker added a commit that referenced this issue May 21, 2018
The primary goal of this is to simplify merge to 2.0, where we have
ephemeral spaces. For ephemeral spaces we don't want to use background
fiber for freeing tuples, as this may deplete available memory in case
of a series of complex SQL SELECT statements. In index_vtab::destroy,
where async code resides now, we can't differentiate between ephemeral
and normal spaces, so let's move it to space_vtab::destroy.

Follow-up #3408
locker added a commit that referenced this issue May 22, 2018
When a memtx space is dropped or truncated, we delegate freeing tuples
stored in it to a background fiber so as not to block the caller (and tx
thread) for too long. It turns out this doesn't work well for ephemeral
spaces, which share the destruction code with normal spaces: the problem
is that the user might issue a lot of complex SQL SELECT statements that
create a lot of ephemeral spaces and do not yield, and hence don't give
the garbage collection fiber a chance to clean up. There's a test that
emulates this, 2.0:test/sql-tap/gh-3083-ephemeral-unref-tuples.test.lua.
For this test to pass, let's run the garbage collection procedure on
demand, i.e. whenever any of the memtx allocation functions fails to
allocate memory.

Follow-up #3408
locker added a commit that referenced this issue May 22, 2018
locker added a commit that referenced this issue May 22, 2018
locker added a commit that referenced this issue May 24, 2018
@kyukhin kyukhin added the tmp label Oct 9, 2018
@locker locker removed the tmp label Nov 28, 2018