feat(AclFamily): add acl deluser #1773
Conversation
kostasrim
commented
Aug 30, 2023
- add acl deluser command
- add tests
src/server/acl/acl_family.cc
Outdated
auto connection = static_cast<facade::Connection*>(conn);
auto ctx = static_cast<ConnectionContext*>(connection->cntx());
if (ctx && ctx->authed_username == user) {
  // ctx->CancelBlocking();
How do we abort already-existing transactions? This seems to be done in two ways:
- for blocking transactions, via CancelBlocking
- from Transaction::ScheduleInternal:632, which I am not 100% sure how it works

Is there a generic way to do this? If not, I will need to investigate and figure out the details.
There is currently no way, as far as I know 😳 .... 😅 We also don't support force stop (ctrl+z twice). Why? Because there is no mechanism in place to stop running commands.
Try executing this script; there is no way of stopping it:

for i = 0, 100000 do
  redis.call('INCR', 'i')
end
It seems that most commands (short and quickly locking) are not an issue and don't need to be specifically aborted. If they're heavy (script or multi/exec) or hold expensive locks (placed in the transaction queue), it might make sense.
We can keep it as a to-do for the near future. We might need to add an abort flag to the transaction and then check it in the right places & remove the transaction from the scheduled transactions queue.
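The abort-flag idea could look roughly like the following. This is a hypothetical Python sketch, not Dragonfly's actual API (Dragonfly is C++, and `Transaction`, `aborted`, and the scheduler queue here are all illustrative names): aborting sets the flag, removes the transaction from the scheduled queue if it hasn't started, and running code bails out at its next check point.

```python
from collections import deque
from dataclasses import dataclass, field
import threading

@dataclass
class Transaction:
    name: str
    # Cooperative abort flag, checked at safe points during execution.
    aborted: threading.Event = field(default_factory=threading.Event)

class Scheduler:
    def __init__(self):
        self.queue = deque()  # scheduled, not-yet-running transactions

    def schedule(self, tx):
        self.queue.append(tx)

    def abort(self, tx):
        # Set the flag so running code can bail out at its next check point...
        tx.aborted.set()
        # ...and remove it from the scheduled-transactions queue if still pending.
        try:
            self.queue.remove(tx)
        except ValueError:
            pass  # already dequeued and running; only the flag can stop it now

    def run_one(self):
        tx = self.queue.popleft()
        for step in range(3):  # stand-in for a multi-step command
            if tx.aborted.is_set():
                return f"{tx.name}: aborted at step {step}"
            # ... do the real work for this step here ...
        return f"{tx.name}: committed"
```

Setting the flag and dequeuing are separate on purpose: a transaction that has already left the queue can only be stopped cooperatively, at the points where the flag is checked.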
We can keep it as a to-do for the near future. We might need to add an abort flag to the transaction and then check it in the right places & remove transaction from scheduled transactions queue
I can take care of that.
My main concern for now is that if we release the connection_context (since we are shutting down the connection), we will end up accessing a released connection context, which means we fly into the realm of UB :D
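The lifetime concern can be framed as an ownership question: if the transaction only borrows the connection's pointer to the context, closing the connection leaves it dangling; if the transaction takes shared ownership (a `shared_ptr` in C++ terms), the context outlives the connection. A hypothetical Python sketch, using `weakref` to stand in for a borrowed pointer (CPython's reference counting frees the context immediately here; all class and field names are illustrative):

```python
import weakref

class ConnectionContext:
    def __init__(self, user):
        self.authed_username = user

class Connection:
    def __init__(self, user):
        self.cntx = ConnectionContext(user)

    def shutdown(self):
        self.cntx = None  # the connection drops its reference on close

# Borrowed pointer: the "transaction" keeps only a weak reference.
conn = Connection("alice")
weak_ctx = weakref.ref(conn.cntx)
conn.shutdown()
assert weak_ctx() is None  # dangling: the context died mid-transaction

# Shared ownership: the "transaction" holds its own strong reference.
conn = Connection("bob")
strong_ctx = conn.cntx
conn.shutdown()
assert strong_ctx.authed_username == "bob"  # context outlives the connection
```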
Honestly, a perfect opportunity for me to dive into our transactional framework (I was eagerly waiting for that moment 👀)
If we add a mechanism for aborting commands we'd also need a rollback mechanism, no?
+1, it's not something we should do now. What is important now, though, is to assert that if we kill a connection, the underlying context won't hang or be accessed by any transaction. We don't want to be accessing nullptrs (since the connection itself holds a pointer to the underlying context).
Actually, on second thought, I don't know why you complicate things so much.

If we add a mechanism for aborting commands we'd also need a rollback mechanism, no?

No. It's nearly impossible. We should see what Redis does and choose the simplest approach.
ACL DELUSER is most likely handled just as a regular command, so because Redis is single-threaded, it executes after all preceding commands of the same user (in the same linear order). How do we achieve this 'cut' on our own? Let's make ACL DELUSER a global transaction: it guarantees that no commands are executing. (We could guarantee only that no commands of the same user are running, which is more efficient, but a global transaction is surely enough and doesn't require additional mechanisms.)
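The "global transaction" cut boils down to: regular commands run concurrently, while ACL DELUSER waits until all in-flight commands drain and then runs alone. A simplified, hypothetical Python sketch of that synchronization (none of these names exist in Dragonfly, and it doesn't address starvation of the global op):

```python
import threading

class GlobalCut:
    """Regular commands run concurrently; a global op runs only when none are in flight."""

    def __init__(self):
        self._cond = threading.Condition()
        self._in_flight = 0

    def command(self, fn):
        # A regular command: register as in-flight, run, then deregister.
        with self._cond:
            self._in_flight += 1
        try:
            return fn()
        finally:
            with self._cond:
                self._in_flight -= 1
                self._cond.notify_all()

    def global_op(self, fn):
        # Wait until every in-flight command has drained, then run alone:
        # while fn() runs we hold the lock, so no new command can register.
        with self._cond:
            self._cond.wait_for(lambda: self._in_flight == 0)
            return fn()
```

While `global_op`'s callback runs it holds the lock, so no new command can start; that gives the same "after all preceding commands" linear ordering that Redis gets for free from being single-threaded.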
@dranikpg I discussed this with Roy. There is a better solution. We shut down the context; this won't cause any issue, it just closes the socket. If a transaction is executing, that's fine, we let it finish. If it's in the middle of multi/exec, we handle this by polling the is_shutting_down flag. Otherwise, if it's a script, we let it finish; it started before we kicked the user out of the system, so the behaviour is still correct.
Otherwise, if it's a script we let it finish, it started before we kicked the user out of the system so the behaviour is still correct.
But multi/exec also started before....
hmm actually that's right!
tests/dragonfly/acl_family_test.py
Outdated
@pytest.mark.asyncio
async def test_acl_deluser(df_local_factory):
These pass, but the above issues remain. What happens if we are executing a transaction and we suddenly "nuke" the connection? What happens to the context and the transaction?
tests/dragonfly/acl_family_test.py
Outdated
@pytest.mark.asyncio
async def test_acl_deluser(df_local_factory):
    df = df_local_factory.create(multi_exec_squash=True, port=1111)
It actually has the same issue I mentioned in the previous PR 😢
Are you referring to the fact that I am explicitly creating the dfly instance? We need two clients, and currently with async_client there can only be one (since I can't pass the same fixture twice, e.g. (async_client, async_client)).
There is the df_server fixture, which is the currently running default instance; you can get its port via df_server.port.
I should have looked at it, but now I know :)
A test with a script running would be fine to make sure dropping the connection doesn't crash df there