mon: AuthMonitor: delete auth_handler while increasing max_global_id #135
Conversation
By not deleting and setting to NULL the session's auth_handler, we could hit a scenario in which we'd end up dispatching a previously wait-listed auth message without starting its auth session. This only happened when increasing max_global_id via Paxos (in which case we would wait-list the message), and was only noticeable when running with cephx disabled.

Fixes: #4519
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
liewegas pushed a commit that referenced this pull request on Mar 22, 2013
mon: AuthMonitor: delete auth_handler while increasing max_global_id
Reviewed-by: Sage Weil <sage@inktank.com>
liewegas pushed a commit that referenced this pull request on Dec 14, 2016
First draft of firefly-giant-x suite
ddiss pushed a commit to ddiss/ceph that referenced this pull request on Aug 3, 2017
qa: create rbd pool before running rbd test
Reviewed-by: Ricardo Dias <rdias@suse.com>
sebastian-philipp pushed a commit to sebastian-philipp/ceph that referenced this pull request on Feb 26, 2018
mgr/dashboard_v2: Adapt status datatable to default design of cd-table
ErwanAliasr1 pushed a commit to ErwanAliasr1/ceph that referenced this pull request on Jun 12, 2018
This patch adds some parallelism to this test. Every "action" to test is spawned in a subshell with a custom testing environment: a separate directory, a different cluster id, and a different port. This makes it possible to run several tests in parallel. The pids are stored in bash arrays, and the exit statuses are double-checked after the run; based on the exit code, each run is reported as passed or failed. This patch saves 55 seconds.

Before: 1/1 Test ceph#135: safe-to-destroy.sh ............... Passed 126.26 sec
After:  1/1 Test ceph#135: safe-to-destroy.sh ............... Passed  71.47 sec

Signed-off-by: Erwan Velu <erwan@redhat.com>
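The pattern the commit message describes can be sketched in bash as follows. This is a minimal illustration, not the actual ceph test code: the `run_action` function, the `actions` list, and the commented-out test command are all hypothetical stand-ins for the real per-action work.

```shell
#!/usr/bin/env bash
# Sketch: spawn each "action" in a background subshell with its own
# directory, cluster id, and port; collect pids in an array and
# double-check every exit status after the run.
set -u

run_action() {
    # $1 = action name, $2 = cluster id, $3 = port (all illustrative)
    local dir
    dir=$(mktemp -d)                      # separate working directory
    (
        cd "$dir" || exit 1
        # Placeholder for the real test command, e.g.:
        #   CLUSTER_ID=$2 PORT=$3 ./safe-to-destroy.sh "$1"
        sleep 0.1                         # simulate work
    )
    local status=$?
    rm -rf "$dir"
    return "$status"
}

pids=()
actions=(status-ok status-stopped dne)    # illustrative action list
id=0
port=7100

for action in "${actions[@]}"; do
    run_action "$action" "$id" "$port" &  # run in parallel
    pids+=($!)                            # remember pid for later
    id=$((id + 1))                        # each job gets its own
    port=$((port + 1))                    # cluster id and port
done

failed=0
for i in "${!pids[@]}"; do
    if ! wait "${pids[$i]}"; then         # check each exit status
        echo "FAILED: ${actions[$i]}"
        failed=1
    fi
done
if [ "$failed" -eq 0 ]; then
    echo "all actions passed"
fi
```

Because `wait <pid>` returns the exit status of that specific job, no failure is lost even though the actions finish in arbitrary order.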
liewegas pushed a commit to liewegas/ceph that referenced this pull request on Nov 5, 2021
YouTube Shortcode
robbat2 pushed a commit to robbat2/ceph that referenced this pull request on Feb 1, 2023
Pacific: Add Ubuntu 22.04 (Jammy) support.
jecluis pushed a commit to jecluis/ceph that referenced this pull request on Mar 25, 2023
tobias-urdin pushed a commit to tobias-urdin/ceph that referenced this pull request on Aug 2, 2023
Fix If-Match test
Reviewed-by: Yehuda Sadeh <yehuda@redhat.com>