Ensure necessary security context for s3 bulk deletions #108280
Conversation
This PR moves the doPrivileged wrapper closer to the actual deletion request to ensure the necessary security context is established at all times. Resolves: elastic#108049
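To illustrate the shape of the fix (a sketch only — the class and method names below are hypothetical stand-ins, not the actual S3BlobStore code): the point is that the `doPrivileged` wrapper must sit directly around the deletion request itself, so the privileged security context is in effect every time the request executes, rather than around outer code that may hand the work off to a lambda invoked later.

```java
import java.security.AccessController;
import java.security.PrivilegedAction;
import java.util.List;

public class PrivilegedDeleteSketch {
    // Hypothetical stand-in for the SDK's DeleteObjects call,
    // which in the real code may open a connection and so needs
    // socket permissions at the moment it runs.
    static String deleteObjects(List<String> keys) {
        return "deleted " + keys.size() + " objects";
    }

    // The doPrivileged wrapper is placed immediately around the
    // deletion request, so the security context is established on
    // every invocation, not just when an enclosing block happens
    // to run the deletion eagerly.
    static String deleteBatch(List<String> keys) {
        return AccessController.doPrivileged((PrivilegedAction<String>) () -> deleteObjects(keys));
    }

    public static void main(String[] args) {
        System.out.println(deleteBatch(List.of("a", "b", "c")));
    }
}
```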
Documentation preview:
Hi @ywangd, I've created a changelog YAML for you.
Pinging @elastic/es-distributed (Team:Distributed)
.put("client", "repo_test_kit")
.put("bucket", bucket)
.put("base_path", basePath)
.put("deletion_batch_size", between(1, 1000))
I took the easy route and piggybacked on the existing repo analysis test. Please let me know if this works for you. Thanks!
👍 should be fine, but could you just confirm that this exposes the bug?
Yes, I tested locally against real S3 by setting the properties listed here. I can confirm the test can fail without the fix.
A few notes:
- It does not fail with the fixture. I wonder whether that has something to do with the fixture running on localhost.
- When the size is 1, it fails at repository verification before the test even starts running.
- I expected it to fail any time the deletion is done inside the lambda, but that is not always the case. My guess is that if an existing connection is available to perform the deletion request, the permission check is not triggered. I did not look into it deeply, since it seems sufficient that the test does fail sometimes without the fix.
A couple of nits; overall looks great though. WDYT about backporting all the way to 8.13? I think there'll be another release before 8.14, and it would be good to fix this ASAP.
* The batch size for bulk deletion
*/
static final Setting<Integer> DELETION_BATCH_SIZE_SETTING = Setting.intSetting(
    "deletion_batch_size",
Naming nit/suggestion: WDYT about delete_objects_max_size? The relevant S3 API is called DeleteObjects, so I think mentioning that in the name would be helpful.
Sounds good to me. Updated it as you suggested: 61255e5
I agree. Added v8.13.4. Thanks!
LGTM
Hi @ywangd, I've updated the changelog YAML for you.
LGTM2
@elasticmachine update branch
This PR moves the doPrivileged wrapper closer to the actual deletion request to ensure the necessary security context is established at all times. Also added a new repository setting to configure the max size for S3 DeleteObjects requests. Fixes: elastic#108049
💔 Backport failed
You can use sqren/backport to manually backport by running
💚 All backports created successfully
Questions? Please refer to the Backport tool documentation
This PR moves the doPrivileged wrapper closer to the actual deletion request to ensure the necessary security context is established at all times. Also added a new repository setting to configure the max size for S3 DeleteObjects requests. Fixes: elastic#108049
(cherry picked from commit bcf4297)
# Conflicts:
#	docs/reference/snapshot-restore/repository-s3.asciidoc
#	modules/repository-s3/src/main/java/org/elasticsearch/repositories/s3/S3BlobStore.java
…08299) This PR moves the doPrivileged wrapper closer to the actual deletion request to ensure the necessary security context is established at all times. Also added a new repository setting to configure the max size for S3 DeleteObjects requests. Fixes: #108049
(cherry picked from commit bcf4297)
# Conflicts:
#	docs/reference/snapshot-restore/repository-s3.asciidoc
#	modules/repository-s3/src/main/java/org/elasticsearch/repositories/s3/S3BlobStore.java
This PR moves the doPrivileged wrapper closer to the actual deletion request to ensure the necessary security context is established at all times. Also added a new repository setting to configure the max size for S3 DeleteObjects requests.
Fixes: #108049
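To illustrate what a max-size setting for DeleteObjects controls (a sketch under assumptions — `partition` is a hypothetical helper, not the actual S3BlobStore implementation): blob keys are split into batches no larger than the configured maximum, since the S3 DeleteObjects API accepts a bounded number of keys per request (at most 1000).

```java
import java.util.ArrayList;
import java.util.List;

public class DeleteBatching {
    // Partition blob keys into chunks of at most maxBatchSize keys,
    // mirroring how a delete_objects_max_size-style setting would bound
    // the size of each DeleteObjects request.
    static List<List<String>> partition(List<String> keys, int maxBatchSize) {
        List<List<String>> batches = new ArrayList<>();
        for (int i = 0; i < keys.size(); i += maxBatchSize) {
            batches.add(keys.subList(i, Math.min(i + maxBatchSize, keys.size())));
        }
        return batches;
    }

    public static void main(String[] args) {
        List<String> keys = new ArrayList<>();
        for (int i = 0; i < 7; i++) {
            keys.add("blob-" + i);
        }
        // 7 keys with a batch size of 3 yields batches of 3, 3, and 1 keys.
        System.out.println(partition(keys, 3).size()); // prints 3
    }
}
```

With a setting like this, each batch would then be sent as one privileged DeleteObjects call, which is why the test above randomizes the batch size with `between(1, 1000)`.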