LifeCycle feature #6331
Conversation
Force-pushed from be5c6fa to 45deb2d
@chenji-kael this is very interesting, looks like a good start. I do see it needs some cleanups, and we'll need to take a closer look at it to make sure it conforms to what we had in mind when thinking about this feature. Assigning it to @dang to take a closer look.
Force-pushed from ce2d6a5 to 5bfb8de
@yehudasa ping
@chenji-kael there are a few problems that I see. It still does it using the bucket name, not the bucket instance, which is problematic. I also think that modifying the bucket metadata is problematic because it's racy (can race with another change to the bucket metadata). I still think we need to keep the list of bucket instances that need to be scanned in a separate sharded omap. Each gateway will try to grab a lease on a shard, then process it. This way multiple gateways can coordinate the work.
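A minimal sketch of the coordination scheme described above, assuming an in-memory stand-in for the per-shard lease state (this is not the actual RGW implementation, and the class and function names are hypothetical): lifecycle work is split across a fixed number of shard objects, each gateway tries to grab a time-limited lease on a shard before processing it, and shards whose lease is held by another gateway are skipped.

```python
# Hypothetical sketch of lease-based shard coordination, not actual RGW code.
NUM_SHARDS = 4      # assumed shard count for illustration
LEASE_SECONDS = 60  # assumed lease duration

class ShardLeases:
    """In-memory stand-in for per-shard lease entries in a sharded omap."""
    def __init__(self, num_shards):
        # shard index -> (owner, lease expiry time), or None if free
        self.leases = {i: None for i in range(num_shards)}

    def try_acquire(self, shard, owner, now):
        held = self.leases[shard]
        if held is not None and held[1] > now and held[0] != owner:
            return False  # another gateway holds a live lease on this shard
        self.leases[shard] = (owner, now + LEASE_SECONDS)
        return True

    def release(self, shard, owner):
        held = self.leases[shard]
        if held is not None and held[0] == owner:
            self.leases[shard] = None

def process_all_shards(leases, gateway_id, now=0.0):
    """Walk every shard, but only process the ones whose lease we can grab."""
    processed = []
    for shard in range(NUM_SHARDS):
        if leases.try_acquire(shard, gateway_id, now):
            processed.append(shard)  # here: scan the bucket instances listed in the shard
            leases.release(shard, gateway_id)
    return processed
```

With this shape, a gateway that crashes mid-scan does not block the shard forever: its lease simply expires, and another gateway can pick the shard up on a later pass.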
@yehudasa OK, I will get it fixed soon
Force-pushed from 5bfb8de to febb9b2
@yehudasa ping
@yehudasa ping
@chenji-kael haven't forgotten you. Will try to get it reviewed soon. Thanks!
@chenji-kael
Force-pushed from febb9b2 to 2117868
@dang hi, the rebase has been done
@chenji-kael please ignore the bot failure, it is a false negative (see http://tracker.ceph.com/issues/13997 for more information). You can re-schedule a job by rebasing your branch and repushing.
@chenji-kael The call to get_bucket_info() needs to be updated for tenants from commit f7ca00a; this is causing build failures for me. I suspect this will need a new argument, but I'm not sure. Also, can you add your new files to src/CMakeLists.txt?
Force-pushed from 2117868 to 46c04cc
@dang sorry, I had not noticed that tenant support was merged; it works fine now, please check it, thanks
Matching the Amazon S3 interface, "PUT Bucket lifecycle" and "DELETE Bucket lifecycle" have been implemented. "GET Bucket lifecycle" is not implemented yet, since s3cmd does not implement it either. The feature's main point is to remove expired objects once per day. Since Ceph does not have a tier concept, transitioning files from a hot tier to a cold tier is not supported. TODO: transitioning from a replicated pool to an EC pool, or from an SSD pool to a SATA pool, may be valuable. All buckets that should run lifecycle processing are now recorded in shard objects in the .rgw.lc pool.

Lifecycle config file format:

<LifecycleConfiguration>
  <Rule>
    <ID>sample-rule</ID>
    <Prefix></Prefix>
    <Status>enable</Status>
    <Expiration>
      <Days>1</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>

Signed-off-by: Ji Chen <insomnia@139.com>
Force-pushed from 46c04cc to 7d48f62
@chenji-kael @dang I reverted the merge off github for now. There still are some issues that I think will need to be fixed. Note that I pushed some fixes to wip-rgw-lifecycle, but we'll need to do some more work before we can merge it.