Wip mon paxos fixes #230
Conversation
Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
Say a service establishes that it will only keep 500 versions once a given condition X is true, and that condition X only becomes true after the service has committed some 800 versions. Once we decide to trim, the service would trim all 300 surplus versions in one go; after that, each committed version would also trim the previous version. Trimming an unbounded number of versions is bad practice, as it generates bigger transactions (and thus a greater workload on leveldb) and therefore bigger messages too. On the other hand, trimming on every single commit means more frequent accesses to leveldb, and keeping a couple of extra versions around won't hurt us in any significant way, so let us put off trimming until we go over a predefined minimum.

This patch adds two new options:

paxos service trim min - minimum number of surplus versions required to trigger a trim (default: 30; 0 disables it)
paxos service trim max - maximum number of versions to trim in a single proposal (default: 50; 0 disables it)

Signed-off-by: Joao Eduardo Luis <joao.luis@inktank.com>
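A minimal sketch of how the two options could gate trimming, under stated assumptions: the `TrimPolicy` struct and `trim_to` helper are hypothetical names for illustration, not the actual PaxosService code in this patch.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>

// Hypothetical sketch of the two new options; names and structure are
// illustrative, not the actual PaxosService code from this patch.
struct TrimPolicy {
  uint64_t trim_min = 30;  // don't trim until this many surplus versions exist
  uint64_t trim_max = 50;  // never trim more than this many in one proposal

  // Given the oldest kept version and the version we may trim up to,
  // return the version to trim to now (0 means: put the trim off).
  uint64_t trim_to(uint64_t first_committed, uint64_t trim_target) const {
    if (trim_target <= first_committed)
      return 0;                               // nothing to trim
    uint64_t surplus = trim_target - first_committed;
    if (trim_min > 0 && surplus < trim_min)
      return 0;                               // too few surplus versions yet
    if (trim_max > 0)
      surplus = std::min(surplus, trim_max);  // bound the transaction size
    return first_committed + surplus;
  }
};

int main() {
  TrimPolicy p;
  std::cout << p.trim_to(1, 21)  << "\n";  // 0: only 20 surplus, below trim_min
  std::cout << p.trim_to(1, 301) << "\n";  // 51: 300 surplus, capped at 50
}
```

With the commit-message numbers (300 surplus versions), such a policy would trim 50 versions per proposal across six proposals instead of all 300 in one transaction.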
```diff
@@ -75,7 +75,7 @@ class MDSMonitor : public PaxosService {
   // we don't require full versions; don't encode any.
   virtual void encode_full(MonitorDBStore::Transaction *t) { }
 
-  bool should_trim() { return false; }
+  bool service_should_trim() { return false; }
```
Huh, do we really never trim MDSMaps?
We really don't.
So we're now sharding a big trim into multiple smaller trim requests. Have you tested this to make sure it sufficiently breaks up the workload, or do we need to try and have a minimum interval between trims too? :/
I can't guarantee that it will enhance overall performance. It should, however, avoid big transactions when we trim osdmaps, by splitting those trims into multiple smaller transactions. Having a minimum interval between trims would probably just let the number of versions we need to trim build up, without any significant gain. Then again, this is pure speculation.
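To make the splitting concrete, here is a small self-contained sketch of how a 300-version backlog drains under a per-proposal cap of 50; the numbers come from the commit-message example, and the variable names are illustrative, not taken from the patch.

```cpp
#include <algorithm>
#include <cstdint>
#include <iostream>

int main() {
  // Illustrative numbers: 800 versions committed, 500 to keep -> 300 surplus.
  uint64_t first_committed = 1;      // oldest version still kept (hypothetical)
  const uint64_t trim_target = 301;  // trimming up to here leaves 500 of 800
  const uint64_t trim_min = 30, trim_max = 50;

  int proposal = 0;
  while (trim_target - first_committed >= trim_min) {
    uint64_t n = std::min(trim_target - first_committed, trim_max);
    std::cout << "proposal " << ++proposal << ": trim " << n
              << " versions [" << first_committed << ", "
              << first_committed + n << ")\n";
    first_committed += n;  // each proposal carries one bounded transaction
  }
}
```

Each loop iteration stands in for one proposal carrying one bounded transaction, so leveldb sees six 50-version transactions instead of a single 300-version one.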
Wip mon paxos fixes
Reviewed-by: Greg Farnum <greg@inktank.com>
Reviewed-by: Sage Weil <sage@inktank.com>
Creating this for Joao to track comments on.