random: revert change from boost::optional to std::optional #21567
Conversation
Somehow this was breaking the seeding of thread-local engines on gcc. We'll have to investigate this further, but for now I'm reverting this piece to get messengers working again.

Fixes: http://tracker.ceph.com/issues/23778
Signed-off-by: Casey Bodley <cbodley@redhat.com>
Tests running at http://pulpito.ceph.com/cbodley-2018-04-20_19:15:33-rgw-wip-cbodley-testing-distro-basic-smithi/ — no unexpected failures so far.
+1. Apologies for my overzealousness in moving from boost::optional. I'll try to discover what semantics are different; it may have to do with what happens on default construction (though from the documentation, that would surprise me).
@chardan we've been using std::optional without any problems, so I'm not sure why it's acting up here. Maybe some interaction with thread_local?
@cbodley That's my best guess right now — still needs further investigation. TBH, as best as I can tell there shouldn't have been any issue, but reality didn't seem to match! It's possible that it's a compiler issue, but I don't have any evidence yet.
Snapshot deletion requests on snap ids larger than the snap_seq of the pool will leave the pool in a state with snap_seq being less than max(removed_snaps). This is bad because further deletion requests to a pool in this state might crash the mon in some cases: the deletion also inserts the new snap_seq into the removed_snaps set, which might already exist in this case and trigger an assert. Such bad requests will be generated by rbd clients without a fix for issue #21567. The change in OSDMonitor prevents pools from getting into this state and may prevent old broken clients from incorrectly deleting snaps. The change in osd_types avoids a crash if a pool is already in this state.

Fixes #18746
Signed-off-by: Paul Emmerich <paul.emmerich@croit.io>

(cherry picked from commit f42a6ba)
Conflicts: src/osd/osd_types.cc