mgr/volumes: use dedicated libcephfs handles for subvolume calls #41966

Merged
merged 1 commit into ceph:master on Jul 6, 2021

Conversation

vshankar
Contributor

…async jobs

Fixes: http://tracker.ceph.com/issues/51271
Signed-off-by: Venky Shankar <vshankar@redhat.com>

Checklist

  • References tracker ticket
  • Updates documentation if necessary
  • Includes tests for new functionality or reproducer for bug


@vshankar added the cephfs (Ceph File System) label on Jun 22, 2021
@batrick
Member

batrick commented Jun 23, 2021

Am I right you're fixing this from the other direction? The new connection pool is allocated to the purge/clone queues? I'd worry that unexpected contention from other future modules may compete with the connection used by the dispatch thread. But perhaps this PR coupled with #41917 is sufficient to ease that concern.

@vshankar
Contributor Author

> Am I right you're fixing this from the other direction? The new connection pool is allocated to the purge/clone queues? I'd worry that unexpected contention from other future modules may compete with the connection used by the dispatch thread. But perhaps this PR coupled with #41917 is sufficient to ease that concern.

Right -- this makes the async jobs use their own fs handles. I didn't quite get the point about other future modules, though. Do you mean modules in mgr/volumes itself (nfs, etc.)?

And yes, this can be coupled with that PR FTW.
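
To make the "own fs handles" idea concrete, here is a rough sketch (hypothetical code, not the actual mgr/volumes implementation; `AsyncConnectionPool`, `AsyncJobWorker`, and `open_fs_handle` are illustrative names): background purge/clone workers draw libcephfs handles from a pool reserved for async jobs, so long-running work never contends with the handle used by the dispatch thread.

```python
# Hypothetical sketch, not the actual ceph code: background (purge/clone style)
# workers borrow libcephfs handles from a pool dedicated to async jobs, so they
# never share the handle used by the dispatch thread. All names are illustrative.
import threading


class AsyncConnectionPool:
    """Minimal stand-in for a pool of fs handles reserved for async jobs."""

    def __init__(self, open_fs_handle):
        self._open = open_fs_handle      # callable(volname) -> new fs handle
        self._lock = threading.Lock()
        self._idle = {}                  # volname -> list of idle handles

    def get_fs_handle(self, volname):
        with self._lock:
            idle = self._idle.setdefault(volname, [])
            if idle:
                return idle.pop()
        return self._open(volname)       # open a fresh handle when none is idle

    def put_fs_handle(self, volname, handle):
        with self._lock:
            self._idle.setdefault(volname, []).append(handle)


class AsyncJobWorker(threading.Thread):
    """Runs queued jobs against a handle taken from the async pool only."""

    def __init__(self, pool, volname, jobs):
        super().__init__(daemon=True)
        self._pool, self._volname, self._jobs = pool, volname, jobs

    def run(self):
        fs = self._pool.get_fs_handle(self._volname)
        try:
            for job in self._jobs:
                job(fs)                  # e.g. a trash purge or clone copy step
        finally:
            self._pool.put_fs_handle(self._volname, fs)
```

With a split like this, a slow purge or clone only ties up handles from the async pool; subvolume calls served by the dispatch thread keep their own handle.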

@batrick
Member

batrick commented Jun 25, 2021

> > Am I right you're fixing this from the other direction? The new connection pool is allocated to the purge/clone queues? I'd worry that unexpected contention from other future modules may compete with the connection used by the dispatch thread. But perhaps this PR coupled with #41917 is sufficient to ease that concern.

> Right -- this makes the async jobs use their own fs handles. I didn't quite get the point about other future modules, though. Do you mean modules in mgr/volumes itself (nfs, etc.)?

My understanding is that the connection pool mgr/volumes uses can also be used by other modules (potentially snap_schedule or the dashboard).

@vshankar
Contributor Author

> My understanding is that the connection pool mgr/volumes uses can also be used by other modules (potentially snap_schedule or the dashboard).

The restricting factor for this is that each mgr plugin runs in a separate sub-interpreter (shared nothing). So, each plugin gets its own connection pool object.
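
To illustrate the sub-interpreter point, a minimal sketch (illustrative names only, reusing the `AsyncConnectionPool` stand-in from the sketch above; this is not the real module code): because each mgr module is loaded into its own sub-interpreter, a pool constructed in the module class is visible only to that module, so snap_schedule or the dashboard would each end up with their own pool object.

```python
# Hypothetical sketch: each mgr module (volumes, snap_schedule, dashboard, ...)
# runs in its own sub-interpreter, so the pool built in __init__ below is private
# to that module. 'Module' here is illustrative, and AsyncConnectionPool is the
# stand-in class from the previous sketch.


class Module:
    def __init__(self, *args, **kwargs):
        # Constructed once per module instance; another module would construct
        # its own pool object inside its own interpreter, so nothing is shared.
        self.connection_pool = AsyncConnectionPool(open_fs_handle=self._open_fs)

    def _open_fs(self, volname):
        # In the real module this would mount a libcephfs client for 'volname';
        # it is left abstract because it is beside the point being illustrated.
        raise NotImplementedError
```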

@batrick
Member

batrick commented Jul 3, 2021

> > My understanding is that the connection pool mgr/volumes uses can also be used by other modules (potentially snap_schedule or the dashboard).

> The restricting factor for this is that each mgr plugin runs in a separate sub-interpreter (shared nothing). So, each plugin gets its own connection pool object.

Wow, really? I didn't know it currently works that way. I thought there was talk about switching to that model...

@batrick merged commit 097b678 into ceph:master on Jul 6, 2021