Analyze if spinlocks have any benefit and remove them if not #1996

Closed
vh05 opened this issue Jan 12, 2021 · 11 comments · Fixed by #2007
Comments

@vh05
Contributor

vh05 commented Jan 12, 2021

GlusterFS currently supports both spinlocks and mutexes. It is not clear that spinlocks provide any advantage in user mode, where we regularly run more threads than cores and the threads themselves don't run at high priority. If they don't really provide any benefit, we should remove them and simplify the locking macros.

@itisravi
Member

With commit ac248c8, spinlocks are enabled only on multi-core systems, so I'm not sure we need to remove them.

@vh05
Contributor Author

vh05 commented Jan 12, 2021

Thanks, Ravi, for the input.

The concern is why we need to use spin_locks when the job can be done with a mutex. The places where we use spin_locks:

  1. do not share the context with interrupts
  2. priority is not a problem
  3. the code under the spin_lock takes more time than a context switch
  4. the code may go to sleep

For example, aws_init() in cloudsync calls dict_get while holding priv->spin_lock. I'm not sure why we need a spin_lock in such instances.
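For illustration only, here is a minimal sketch of that pattern (hypothetical names such as get_value() and priv_spinlock, not the actual cloudsync code): a call that may allocate memory and block is made while a pthread spinlock is held, so any contending thread busy-waits on a CPU, whereas with a mutex the waiters would simply sleep.

#include <pthread.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch, not GlusterFS code: get_value() stands in for a
 * call like dict_get() that may allocate memory and therefore block. */
static int
get_value(const char *key)
{
    char *copy = strdup(key); /* may block inside the allocator */
    free(copy);
    return 0;
}

static pthread_spinlock_t priv_spinlock;
static pthread_mutex_t priv_mutex = PTHREAD_MUTEX_INITIALIZER;

static void
worker_with_spinlock(void)
{
    /* contending threads busy-wait on a CPU until this unlock */
    pthread_spin_lock(&priv_spinlock);
    get_value("credentials");
    pthread_spin_unlock(&priv_spinlock);
}

static void
worker_with_mutex(void)
{
    /* contending threads sleep until this unlock */
    pthread_mutex_lock(&priv_mutex);
    get_value("credentials");
    pthread_mutex_unlock(&priv_mutex);
}

int
main(void)
{
    pthread_spin_init(&priv_spinlock, PTHREAD_PROCESS_PRIVATE);
    worker_with_spinlock();
    worker_with_mutex();
    pthread_spin_destroy(&priv_spinlock);
    return 0;
}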

In most places in our code, we use spin_lock even where the processing time is high.

Please share your opinions.

@itisravi
Member

itisravi commented Jan 12, 2021

@VHariharmath-rh I agree that mutexes are better than spinlocks in general. I was only saying that if you run tests on a multi-core system (which is pretty much every machine), it already uses mutexes (because of the above patch). So I was wondering how you were planning to analyse the performance difference.

@vh05
Contributor Author

vh05 commented Jan 12, 2021

According to the patch, spin_locks are used by default on multi-core systems. Am I missing anything?

> So I was wondering how you were planning to analyse the performance difference.

I am planning to do this in two phases:

  1. Analyze and replace explicit spin_lock declarations with a mutex (maybe gf_lock_t).
  2. Make the LOCK and UNLOCK macros (introduced in the patch) default to a mutex if spin_locks are found to be unnecessary.

With these changes in place, I will collect numbers by running fio. (A rough sketch of phase 1 is shown below.)
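For illustration only, a minimal sketch of what phase 1 could look like for a translator's private structure (the struct and field names here are hypothetical, not taken from a specific GlusterFS translator): the explicit pthread_spinlock_t member and its spin calls are replaced with a pthread_mutex_t.

#include <pthread.h>

/* Hypothetical private structure; before phase 1 the member was
 *     pthread_spinlock_t lock;
 * and after phase 1 it becomes a mutex. */
typedef struct {
    pthread_mutex_t lock; /* was: pthread_spinlock_t lock; */
    int refcount;
} sample_priv_t;

static void
sample_priv_ref(sample_priv_t *priv)
{
    pthread_mutex_lock(&priv->lock);   /* was: pthread_spin_lock() */
    priv->refcount++;
    pthread_mutex_unlock(&priv->lock); /* was: pthread_spin_unlock() */
}

int
main(void)
{
    sample_priv_t priv;

    pthread_mutex_init(&priv.lock, NULL); /* was: pthread_spin_init() */
    priv.refcount = 0;
    sample_priv_ref(&priv);
    pthread_mutex_destroy(&priv.lock);
    return 0;
}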

@itisravi
Member

> According to the patch, spin_locks are used by default on multi-core systems. Am I missing anything?

Ah sorry, my bad. I got it mixed up. The steps make sense.

@mykaul
Contributor

mykaul commented Jan 12, 2021

If we could also remove the use_spinlocks variable and just ifdef the code normally, that would be nice. I'm fairly sure debug builds consult this variable at run time; I don't know about release builds. I don't see the value in that.

@vh05
Contributor Author

vh05 commented Jan 12, 2021

If I understand correctly, use_spinlocks is used to decide between a spin_lock and a mutex. If a mutex makes more sense than a spin_lock, then the HAVE_SPINLOCK section, along with use_spinlocks, can be removed.

@mykaul
Contributor

mykaul commented Jan 12, 2021

I'd start with a simple ifdef instead of consulting a variable.

@vh05
Contributor Author

vh05 commented Jan 12, 2021

The assignment to use_spinlocks is commented out in the current codebase, so the variable holds the value 0 and the default behavior is to use a mutex. The HAVE_SPINLOCK section in locking.h therefore always leads to a mutex.

libglusterfs/src/locking.c

int use_spinlocks = 0;

/* The constructor runs at library load time; the line that would enable
 * spinlocks on multi-core systems is commented out, so use_spinlocks
 * stays 0 and the mutex path is always taken. */
static void __attribute__((constructor)) gf_lock_setup(void)
{
    // use_spinlocks = (sysconf(_SC_NPROCESSORS_ONLN) > 1);
}
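To make the consequence concrete, below is a simplified sketch of how a run-time conditional lock scheme of this kind behaves (the union and macro bodies are illustrative, not copied from the actual locking.h): with use_spinlocks permanently 0, the spinlock branch is unreachable and every lock operation still pays for the check.

#include <pthread.h>

int use_spinlocks = 0; /* never set anywhere, as discussed above */

/* Illustrative union and macros, not the exact locking.h definitions:
 * each operation checks use_spinlocks at run time, but since it is
 * always 0 only the mutex branch can ever be taken. */
typedef union {
    pthread_spinlock_t spinlock;
    pthread_mutex_t mutex;
} sample_lock_t;

#define SAMPLE_LOCK_INIT(x)                                              \
    (use_spinlocks ? pthread_spin_init(&(x)->spinlock, 0)                \
                   : pthread_mutex_init(&(x)->mutex, NULL))
#define SAMPLE_LOCK(x)                                                   \
    (use_spinlocks ? pthread_spin_lock(&(x)->spinlock)                   \
                   : pthread_mutex_lock(&(x)->mutex))
#define SAMPLE_UNLOCK(x)                                                 \
    (use_spinlocks ? pthread_spin_unlock(&(x)->spinlock)                 \
                   : pthread_mutex_unlock(&(x)->mutex))

int
main(void)
{
    sample_lock_t l;

    SAMPLE_LOCK_INIT(&l);
    SAMPLE_LOCK(&l);   /* always takes the mutex branch */
    SAMPLE_UNLOCK(&l);
    return 0;
}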

@mykaul
Contributor

mykaul commented Jan 12, 2021

Are you sure?

int use_spinlocks = 0;

vh05 added a commit to vh05/glusterfs that referenced this issue Jan 13, 2021
GlusterFS currently supports both spinlocks and mutexes. It is
not clear that spinlocks provide any advantage in user mode,
where we regularly run more threads than cores and the threads
themselves don't run at high priority. If they don't really
provide any benefit, we should remove them and simplify the
locking macros.

The concern is why we need to use spin_locks when the job can
be done with a mutex. The places where we use spin_locks

1. do not share the context with interrupts
2. priority is not a problem
3. the code under the spin_lock takes more time than a context
   switch
4. the code may go to sleep

Next, the use of spin_locks depends on the variable use_spinlocks,
but its assignment is commented out in the current code base through
https://review.gluster.org/#/c/glusterfs/+/14763/. So there is no
use in having a conditional switch between spin_lock and mutex.
This patch removes the dead code.

Fixes: gluster#1996
Change-Id: Ib005dd86969ce33d3409164ef3e1011bb3169129
Signed-off-by: Vinayakswami Hariharmath <vharihar@redhat.com>
@vh05
Contributor Author

vh05 commented Jan 13, 2021

Yes, @mykaul. I did not find anywhere in the code where use_spinlocks is set.

vh05 added a commit to vh05/glusterfs that referenced this issue Jan 18, 2021
The use of spin_locks depends on the variable use_spinlocks, but
its assignment is commented out in the current code base through
https://review.gluster.org/#/c/glusterfs/+/14763/. So there is no
use in having a conditional switch between spin_lock and mutex.
This patch removes the dead code.

Fixes: gluster#1996
Change-Id: Ib005dd86969ce33d3409164ef3e1011bb3169129
Signed-off-by: Vinayakswami Hariharmath <vharihar@redhat.com>
vh05 added a commit that referenced this issue Jan 19, 2021
* locks: remove unused conditional switch to spin_lock code

The use of spin_locks depends on the variable use_spinlocks, but
its assignment is commented out in the current code base through
https://review.gluster.org/#/c/glusterfs/+/14763/. So there is no
use in having a conditional switch between spin_lock and mutex.
This patch removes the dead code.

Fixes: #1996
Change-Id: Ib005dd86969ce33d3409164ef3e1011bb3169129
Signed-off-by: Vinayakswami Hariharmath <vharihar@redhat.com>

csabahenk pushed a commit to csabahenk/glusterfs that referenced this issue Mar 7, 2023
The use of spin_locks depends on the variable use_spinlocks, but
its assignment is commented out in the current code base through
https://review.gluster.org/#/c/glusterfs/+/14763/. So there is no
use in having a conditional switch between spin_lock and mutex.
This patch removes the dead code.

Backport of:
> Upstream-patch: gluster#2007
> Fixes: gluster#1996
> Change-Id: Ib005dd86969ce33d3409164ef3e1011bb3169129
> Signed-off-by: Vinayakswami Hariharmath <vharihar@redhat.com>

BUG: 1925425
Change-Id: Ib005dd86969ce33d3409164ef3e1011bb3169129
Signed-off-by: Vinayakswami Hariharmath <vharihar@redhat.com>
Reviewed-on: https://code.engineering.redhat.com/gerrit/c/rhs-glusterfs/+/244965
Tested-by: RHGS Build Bot <nigelb@redhat.com>
Reviewed-by: Sunil Kumar Heggodu Gopala Acharya <sheggodu@redhat.com>