
Use Concurrentmap in CacheProvider to reduce lock granularity #205

Closed
liu-song opened this issue Dec 23, 2021 · 5 comments · Fixed by #254
Labels: enhancement (New feature or request), need discuss (Need to discuss)

Comments

@liu-song
Contributor

What is the feature you want to add?
Use a concurrent map in healthcheck/CacheProvider (https://github.com/polarismesh/polaris/blob/main/healthcheck/cache.go#L30) to reduce the granularity of the current global read-write lock: split it into multiple map+sync.RWMutex pairs to get the effect of a sharded map and improve the corresponding performance.
Why do you want to add this feature?

Replace the single map+sync.RWMutex with a concurrent (sharded) map.

How to implement this feature?
Hash the original map key (the instance id) and take the result modulo the number of shards in the concurrent map to decide which shard map the entry is dispatched to, thereby reducing the granularity of the read-write lock.

import "sync"

// sharedMap splits the health-check instances across `size` shards so
// that each shard is guarded by its own read-write lock.
type sharedMap struct {
	size   uint32
	shared []*shared
}

// shared is a single shard: one map guarded by one sync.RWMutex.
type shared struct {
	healthCheckInstances map[string]*InstanceWithChecker
	healthCheckMutex     sync.RWMutex
}

// getShard picks the shard for an instance id by hashing the id and
// taking the hash modulo the shard count.
func (m *sharedMap) getShard(instanceId string) *shared {
	return m.shared[fnv32(instanceId)%m.size]
}

// fnv32 is the 32-bit FNV-1 hash (multiply by the prime, then XOR each byte).
func fnv32(key string) uint32 {
	hash := uint32(2166136261)
	const prime32 = uint32(16777619)
	for i := 0; i < len(key); i++ {
		hash *= prime32
		hash ^= uint32(key[i])
	}
	return hash
}
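
For completeness, a minimal sketch of how the shards could be constructed and used. The newSharedMap, store, and load helpers below are hypothetical additions for illustration, not part of this proposal; InstanceWithChecker is the existing type from healthcheck/cache.go.

// newSharedMap is a hypothetical constructor: it allocates `size`
// independent shards, each with its own map and lock.
func newSharedMap(size uint32) *sharedMap {
	m := &sharedMap{
		size:   size,
		shared: make([]*shared, size),
	}
	for i := range m.shared {
		m.shared[i] = &shared{
			healthCheckInstances: make(map[string]*InstanceWithChecker),
		}
	}
	return m
}

// store writes an instance into its shard; only that shard is locked,
// so writes to other shards can proceed concurrently.
func (m *sharedMap) store(instanceId string, instance *InstanceWithChecker) {
	s := m.getShard(instanceId)
	s.healthCheckMutex.Lock()
	s.healthCheckInstances[instanceId] = instance
	s.healthCheckMutex.Unlock()
}

// load reads an instance under its shard's read lock.
func (m *sharedMap) load(instanceId string) (*InstanceWithChecker, bool) {
	s := m.getShard(instanceId)
	s.healthCheckMutex.RLock()
	instance, ok := s.healthCheckInstances[instanceId]
	s.healthCheckMutex.RUnlock()
	return instance, ok
}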


@liu-song liu-song added the enhancement New feature or request label Dec 23, 2021
@liu-song
Contributor Author

If this sounds suitable, I'll try to implement it and provide the corresponding benchmarks.

@chuntaojun
Member

You could start with a benchmark. One more question: for this size, is there any way to adjust it dynamically, i.e. to grow the number of shards?

@liu-song
Contributor Author

OK. In theory sharding should perform better. In most concurrent-map implementations, this size is fixed as a constructor parameter at creation time; making it dynamic seems fairly complex, since it would involve reallocating and rehashing the shards. In practice the size is usually neither very small nor very large.

@chuntaojun
Member

> OK. In theory sharding should perform better. In most concurrent-map implementations, this size is fixed as a constructor parameter at creation time; making it dynamic seems fairly complex, since it would involve reallocating and rehashing the shards. In practice the size is usually neither very small nor very large.

It would be ideal if that's possible. If it's really hard to do, I'd suggest the benchmark include a reference recommendation, e.g. for 1000 instances, what shard count for this sharded map performs best, something along those lines.
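
For reference, a minimal sketch of what such a benchmark could look like, assuming the hypothetical newSharedMap and store helpers sketched above. The shard counts and the 1000-instance key space are illustrative only.

import (
	"fmt"
	"testing"
)

// BenchmarkSharedMapStore compares different shard counts under
// parallel writes over a fixed key space of 1000 instance ids.
func BenchmarkSharedMapStore(b *testing.B) {
	for _, size := range []uint32{1, 8, 32, 128} {
		size := size // capture the loop variable for the sub-benchmark
		b.Run(fmt.Sprintf("shards-%d", size), func(b *testing.B) {
			m := newSharedMap(size)
			b.RunParallel(func(pb *testing.PB) {
				i := 0
				for pb.Next() {
					// e.g. 1000 distinct instance ids, as suggested above
					id := fmt.Sprintf("instance-%d", i%1000)
					m.store(id, &InstanceWithChecker{})
					i++
				}
			})
		})
	}
}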

@chuntaojun chuntaojun added the need discuss Need to discuss label Dec 23, 2021
@liu-song
Contributor Author

OK
