Added the support for FILTERKEYS #84
base: master
Conversation
GetCount() uint64
GetStorage() *sync.Map
Let's rename these to Count and Storage, which is more Go-like. And, as discussed, Storage should return an iterator: a function that, upon each invocation, returns the next KV pair.
package istorageengines

// This is for future usage to use Redis Storage
// as the main storage engine instead of KV
let's remove this file.
package istorageengines
name of the package should be storage_engines
	"github.com/dicedb/dice/object"
)

type IKVStorage interface {
Rename the interface and call it StorageEngine.
@@ -6,28 +6,31 @@ import (
 	"os"
 	"strings"

+	dbEngine "github.com/dicedb/dice/IStorageEngines"
dbEngine -> storage_engine
	"github.com/dicedb/dice/config"
	"github.com/dicedb/dice/object"
Instead of creating another package for object, see if we can fit it in storage_engine or core.
@@ -26,13 +28,13 @@ func (c *Client) TxnBegin() {
 	c.isTxn = true
 }

-func (c *Client) TxnExec() []byte {
+func (c *Client) TxnExec(dh *handlers.DiceKVstoreHandler) []byte {
Instead of calling it handler, let's name our storage engine and use that here.
I propose the name - ozone
package expiry

import (
	dbEngine "github.com/dicedb/dice/IStorageEngines"
	"github.com/dicedb/dice/object"
)

func expireSample(dh dbEngine.IKVStorage) float32 {
	limit := 20
	expiredCount := 0
	sampled := 0
	dh.GetStorage().Range(func(k, v interface{}) bool {
		key := k.(string)
		value := v.(*object.Obj)
		sampled++
		limit--
		if object.GetDiceExpiryStore().HasExpired(value) {
			dh.Del(key)
			expiredCount++
		}
		// Returning false stops the Range once the sample budget is used up.
		return limit != 0
	})
	if sampled == 0 {
		return 0
	}
	return float32(expiredCount) / float32(sampled)
}

// Deletes all the expired keys - the active way
// Sampling approach: https://redis.io/commands/expire/
func DeleteExpiredKeys(dh dbEngine.IKVStorage) {
let expiry be part of the storage engine implementation and expose it as an interface for an explicit trigger.
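A minimal sketch of what that explicit-trigger interface might look like; the `Expirer` name, the `TriggerExpiry` method, and the toy `memStore` are all hypothetical, not part of this PR:

```go
package main

import "fmt"

// Expirer sketches the reviewer's suggestion: the storage engine owns the
// expiry logic internally and exposes only an explicit trigger.
type Expirer interface {
	// TriggerExpiry runs one sampling pass and returns the fraction of
	// sampled keys that had expired (cf. the Redis EXPIRE sampling approach).
	TriggerExpiry() float32
}

// memStore is a toy engine used only to show how a caller invokes the trigger.
type memStore struct {
	expired, sampled int
}

func (s *memStore) TriggerExpiry() float32 {
	if s.sampled == 0 {
		return 0
	}
	return float32(s.expired) / float32(s.sampled)
}

func main() {
	var e Expirer = &memStore{expired: 5, sampled: 20}
	fmt.Println(e.TriggerExpiry()) // 0.25
}
```

The caller (e.g. a background goroutine) then only decides *when* to trigger a pass, not *how* expiry works.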
type DiceKVstoreHandler struct {
	object.DiceKVstore
}
rename it to our storage engine ozone
// Max number of workers: 256 * 1024
DefaultPoolSize = 1 << 18

// Whether to wait when no pool workers are available;
// false means it won't wait.
Nonblocking = false

// ExpiryDuration is the interval at which expired workers are cleaned up.
ExpiryDuration = 10 * time.Second
DefaultPoolSize -> MaxConcurrency, and let the default value be part of the config.
The expiry duration can also be part of the config. Nonblocking should always be true; I don't see a reason for it to be configurable.
Fixes #35

It seems I have added too much, but there is a reason why something so trivial took this much time. Apologies for such an extreme level of changes, and please allow me to explain.

`FILTERKEYS` is supposed to work on multiple keys. Keeping in mind that we are not using any special data structure to store the keys in any ORDERED manner, and the fact that the regex can be completely random, the best time complexity we can achieve is O(n), via iterating over all the keys.

For the underlying store I used `sync.Map` from go1.9. A plain map can be guarded with `sync.Mutex`, and there is also `sync.RWMutex`. Nevertheless, the performance of a map with a mutex doesn't scale well in my experience and has a lot of implementation blockers. `sync.Map` was introduced in go 1.9 keeping this issue in mind, and it scales EXCEPTIONALLY better when the vertical scaling of the system is considerable. However, `sync.Map` has a problem: we can't use the `len` function on it, so I had to create my own custom storage (`DiceKVStore`) with an atomic count in it.

I know I have introduced a lot of changes, and I also know that huge changes in a single PR are bad practice, but I couldn't stay away from optimising it. It took me a lot of time to rewire the whole thing. I would request the reviewers to review the code and give pointers. If you decide to merge this, I shall be ready to work along with other contributors to adopt the changes that happened in between. If not, you can put it in a separate experimental branch or, as a worst option, I can continue it on my fork.
Thanks
Mayukh
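The "atomic count alongside `sync.Map`" idea from the description can be sketched as follows. This is a minimal illustration of the technique, not the actual `DiceKVStore` code; method names and the counting rules are assumptions:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// DiceKVStore sketches the idea from the PR description: sync.Map has no
// len(), so the store maintains its own atomic counter next to the map.
type DiceKVStore struct {
	store sync.Map
	count uint64
}

// Put inserts or overwrites a key, incrementing the counter only for new keys.
func (d *DiceKVStore) Put(key string, value interface{}) {
	if _, loaded := d.store.LoadOrStore(key, value); !loaded {
		atomic.AddUint64(&d.count, 1) // genuinely new key
	} else {
		d.store.Store(key, value) // overwrite; count unchanged
	}
}

// Del removes a key, decrementing the counter only if the key existed.
func (d *DiceKVStore) Del(key string) {
	if _, loaded := d.store.LoadAndDelete(key); loaded {
		atomic.AddUint64(&d.count, ^uint64(0)) // atomic decrement
	}
}

// Count returns the number of live keys, which sync.Map alone cannot provide.
func (d *DiceKVStore) Count() uint64 {
	return atomic.LoadUint64(&d.count)
}

func main() {
	var d DiceKVStore
	d.Put("a", 1)
	d.Put("b", 2)
	d.Put("a", 3) // overwrite, count stays at 2
	d.Del("b")
	fmt.Println(d.Count()) // 1
}
```

Note the counter is only consistent if every mutation goes through `Put`/`Del`, which is presumably why the PR wraps the map in its own type rather than exposing `sync.Map` directly.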