Development router #6
Conversation
The argument allows the user of `0-fs` to add a caching layer by passing a local-router configuration file.
minor comments but otherwise LGTM
@@ -0,0 +1,81 @@
# Block routing
this file doesn't seem to be linked from anywhere in the docs, so unless you manually look in the repo you'll never find it
it's actually the other file, which I removed
docs/flists/router.md
Outdated
## Cache

A `router.yaml` can define a `cache` list, which lists a set of pools (one or more). A cache is always updated with a block once it's retrieved from the lookup. Usually an flist should not define a `cache` list; it's the user of the flist who would probably need to add his own `cache` entries, so blocks retrieved from remote are cached locally for faster access next time.

`0-fs` provides a simple way to merge two `router.yaml` files: a locally defined `router.yaml` with the one published by the flist itself. This way a user can hook his local infrastructure in for caching. For example, a user can has this local `router.yaml` file
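To make the idea concrete, a local cache-only `router.yaml` might look something like the sketch below. The exact schema is defined by `0-fs`; the pool name, hash range, and redis address here are illustrative assumptions, not the project's documented format.

```yaml
# Hypothetical local router.yaml, merged with the flist's own router file.
# Pool name, range, and address are placeholders.
pools:
  local-cache:
    00:FF: redis://10.0.0.5:6379   # a redis instance on the local network
cache:
  - local-cache                    # blocks fetched from remote get SET here
```

When `0-fs` merges this file with the flist's published router, remote lookups stay intact while retrieved blocks are additionally written to `local-cache`.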
a user can has
=> a user can have
storage/router/router.go
Outdated
will not succeed unless the cache is updated. The errors of cache SET
will only be logged though and should not cause the GET operation to fail
*/
func (r *Router) updateCache(src, key string, data []byte) error {
doing this sequentially will have a big impact on performance, can we do this async instead?
I know this can have a big impact on the system. I thought about doing it asynchronously, but I was worried that we might end up with too many goroutines trying to write blocks to redis, which could cause pool exhaustion. And with a limited number of workers (say 10) we could still hit a blocking limit again; we can never tell what the best number of goroutines to commit to the cache is.
I ended up doing it sequentially, since this only happens for blocks that are NOT retrieved from cache.
If you think having a limited number of workers (say 100) committing to the cache is a better solution, we can do it.
I would just have a single goroutine write to all the caches, so in the `Get` method do `go r.updateCache(src, key, data)`.
Regarding exhausting the redis pool, how is that a problem? This code runs in a goroutine, so you can even block while trying to get a connection from the redis pool; it doesn't matter, right? You don't block any code path.