Add Redis Cache for User table #3924
Conversation
Signed-off-by: Matt Krick <matt.krick@gmail.com>
@@ -80,7 +77,6 @@ export default class User {
    this.createdAt = createdAt || now
    this.picture = picture || `${AVATAR_BUCKET}/${avatarName}.png`
    this.updatedAt = updatedAt || now
-   this.emailVerified = emailVerified || false
`emailVerified` lives on `user.identities`
    this.redisCache.prime(table, docs)
    return this
  }
  async read<T extends keyof RethinkTypes>(table: T, id: string) {
`read` is very similar to DataLoader's `load` function.
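To make the comparison concrete, here is a minimal, hypothetical sketch of a DataLoader-style `read`: calls made in the same tick are deduplicated and flushed as a single batch via a microtask. The names (`CacheReader`, `fetchBatch`) are illustrative, not the PR's actual code.

```typescript
type BatchFetcher<T> = (ids: string[]) => Promise<(T | null)[]>

class CacheReader<T> {
  // in-flight promises, so concurrent reads for the same id are deduped
  private pending = new Map<string, Promise<T | null>>()
  private queue: {id: string; resolve: (doc: T | null) => void}[] = []
  constructor(private fetchBatch: BatchFetcher<T>) {}

  read(id: string): Promise<T | null> {
    const hit = this.pending.get(id)
    if (hit) return hit
    const promise = new Promise<T | null>((resolve) => {
      this.queue.push({id, resolve})
      // first enqueue of this tick schedules one flush for the whole batch
      if (this.queue.length === 1) queueMicrotask(() => this.flush())
    })
    this.pending.set(id, promise)
    return promise
  }

  private async flush() {
    const batch = this.queue.splice(0)
    const docs = await this.fetchBatch(batch.map((item) => item.id))
    batch.forEach((item, idx) => item.resolve(docs[idx] ?? null))
  }
}
```

Three `read` calls in one tick (two of them for the same id) result in a single two-id batch to the fetcher.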
@@ -52,6 +53,10 @@ const threadableLoaders = [

// TODO: refactor if the interface pattern is used a total of 3 times

export const users = () => {
  return new ProxiedCache('User')
This lets folks still call `dataLoader.get('users').load('user123')`, but now it will attempt to fetch from the cache instead of going directly to the DB.
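A hypothetical sketch of that idea: an object that keeps the DataLoader-style `.load(id)` interface but answers from a cache before falling back to the DB. The constructor parameters here are illustrative stand-ins, not the PR's actual `ProxiedCache` internals.

```typescript
class ProxiedCacheSketch<T> {
  constructor(
    private table: string,
    private cacheGet: (key: string) => T | undefined,
    private dbFetch: (table: string, id: string) => Promise<T>,
    private cacheSet: (key: string, doc: T) => void
  ) {}

  async load(id: string): Promise<T> {
    const key = `${this.table}:${id}`
    const cached = this.cacheGet(key)
    if (cached !== undefined) return cached // cache hit: no DB round trip
    const doc = await this.dbFetch(this.table, id)
    this.cacheSet(key, doc) // prime the cache for subsequent loads
    return doc
  }
}
```

Because the call signature matches DataLoader's `load`, existing resolver code does not need to change.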
@@ -0,0 +1,28 @@
/*
@tianrunhe here's a general overview of the new architecture
private callbacks = [] as {resolve: (payload: any) => void; reject: (err: any) => void}[]
private cacheHits = [] as (() => void)[]
private writes = [] as (RWrite<any> & {resolve: (payload: any) => void})[]
private ttl = ms('1h')
Can we make this a configuration/env value? The benefit is that for local development we could set it to a rather low value, so we could test the caching locally.
For sure. We could make the TTL an argument to the constructor, but since this is a singleton I figured it'd be just as easy to change it here. No strong preference, so if you've got one, feel free!
I try to keep the .env free of client variables, with the exception of 3rd-party client keys like Google, GitHub, etc., since those vary based on environment.
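If the env-var route is taken, a minimal sketch could look like the following. `CACHE_TTL_MS` is a hypothetical variable name, and the default mirrors the PR's `ms('1h')` without pulling in the `ms` package.

```typescript
const DEFAULT_TTL_MS = 60 * 60 * 1000 // 1h, same value as ms('1h') in the PR

// Read an optional TTL override (in milliseconds) from the environment,
// falling back to the default when unset or unparsable.
const getCacheTTL = (env: Record<string, string | undefined> = process.env) => {
  const raw = env.CACHE_TTL_MS // e.g. '30000' for quick local cache-expiry testing
  const parsed = raw ? Number(raw) : NaN
  return Number.isFinite(parsed) && parsed > 0 ? parsed : DEFAULT_TTL_MS
}
```

Passing the env object in as a parameter keeps the helper easy to unit-test.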
redis.multi(writes).exec()
return keys.map((key, idx) => {
  const cachedDoc = cachedDocs[idx]
  if (cachedDoc) return JSON.parse(cachedDoc)
Why not just return `parsedDocs[idx]`?
that is a fantastic find! will fix
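The agreed fix, sketched with illustrative stand-ins for the PR's variables: parse each cached JSON string once into a `parsedDocs` array and reuse it, instead of calling `JSON.parse` again inside the per-key map.

```typescript
// keys: the redis keys requested; cachedDocs: the raw MGET results (JSON or null)
const buildResults = (keys: string[], cachedDocs: (string | null)[]) => {
  // parse once up front…
  const parsedDocs = cachedDocs.map((doc) => (doc ? JSON.parse(doc) : null))
  // …then reuse the parsed value per key, with no second parse on a hit
  return keys.map((_key, idx) => parsedDocs[idx])
}
```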
LGTM overall. Some comments and questions out of curiosity.
Today, almost every single request requires looking up the User in the DB.
To minimize hits to the DB, we use a [DataLoader](https://github.com/graphql/dataloader) inside the GraphQL resolvers. This reduces the number of DB hits to 1 per request, since the DataLoader is disposed of after the request completes.
We can further reduce hits to the DB by caching the User in redis & busting the cache when the User is updated.
We can do better still by batching the queries together so only 1 message is sent to the DB.
We can use that same batching strategy for the cache so only 1 message is sent to redis.
We can also cache the document in memory to avoid repeatedly hitting redis & parsing the JSON.
This PR implements this batching + 3-stage cache strategy for all reads to the User table.
- Calling `db.read` returns the document locally, from redis, or from the DB.
- Calling `db.write` updates the document in the DB and updates the redis and local caches.

For now, this is ONLY implemented for reads to the User table when using the userId primary key.
In the future, we can reuse this strategy for foreign keys & more types
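The 3-stage read described above (local memory → redis → DB, priming the earlier stages on the way back) can be sketched as follows. The redis/DB clients are in-memory stand-ins so the control flow is runnable; none of these names are the PR's actual code.

```typescript
type Doc = {id: string; [key: string]: unknown}

class ThreeStageCache {
  private local = new Map<string, Doc>() // stage 1: per-process memory

  constructor(
    private redisMGet: (keys: string[]) => Promise<(string | null)[]>, // stage 2
    private redisSet: (key: string, json: string) => Promise<void>,
    private dbBatchGet: (ids: string[]) => Promise<Doc[]> // stage 3
  ) {}

  async readMany(table: string, ids: string[]): Promise<Doc[]> {
    const keys = ids.map((id) => `${table}:${id}`)
    const missing: number[] = []
    const result: (Doc | undefined)[] = keys.map((key, idx) => {
      const doc = this.local.get(key)
      if (!doc) missing.push(idx)
      return doc
    })
    if (missing.length > 0) {
      // one MGET covers everything the local cache missed
      const cached = await this.redisMGet(missing.map((idx) => keys[idx]))
      const dbMisses: number[] = []
      cached.forEach((json, i) => {
        const idx = missing[i]
        if (json) {
          const doc = JSON.parse(json) as Doc
          this.local.set(keys[idx], doc) // prime stage 1
          result[idx] = doc
        } else {
          dbMisses.push(idx)
        }
      })
      if (dbMisses.length > 0) {
        // one batched DB query covers everything redis missed, then prime both caches
        const docs = await this.dbBatchGet(dbMisses.map((idx) => ids[idx]))
        await Promise.all(
          docs.map((doc, i) => {
            const idx = dbMisses[i]
            this.local.set(keys[idx], doc)
            result[idx] = doc
            return this.redisSet(keys[idx], JSON.stringify(doc))
          })
        )
      }
    }
    return result as Doc[]
  }
}
```

The first `readMany` for a set of ids costs one MGET plus one DB query; a repeat read in the same process costs nothing but Map lookups.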
One severe limitation: this only works with a single node instance. If node1 sends a write, its local cache, redis, and the DB are all updated; however, node2's local cache will now be stale. To fix this, we can use Redis' new client-side caching, but that feature is only in v6, which DO doesn't offer yet.
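Until Redis 6's client-side caching is available, one common interim pattern for the stale-node problem is a pub/sub invalidation channel: every write publishes the affected key, and every node drops that key from its local cache. This is a sketch of the pattern only, with an in-memory bus standing in for a real redis subscriber; it is not part of this PR.

```typescript
type Handler = (key: string) => void

// In-memory stand-in for a redis pub/sub channel
class InvalidationBus {
  private handlers: Handler[] = []
  subscribe(handler: Handler) {
    this.handlers.push(handler)
  }
  publish(key: string) {
    this.handlers.forEach((h) => h(key))
  }
}

// Each node instance keeps its own local cache and busts entries on any
// node's published write, so no instance serves a stale document.
class NodeCache {
  private local = new Map<string, string>()
  constructor(bus: InvalidationBus) {
    bus.subscribe((key) => this.local.delete(key))
  }
  get(key: string) {
    return this.local.get(key)
  }
  set(key: string, value: string) {
    this.local.set(key, value)
  }
}
```

The trade-off is an extra publish per write, in exchange for correctness across instances.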
TEST