Prefetch syncs to redis #1446

Draft: wants to merge 25 commits into main
Conversation

kwitsch (Collaborator) commented Apr 12, 2024

Changes:

  • reloadCacheEntry removes EDNS0 records from response
  • reloadCacheEntry publishes its result to Redis

Closes #1422

Related #1420

@kwitsch kwitsch added the 🐞 bug Something isn't working label Apr 12, 2024
@kwitsch kwitsch added this to the v0.24 milestone Apr 12, 2024

codecov bot commented Apr 12, 2024

Codecov Report

Attention: Patch coverage is 62.98077%, with 77 lines in your changes missing coverage. Please review.

Project coverage is 92.92%. Comparing base (fe84ab8) to head (7b2fcc2).
Report is 65 commits behind head on main.

Files Patch % Lines
util/dns.go 17.85% 44 Missing and 2 partials ⚠️
redis/redis.go 80.64% 9 Missing and 9 partials ⚠️
resolver/caching_resolver.go 76.78% 8 Missing and 5 partials ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #1446      +/-   ##
==========================================
- Coverage   93.88%   92.92%   -0.97%     
==========================================
  Files          78       79       +1     
  Lines        6361     5030    -1331     
==========================================
- Hits         5972     4674    -1298     
+ Misses        300      251      -49     
- Partials       89      105      +16     


@kwitsch kwitsch enabled auto-merge (squash) April 12, 2024 19:36
ThinkChaos (Collaborator) left a comment
I have a couple comments but it's mostly stuff I noticed about the existing code when comparing with the new one!

resolver/caching_resolver.go (outdated review thread, resolved)
} else {
util.LogOnError(ctx, fmt.Sprintf("can't prefetch '%s' ", domainName), err)

if r.redisClient != nil {
ThinkChaos (Collaborator):

The behavior here is slightly different from putInCache: if .Pack() failed, we're not sending the response to Redis. I think not sending it is better, so maybe update putInCache to match?
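
A minimal sketch of the guard being discussed here; the publisher interface and function name are invented for illustration, not taken from blocky's actual API:

package example

import (
    "fmt"

    "github.com/miekg/dns"
)

// publisher stands in for blocky's Redis client (assumed interface).
type publisher interface {
    PublishCache(key string, res *dns.Msg)
}

// packAndMaybePublish packs the response for the local cache and only
// forwards it to Redis when packing succeeded, so Redis never receives a
// response that could not be serialized.
func packAndMaybePublish(p publisher, key string, res *dns.Msg) ([]byte, error) {
    packed, err := res.Pack()
    if err != nil {
        // packing failed: skip the Redis publish entirely
        return nil, fmt.Errorf("can't pack response for '%s': %w", key, err)
    }

    if p != nil {
        p.PublishCache(key, res)
    }

    return packed, nil
}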

kwitsch (Collaborator, Author):

You're correct: sending it when Pack() fails is an error in putInCache.

r.redisClient.PublishCache(cacheKey, &res)
}

return &packed, r.adjustTTLs(response.Res.Answer)
ThinkChaos (Collaborator):

This is also a slight difference compared to the putInCache flow: here Redis sees the unadjusted TTLs, whereas when we add values using putInCache, Redis gets the adjusted TTLs.
My initial reaction is that this is correct and the other code should change, since other blocky instances might have different TTL configuration.

kwitsch (Collaborator, Author):

Sending the unadjusted TTL to Redis isn't a problem, since received entries get their TTLs adjusted before they are put in the local cache.

This way instances with different configs can share the same sync channel without breaking each other's settings.

That was the base idea when I implemented it. I get that this is counterintuitive and should be changed.
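
As a rough illustration of the receive-side adjustment described above (simplified, not blocky's actual code; the clamp bounds stand in for this instance's TTL configuration):

package example

import "github.com/miekg/dns"

// adjustReceivedTTLs clamps the TTLs of a response that arrived over the
// Redis sync channel to this instance's own limits before it goes into the
// local cache, so the sender's TTL settings never leak across instances.
func adjustReceivedTTLs(msg *dns.Msg, minTTL, maxTTL uint32) {
    for _, rr := range msg.Answer {
        h := rr.Header()
        if h.Ttl < minTTL {
            h.Ttl = minTTL
        }
        if h.Ttl > maxTTL {
            h.Ttl = maxTTL
        }
    }
}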

Comment on lines 125 to 126
// don't cache any EDNS OPT records
util.RemoveEdns0Record(respCopy)
ThinkChaos (Collaborator):

Nice catch!

Based on my other comments, and this being missing before, I think we should try to factor the code into a single func we can call from both here and putInCache. Maybe something that adds the response to Redis and returns the local cache TTL plus the entry with adjusted TTLs.
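
One possible shape for such a shared helper, sketched under assumptions: only RemoveEdns0Record, PublishCache and adjustTTLs correspond to names that appear in this PR; everything else is invented for illustration.

package example

import (
    "time"

    "github.com/miekg/dns"
)

// redisPublisher stands in for blocky's Redis client (assumed interface).
type redisPublisher interface {
    PublishCache(key string, res *dns.Msg)
}

// prepareAndPublish strips EDNS0 data (stripEdns standing in for
// util.RemoveEdns0Record), publishes the copy to Redis with its original
// TTLs, and returns the copy plus the locally adjusted cache TTL, so that
// putInCache and reloadCacheEntry can share one code path.
func prepareAndPublish(
    p redisPublisher,
    stripEdns func(*dns.Msg),
    adjustTTLs func([]dns.RR) time.Duration,
    key string,
    res *dns.Msg,
) (*dns.Msg, time.Duration) {
    respCopy := res.Copy()

    // don't cache or publish any EDNS OPT records
    stripEdns(respCopy)

    if p != nil {
        // Redis gets the unadjusted TTLs; receiving instances apply
        // their own TTL configuration on receipt
        p.PublishCache(key, respCopy)
    }

    return respCopy, adjustTTLs(respCopy.Answer)
}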

@kwitsch kwitsch disabled auto-merge April 12, 2024 22:09
kwitsch (Collaborator, Author) commented Apr 13, 2024

@ThinkChaos I started refactoring the whole caching_resolver.
It may take a bit more time, but I think there's some potential for improvement regarding code duplication and inconsistent behavior.

Thanks for the input. 👍

I'll let you know when I'm done.

@kwitsch kwitsch marked this pull request as ready for review April 19, 2024 21:53
kwitsch (Collaborator, Author) commented Apr 19, 2024

@ThinkChaos it still needs some tests, but may I get your opinion on the current changes?
I'm a bit unsure whether the direction I took is an improvement. 🤔

r.redisClient.PublishCache(cacheKey, cacheCopy)
}

return &packed, util.ToTTLDuration(ttl)
ThinkChaos (Collaborator):

I think the biggest cause of duplicated code is that here we need to return the new cache entry, but in r.Resolve we need to call r.resultCache.Put.
So we could change the prefetch API to be more like the normal caching flow:
PrefetchingOptions.ReloadFn func(ctx context.Context, key string) (*T, time.Duration)
could become something like OnAboutToExpire func(ctx context.Context, key string),
where the callback is expected to call Put if it wants to update the value.
That way reloadCacheEntry could become a tiny wrapper around r.Resolve (or a new resolve method they both call).

From a quick check, ReloadFn is only used by this code (and the cache tests), and Put allows replacing a value, so I think that should be doable!

From there I think most things will fall into place :)
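
To make the two shapes concrete, here is a rough sketch of the options struct being discussed; only the ReloadFn signature is quoted from the comment above, the struct and everything around it are assumptions:

package example

import (
    "context"
    "time"
)

// PrefetchingOptions contrasts the current and the proposed callback shape.
type PrefetchingOptions[T any] struct {
    // current shape: the cache calls this and stores the returned value
    ReloadFn func(ctx context.Context, key string) (*T, time.Duration)

    // proposed shape: the cache only notifies; the callback calls Put
    // itself if it wants to refresh the entry, so reloadCacheEntry can
    // become a thin wrapper around r.Resolve
    OnAboutToExpire func(ctx context.Context, key string)
}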

kwitsch (Collaborator, Author):

Thanks for the input. I think that was the part that caused me the most headaches. 😅

I'll also try to refactor the caches to streamline the resolver behavior. 👍

Comment on lines 206 to 212
ttl := r.modifyResponseTTL(response.Res)
if ttl > noCacheTTL {
    cacheCopy := r.putInCache(logger, cacheKey, response)
    if cacheCopy != nil && r.redisClient != nil {
        r.redisClient.PublishCache(cacheKey, cacheCopy)
    }
}
ThinkChaos (Collaborator):

This still adjusts the TTL twice, but if we can get everything down to a single code path, putInCache can also publish to Redis, which will make this whole part simpler and avoid having to return a copy just to use it a couple of lines later.

kwitsch (Collaborator, Author):

Maybe wrap it in another function? 🤔

I tried removing the publish boolean parameter since it seemed misplaced.
On the other hand, a method to publish without putting is also needed (prefetching)... 🥴
I'm having a little trouble fitting these three requirements into one logical block.

ThinkChaos (Collaborator):

> On the other hand, a method to publish without putting is also needed (prefetching)... 🥴

Not sure if you saw my other comment, but I think we can make this go away without too much trouble.

kwitsch (Collaborator, Author):

Already commented on it. 👍

Sorry, I read and replied in reverse order. 🫣

ThinkChaos (Collaborator):

I had to check out the branch locally to browse it in my editor, because it is indeed a lot of function calls combined in a couple of different ways, so it's definitely hard to follow!

kwitsch (Collaborator, Author) commented Apr 20, 2024

> I had to check out the branch locally to browse it in my editor, because it is indeed a lot of function calls combined in a couple of different ways, so it's definitely hard to follow!

Yeah I know, that's one of the reasons why I was a bit unsure about the direction. 🫣

@kwitsch kwitsch marked this pull request as draft April 20, 2024 20:25
kwitsch (Collaborator, Author) commented Apr 20, 2024

Good thing I looked deeper into the caches, since I found a remaining context.Background(): https://github.com/0xERR0R/blocky/blob/main/cache%2Fexpirationcache%2Fexpiration_cache.go#L129
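
As a generic illustration of that kind of fix (names are invented; this is not the actual expiration_cache.go code): thread the caller's context through instead of creating a fresh one, so cancellation and deadlines propagate into the reload.

package example

import "context"

// refreshEntry passes the caller's ctx on instead of calling
// context.Background(), so cancellation reaches the reload callback.
func refreshEntry(ctx context.Context, reload func(context.Context, string), key string) {
    // before: reload(context.Background(), key) dropped cancellation
    reload(ctx, key)
}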
