
Implement CachingExecutor using cache TTL, deprecate old CachedExecutor #129

Merged 4 commits on Jul 8, 2019



@clue clue commented Jul 8, 2019

This PR implements a whole new CachingExecutor and deprecates the existing CachedExecutor because it is broken beyond repair. As part of this changeset, we now ensure that we cache the whole response message including all records from the RRset (fixes #119). Additionally, we no longer cache truncated response messages, as per the RFC.

This initial version always used a 60s cache TTL for each response message, irrespective of the TTL indicated for each record (see #81). We believed this was a reasonable compromise for an initial version that should not affect most common use cases, with a follow-up PR to implement more sophisticated TTL logic that respects the TTL values for each record (within reasonable limits as discussed in #81 and #116).

Update: this implementation now respects the TTL indicated for each record (see #81) and uses a 60s cache TTL for negative responses.
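The caching rules described above can be sketched roughly as follows. This is an illustrative sketch only: the function and constant names are hypothetical and do not reflect the actual reactphp/dns API.

```php
<?php
// Illustrative sketch only: names here are hypothetical, not the real reactphp/dns API.

const NEGATIVE_TTL = 60; // fixed 60s cache TTL for negative (empty) responses

/**
 * Decide how long a DNS response message may be cached.
 * Returns null when the response must not be cached at all.
 *
 * @param bool  $truncated  TC bit of the response message
 * @param int[] $recordTtls TTL values of all records in the answer RRset
 */
function cacheTtl(bool $truncated, array $recordTtls): ?int
{
    if ($truncated) {
        return null; // never cache truncated response messages, as per the RFC
    }

    if ($recordTtls === []) {
        return NEGATIVE_TTL; // negative response: cache for a fixed 60s
    }

    return min($recordTtls); // respect the smallest record TTL in the RRset
}
```

For example, a response whose records carry TTLs of 300s, 120s, and 3600s would be cached for 120s, while a truncated response would bypass the cache entirely.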

Resolves #119
Resolves #81
Builds on top of #127

@clue clue added this to the v0.4.18 milestone Jul 8, 2019
@WyriHaximus WyriHaximus requested review from WyriHaximus and jsor Jul 8, 2019

@WyriHaximus WyriHaximus left a comment

LGTM :shipit:



@clue clue commented Jul 8, 2019

@WyriHaximus I've just updated this to respect the TTL values given in each record 👍



@WyriHaximus WyriHaximus commented Jul 8, 2019

@clue awesome!


@WyriHaximus WyriHaximus self-requested a review Jul 8, 2019
jsor approved these changes Jul 8, 2019
