
Cache policy #546 (Merged)

merged 10 commits into master from cache-policy on Jan 17, 2018

Conversation

davidor (Contributor) commented Jan 9, 2018:

First version of the caching policy.

The functionality is almost the same as the behavior that can be achieved using APICAST_BACKEND_CACHE_HANDLER. In addition to the two supported modes (strict, resilient), this policy adds the possibility of configuring APIcast not to use a cache and to make all the calls synchronously.

This does not break compatibility. If this policy is not included, users can still use APICAST_BACKEND_CACHE_HANDLER.

This PR duplicates some of the code in cache_handler.lua. I think that's fine because we'll get rid of the duplication once we split Apicast into smaller policies.

"properties": {
"exit": {
"type": "caching_type",
"enum": ["resilient", "strict"]
Contributor commented:

We could offer none as suggested in https://issues.jboss.org/browse/THREESCALE-587.

davidor (Contributor Author) replied:

👍

@davidor force-pushed the cache-policy branch 4 times, most recently from 69cf214 to 37f2f06, on January 15, 2018 11:14
@davidor changed the title from "[WIP] Cache policy" to "Cache policy" on Jan 15, 2018
   -- cached_key is set in post_action and it is in in authorize
   -- so to not write the cache twice lets write it just in authorize

   if fetch_cached_key(cached_key) ~= cached_key then
-    ngx.log(ngx.INFO, 'apicast cache write key: ', cached_key, ', ttl: ', ttl, ' sub: ')
+    ngx.log(ngx.INFO, 'apicast cache write key: ', cached_key,
+            ', ttl: ', ttl, ' sub: ')
Contributor commented:

Hm. The sub: there does not really say anything. I guess we can remove it, as it is some copy-paste error.

davidor (Contributor Author) replied Jan 15, 2018:

Ah you're right. I didn't pay attention to this line and then copied it to the new policy.

local function strict_handler(cache, cached_key, status_code, ttl)
  if status_code == 200 then
    ngx.log(ngx.INFO, 'apicast cache write key: ', cached_key,
            ', ttl: ', ttl, ' sub: ')
Contributor commented:

Here the sub: also has nothing after it.

end

local function disabled_cache_handler()
  return function() end
Contributor commented:

This is going to allocate a function every time it is executed. That is not necessary, no?

davidor (Contributor Author) replied:

Right. We can choose one in the initializer.

Contributor commented:

This line can be removed, right?
Or maybe we could add a debug log line saying that it skipped cache.

davidor (Contributor Author) replied:

👍
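Combining the two suggestions above, a minimal sketch (not the merged code) would resolve the handler once, up front, and make the disabled handler a single shared function that logs that the cache was skipped. The handlers table and handler() lookup follow the snippets shown elsewhere in this PR; the debug message is an assumption.

-- Sketch only: one shared no-op handler instead of a new closure per call,
-- with a debug line so it is visible that the cache write was skipped.
local function disabled_cache_handler()
  ngx.log(ngx.DEBUG, 'cache disabled, skipping cache write')
end

local handlers = {
  resilient = resilient_handler,
  strict = strict_handler,
  none = disabled_cache_handler
}

-- Resolved once (for example from the policy initializer), not per request.
local function handler(caching_type)
  return handlers[caching_type] or disabled_cache_handler
end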

if not config.caching_type then
  ngx.log(ngx.ERR, 'Caching type not specified. Cache disabled.')
  res.caching_type = 'none'
elseif config.caching_type == 'resilient' or
Contributor commented:

This would be nicer done by checking a key in a table, no?
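A rough sketch of the table-based check being suggested here (valid_caching_types is an illustrative name, not necessarily what was merged):

-- Sketch: validate caching_type with a lookup table instead of a chain of
-- elseif comparisons.
local valid_caching_types = {
  resilient = true,
  strict = true,
  none = true
}

if valid_caching_types[config.caching_type] then
  res.caching_type = config.caching_type
else
  ngx.log(ngx.ERR, 'Caching type not specified or invalid. Cache disabled.')
  res.caching_type = 'none'
end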

  return
end

context.proxy.cache_handler = handler(self.config.caching_type)
Contributor commented:

What if this policy just exported the correct key in its :export() method?
Then the APIcast policy could set the cache_handler on the proxy if it is present in the context.

Then the "ownership" of the proxy object would still be in the APIcast module.
I'd be a bit worried that this is pretty deep access into the internals of a module from some unrelated part of the code. The APIcast module initializes the Proxy object, so it could pass the cache handler as an argument, for example.

davidor (Contributor Author) replied:

Definitely. I was thinking about the phases (rewrite, access, etc.) and forgot about export().

davidor (Contributor Author) added:

It turns out that export() is not working correctly for policies included in the local policy chain.

I suggest we address that problem in a separate PR. I've been investigating a bit and didn't find a straightforward solution.

To address your concern, in the rewrite phase of the cache policy I included the cache_handler in the context. Later, the APIcast policy checks whether it has been included and, if so, overrides the cache_handler method of the Proxy instance that it owns.
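In rough terms, that approach could look like the sketch below. The phase names and the apicast/self.proxy fields are assumptions based on this thread, not the exact merged code.

-- In the caching policy: expose the chosen handler through the shared context.
function _M:rewrite(context)
  context.cache_handler = handlers[self.config.caching_type]
end

-- Later, in the APIcast policy (which owns the Proxy instance), pick the
-- handler up if a caching policy put one in the context.
function apicast:access(context)
  if context.cache_handler then
    self.proxy.cache_handler = context.cache_handler
  end
  -- ... rest of the phase ...
end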

@@ -34,28 +34,29 @@ local function fetch_cached_key()
   return ok and stored
 end

-function _M.handlers.strict(cache, cached_key, response, ttl)
-  if response.status == 200 then
+function _M.handlers.strict(cache, cached_key, status_code, ttl)
Contributor commented:

This is a breaking change for someone who has written a custom handler, right?

davidor (Contributor Author) replied:

Yes. I didn't think about users overriding this method.
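For someone maintaining a custom handler, the update would look roughly like this (my_handler and its body are purely illustrative):

-- Before this PR, a custom handler received the whole backend response:
function _M.handlers.my_handler(cache, cached_key, response, ttl)
  if response.status == 200 then
    -- ... write the authorization to the cache ...
  end
end

-- After this PR, it receives only the status code:
function _M.handlers.my_handler(cache, cached_key, status_code, ttl)
  if status_code == 200 then
    -- ... write the authorization to the cache ...
  end
end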

@davidor force-pushed the cache-policy branch 2 times, most recently from f3b4837 to f449e36, on January 15, 2018 17:53
@davidor changed the title from "Cache policy" to "[WIP] Cache policy" on Jan 15, 2018
This is based on the current CacheHandler module. Many parts of the code
are duplicated, but this duplication will disappear once we split the
current Apicast policy into smaller ones.

This does not break the current Apicast cache. Users can still use the
APICAST_BACKEND_CACHE_HANDLER to configure the cache.
@davidor changed the title from "[WIP] Cache policy" to "Cache policy" on Jan 17, 2018
davidor (Contributor Author) commented Jan 17, 2018:

@mikz I addressed your comments. Ready for review.

@davidor requested a review from mikz on January 17, 2018 11:24
mikz (Contributor) left a comment:

Looking good!

I think it would be good to address one super minor thing: https://github.com/3scale/apicast/pull/546/files#r162066396

@davidor merged commit db74364 into master on Jan 17, 2018
@davidor deleted the cache-policy branch on January 17, 2018 15:28
local handlers = {
  resilient = resilient_handler,
  strict = strict_handler,
  none = disabled_cache_handler
Contributor commented:

@davidor we should also have an "allow" handler that will allow the request when the backend is down.
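A hypothetical sketch of such an "allow" handler, only suggested in this comment and not part of the merged PR; the exact semantics and the cache:get/cache:set calls are assumptions:

-- Hypothetical "allow" handler: when backend reports a server error, keep
-- letting requests through unless a verdict is already cached for this key.
local function allow_handler(cache, cached_key, status_code, ttl)
  if status_code == 200 then
    cache:set(cached_key, 200, ttl or 0)
  elseif status_code >= 500 then
    -- backend unreachable: allow, unless the key already has a cached verdict
    if not cache:get(cached_key) then
      cache:set(cached_key, 200, ttl or 0)
    end
  else
    -- an explicit denial from backend is cached so the key keeps being denied
    cache:set(cached_key, status_code, ttl or 0)
  end
end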
