Rails.cache.fetch unable to retrieve previous cached value if expired #45311
Comments
Hmm, I'm not sure I follow your example. I assume the …
The intent is to get the entry just prior to the `begin` block and use its value inside the `rescue`; I just didn't add it a second time. I've updated the example to make the flow clearer.
But if you can get the entry just before the `begin` block, doesn't that mean it hasn't expired? In which case `fetch` would not call its block, as it can also get the entry.
Nope. The …
Not sure what you are trying to do, but it looks a lot like the …
As discussions are also occurring on the PR: this does not work. See my reply there — if an error occurs inside the yield block, the opportunity to retrieve the previous value is lost. The previous entry is deleted before the yield block executes, so I cannot retrieve it after the fact.
Ok, I didn't see the PR. I skimmed it and still don't understand your use case. But anyway, making …
@byroot I have a very concrete example provided in the steps to reproduce at the opening of this discussion. My use case is simple: when I try to retrieve a new value to cache from an external service (the yield block) and that retrieval raises an error, I am unable to get a new value. When that happens, I want my consumers to be unaware that a downstream service call failed and to continue working with the previous cached value. So I want to return that previous cached value and simply retry the downstream service at a later point, when it may or may not be back up. The way to accomplish this is to place the previous entry's value back into the cache for a designated period, and let the normal flow retry the next time the cache expires again.
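A minimal sketch of the behavior described above, using a plain Ruby `Hash` as a stand-in store (no Rails involved). `StaleCache` and its methods are hypothetical names for illustration, not part of Rails' API:

```ruby
# Sketch: a fetch that falls back to the stale value when the block raises,
# re-injecting it so later callers keep working until the next expiry.
class StaleCache
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @store = {}
  end

  def fetch(key, expires_in:)
    entry = @store[key]
    return entry.value if entry && Time.now < entry.expires_at

    begin
      fresh = yield
      @store[key] = Entry.new(fresh, Time.now + expires_in)
      fresh
    rescue StandardError
      raise unless entry # no stale value to fall back on
      # Re-inject the stale value for another expiry period, so the
      # refresh is retried later instead of failing every request.
      @store[key] = Entry.new(entry.value, Time.now + expires_in)
      entry.value
    end
  end
end
```

With this, `cache.fetch('foo', expires_in: 60) { raise 'service down' }` returns the previously cached value instead of raising, provided one exists.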
Ok, that makes more sense. What I propose is to either modify or add an option to … But …
Cool, thanks @byroot. So, I added in my example:

```ruby
key = 'foo'
args = { expires_in: 10, race_condition_ttl: 60 }
Rails.cache.fetch(key, args) { 'bing' }
sleep(10)
# entry = Rails.cache.send(:read_entry, key, args)
begin
  value = Rails.cache.fetch(key, args) { raise 'error' }
  puts "I made it here with #{value}"
rescue StandardError
  puts "I landed in the exception case"
end
```

The … If the …
Ok, I think I see where the confusion is. I think it also somewhat relates to #41344. What about:

```ruby
begin
  value = Rails.cache.fetch(key, args) { raise 'error' }
  puts "I made it here with #{value}"
rescue StandardError
  Rails.cache.read(key, stale: true)
end
```

It means two reads instead of one, but as a fallback mechanism it's probably ok.
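The two-read fallback above can be sketched in plain Ruby. Note the `stale: true` read option is the proposal under discussion, not an existing Rails option, and this stub (unlike the current Rails behavior being reported) assumes the store keeps expired entries around rather than deleting them:

```ruby
# Hypothetical stand-in for a store whose read supports `stale: true`.
class StaleReadCache
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @store = {}
  end

  def fetch(key, expires_in:)
    entry = @store[key]
    return entry.value if entry && Time.now < entry.expires_at

    value = yield # may raise; the old entry is intentionally kept
    @store[key] = Entry.new(value, Time.now + expires_in)
    value
  end

  # Unlike a normal read, `stale: true` returns the value even after expiry.
  def read(key, stale: false)
    entry = @store[key]
    return nil unless entry
    return entry.value if stale || Time.now < entry.expires_at

    nil
  end
end

cache = StaleReadCache.new
cache.fetch('foo', expires_in: 0) { 'bing' } # seed; expires immediately

value =
  begin
    cache.fetch('foo', expires_in: 60) { raise 'service down' }
  rescue StandardError
    cache.read('foo', stale: true) # fall back to the stale value
  end
```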
You are correct that I have a single process that wants to read, and then regenerate if it fails, as a fallback. I've wondered how the …

Let me give you the real-world scenario I ran into. I had a cache set to four hours for retrieving some data. The external service I call went down for over six hours. After all 500+ of my Rails applications, each with multiple processes, expired their cache entries, they absolutely bombarded the dead service. Once that service tried to come back online, so many services attempted to retrieve a new value to cache that they DDoS'ed the starting service and essentially took it back down.

What I want to solve is that if a service goes down, my service can run indefinitely with cached results while attempting to refresh its data at each cache TTL interval.

What I have implemented as a workaround for now is a dual-cache system with a short-term and a long-term cache: if I lose my short-term cached value, I pull from the long-term cache, stuff it back into the short-term cache, and reset the long-term TTL. This way, if that service goes down for six hours, all my Rails applications continue to work without interruption. I understand that if any of my services restarted and lost its cache, it would fail, and that is expected. Maybe at that point I should look into …

I am only looking to mitigate the situation for the population that knows of the existing value, and keep them continuing as if nothing is wrong.
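The dual-cache workaround described above can be sketched with two plain Hashes standing in for the two cache stores; `DualCache` and its method names are hypothetical, not anything from Rails:

```ruby
# Sketch: a short-term cache backed by a long-term one. When a refresh
# fails, the long-term value is restored into the short-term cache and
# the long-term TTL is reset, so consumers never see the outage.
class DualCache
  Entry = Struct.new(:value, :expires_at)

  def initialize(short_ttl:, long_ttl:)
    @short_ttl = short_ttl
    @long_ttl  = long_ttl
    @short = {}
    @long  = {}
  end

  def fetch(key)
    entry = @short[key]
    return entry.value if entry && Time.now < entry.expires_at

    begin
      fresh = yield
      write_both(key, fresh)
      fresh
    rescue StandardError
      backup = @long[key]
      raise unless backup # a cold restart has no backup; expected to fail
      write_both(key, backup.value) # restore and reset both TTLs
      backup.value
    end
  end

  private

  def write_both(key, value)
    @short[key] = Entry.new(value, Time.now + @short_ttl)
    @long[key]  = Entry.new(value, Time.now + @long_ttl)
  end
end
```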
I don't think so. It's hard for a framework to assume that responses X seconds stale are acceptable for its users; that's extremely use-case dependent. That said, unless I'm mistaken, all options can be passed to the constructor, so you can configure your cache with a default value to …
Maybe it will be easier for you in this case not to use caching, but to retrieve the values from the 3rd party, store them in Redis indefinitely, use them where you need them, and just periodically refresh them.
@fatkodima but then I would have to call another external service to retrieve that data, and then cache it locally so I don't waste processing time making that external call. I've just changed the potential point of failure, and the issue could still arise if my Redis service goes down. If you're suggesting I start a Redis instance locally on the same server, that seems like a lot of overhead (and expense) for something that could simply be baked into the existing cache design.
I believe @fatkodima is saying to just use Redis on your own, but without setting any expirations on the keys:

```ruby
Redis.current.hmset('cachedthing', 'value', 'This is my cached value', 'at', Time.now.to_f)
result = Redis.current.hgetall('cachedthing')
#=> {"value"=>"This is my cached value", "at"=>"1655534182.348695"}

if !result['at'] || Time.at(result['at'].to_f) < 10.minutes.ago
  # enqueue an ActiveJob to fetch the new value and hmset it
end
return result['value']
```
Wait, what am I saying? You'd just need to use the Rails cache without an expiration:

```ruby
value, cached_at = Rails.cache.fetch('cachedthing') do # don't set expires_in: 10.minutes here
  ['This is my cached value', Time.now]
end

if !cached_at || cached_at < 10.minutes.ago
  # enqueue an ActiveJob to fetch the new value and write it into the cache
end
return value
```

Alternatively, you could do:

```ruby
Rails.cache.fetch('cachedthing') do # don't set expires_in: 10.minutes here
  MyCacheJob.set(wait: 10.minutes).perform_later # enqueue a job to rebuild the cache in 10 minutes
  'This is my cached value'
end
```
This issue has been automatically marked as stale because it has not been commented on for at least three months. |
Steps to reproduce
Problem statement
If an unexpected error occurs within the yield block, say because a service call is down for a short period, it would be nice to have the option to retrieve the existing cached value and re-inject it into the cache, to be checked again when the new cache period expires. This would allow servers to continue functioning with a value, rather than crashing because the service they consume for updates went down, making my server more resilient to problems in the services it depends on.
Actual behavior
If the yield block fails, the previous cached value is deleted and no longer accessible after the fact.
Enhancement request
My suggestion is to make `read_entry` publicly accessible, allowing consumers to first hold the previous result prior to calling `fetch`. If `fetch` raises an error, the consumer can then catch their acceptable errors, determine their retries for exponential backoff, and then `write` the previous value back into the cache with a wait period before attempting the 'refresh' of their cached content.

System configuration
Rails version:
5.2 or greater
Ruby version:
2.6 or greater