Solid Cache should be the default caching backend for Rails 8 #50443
Could this be a switch like
Would
As mentioned in person in Amsterdam, I don't think SolidCache is a good default because:
Given that a very large share of users (especially people trying Rails for the very first time) are using a managed database with fairly low limits on the number of rows and total storage, it would be a big footgun.

Even for users going the rented server + Kamal route, hosting a

And even for users going with SQLite, it means they are on a single server, so

I agree it's annoying that we can't have a cache store set up by default, but I don't think SolidCache is the solution. If anything, I'd be more in favor of enabling the
Here are the problems I'd like to solve:
But let's validate whether these design goals are compatible with the performance envelopes of common, low-end VMs. I'd be surprised if you ran into DB-related issues, given a low default Solid Cache limit, before you ran out of other resources at the low end. If anything, the reverse may well be true: a more effective and longer-lived cache makes your app perform better, even if it leans on a constrained DB. But this would be good to test!

So let's open this issue to people who'd like to help us discern those factors. Try setting up Solid Cache on small VMs, run a bunch of benchmarks against an app that uses caching, and let's see where things might fall over.

Appreciate the concerns you raise, @byroot! It's absolutely possible to add bad caching to the mix and make things worse. So we need to avoid that.
That's a good point. It's something we could try to improve, though. Tracking memsize would be complicated, but we could cap the number of entries and evict with LRU. It wouldn't be exactly ideal, but it's doable.
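Capping by entry count with LRU eviction could look roughly like this (an illustrative in-memory sketch only; Solid Cache's real store is DB-backed and its eviction works differently):

```ruby
# Illustrative sketch: a cache capped by entry count, evicting the least
# recently used key once the cap is exceeded.
class CappedLruCache
  def initialize(max_entries)
    @max_entries = max_entries
    @data = {} # Ruby hashes preserve insertion order, so the first key is the LRU
  end

  def read(key)
    return nil unless @data.key?(key)
    # Re-insert on read so the key moves to the most-recently-used position
    @data[key] = @data.delete(key)
  end

  def write(key, value)
    @data.delete(key) # avoid counting an overwrite as a new entry
    @data[key] = value
    @data.shift while @data.size > @max_entries # evict LRU entries past the cap
    value
  end
end
```

With a cap of 2, writing a third key evicts whichever of the first two was read or written least recently.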
Yes and no. It only rules it out if you delete or overwrite existing keys, which IMO isn't great design since most caches are eventually consistent, but that's a much longer debate 😄. There is also the question of whether, once you have multiple machines, you aren't already past the point where it's OK to set up a dedicated cache service.

I agree that Redis, being a swiss army knife, requires careful config, but Memcached is beyond trivial to set up. So that's where I don't quite follow the direction.

As I believe we both agree, defaults should optimize for the most common use case; people who know what they are doing shouldn't be afraid to change them. So I'm trying to put myself in the shoes of someone starting a new Rails app that's not gonna handle a huge load right away. In my mind they are either starting with a PaaS (e.g. Heroku, Fly, Render, etc.):
Or they are starting with some cheap VPS or bare metal (e.g. Capistrano or Kamal), in which case:
That's why I don't see SolidCache fitting the bill as the default. It's a bit too situational, and can turn into a footgun if it fills the limited database.
What are those limits? On, say, Heroku? If you're running a tiny app on a tiny dyno, you'll presumably also have few users, and thus not much data to cache? So I think these things go together, but let's explore.

I think the current situation is not good. There's no default, persistent cache that won't fill up your disk. That's a problem we should fix. Solid Cache fixes it, but maybe in a way that demands too much of tiny DBs? On small systems, though, I reckon you're more likely to be constrained by memory (Redis) and CPU (no cache) before disk (DB). Will explore some testing to validate this hypothesis.
From: https://elements.heroku.com/addons/heroku-postgresql
I totally agree with the premise, I just don't see a solution 😢.
Okay. If you're on that tiny tier, you may well not want to use that space for caching. But you could also just not: by default, Rails doesn't actually cache anything. I don't want to design primarily for such a poor setup. 1GB/10K rows seems like limits that probably made sense for Heroku in 2009 and were just never updated since. A $7 DO Droplet has 25GB of storage, which could be used by a database.

We are never going to satisfy all the constraints. But a default setup that's multi-machine, uses disk over RAM, and supports both auto-trimming and encryption out of the box seems superior to what we have now. Then we can document what to do if you still want a large cache but live under the severe constraints of the smallest possible cloud VMs. Either way, even at 10K/1GB, you're going to be fine for a long time.
But would love it if anyone is ALSO interested in improving the file store with trimming controls and encryption. Would be nice to have great options for files, DB, and Redis/Memcached alike.
Note that my apprehension isn't so much about suitability for such small setups as about the failure mode. My big worry is that someone creates a new Rails app, deploys it to one of these platforms (there are tons of tutorials for that), and it works fine for a while until suddenly the DB is full and everything falls apart. If it only took down the cache I wouldn't mind, but here it would also take down the app's primary features.

It's really about "unknown unknowns": when a user opts in to a feature, we can consider them responsible for making sure it will work for their setup. When it's the default, it's more our responsibility to make sure it won't bite them.
I've had a big refactoring of Active Support Cache in the back of my mind for a couple of years now, to solve a few perf problems but also to make this sort of thing easier. Not sure if/when I'll get around to working on it, though.
Yeah, filling up the DB with cache entries is a no-go. Let's make this contingent on having a space-based limit in Solid Cache, rather than just the current time-based limit. Then we can ship the default setting at 100-250MB, leaving lots of room for data on a 1GB-capped DB. We will get that sorted before proceeding 👌
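A space-based trim along those lines might work roughly like this (an illustrative sketch with toy structures; Solid Cache's actual trimming runs in batches against the database and estimates sizes differently):

```ruby
# Toy model of a cache row: a key, an estimated byte size, and an age marker.
Entry = Struct.new(:key, :byte_size, :created_at)

# Evict the oldest entries until the estimated total size fits under the cap
# (e.g. 250 MB of cache on a 1 GB-limited database).
def trim_to_max_size(entries, max_bytes)
  kept = entries.sort_by(&:created_at) # oldest first
  total = kept.sum(&:byte_size)
  while total > max_bytes && kept.any?
    total -= kept.shift.byte_size # drop the oldest entry first
  end
  kept
end
```

The key property is that the cache degrades by shedding its oldest entries instead of filling the database until the app's primary tables can no longer grow.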
But I would ALSO love to see a better file store with both auto-trimming and encryption. |
I wish I could have solid_cache, file_store_cache, and redis_cache in one app, e.g.:

```ruby
Rails.cache_storage(:redis).cache do ... end
Rails.cache_storage(:memcached).cache do ... end
# or
config.controller.cache_storage = :file
```
@igorkasyanchuk Can you explain more why you need/want to use multiple stores? |
Solid Cache with SQLite seems like a better choice than the file store (probably in most cases). SQLite seems more efficient at storage, reads less from disk, and was more performant in my synthetic tests.
composite_cache_store explains really well why you might want to use multiple cache stores. |
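The layered idea behind composite_cache_store can be sketched like this (a hedged approximation of the concept, not the gem's actual API):

```ruby
# Minimal in-memory layer used only for this demonstration
class MemoryLayer
  def initialize
    @h = {}
  end

  def read(key)
    @h[key]
  end

  def write(key, value)
    @h[key] = value
  end
end

# Sketch of a composite store: reads try each layer in order and return the
# first hit; writes fan out to every layer.
class CompositeStore
  def initialize(*layers)
    @layers = layers
  end

  def read(key)
    @layers.each do |layer|
      value = layer.read(key)
      return value unless value.nil?
    end
    nil
  end

  def write(key, value)
    @layers.each { |layer| layer.write(key, value) }
    value
  end
end
```

A typical layering would put a small, fast store (memory) in front of a larger, durable one (DB or file), so hot keys are served cheaply while misses still find the durable copy.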
Not sure that's worth the effort for most, but not opposed to letting people use different stores in different blocks. Like we do with the multi-db setup. PDI.
Would be interesting to hear what you would change. Somebody else might pick up this work (maybe me; I've been dabbling with Rails cache-related functionality over the last year).
For example, if I have a server with not a lot of RAM: I might still want to use Memcached, plus file storage for page/action caching, because I don't want to use a lot of RAM.
Is this a situation you've actually been in or a theory? Again, not necessarily against exploring it, but it's gotta be an extraction, not a speculation. |
I would appreciate a multi-backend setup for a transition period (for migrating from Memcached to Solid Cache, for example): deploy code able to serve from the old cache while warming the new one, to prevent losing the whole cache after a cache storage switch. That would make the transition to Solid Cache much smoother for apps relying heavily on caching, since a cold cache could put stress on the DB (or whatever backend does the hard work to warm the cache). On the other side, it could be done manually by initializing a cache store temporarily (like
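The dual-backend transition described above could be sketched like this (a hypothetical helper, not a real Rails API; the store interface here is a simplified stand-in):

```ruby
# Minimal in-memory store used only for this demonstration
class TinyStore
  def initialize
    @h = {}
  end

  def read(key)
    @h[key]
  end

  def write(key, value)
    @h[key] = value
  end
end

# Sketch of a transition store: reads fall back to the old backend on a miss
# and warm the new one, so switching backends doesn't start from a cold cache.
class MigrationCacheStore
  def initialize(new_store:, old_store:)
    @new_store = new_store
    @old_store = old_store
  end

  def read(key)
    value = @new_store.read(key)
    return value unless value.nil?

    value = @old_store.read(key)
    @new_store.write(key, value) unless value.nil? # warm the new backend
    value
  end

  def write(key, value)
    @new_store.write(key, value) # new writes go only to the new backend
  end
end
```

Once hit rates on the new backend look healthy, the wrapper can be dropped and the old backend decommissioned.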
Yes, for me it was a real case. It was some time ago, and my app was very simple, with many almost-static pages and some stats that I wanted to cache in memory (for better performance). If in the future we can have such flexibility, it would be great to be able to specify the storage for caching. And thanks for your questions.
Gotcha, yeah, I like the idea of a certain cache store governing a block. Please do look into that. |
@igorkasyanchuk - you can set a different cache store for fragment caching already with

@simi - this is what we used in Basecamp to switch from Redis to Solid Cache. We assigned X percent of traffic to Solid Cache or Redis by hashing the cache key and gradually shifted more traffic over the course of a week. I don't know if it would work as a generic cache splitter - we were only using it for Rails fragment caching, so there may be some cases or cache methods it doesn't work well with.
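The percentage split described above could be sketched roughly like so (illustrative only; Basecamp's actual splitter isn't public, and the names here are assumptions):

```ruby
require "zlib"

# Minimal in-memory store used only for this demonstration
class TinyStore
  def initialize
    @h = {}
  end

  def read(key)
    @h[key]
  end

  def write(key, value)
    @h[key] = value
  end
end

# Sketch of a cache splitter: hash the key so each key consistently maps to
# one backend, then raise `percent` over time to shift traffic gradually.
class SplitCacheStore
  def initialize(new_store:, old_store:, percent:)
    @new_store = new_store
    @old_store = old_store
    @percent = percent # 0..100, the share of keys routed to the new store
  end

  def read(key)
    store_for(key).read(key)
  end

  def write(key, value)
    store_for(key).write(key, value)
  end

  private

  # CRC32 is cheap and stable across processes, so routing is deterministic:
  # a given key always reads and writes against the same backend.
  def store_for(key)
    Zlib.crc32(key.to_s) % 100 < @percent ? @new_store : @old_store
  end
end
```

Because the hash is stable, a key never bounces between backends mid-rollout, which keeps hit rates intact while the percentage climbs.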
I've got some notes on how I'm planning to estimate the cache size here. I'd also like to introduce an indexed

Would that be acceptable, or would it be a requirement to support it if Solid Cache was going to be the Rails default?
Yup, that should quite drastically reduce your index size.
I did ask on Campfire why it was there, though, and @dhh and @jeremy suggested a few use cases, like clearing the cache in development and clearing a customer's cache. IMO the former is better handled by clearing the entire cache, and the latter by just throwing away the decryption key, given your cache entries are encrypted. All this to say, I think
I'm going to keep the
Sounds good! Another point in favour of this is that
If that is a concern, you can store the 128-bit hash in a pair of bigint (int64) columns. But yes, keeping the key doesn't cost much as long as it's not indexed, so why not.
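That packing could look like this (an illustrative sketch; MD5 is assumed here purely as a cheap, stable 128-bit hash, not as a security measure, and the helper name is hypothetical):

```ruby
require "digest"

# Split a 128-bit hash of the cache key into two signed 64-bit integers,
# suitable for storing in a pair of indexed bigint columns.
def key_hash_pair(key)
  digest = Digest::MD5.digest(key.to_s) # 16 bytes = 128 bits
  digest.unpack("q>2")                  # two big-endian signed int64s
end
```

Looking up an entry then means matching on both columns, while the original key column can stay unindexed (or be dropped entirely).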
Like with Solid Queue, Solid Cache gives us a database-agnostic backend for Rails.cache that works well as an out-of-the-box default in production – without any configuration needed or dependencies (like Redis) required.
The tables should be set up out of the box with `rails new`, but you should be able to opt out using `--skip-solid-cache` or just `--skip-solid`.

Work outstanding:
cc @djmb