Merge pull request #50496 from skipkayhil/hm-doc-cache
Cache::Store doc improvements [ci-skip]
skipkayhil committed Dec 31, 2023
2 parents e7826d8 + f042e02 commit 2dda042
Showing 4 changed files with 29 additions and 25 deletions.
8 changes: 4 additions & 4 deletions activesupport/lib/active_support/cache.rb
@@ -160,8 +160,8 @@ def retrieve_store_class(store)
     # Some implementations may not support all methods beyond the basic cache
     # methods of #fetch, #write, #read, #exist?, and #delete.
     #
-    # ActiveSupport::Cache::Store can store any Ruby object that is supported by
-    # its +coder+'s +dump+ and +load+ methods.
+    # +ActiveSupport::Cache::Store+ can store any Ruby object that is supported
+    # by its +coder+'s +dump+ and +load+ methods.
     #
     #   cache = ActiveSupport::Cache::MemoryStore.new
     #
@@ -370,8 +370,8 @@ def mute
     #
     # ==== Options
     #
-    # Internally, +fetch+ calls #read_entry, and calls #write_entry on a cache
-    # miss. Thus, +fetch+ supports the same options as #read and #write.
+    # Internally, +fetch+ calls +read_entry+, and calls +write_entry+ on a
+    # cache miss. Thus, +fetch+ supports the same options as #read and #write.
     # Additionally, +fetch+ supports the following options:
     #
     # * <tt>force: true</tt> - Forces a cache "miss," meaning we treat the
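The `fetch` semantics documented in this hunk — delegate to the read path on a hit, call the block and delegate to the write path on a miss, with `force: true` treating every call as a miss — can be sketched in plain Ruby. This is a hypothetical toy store for illustration, not Rails' implementation:

```ruby
# Minimal sketch of Cache::Store#fetch control flow: read on a hit,
# compute-and-write on a miss. Hypothetical class, not the real Store.
class TinyStore
  def initialize
    @data = {}
  end

  def read(key)
    @data[key]
  end

  def write(key, value)
    @data[key] = value
  end

  # force: true treats the call as a miss even when the key is cached,
  # mirroring the documented option.
  def fetch(key, force: false)
    if !force && @data.key?(key)
      read(key)
    else
      write(key, yield)
    end
  end
end

cache = TinyStore.new
cache.fetch("greeting") { "hello" }           # miss: computes and stores "hello"
cache.fetch("greeting") { "ignored" }         # hit: block not used, returns "hello"
cache.fetch("greeting", force: true) { "hi" } # forced miss: overwrites with "hi"
```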
12 changes: 6 additions & 6 deletions activesupport/lib/active_support/cache/mem_cache_store.rb
@@ -24,11 +24,11 @@ module Cache
     #
     # Special features:
     # - Clustering and load balancing. One can specify multiple memcached servers,
-    #   and MemCacheStore will load balance between all available servers. If a
-    #   server goes down, then MemCacheStore will ignore it until it comes back up.
+    #   and +MemCacheStore+ will load balance between all available servers. If a
+    #   server goes down, then +MemCacheStore+ will ignore it until it comes back up.
     #
-    # MemCacheStore implements the Strategy::LocalCache strategy which implements
-    # an in-memory cache inside of a block.
+    # +MemCacheStore+ implements the Strategy::LocalCache strategy which
+    # implements an in-memory cache inside of a block.
     class MemCacheStore < Store
       # These options represent behavior overridden by this implementation and should
       # not be allowed to get down to the Dalli client
@@ -106,14 +106,14 @@ def self.build_mem_cache(*addresses) # :nodoc:
         end
       end

-      # Creates a new MemCacheStore object, with the given memcached server
+      # Creates a new +MemCacheStore+ object, with the given memcached server
       # addresses. Each address is either a host name, or a host-with-port string
       # in the form of "host_name:port". For example:
       #
       #   ActiveSupport::Cache::MemCacheStore.new("localhost", "server-downstairs.localnetwork:8229")
       #
       # If no addresses are provided, but <tt>ENV['MEMCACHE_SERVERS']</tt> is defined, it will be used instead. Otherwise,
-      # MemCacheStore will connect to localhost:11211 (the default memcached port).
+      # +MemCacheStore+ will connect to localhost:11211 (the default memcached port).
       # Passing a +Dalli::Client+ instance is deprecated and will be removed. Please pass an address instead.
       def initialize(*addresses)
         addresses = addresses.flatten
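The address-resolution behavior documented above (explicit addresses win, then <tt>ENV['MEMCACHE_SERVERS']</tt>, then the memcached default of localhost:11211) can be sketched as a standalone helper. The helper name and the comma-splitting of the environment variable are assumptions for illustration, not the actual MemCacheStore code:

```ruby
# Sketch of the documented default-address behavior. Hypothetical helper:
# explicit addresses take priority, then MEMCACHE_SERVERS, then the
# memcached default port on localhost.
def resolve_memcache_addresses(addresses, env = ENV)
  addresses = Array(addresses).flatten
  return addresses unless addresses.empty?

  if env["MEMCACHE_SERVERS"]
    # Assumed format: comma-separated "host:port" entries.
    env["MEMCACHE_SERVERS"].split(",").map(&:strip)
  else
    ["localhost:11211"]
  end
end

resolve_memcache_addresses(["cache1:11211", "cache2:11211"], {})
# => ["cache1:11211", "cache2:11211"]
resolve_memcache_addresses([], {})
# => ["localhost:11211"]
```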
6 changes: 3 additions & 3 deletions activesupport/lib/active_support/cache/memory_store.rb
@@ -18,13 +18,13 @@ module Cache
     # a cleanup will occur which tries to prune the cache down to three quarters
     # of the maximum size by removing the least recently used entries.
     #
-    # Unlike other Cache store implementations, MemoryStore does not compress
-    # values by default. MemoryStore does not benefit from compression as much
+    # Unlike other Cache store implementations, +MemoryStore+ does not compress
+    # values by default. +MemoryStore+ does not benefit from compression as much
     # as other Store implementations, as it does not send data over a network.
     # However, when compression is enabled, it still pays the full cost of
     # compression in terms of cpu use.
     #
-    # MemoryStore is thread-safe.
+    # +MemoryStore+ is thread-safe.
     class MemoryStore < Store
       module DupCoder # :nodoc:
         extend self
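The pruning policy described in this hunk — evict least-recently-used entries until the cache is down to three quarters of its maximum size — can be sketched with a plain Ruby hash, which preserves insertion order. This is a simplified, hypothetical model that counts entries rather than bytes, unlike the real MemoryStore:

```ruby
# Sketch of the documented MemoryStore pruning policy: once the cache
# exceeds its maximum size, least-recently-used entries are evicted
# until it is at or below three quarters of the maximum. Hypothetical
# class; the real store tracks byte size, not entry count.
class LruCache
  def initialize(max_entries)
    @max_entries = max_entries
    @data = {} # insertion-ordered: oldest (least recently used) key first
  end

  def write(key, value)
    @data.delete(key) # re-inserting moves the key to the "recent" end
    @data[key] = value
    prune if @data.size > @max_entries
    value
  end

  def read(key)
    return nil unless @data.key?(key)
    value = @data.delete(key) # touch: move the key to the "recent" end
    @data[key] = value
  end

  def keys
    @data.keys
  end

  private

  # Prune down to three quarters of the maximum size, LRU entries first.
  def prune
    target = (@max_entries * 3) / 4
    @data.shift while @data.size > target
  end
end

cache = LruCache.new(4)
%w[a b c d].each { |k| cache.write(k, k) }
cache.read("a")       # touch "a" so it is no longer least recently used
cache.write("e", "e") # exceeds the max: prunes "b" and "c"
cache.keys            # => ["d", "a", "e"]
```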
28 changes: 16 additions & 12 deletions activesupport/lib/active_support/cache/redis_cache_store.rb
@@ -19,22 +19,23 @@ module ActiveSupport
   module Cache
     # = Redis \Cache \Store
     #
-    # Deployment note: Take care to use a *dedicated Redis cache* rather
-    # than pointing this at your existing Redis server. It won't cope well
-    # with mixed usage patterns and it won't expire cache entries by default.
+    # Deployment note: Take care to use a <b>dedicated Redis cache</b> rather
+    # than pointing this at a persistent Redis server (for example, one used as
+    # an Active Job queue). Redis won't cope well with mixed usage patterns and it
+    # won't expire cache entries by default.
     #
     # Redis cache server setup guide: https://redis.io/topics/lru-cache
     #
-    # * Supports vanilla Redis, hiredis, and Redis::Distributed.
-    # * Supports Memcached-like sharding across Redises with Redis::Distributed.
+    # * Supports vanilla Redis, hiredis, and +Redis::Distributed+.
+    # * Supports Memcached-like sharding across Redises with +Redis::Distributed+.
     # * Fault tolerant. If the Redis server is unavailable, no exceptions are
     #   raised. Cache fetches are all misses and writes are dropped.
     # * Local cache. Hot in-memory primary cache within block/middleware scope.
-    # * +read_multi+ and +write_multi+ support for Redis mget/mset. Use Redis::Distributed
-    #   4.0.1+ for distributed mget support.
+    # * +read_multi+ and +write_multi+ support for Redis mget/mset. Use
+    #   +Redis::Distributed+ 4.0.1+ for distributed mget support.
     # * +delete_matched+ support for Redis KEYS globs.
     class RedisCacheStore < Store
-      # Keys are truncated with the ActiveSupport digest if they exceed 1kB
+      # Keys are truncated with the Active Support digest if they exceed 1kB
      MAX_KEY_BYTESIZE = 1024

      DEFAULT_REDIS_OPTIONS = {
@@ -110,8 +111,11 @@ def build_redis_client(**redis_options)

       # Creates a new Redis cache store.
       #
-      # Handles four options: :redis block, :redis instance, single :url
-      # string, and multiple :url strings.
+      # There are four ways to provide the Redis client used by the cache: the
+      # +:redis+ param can be a Redis instance or a block that returns a Redis
+      # instance, or the +:url+ param can be a string or an array of strings
+      # which will be used to create a Redis instance or a +Redis::Distributed+
+      # instance.
       #
       #   Option  Class       Result
       #   :redis  Proc    ->  options[:redis].call
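The four client options listed above can be sketched as a small dispatcher. This is a hypothetical helper for illustration; the real store builds Redis or Redis::Distributed clients rather than returning tags:

```ruby
# Sketch of the four documented ways to resolve the Redis client from
# constructor options: a :redis block, a :redis instance, a single :url
# string, or an array of :url strings. Hypothetical dispatcher.
def resolve_redis_option(redis: nil, url: nil)
  if redis.respond_to?(:call)
    [:call_block, redis.call]       # :redis Proc -> options[:redis].call
  elsif redis
    [:instance, redis]              # :redis Object -> used as the client
  elsif url.is_a?(Array)
    [:distributed, url]             # :url Array -> Redis::Distributed
  elsif url
    [:single, url]                  # :url String -> a single Redis client
  end
end

resolve_redis_option(redis: -> { :client })
# => [:call_block, :client]
resolve_redis_option(url: "redis://localhost:6379/0")
# => [:single, "redis://localhost:6379/0"]
resolve_redis_option(url: ["redis://a", "redis://b"])
# => [:distributed, ["redis://a", "redis://b"]]
```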
@@ -134,7 +138,7 @@ def build_redis_client(**redis_options)
       #
       # Race condition TTL is not set by default. This can be used to avoid
       # "thundering herd" cache writes when hot cache entries are expired.
-      # See <tt>ActiveSupport::Cache::Store#fetch</tt> for more.
+      # See ActiveSupport::Cache::Store#fetch for more.
       #
       # Setting <tt>skip_nil: true</tt> will not cache nil results:
       #
@@ -244,7 +248,7 @@ def increment(name, amount = 1, options = nil)
       # Decrement a cached integer value using the Redis decrby atomic operator.
       # Returns the updated value.
       #
-      # If the key is unset or has expired, it will be set to -amount:
+      # If the key is unset or has expired, it will be set to +-amount+:
       #
       #   cache.decrement("foo") # => -1
       #
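The unset-key behavior documented in this hunk follows Redis DECRBY semantics: a missing key is treated as 0, so the first decrement of an unset key sets it to -amount. A minimal in-memory stand-in (hypothetical, not the Redis-backed implementation):

```ruby
# Sketch of the documented counter semantics: a missing or expired key
# counts as 0, so decrementing an unset key yields -amount. Hypothetical
# in-memory class standing in for the Redis decrby/incrby operators.
class CounterStore
  def initialize
    @data = {}
  end

  # Returns the updated value, as the real decrement does.
  def decrement(key, amount = 1)
    @data[key] = @data.fetch(key, 0) - amount
  end

  def increment(key, amount = 1)
    @data[key] = @data.fetch(key, 0) + amount
  end
end

counters = CounterStore.new
counters.decrement("foo")     # => -1 (unset key starts from 0)
counters.decrement("foo", 2)  # => -3
counters.increment("foo", 10) # => 7
```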
