ActiveSupport::Cache::RedisCacheStore

Redis Cache Store

Deployment note: Take care to use a dedicated Redis cache rather than pointing this at a persistent Redis server (for example, one used as an Active Job queue). Redis won’t cope well with mixed usage patterns and it won’t expire cache entries by default.
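In a Rails app, for example, a dedicated cache instance can be pointed at via the cache_store setting. A minimal sketch, assuming a REDIS_CACHE_URL environment variable (a placeholder, not part of the API) names the dedicated cache server:

# config/environments/production.rb
config.cache_store = :redis_cache_store, { url: ENV["REDIS_CACHE_URL"] }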

Redis cache server setup guide: redis.io/topics/lru-cache


Constants

DEFAULT_ERROR_HANDLER = -> (method:, returning:, exception:) do
  if logger
    logger.error { "RedisCacheStore: #{method} failed, returned #{returning.inspect}: #{exception.class}: #{exception.message}" }
  end
  ActiveSupport.error_reporter&.report(
    exception,
    severity: :warning,
    source: "redis_cache_store.active_support",
  )
end

DEFAULT_REDIS_OPTIONS = {
  connect_timeout: 1,
  read_timeout: 1,
  write_timeout: 1,
}
MAX_KEY_BYTESIZE = 1024
Keys are truncated with the Active Support digest if they exceed 1kB

Attributes

[R] max_key_bytesize
[R] redis

Class Public methods

Creates a new Redis cache store.

There are four ways to provide the Redis client used by the cache: the :redis param can be a Redis instance or a block that returns a Redis instance, or the :url param can be a string or an array of strings which will be used to create a Redis instance or a Redis::Distributed instance.

Option  Class       Result
:redis  Proc    ->  options[:redis].call
:redis  Object  ->  options[:redis]
:url    String  ->  Redis.new(url: …)
:url    Array   ->  Redis::Distributed.new([{ url: … }, { url: … }, …])
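For example, each form might be constructed like this (hostnames, ports, and database numbers are placeholders):

ActiveSupport::Cache::RedisCacheStore.new(redis: Redis.new(url: "redis://localhost:6379/0"))
ActiveSupport::Cache::RedisCacheStore.new(redis: -> { Redis.new(url: "redis://localhost:6379/0") })
ActiveSupport::Cache::RedisCacheStore.new(url: "redis://localhost:6379/0")
ActiveSupport::Cache::RedisCacheStore.new(url: ["redis://cache-1:6379/0", "redis://cache-2:6379/0"])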

No namespace is set by default. Provide one if the Redis cache server is shared with other apps: namespace: 'myapp-cache'.

Compression is enabled by default with a 1kB threshold, so cached values larger than 1kB are automatically compressed. Disable by passing compress: false or change the threshold by passing compress_threshold: 4.kilobytes.
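For example:

cache = ActiveSupport::Cache::RedisCacheStore.new(compress: false)                  # disable compression entirely
cache = ActiveSupport::Cache::RedisCacheStore.new(compress_threshold: 4.kilobytes)  # only compress values larger than 4 kB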

No expiry is set on cache entries by default. Redis is expected to be configured with an eviction policy that automatically deletes least-recently or -frequently used keys when it reaches max memory. See redis.io/topics/lru-cache for cache server setup.

Race condition TTL is not set by default. This can be used to avoid “thundering herd” cache writes when hot cache entries are expired. See ActiveSupport::Cache::Store#fetch for more.
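A sketch of how it is typically used with fetch (the key and block body are hypothetical):

cache.fetch("hot-stats", expires_in: 5.minutes, race_condition_ttl: 10.seconds) do
  compute_expensive_stats   # hypothetical slow computation guarded against a thundering herd
end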

Setting skip_nil: true will not cache nil results:

cache.fetch('foo') { nil }
cache.fetch('bar', skip_nil: true) { nil }
cache.exist?('foo') # => true
cache.exist?('bar') # => false

Source:

def initialize(error_handler: DEFAULT_ERROR_HANDLER, **redis_options)
  universal_options = redis_options.extract!(*UNIVERSAL_OPTIONS)

  if pool_options = self.class.send(:retrieve_pool_options, redis_options)
    @redis = ::ConnectionPool.new(pool_options) { self.class.build_redis(**redis_options) }
  else
    @redis = self.class.build_redis(**redis_options)
  end

  @max_key_bytesize = MAX_KEY_BYTESIZE
  @error_handler = error_handler

  super(universal_options)
end

Advertise cache versioning support.

Source:

def self.supports_cache_versioning?
  true
end

Instance Public methods

Cache Store API implementation of cleanup.

Removes expired entries. Handled natively by Redis least-recently-/least-frequently-used expiry, so manual cleanup is not supported.

Clear the entire cache on all Redis servers. Safe to use on shared servers if the cache is namespaced.
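For example, with a namespaced store (the namespace value is illustrative):

cache = ActiveSupport::Cache::RedisCacheStore.new(namespace: "myapp-cache")
cache.clear   # removes only keys under the "myapp-cache" namespace, via delete_matched "*"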

Failsafe: Raises errors.

Source:

def clear(options = nil)
  failsafe :clear do
    if namespace = merged_options(options)[:namespace]
      delete_matched "*", namespace: namespace
    else
      redis.then { |c| c.flushdb }
    end
  end
end

Decrement a cached integer value using the Redis decrby atomic operator. Returns the updated value.

If the key is unset or has expired, it will be set to -amount:

cache.decrement("foo") # => -1

To set a specific value, call write passing raw: true:

cache.write("baz", 5, raw: true)
cache.decrement("baz") # => 4

Decrementing a non-numeric value, or a value written without raw: true, will fail and return nil.

Failsafe: Raises errors.

Source:

def decrement(name, amount = 1, options = nil)
  options = merged_options(options)
  key = normalize_key(name, options)

  instrument :decrement, key, amount: amount do
    failsafe :decrement do
      change_counter(key, -amount, options)
    end
  end
end

Cache Store API implementation.

Supports Redis KEYS glob patterns:

h?llo matches hello, hallo and hxllo
h*llo matches hllo and heeeello
h[ae]llo matches hello and hallo, but not hillo
h[^e]llo matches hallo, hbllo, ... but not hello
h[a-b]llo matches hallo and hbllo

Use \ to escape special characters if you want to match them verbatim.

See redis.io/commands/KEYS for more.
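For example, assuming session fragments were written under keys like "sessions/1" and "sessions/2":

cache.delete_matched("sessions/*")   # removes every cached key matching the glob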

Failsafe: Raises errors.

Source:

def delete_matched(matcher, options = nil)
  unless String === matcher
    raise ArgumentError, "Only Redis glob strings are supported: #{matcher.inspect}"
  end
  pattern = namespace_key(matcher, options)

  instrument :delete_matched, pattern do
    redis.then do |c|
      cursor = "0"
      # Fetch keys in batches using SCAN to avoid blocking the Redis server.
      nodes = c.respond_to?(:nodes) ? c.nodes : [c]

      nodes.each do |node|
        begin
          cursor, keys = node.scan(cursor, match: pattern, count: SCAN_BATCH_SIZE)
          node.del(*keys) unless keys.empty?
        end until cursor == "0"
      end
    end
  end
end

Increment a cached integer value using the Redis incrby atomic operator. Returns the updated value.

If the key is unset or has expired, it will be set to amount:

cache.increment("foo") # => 1
cache.increment("bar", 100) # => 100

To set a specific value, call write passing raw: true:

cache.write("baz", 5, raw: true)
cache.increment("baz") # => 6

Incrementing a non-numeric value, or a value written without raw: true, will fail and return nil.

Failsafe: Raises errors.

Source:

def increment(name, amount = 1, options = nil)
  options = merged_options(options)
  key = normalize_key(name, options)

  instrument :increment, key, amount: amount do
    failsafe :increment do
      change_counter(key, amount, options)
    end
  end
end

Source:

def inspect
  "#<#{self.class} options=#{options.inspect} redis=#{redis.inspect}>"
end

Cache Store API implementation.

Read multiple values at once. Returns a hash of requested keys -> fetched values.
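For example (keys and values are illustrative; keys missing from the cache are simply omitted from the result):

cache.write("alpha", 1)
cache.write("beta", 2)
cache.read_multi("alpha", "beta", "gamma")   # => { "alpha" => 1, "beta" => 2 }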

Source:

def read_multi(*names)
  return {} if names.empty?

  options = names.extract_options!
  options = merged_options(options)
  keys = names.map { |name| normalize_key(name, options) }

  instrument_multi(:read_multi, keys, options) do |payload|
    read_multi_entries(names, **options).tap do |results|
      payload[:hits] = results.keys.map { |name| normalize_key(name, options) }
    end
  end
end

Get info from the Redis servers.