This repository has been archived by the owner on Apr 24, 2024. It is now read-only.

Caching

Johannes Rudolph edited this page Aug 14, 2013 · 7 revisions

Deprecation Note

This documentation is for release 0.9.0 (from 03/2012), which is built against Scala 2.9.1 and Akka 1.3.1 (see Requirements for more information). Most likely, this is not the place you want to look for information. Please turn to the main spray site at http://spray.io for information about other available versions.

Routes often perform expensive, time-consuming operations in order to fulfill a client request, e.g. fetching and filtering order records from a database. To minimize server load and network traffic, HTTP defines the Cache-Control header, which allows for a rather detailed definition of caching policies at the client and in the network. However, when it comes to caching HTTP responses for similar requests right on the server, HTTP cannot help you; the application has to do this itself.

spray supports server-side caching with the cacheResults directive. It wraps its inner route with a response cache and takes two arguments: a Cache instance and a "cache-keyer" function of type RequestContext => Option[Any], which provides the object to key on when performing a cache lookup.

The LruCache

Currently spray comes with two Cache implementations, the SimpleLruCache and the ExpiringLruCache. Both provide a least-recently-used cache with a fixed capacity; the latter adds support for expiring cache entries after a certain time of idleness.

One thing to note about spray's cache implementations is that they do not store cached objects of type T directly, but rather Akka futures for T, i.e. instances of type Future[T]. This nicely takes care of the thundering herd problem, where many requests for a particular resource arrive before the first one has completed. Traditional cache implementations pull all kinds of tricks to deal with this situation (e.g. by introducing so-called "cowboy" entries). In spray caches the very first request to arrive causes a future to be put into the cache, which all later requests "hook into". As soon as the first request completes the future, all the others complete as well. This minimizes processing time and server load for all requests.
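The future-based approach can be sketched in plain Scala. This is a simplified illustration only, not spray's actual implementation; FutureCache and all names in it are hypothetical:

```scala
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

// A cache that stores Future[V] rather than V: concurrent requests for the
// same key all share the single future created by the first request.
class FutureCache[K, V](implicit ec: ExecutionContext) {
  private val store = new ConcurrentHashMap[K, Future[V]]()

  // computeIfAbsent invokes the mapping function at most once per key,
  // so `compute` runs only for the very first request.
  def apply(key: K)(compute: => V): Future[V] =
    store.computeIfAbsent(key, _ => Future(compute))
}

object CacheDemo extends App {
  implicit val ec: ExecutionContext = ExecutionContext.global
  val calls = new AtomicInteger(0)
  val cache = new FutureCache[String, String]()

  // Many "requests" arrive for the same key; the expensive work runs once,
  // and all later requests hook into the first request's future.
  val results = (1 to 10).map { _ =>
    cache("/some/resource") {
      calls.incrementAndGet()
      Thread.sleep(100) // simulate expensive work
      "response body"
    }
  }
  results.foreach(f => Await.result(f, 5.seconds))
  println(s"computations: ${calls.get}") // prints "computations: 1"
}
```

The key design point is that the future itself is the cache entry, so there is no window between "miss detected" and "value stored" in which a second request could trigger a duplicate computation.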

As a simple default caching solution spray also provides the cache directive, which is implemented as follows:

/**
 * Wraps its inner Route with caching support using a default [[cc.spray.caching.LruCache]] instance
 * (max-entries = 500, initialCapacity = 16, time-to-idle: infinite) and the `CacheKeyers.UriGetCacheKeyer` which
 * only caches GET requests and uses the request URI as cache key.
 */
lazy val cache = cacheResults(LruCache())

It is merely an alias for a preconfigured cacheResults directive.

The CacheKeyer

The cache-keyer function decides which responses are to be cached and, if so, under what key. By default only responses to GET requests are cached, with the request URI as the key. This means that two requests to the same URI will receive the same response. To non-GET requests the cacheResults directive is completely transparent.
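The shape of such a keyer can be sketched as follows. Note that HttpRequest and RequestContext below are simplified stand-ins for illustration only, not spray's actual 0.9.x classes:

```scala
object KeyerSketch extends App {
  // Hypothetical, simplified stand-ins for spray's request model
  case class HttpRequest(method: String, uri: String)
  case class RequestContext(request: HttpRequest)

  // A cache-keyer has the shape RequestContext => Option[Any]:
  // Some(key) means "cache the response under `key`",
  // None means "do not cache; pass the request through transparently".
  val uriGetKeyer: RequestContext => Option[Any] = ctx =>
    if (ctx.request.method == "GET") Some(ctx.request.uri) else None

  println(uriGetKeyer(RequestContext(HttpRequest("GET", "/some/Resource")))) // Some(/some/Resource)
  println(uriGetKeyer(RequestContext(HttpRequest("PUT", "/some/Resource")))) // None
}
```

A custom keyer of this shape could, for example, key on the URI plus a user id to cache per-user responses, or return None for requests carrying an Authorization header.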

Example

Consider this example:

path("some" / "Resource") {
  cache {
    get {
      ... // expensive logic for retrieving the resource representation
    } ~
    put {
      ... // route B
    }
  }
}

In this example the "expensive logic" will only run once (as long as the incoming requests do not differ in URI parts other than the path, such as host or query string). Afterwards all subsequent GET requests to the resource will be served from the underlying LruCache instance. PUT requests, which are handled by route B, are unaffected by the cache.