Ferris provides extensive tools for caching data. Taking advantage of caching can significantly reduce your application’s ongoing cost while decreasing latency and improving responsiveness. The caching utilities can use multiple storage backends to suit different purposes: App Engine’s Memcache API, the Cloud Datastore, or local in-process memory.


ferris3.caching.cache(key, ttl=0, backend=None)[source]

General-purpose caching decorator. This decorator causes the result of a function to be cached so that subsequent calls return the cached result instead of calling the function again. The ttl argument determines how long the cache remains valid; once it expires, the function is called again to generate a new value and the cache is refreshed. The backend argument determines how the value is stored: by default the value is stored in memcache, but there are built-in backends for thread-local caching and for caching via the datastore.


@cache('something_expensive', ttl=3600)
def expensive_function():
    # the result is computed once, then served from cache for an hour
    ...

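To make the decorator's behavior concrete, here is a toy sketch of a TTL-based caching decorator. This is an illustration of the concept only, not the ferris3 implementation: the real decorator stores values in a pluggable backend rather than a module-level dict, and the names here (ttl_cache, expensive) are hypothetical.

```python
import time

def ttl_cache(key, ttl=0):
    """Toy TTL caching decorator (illustration only; not ferris3's code).
    As with memcache, a ttl of 0 means the entry never expires."""
    store = {}

    def decorator(func):
        def wrapper():
            entry = store.get(key)
            if entry is not None:
                value, expires = entry
                if ttl == 0 or time.time() < expires:
                    return value              # cache hit
            value = func()                    # cache miss: recompute
            store[key] = (value, time.time() + ttl)
            return value
        return wrapper
    return decorator

calls = 0

@ttl_cache('answer', ttl=3600)
def expensive():
    global calls
    calls += 1
    return 42

expensive()
expensive()
print(calls)  # the underlying function ran only once
```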
ferris3.caching.cache_by_args(key, ttl=0, backend=None)[source]

Like cache(), but uses the function’s arguments as part of the cache key so that calls with different arguments are cached separately. Arguments must be printable as strings; it’s recommended to use plain data types as arguments.
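The following sketch shows why arguments must be printable as strings: a key derived from the call's arguments distinguishes otherwise identical calls. The helper name and key format below are hypothetical illustrations, not the actual ferris3 key scheme.

```python
def key_for_args(base_key, args, kwargs):
    # Fold the call's positional and keyword arguments into the cache
    # key, so fetch_user(42) and fetch_user(43) cache independently.
    # (Hypothetical format; ferris3's internal scheme may differ.)
    parts = [base_key]
    parts += [str(a) for a in args]
    parts += ["%s=%s" % (k, v) for k, v in sorted(kwargs.items())]
    return ":".join(parts)

print(key_for_args("fetch_user", (42,), {"fresh": True}))
# fetch_user:42:fresh=True
```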

Utility Functions

When using the cache() decorator on a function, the caching module attaches three helpful utility methods to the decorated function. Note that these methods are not available with the cache_by_args() decorator.


This method clears any cached data for the function so that the next call will execute the function and refresh the cached data.


def herd_cats():
    count = do_herd_cats()
    # do_herd_cats() changed the underlying data, so the stale cached
    # value should be cleared here before it is read again
    return count


Returns the cached value for the function if it’s set; otherwise it returns None.


Skips the caching layer completely and executes the function. This is essentially the same as calling the function without it ever having been decorated.


Backends

Three different backends are provided, as well as a special layering backend.

class ferris3.caching.LocalBackend[source]

The local backend stores cached values in a thread-local variable. The values are available only to the current thread, and typically only for the duration of a single request.

class ferris3.caching.MemcacheBackend[source]

Stores cached values in memcache. Memcache is shared across instances, but items may be evicted before their expiration time.

class ferris3.caching.MemcacheCompareAndSetBackend[source]

Same as the regular memcache backend but uses compare-and-set logic to ensure that memcache updates are atomic.
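To illustrate what compare-and-set buys you, here is a toy sketch of the retry pattern this backend uses. The ToyMemcache class below is a stand-in for a real memcache client (not App Engine's API), and cas_set is a hypothetical helper; only the gets/cas pattern itself is the point.

```python
class ToyMemcache:
    """Minimal stand-in for a memcache client with compare-and-set."""
    def __init__(self):
        self._data = {}  # key -> (value, version)

    def gets(self, key):
        # Return the value along with a version token for later cas().
        return self._data.get(key, (None, 0))

    def cas(self, key, value, version):
        # The write succeeds only if nobody else updated the key
        # since we read it; otherwise the caller must retry.
        _, current = self._data.get(key, (None, 0))
        if current != version:
            return False
        self._data[key] = (value, version + 1)
        return True

def cas_set(client, key, value, retries=3):
    # Read-modify-write loop: retry until the write is not raced.
    for _ in range(retries):
        _, version = client.gets(key)
        if client.cas(key, value, version):
            return True
    return False

mc = ToyMemcache()
cas_set(mc, 'count', 1)
cas_set(mc, 'count', 2)
```

Without the version check, two concurrent writers could interleave their read-modify-write cycles and silently lose one update; the cas() failure forces the loser to retry against fresh data.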

class ferris3.caching.DatastoreBackend[source]

Stores cached values in the datastore, which makes them durable and persistent, unlike the memcache and local backends: items stored in the datastore remain until their expiration time passes.

class ferris3.caching.LayeredBackend(*args)[source]

Allows you to use multiple backends at once. When an item is cached, it is put into each backend; retrieval checks each backend in order for the item. This is very useful for combining fast but volatile backends (like the local backend) with slower but durable backends (like the datastore backend).


@cache('something_expensive', ttl=3600, backend=LayeredBackend(LocalBackend, DatastoreBackend))
def expensive_function():
    ...
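The write-to-all, read-in-order behavior described above can be sketched as follows. The classes here (DictBackend, ToyLayeredBackend) are hypothetical simplifications, not the ferris3 classes: the dict-backed layers stand in for the local and datastore backends.

```python
class DictBackend:
    """Toy backend; stands in for local/memcache/datastore backends."""
    def __init__(self):
        self.data = {}

    def get(self, key):
        return self.data.get(key)

    def set(self, key, value):
        self.data[key] = value

class ToyLayeredBackend:
    def __init__(self, *backends):
        self.backends = backends

    def set(self, key, value):
        # When an item is cached, it is written to every layer.
        for backend in self.backends:
            backend.set(key, value)

    def get(self, key):
        # Retrieval checks each layer in order, fastest first.
        for backend in self.backends:
            value = backend.get(key)
            if value is not None:
                return value
        return None

local, durable = DictBackend(), DictBackend()
layered = ToyLayeredBackend(local, durable)
layered.set('answer', 42)
local.data.clear()            # simulate the fast layer losing the item
print(layered.get('answer'))  # falls back to the durable layer
```

A fuller implementation might also backfill faster layers on a hit in a slower one, so subsequent reads stay cheap.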