Caching

Ferris provides extensive tools for caching data. Taking advantage of caching can significantly reduce your application's ongoing cost while also decreasing latency and improving responsiveness. The caching utilities can use multiple storage backends to suit different purposes: App Engine's Memcache API, the Cloud Datastore, or local in-process memory.

Decorators

ferris.core.caching.cache(key, ttl=0, backend=None)[source]

General-purpose caching decorator. This decorator causes the result of a function to be cached so that subsequent calls return the cached result instead of calling the function again. The ttl argument determines how long the cache is valid; once the cache expires, the function is called again to generate a new value and the cache is refreshed. The backend argument determines how the value is stored: by default the value is stored in memcache, but there are built-in backends for thread-local caching and caching via the datastore.

Example:

from ferris.core.caching import cache

@cache('something_expensive', ttl=3600)
def expensive_function():
    ...

ferris.core.caching.cache_by_args(key, ttl=0, backend=None)[source]

Like cache(), but uses the function's arguments as part of the cache key so that calls with different arguments are cached separately. Arguments must be printable as strings; it's recommended to use plain data types as arguments.
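
Example (a minimal sketch; get_user_profile and load_profile are illustrative placeholders, not part of Ferris):

from ferris.core.caching import cache_by_args

@cache_by_args('user_profile', ttl=300)
def get_user_profile(user_id):
    # Each distinct user_id yields its own cache entry, because the
    # arguments are combined with the 'user_profile' key.
    return load_profile(user_id)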

Utility Functions

When using the cache() decorator on a function, the caching module adds three helpful utility methods to the decorated function. Note that these methods are not available with the cache_by_args() decorator.

cachedfunction.clear_cache()

This will clear any cached data for the function so that the next call will execute the function and refresh the cached data.

Example:

@cache('herd-cats')
def herd_cats():
    count = do_herd_cats()
    return count

herd_cats.clear_cache()

cachedfunction.cached()

Returns the cached value for the function if it’s set, otherwise it returns None.
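
For instance, continuing the herd_cats example above:

count = herd_cats.cached()
if count is None:
    # Nothing has been cached yet; a normal call will execute the
    # function and refresh the cache.
    count = herd_cats()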

cachedfunction.uncached()

Skips the caching layer completely and executes the function. This is essentially the same as calling the function without it ever being decorated.
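
For instance, again using herd_cats (this call neither reads nor writes the cache):

fresh_count = herd_cats.uncached()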

Backends

Several backend classes are provided, as well as a special layering backend, LayeredBackend. For caching large data, Ferris provides two classes, MemcacheChunkedBackend and DatastoreChunkedBackend, which automatically break objects larger than their backend's limits into smaller chunks. These chunking classes should only be used for large objects, because chunking incurs a small overhead; otherwise, use MemcacheBackend and DatastoreBackend.
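
Example of using a chunked backend (a sketch; generate_large_report is illustrative, and the backend class is passed the same way the plain backend classes are used elsewhere in this section):

from ferris.core.caching import cache, MemcacheChunkedBackend

@cache('large_report', ttl=3600, backend=MemcacheChunkedBackend)
def generate_large_report():
    ...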

class ferris.core.caching.LocalBackend[source]

The local backend stores caches in a thread-local variable. The cached values are available only to the current thread, and typically only for the duration of a single request.
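
Example (a sketch; get_settings is illustrative, for a value that only needs to live for one request):

from ferris.core.caching import cache, LocalBackend

@cache('request_settings', backend=LocalBackend)
def get_settings():
    ...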

class ferris.core.caching.MemcacheBackend[source]

Stores caches in memcache. Memcache is shared across instances, but items may be evicted before their expiration time.

class ferris.core.caching.MemcacheCompareAndSetBackend[source]

Same as the regular memcache backend but uses compare-and-set logic to ensure that memcache updates are atomic.

class ferris.core.caching.DatastoreBackend[source]

Stores caches in the datastore, which makes them durable and persistent, unlike the memcache and local backends. Items stored in the datastore are guaranteed to remain until the expiration time passes.

class ferris.core.caching.LayeredBackend(*args)[source]

Allows you to use multiple backends at once. When an item is cached it is written to each backend. Retrieval checks each backend in order until the item is found. This is very useful when combining fast but volatile backends (like local) with slow but durable backends (like datastore).

Example:

from ferris.core.caching import cache, LayeredBackend, LocalBackend, DatastoreBackend

@cache('something_expensive', ttl=3600, backend=LayeredBackend(LocalBackend, DatastoreBackend))
def expensive_function():
    ...