Caching

marimo comes with utilities to cache intermediate computations. These utilities can be applied as decorators to functions to cache their returned values; you can choose between saving caches in memory or to disk.

Basic usage

marimo provides two decorators for caching the return values of expensive functions:

  1. mo.cache, which saves cached values to memory;
  2. mo.persistent_cache, which saves cached values to disk.
import marimo as mo
import numpy as np

# in-memory cache
@mo.cache
def compute_embedding(data: str, embedding_dimension: int, model: str) -> np.ndarray:
    ...

import marimo as mo
import numpy as np

# disk cache
@mo.persistent_cache
def compute_embedding(data: str, embedding_dimension: int, model: str) -> np.ndarray:
    ...

Roughly speaking, the first time a cached function is called with a particular sequence of arguments, the function will run and its return value will be cached. The next time it is called with the same sequence of arguments (on cache hit), the function body will be skipped and the return value will be retrieved from cache instead.
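For example, here is a sketch of how hits and misses play out, using a hypothetical slow_square function:

import time

import marimo as mo

@mo.cache
def slow_square(x: int) -> int:
    time.sleep(1)  # stand-in for an expensive computation
    return x * x

slow_square(3)  # cache miss: runs the body (~1 second) and stores 9
slow_square(3)  # cache hit: skips the body and returns 9 immediately
slow_square(4)  # new arguments, so another cache miss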

The in-memory cache (mo.cache) is faster and doesn't consume disk space, but it is lost on notebook restart. The disk cache (mo.persistent_cache) is slower and consumes space on disk, but it persists across notebook runs, letting you pick up where you left off.

(For an in-memory cache of bounded size, use mo.lru_cache.)

Where persistent caches are stored

By default, persistent caches are stored in __marimo__/cache/, in the directory of the current notebook. For projects versioned with git, consider adding **/__marimo__/cache/ to your .gitignore.

Caches are preserved even when a cell is re-run

If a cell defining a cached function is re-run, the cache will be preserved unless its source code (or the source code of the cell's ancestors) has changed.

Persistent cache context manager

You can also use marimo's mo.persistent_cache as a context manager:

with mo.persistent_cache("my_cache_name"):
    X = my_expensive_computation(data, model)

The next time this block of code is run, if marimo detects a cache hit, the code will be skipped and your variables will be loaded into memory. The cache key for the context manager is computed in the same way as it is computed for decorated functions.

Cache key

Both mo.cache and mo.persistent_cache use the same mechanism for creating a cache key, differing only in where the cache is stored. The cache key is based on function arguments and closed-over variables.

Function arguments

Arguments must be primitive, marimo UI elements, array-like, or pickleable:

  1. Primitive types (strings, bytes, numbers, None) are hashed.
  2. marimo UI elements are hashed based on their value.
  3. Array-like objects are introspected, with their values being hashed.
  4. All other objects are pickled.
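For example, every argument in this sketch produces a valid cache key (assuming numpy is installed; the function itself is hypothetical):

import marimo as mo
import numpy as np

@mo.cache
def summarize(label: str, weights: np.ndarray, options: list[str]) -> str:
    # label is a primitive (hashed directly); weights is array-like
    # (introspected and hashed by value); options is pickled
    return f"{label}: mean={weights.mean():.2f}, {len(options)} options"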

Closed-over variables

Syntactically closing over variables provides another way to parametrize functions. In this example, the variable x is "closed over":

x = 0
def my_function():
    return x + 1

Closed-over variables are processed in the following way:

  • marimo first attempts to hash or pickle the closed-over variables, just as it does for arguments.
  • If a closed-over variable cannot be hashed or pickled, then marimo uses the source code that defines its value as part of the cache key; in particular, marimo hashes the cell that defines the variable as well as the source code of that cell's ancestors. This assumes that the variable's value is a deterministic function of the source code that defines it, although certain side effects (specifically, whether a cell raised an exception or loaded from another cache) are taken into account.

Because marimo's cache key construction can fall back to source code for closed-over variables, closing over variables lets you cache functions even when they depend on non-hashable and non-pickleable values.
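For instance, in this sketch a database connection (the hypothetical create_connection returns an object that is neither hashable nor pickleable) is closed over, and marimo falls back to source-code hashing:

# Cell 1
conn = create_connection()  # hypothetical; cannot be hashed or pickled

# Cell 2
@mo.cache
def run_query(sql: str):
    # conn is closed over: marimo keys on the source code of the cell
    # that defines conn (and that cell's ancestors), not on conn itself
    return conn.execute(sql)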

Limitations

marimo's cache has some limitations:

  • Side effects are not cached. This means that on a cache hit, side effects like printing, file I/O, or network requests will not occur.
  • The source code of imported modules is not used when computing the cache key.
    • By setting pin_modules to True, you can ensure that the cache is invalidated when module versions change (e.g., when the module's __version__ attribute changes); see the sketch after this list.
    • This limitation does not apply if the external module is a marimo notebook.
  • The return values of persistently cached functions must be serializable with pickle.
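As referenced above, here is a sketch of pin_modules, assuming the decorator accepts it as a keyword argument when used as a factory (see the API below):

import marimo as mo
import numpy as np

@mo.cache(pin_modules=True)
def transform(x: np.ndarray) -> np.ndarray:
    # with pin_modules=True, cached entries are invalidated when
    # numpy's version changes
    return np.fft.fft(x)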

Don't mutate variables

marimo works best when you don't mutate variables across cells. The same is true for caching, since the cache key may not always be able to take mutations into account.
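For example, this hypothetical sketch can serve stale results:

# Cell 1
model = load_model()  # hypothetical; not pickleable, so keyed on defining source code

# Cell 2
model.temperature = 0.9  # in-place mutation: not visible to the cache key

# Cell 3
@mo.cache
def generate(prompt: str):
    # may return results cached under the old temperature
    return model.generate(prompt)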

Decorators defined in other Python modules

Functions wrapped by a decorator from another Python module cannot be cached correctly if that decorator does not use functools.wraps: the wrappers are indistinguishable to marimo, so their cache entries can collide. This can lead to confusing bugs like the example below:

# my_lib.py
def my_decorator(func):
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

# Cell 1
import marimo as mo
from my_lib import my_decorator

@mo.cache
@my_decorator
def expensive_function():
    # ... some computation
    return "result1"

@mo.cache
@my_decorator
def another_expensive_function():
    # ... different computation
    return "result2"

# This assertion may unexpectedly pass due to cache collision!
assert expensive_function() == another_expensive_function(), "But why?"

The fix is to make sure the decorator uses functools.wraps:

# my_lib.py (fixed)
from functools import wraps

def my_decorator(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper

In this instance, the cache will work as expected because the decorated function has the same signature and metadata as the original function.
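With the fixed decorator, the two functions no longer collide (a sketch continuing the example above):

assert expensive_function() == "result1"
assert another_expensive_function() == "result2"  # distinct cache entries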

Comparison with functools.cache

Here is a table comparing marimo's cache with functools.cache:

| Feature | marimo cache | functools.cache |
| --- | --- | --- |
| Cache return values in memory? | Yes | Yes |
| Cache return values to disk? | Yes (with mo.persistent_cache) | No |
| Preserved on cell re-runs? | Yes | No |
| Tracks closed-over variables? | Yes | No |
| Allows unhashable arguments? | Yes | No |
| Allows array-like arguments? | Yes | No |
| Suitable for lightweight functions (microseconds)? | No | Yes |

When to use functools.cache

Prefer functools.cache for extremely lightweight functions that execute in under a millisecond. Using memoization to compute the Fibonacci sequence is a classic example of functools.cache used effectively. On a basic MacBook in pure Python, fib(35) takes about 1 second to compute; with mo.cache it takes 0.000229 seconds; with functools.cache, it takes 0.000025 seconds (roughly 9x faster). Although small in absolute terms, the overhead of mo.cache (and, even more so, mo.persistent_cache) is larger than that of functools.cache. If your function takes more than a few milliseconds to compute, the difference is negligible.
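Here is a minimal sketch for reproducing this comparison yourself (timings will vary by machine):

import functools
import time

import marimo as mo

@functools.cache
def fib_ft(n: int) -> int:
    return n if n <= 1 else fib_ft(n - 1) + fib_ft(n - 2)

@mo.cache
def fib_mo(n: int) -> int:
    return n if n <= 1 else fib_mo(n - 1) + fib_mo(n - 2)

for label, fib in [("functools.cache", fib_ft), ("mo.cache", fib_mo)]:
    start = time.perf_counter()
    fib(35)
    print(label, time.perf_counter() - start)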

Tips

Isolate cached code blocks to their own cells

Isolating cached functions in separate cells improves cache reliability. When dependencies and cached functions are in the same cell, any change to the cell invalidates the cache, even if the cached function itself hasn't changed. Separating them ensures the cache is only invalidated when the function actually changes.

Don't do this:

# Cell 1
llm_client = ...
@mo.cache
def prompt_llm(query, **kwargs):
    message = {"role": "user", "content": query}
    return llm_client.chat.completions.create(messages=[message], **kwargs)

Do this instead:

# Cell 1
llm_client = ...
# Cell 2
@mo.cache
def prompt_llm(query, **kwargs):
    message = {"role": "user", "content": query}
    return llm_client.chat.completions.create(messages=[message], **kwargs)

Close over unhashable or un-pickleable arguments

The cache key is constructed in part by hashing or pickling function arguments. When you call a cached function with arguments that cannot be processed in this way, an exception is raised. To parametrize cached functions with unhashable or un-pickleable arguments, syntactically close over them instead.

You can't do this:

# Cell 1
@mo.cache
def query_database(query, engine):
    return engine.execute(query)

# This won't work because my_database_engine can be neither hashed nor pickled
query_database("SELECT * FROM my_table", my_database_engine)

Instead, you can close over my_database_engine:

Do this:

# Cell 1
my_database_engine = ...
# Cell 2
@mo.cache
def query_database(query):
    return my_database_engine.execute(query)

Close over low-memory-footprint variables

Non-primitive closed-over variables are serialized for cache key generation. When possible, compute derived values (like length) outside the cache block and only use the small values inside.

Don't do this:

with mo.persistent_cache("bad example"):
    length = len(my_very_large_dataset)
    ... # uses length

Do this instead:

length = len(my_very_large_dataset)  # my_very_large_dataset is not needed for cache invalidation

with mo.persistent_cache("good example"):
    ... # uses length

Use mo.watch.file when working with files

A raw file handle can't be hashed, pickled, or tracked for invalidation, so the cache may serve stale data after the file changes; mo.watch.file gives marimo a handle it can track, invalidating the cache when the file changes.

Don't do this:

my_file = open("my_file.txt")
with mo.persistent_cache("my_file"):
    data = my_file.read()
    # Do something with data
my_file.close()

Do this instead:

# Cell 1
my_file = mo.watch.file("my_file.txt")

# Cell 2
with mo.persistent_cache("my_file"):
    data = my_file.read()
    # Do something with data

API

marimo.cache

cache(fn: Optional[Callable[..., Any]] = None, pin_modules: bool = False, loader: LoaderPartial | LoaderType = MemoryLoader) -> _cache_call
cache(name: str, pin_modules: bool = False, loader: LoaderPartial | Loader | LoaderType = MemoryLoader) -> _cache_context
cache(name: Union[str, Optional[Callable[..., Any]]] = None, *args: Any, pin_modules: bool = False, loader: Optional[Union[LoaderPartial, Loader]] = None, _frame_offset: int = 1, _internal_interface_not_for_external_use: None = None, **kwargs: Any) -> Union[_cache_call, _cache_context]

Cache the value of a function based on args and closed-over variables.

Decorating a function with @mo.cache will cache its value based on the function's arguments, closed-over values, and the notebook code.

Examples:

import marimo as mo


@mo.cache
def fib(n):
    if n <= 1:
        return n
    return fib(n - 1) + fib(n - 2)

mo.cache is similar to functools.cache, but with three key benefits:

  1. mo.cache persists its cache even if the cell defining the cached function is re-run, as long as the code defining the function (excluding comments and formatting) has not changed.
  2. mo.cache keys on closed-over values in addition to function arguments, preventing accumulation of hidden state associated with functools.cache.
  3. mo.cache does not require its arguments to be hashable (only pickleable), meaning it can work with lists, sets, NumPy arrays, PyTorch tensors, and more.

mo.cache obtains these benefits at the cost of slightly higher overhead than functools.cache, so it is best used for expensive functions.

Like functools.cache, mo.cache is thread-safe.

The cache has an unlimited maximum size. To limit the cache size, use @mo.lru_cache. mo.cache is slightly faster than mo.lru_cache, but in most applications the difference is negligible.

Note: mo.cache can also be used as a drop-in replacement for context-block caching like mo.persistent_cache.

PARAMETER DESCRIPTION
pin_modules

if True, the cache will be invalidated if module versions differ.

TYPE: bool DEFAULT: False

Context manager to cache the return value of a block of code.

The mo.cache context manager lets you delimit a block of code in which variables will be cached to memory when they are first computed.

By default, the cache is stored in memory and is not persisted across kernel runs; for that functionality, see mo.persistent_cache.

Examples:

with mo.cache("my_cache") as cache:
    variable = expensive_function()

PARAMETER DESCRIPTION
name

the name of the cache, used to set the save path; to manually invalidate the cache, change the name.

TYPE: Union[str, Optional[Callable[..., Any]]] DEFAULT: None

pin_modules

if True, the cache will be invalidated if module versions differ.

TYPE: bool DEFAULT: False

loader

the loader to use for the cache, defaults to MemoryLoader.

TYPE: Optional[Union[LoaderPartial, Loader]] DEFAULT: None

**kwargs

keyword arguments

TYPE: Any DEFAULT: {}

*args

positional arguments

TYPE: Any DEFAULT: ()

marimo.lru_cache

lru_cache(fn: Optional[Callable[..., Any]] = None, maxsize: int = 128, pin_modules: bool = False) -> _cache_call
lru_cache(name: str, maxsize: int = 128, pin_modules: bool = False) -> _cache_call
lru_cache(name: Union[str, Optional[Callable[..., Any]]] = None, maxsize: int = 128, *args: Any, pin_modules: bool = False, _internal_interface_not_for_external_use: None = None, **kwargs: Any) -> Union[_cache_call, _cache_context]

Decorator for LRU caching the return value of a function.

mo.lru_cache is a version of mo.cache with a bounded cache size. As an LRU (Least Recently Used) cache, only the maxsize most recently used values are retained, with the oldest values discarded first. For more information, see the documentation of mo.cache.

Examples:

import marimo as mo


@mo.lru_cache
def factorial(n):
    return n * factorial(n - 1) if n else 1
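To set the bound explicitly, pass maxsize (a sketch):

@mo.lru_cache(maxsize=32)
def tokenize(text: str) -> list[str]:
    # only the 32 most recently used results are retained
    return text.split()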

PARAMETER DESCRIPTION
maxsize

the maximum number of entries in the cache; defaults to 128. Setting to -1 disables cache limits.

TYPE: int DEFAULT: 128

pin_modules

if True, the cache will be invalidated if module versions differ.

TYPE: bool DEFAULT: False

Context manager for LRU caching the return value of a block of code.

PARAMETER DESCRIPTION
name

Namespace key for the cache.

TYPE: Union[str, Optional[Callable[..., Any]]] DEFAULT: None

maxsize

the maximum number of entries in the cache; defaults to 128. Setting to -1 disables cache limits.

TYPE: int DEFAULT: 128

pin_modules

if True, the cache will be invalidated if module versions differ.

TYPE: bool DEFAULT: False

**kwargs

keyword arguments passed to cache()

TYPE: Any DEFAULT: {}

*args

positional arguments passed to cache()

TYPE: Any DEFAULT: ()

marimo.persistent_cache

persistent_cache(name: str, save_path: str | None = None, method: LoaderKey = 'pickle', pin_modules: bool = False) -> _cache_context
persistent_cache(fn: Optional[Callable[..., Any]] = None, save_path: str | None = None, method: LoaderKey = 'pickle', pin_modules: bool = False) -> _cache_call
persistent_cache(name: Union[str, Optional[Callable[..., Any]]] = None, save_path: str | None = None, method: LoaderKey = 'pickle', store: Optional[Store] = None, fn: Optional[Callable[..., Any]] = None, *args: Any, pin_modules: bool = False, _internal_interface_not_for_external_use: None = None, **kwargs: Any) -> Union[_cache_call, _cache_context]

Context manager to save variables to disk and restore them thereafter.

The mo.persistent_cache context manager lets you delimit a block of code in which variables will be cached to disk when they are first computed. On subsequent runs of the cell, if marimo determines that neither this block of code nor its ancestors have changed, it will restore the variables from disk instead of re-computing them, skipping execution of the block entirely.

Restoration happens even across notebook runs, meaning you can use mo.persistent_cache to make notebooks start instantly, with variables that would otherwise be expensive to compute already materialized in memory.

Examples:

with mo.persistent_cache(name="my_cache"):
    variable = expensive_function()  # This will be cached to disk.
    print("hello, cache")  # this will be skipped on cache hits

In this example, variable will be cached the first time the block is executed, and restored on subsequent runs of the block. On a cache hit, the contents of the with block are skipped entirely; side effects such as writing to stdout and stderr will not occur.

Note that changes to mo.state and UIElement values also trigger cache invalidation, and cached values are updated accordingly.
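For example (a sketch; the slider and data are hypothetical):

# Cell 1
threshold = mo.ui.slider(0, 100)

# Cell 2
with mo.persistent_cache("filtered"):
    # moving the slider changes threshold.value, which changes the cache key
    filtered = [x for x in data if x > threshold.value]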

Warning: because this context manager relies on sys.settrace, it may conflict with debugging tools or libraries that also use sys.settrace.

Decorator for persistently caching the return value of a function.

persistent_cache can also be used as a drop-in, function-level memoizer like @mo.cache or @mo.lru_cache. It is slower than mo.cache, but useful for saving function values between kernel restarts. For more details, refer to mo.cache.

Examples:

import marimo as mo


@mo.persistent_cache
def my_expensive_function():
    # Do expensive things
    ...


# or


@mo.persistent_cache(save_path="my/path/to/cache")
def my_expensive_function_cached_in_a_certain_location():
    # Do expensive things
    ...

PARAMETER DESCRIPTION
name

the name of the cache, used to set the save path; to manually invalidate the cache, change the name.

TYPE: Union[str, Optional[Callable[..., Any]]] DEFAULT: None

save_path

the folder in which to save the cache; defaults to __marimo__/cache in the directory of the notebook file.

TYPE: str | None DEFAULT: None

method

the serialization method to use; current options are "json" and "pickle" (default).

TYPE: LoaderKey DEFAULT: 'pickle'

store

optional Store backend for the cache.

TYPE: Optional[Store] DEFAULT: None

fn

the function to wrap, when the decorator is used without arguments.

TYPE: Optional[Callable[..., Any]] DEFAULT: None

*args

positional arguments passed to cache()

TYPE: Any DEFAULT: ()

pin_modules

if True, the cache will be invalidated if module versions differ between runs, defaults to False.

TYPE: bool DEFAULT: False

**kwargs

keyword arguments passed to cache()

TYPE: Any DEFAULT: {}