Function caching is a technique used to improve program efficiency by storing the result of a function for specific inputs.
When the function is called again with the same inputs, the stored result is returned immediately, avoiding redundant calculation.
Caching is especially useful in contexts where:

- Functions are expensive in terms of execution time
- The same inputs are used repeatedly
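To make the payoff concrete, here is a quick timing sketch (not from the original text) using a plain dictionary as the cache; the function name and the artificial delay are purely illustrative:

```python
import time

_results = {}  # illustrative cache mapping inputs to results

def slow_double(n):
    """Return 2*n, with a delay standing in for expensive work."""
    if n in _results:
        return _results[n]    # cache hit: no recalculation
    time.sleep(0.1)           # simulated expensive computation
    _results[n] = n * 2
    return _results[n]

start = time.perf_counter()
slow_double(21)               # first call pays the full cost
first = time.perf_counter() - start

start = time.perf_counter()
slow_double(21)               # repeated input: answered from the cache
second = time.perf_counter() - start

print(second < first)         # the cached call is far faster
```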
## Implementing Caching with functools.lru_cache
Python's `functools` module includes the `@lru_cache` decorator (Least Recently Used cache), a simple and efficient way to add caching to functions.
This decorator maintains a fixed number of results and removes the least recently used ones when the limit is reached.
```python
from functools import lru_cache

@lru_cache(maxsize=128)
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(10))  # normal calculation
print(factorial(10))  # cached result
```
In this example:

- The `@lru_cache(maxsize=128)` decorator caches the results of the `factorial` function, up to a maximum of 128 entries.
- Subsequent calls with the same arguments return the cached result instead of recalculating it.
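Beyond the caching itself, `lru_cache` attaches `cache_info()` and `cache_clear()` helpers to the decorated function, which are handy for verifying the cache is actually being hit. Reusing the `factorial` example above:

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

factorial(10)                  # fills the cache (11 misses: n = 0..10)
factorial(10)                  # answered entirely from the cache (1 hit)

info = factorial.cache_info()  # named tuple: hits, misses, maxsize, currsize
print(info.hits, info.misses, info.currsize)  # 1 11 11

factorial.cache_clear()        # empties the cache when needed
```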
The `@lru_cache` decorator supports the following parameters:
| Parameter | Description |
|---|---|
| maxsize | Maximum number of cached entries (default 128). A value of `None` means the cache can grow without bound |
| typed | If set to `True`, arguments of different types are cached separately (e.g., `3` and `3.0` become distinct entries) |
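A small sketch of the `typed` parameter in action (the `describe` function is invented for illustration): with `typed=True`, an `int` and a `float` that compare equal still get separate cache entries:

```python
from functools import lru_cache

@lru_cache(maxsize=None, typed=True)
def describe(x):
    return f"{x!r} ({type(x).__name__})"

describe(3)    # miss: cached under the int key
describe(3.0)  # miss: typed=True gives float 3.0 its own entry
describe(3)    # hit on the existing int entry

info = describe.cache_info()
print(info.misses, info.hits)  # 2 1
```

With `typed=False` (the default), `3` and `3.0` would share a single entry, since they compare and hash as equal.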
## Manual Implementation of Caching
Besides `lru_cache`, we can also implement caching manually using data structures like dictionaries.
```python
cache = {}

def factorial(n):
    if n in cache:
        return cache[n]
    if n == 0:
        result = 1
    else:
        result = n * factorial(n - 1)
    cache[n] = result
    return result

print(factorial(10))  # normal calculation
print(factorial(10))  # cached result
```
In this example:

- The `cache` dictionary stores the results of `factorial`.
- Before computing a result, we check whether it is already stored in the cache, returning the stored value if available.
In general, the result is the same as with `@lru_cache`, but implementing the cache by hand is useful when we need to customize its behavior, such as the eviction policy or how entries are invalidated.
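As one illustration of such customization, here is a hypothetical `memoize_with_limit` decorator that swaps LRU eviction for a simple first-in-first-out policy; every name in this sketch is invented for the example:

```python
from functools import wraps

def memoize_with_limit(maxsize=128):
    """Hypothetical decorator: dictionary cache with FIFO eviction."""
    def decorator(func):
        cache = {}
        @wraps(func)
        def wrapper(*args):
            if args in cache:
                return cache[args]
            result = func(*args)
            if len(cache) >= maxsize:
                # custom policy: evict the oldest entry (FIFO, not LRU)
                cache.pop(next(iter(cache)))
            cache[args] = result
            return result
        wrapper.cache = cache  # expose the cache for inspection
        return wrapper
    return decorator

@memoize_with_limit(maxsize=2)
def add(a, b):
    return a + b

add(1, 2); add(3, 4); add(5, 6)  # third call evicts the (1, 2) entry
print((1, 2) in add.cache)       # False
```

Exposing the cache as an attribute also lets callers inspect or invalidate individual entries, which `lru_cache` does not allow.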
