Function caching is a technique used to improve the efficiency of programs by storing the result of a function for specific inputs.
When the function is called again with the same inputs, the stored result is returned immediately, avoiding redundant computation.
Caching is especially useful in contexts where:
- Functions are expensive in terms of execution time.
- The same inputs are used repeatedly.
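To illustrate the idea before introducing any library support, here is a minimal sketch of caching by hand; the function `slow_square` and the one-second delay are purely illustrative.

```python
import time

def slow_square(n):
    """Stand-in for an expensive computation."""
    time.sleep(1)              # simulate heavy work
    return n * n

results = {}                   # plain dictionary used as a cache

def cached_square(n):
    if n not in results:       # compute only the first time we see this input
        results[n] = slow_square(n)
    return results[n]

print(cached_square(4))        # takes about one second
print(cached_square(4))        # answered immediately from the cache
```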
## Caching Implementation with functools.lru_cache
The `functools` module in Python includes the `@lru_cache` (Least Recently Used cache) decorator, which provides a simple and efficient way to add caching to functions. The decorator keeps a fixed number of results and discards the least recently used ones when that limit is reached.
```python
from functools import lru_cache

@lru_cache(maxsize=128)
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(10))  # Normal calculation
print(factorial(10))  # Cached result
```
In this example:
- The `@lru_cache(maxsize=128)` decorator caches the results of the `factorial` function, up to a maximum of 128 entries.
- Subsequent calls with the same arguments return the cached result instead of recalculating it.
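As a side note, the decorator also attaches a couple of introspection helpers to the wrapped function. The sketch below assumes a fresh interpreter session, so the hit and miss counts start from zero.

```python
from functools import lru_cache

@lru_cache(maxsize=128)
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

factorial(10)
print(factorial.cache_info())  # CacheInfo(hits=0, misses=11, maxsize=128, currsize=11)

factorial(10)                  # this call is answered from the cache
print(factorial.cache_info())  # hits=1, misses=11, currsize=11

factorial.cache_clear()        # empties the cache entirely if we need to start over
```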
The `@lru_cache` decorator supports the following parameters:
| Parameter | Description |
|---|---|
| `maxsize` | Defines the maximum number of entries in the cache. A value of `None` indicates an unlimited size. |
| `typed` | If set to `True`, arguments of different types are stored as different entries in the cache. |
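To make these parameters concrete, here is a small sketch; the `square` function is just an example.

```python
from functools import lru_cache

@lru_cache(maxsize=None, typed=True)  # unlimited cache; argument types are distinguished
def square(x):
    return x * x

square(3)
square(3.0)                 # with typed=True this is cached as a separate entry
print(square.cache_info())  # misses=2, currsize=2: int 3 and float 3.0 did not share an entry
```

With `typed=False` (the default), `3` and `3.0` would share a single cache entry, since they compare equal.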
## Manual Caching Implementation
In addition to `lru_cache`, we can also implement caching manually using data structures such as dictionaries.
```python
cache = {}

def factorial(n):
    if n in cache:           # return the stored result if we have one
        return cache[n]
    if n == 0:
        result = 1
    else:
        result = n * factorial(n - 1)
    cache[n] = result        # store the result for future calls
    return result

print(factorial(10))  # Normal calculation
print(factorial(10))  # Cached result
```
In this example:
- The `cache` dictionary stores the results of `factorial`.
- Before calculating a result, we check whether it is already stored in the cache and return the stored value if it is available.
In general, the result is similar to what we would obtain with `@lru_cache`, but knowing how to implement caching by hand is useful when we need to customize the behavior of the cache, as the sketch below illustrates.
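For example, a manual cache can enforce policies that `@lru_cache` does not offer directly, such as discarding entries after a fixed amount of time. The following sketch shows one possible approach; the `ttl_cache` decorator and its `seconds` parameter are hypothetical helpers written for this example, not part of the standard library.

```python
import time
from functools import wraps

def ttl_cache(seconds):
    """Hypothetical decorator: cache results, but recompute entries older than `seconds`."""
    def decorator(func):
        cache = {}                                # maps args -> (timestamp, result)

        @wraps(func)
        def wrapper(*args):
            now = time.monotonic()
            if args in cache:
                timestamp, result = cache[args]
                if now - timestamp < seconds:     # entry is still fresh
                    return result
            result = func(*args)                  # compute (or recompute) and refresh the entry
            cache[args] = (now, result)
            return result
        return wrapper
    return decorator

@ttl_cache(seconds=60)
def factorial(n):
    if n == 0:
        return 1
    return n * factorial(n - 1)

print(factorial(10))  # Normal calculation
print(factorial(10))  # Cached result (reused for up to 60 seconds)
```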