Automatic memoization in C++0x

Memoization is a pretty well-known optimization technique which consists in “remembering” (i.e.: caching) the results of previous calls to a function, so that repeated calls with the same parameters are resolved without repeating the original computation.

A few days ago, while trying to show a colleague the benefits of a modern high-level language like Python over C++, I came up with the following snippet:

def memoize(fn):
    cache = {}
    def memoized_fn(*args):
        if args not in cache:
            cache[args] = fn(*args)
        return cache[args]
    return memoized_fn

It is a small function which takes a function as its only parameter, and returns a memoized version of that function. It is short, it shows some interesting Python features, like built-in dictionaries and tuples, or functions as first-class objects, and it should be pretty readable.

To make a fair comparison I needed to code a C++ version too. I was thinking about writing, just to prove my point, the classic boilerplate-filled template to create a function object, and using typelists and compile-time recursion to allow an arbitrary number of parameters. But it would have been quite boring. Also, it turns out that, with the upcoming C++ standard supporting lambda functions, tuples and variadic templates, it is possible to get rid of most of the boilerplate and use pretty much the same functional approach. Moreover, gcc 4.5 already supports these things, so I decided to give it a go:

template <typename ReturnType, typename... Args>
std::function<ReturnType (Args...)> memoize(std::function<ReturnType (Args...)> func)
{
    std::map<std::tuple<Args...>, ReturnType> cache;
    return ([=](Args... args) mutable {
            std::tuple<Args...> t(args...);
            if (cache.find(t) == cache.end())
                cache[t] = func(args...);
            return cache[t];
        });
}
Tricky things to note about the C++ version:

  • The new lambda syntax: the equals sign in [=] means “capture the local variables of the surrounding scope by value”, which is needed because we are returning the lambda function: the local variable will be gone by the time the lambda runs, so we can’t hold a reference to it. Since we capture by value but intend to modify the captured cache, the lambda must be marked mutable (see “Appearing and disappearing consts in C++” by Scott Meyers)
  • Lambda functions are function objects of implementation-dependent type, so we need to use std::function as the return type from memoize() to wrap our lambda.

I still like the Python version better, as it looks cleaner to me, but I’m glad the new features can help us reduce the amount of boilerplate where switching to newer languages is not possible (sometimes you just NEED the extra speed). Kudos to the C++ standards committee for the improvements and the gcc team for keeping up with them.

EDIT (2011/03/21): Thanks everyone for the feedback, both in the comments here and in the reddit thread. Some additional notes:

  • Here you have the complete sample file. I tested it under g++ 4.5.2 on MinGW, with -std=c++0x.
  • This is a proof of concept, which I did to become familiar with the new language features. As some people pointed out, a map is not the best data structure to use as a cache (it has O(log n) lookups). You will probably do better if you use a hash map (like the new std::unordered_map in C++0x). I chose map just for the sake of clarity. Also, you will need to define an operator< for any type you would like to use (or a hash function for unordered_map).
  • There were also suggestions to use lower_bound or equal_range to avoid the second map lookup and use the resulting iterator also as an insertion hint. I thought about saving the result in a local variable to avoid another lookup, but I wanted it to be as close as possible to the python version, just for clarity. I also didn’t know about these functions, so thanks for the tip! :D
  • Some people also pointed out that this example doesn’t work with recursive functions. That’s completely true. In this post on stackoverflow the user larsmans suggests that I’m leaving the implementation of a fixed-point combinator as an exercise to the reader. Maybe it would be a good exercise for the writer, too… if I’m able to write something like that it will surely deserve its own post ;D
