I just discovered dynamic programming and damn, I'm blown away by the concept. The other day, while working through a homework assignment, I compared the run times of two Python functions I wrote: one written recursively, the other in a dynamic programming fashion. Starting with the recursive solution, I arrived at the following:
    def fibonacci(n):
        if n == 0:
            return 0
        if n == 1:
            return 1
        return fibonacci(n-1) + fibonacci(n-2)
That's a fairly standard implementation of Fibonacci. There are two base cases: n = 0 and n = 1. When n is either of these two numbers, the function simply returns 0 or 1, respectively. For any other number, the function recursively calls itself until it reaches one of those base cases.
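As a quick sanity check (assuming the function above is loaded in a Python REPL), the base cases and a small value behave as expected:

    >>> fibonacci(0)
    0
    >>> fibonacci(1)
    1
    >>> fibonacci(10)
    55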
So far so good, right? For small values of n, this implementation doesn't really present a problem. But say we want to calculate fibonacci(40). How long does that take? Alarmingly, the computation hovers around 45 seconds:
    ./fibonacci.py 40
    fibonacci(40) = 102334155
    took 45.22154 seconds
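Where do those 45 seconds go? One way to see it is to instrument the recursive function with a call counter (the counter below is my own illustration, not part of the original script). Every call spawns two more calls, and the same subproblems get recomputed again and again:

    call_count = 0

    def fibonacci_counted(n):
        # Identical to fibonacci() above, but tallies how many times it runs.
        global call_count
        call_count += 1
        if n == 0:
            return 0
        if n == 1:
            return 1
        return fibonacci_counted(n-1) + fibonacci_counted(n-2)

    fibonacci_counted(30)
    print(call_count)  # 2692537 calls just for n = 30

By n = 40, that same arithmetic works out to over 330 million calls, which is why the runtime balloons to 45 seconds.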
Now, what if we run the same calculation, but this time using a dynamic programming technique? How much time does that shave off?
    ./fibonacci.py --dynamic-programming 40
    fibonacci(40) = 102334155
    took 0.00002 seconds
What?! From 45 seconds down to under a millisecond?! How is that possible?
    def fibonacci_dynamic_programming(n):
        # Build the sequence bottom-up; each value is computed exactly once.
        fibonacci = [0, 1]
        for _ in range(2, n + 1):
            fibonacci.append(fibonacci[-1] + fibonacci[-2])
        return fibonacci[n]
As you can see from the code above, instead of recursively calling fibonacci, we iteratively calculate all the values from the bottom up, storing each one in a list as we go. Because every value is computed exactly once and then reused, this implementation runs in linear time (i.e., in direct proportion to n). The first, recursive implementation, by contrast, runs in exponential time: fibonacci(n) calls fibonacci(n-2) directly and then again inside fibonacci(n-1), and that duplicated work compounds all the way down the call tree.
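Building a table from the bottom up, like fibonacci_dynamic_programming does, is one flavor of dynamic programming. The other common flavor is top-down memoization: keep the recursive structure, but cache each result so no subproblem is ever solved twice. As a sketch (not part of my original assignment), Python's functools.lru_cache makes this a one-line change:

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def fibonacci_memoized(n):
        # Same recursion as before, but every result is cached after its
        # first computation, so the call tree collapses to O(n) calls.
        if n == 0:
            return 0
        if n == 1:
            return 1
        return fibonacci_memoized(n-1) + fibonacci_memoized(n-2)

    print(fibonacci_memoized(40))  # 102334155, effectively instant

Either way, the key idea is the same: store each subproblem's answer once and reuse it, trading a little memory for an enormous amount of time.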