The Fibonacci numbers are a well-known recursive sequence, defined as follows:

```
f[0] = 0
f[1] = 1
f[n] = f[n-1] + f[n-2]
```

The question is, how can we calculate them?

The first and probably most intuitive idea is to compute them recursively. Why? Because the structure of the sequence itself is recursive, which means the implementation will closely mirror our definition.

I'll choose JavaScript as the implementation language, simply because you can open the developer console in your browser, paste in the snippets, and see the results immediately.

## 1. Straightforward recursive implementation

We can simply take our definition, add a little bit of syntax, and voilà, we're done.

```
function fib(n) {
  if (n < 2) {
    return n;
  } else {
    return fib(n - 1) + fib(n - 2);
  }
}
```

First, we should test whether this function actually works.

```
> [fib(0), fib(1), fib(2), fib(3), fib(4), fib(5)]
[0, 1, 1, 2, 3, 5]
```

Everything looks nice, but what if we try to calculate a larger number? What is the largest number that our computer will be able to calculate using this implementation? You might be tempted to figure this out by trial and error, but let's try calculating this first.

By a very rough estimate, we could say that a modern computer does around 1,000,000,000 operations per second. One computer might be 10 or 100 times faster than another, but that won't really bother us, since the end result will be the same.

To get any reasonable estimate, we should first figure out the algorithmic complexity of our little function. At first it seems it should be linear, since to calculate `fib(20)` you only need `fib(19)` and `fib(18)`, and so on. Except that `fib(19)` will calculate `fib(18)` again. We can see this more easily by visualizing it as a tree:
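For example, here's the call tree for `fib(5)`; note how `fib(3)` is computed twice and `fib(2)` three times:

```
fib(5)
├─ fib(4)
│  ├─ fib(3)
│  │  ├─ fib(2)
│  │  │  ├─ fib(1)
│  │  │  └─ fib(0)
│  │  └─ fib(1)
│  └─ fib(2)
│     ├─ fib(1)
│     └─ fib(0)
└─ fib(3)
   ├─ fib(2)
   │  ├─ fib(1)
   │  └─ fib(0)
   └─ fib(1)
```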

As you can see, we're calling `fib(n)` multiple times for the same input. Specifically, the height of the tree will be $n$, since the calculated value decreases by $1$ on each level. If this were a balanced binary tree, we could easily conclude that it has an exponential number of nodes, $2^n$ to be specific, but you can already see that one of the two branches will have fewer children. How many exactly? Let's use a bit of math.

We can use the same kind of recurrence for calculating the number of nodes, since:

- the trees for both `fib(0)` and `fib(1)` have `1` node,
- the tree for `fib(2)` has `3` nodes, since it needs to calculate `fib(0)` and `fib(1)`, which both have `1` node, and then put those two together,
- the tree for `fib(3)` has `5` nodes: the `3` nodes of `fib(2)`, the `1` node of `fib(1)`, plus the root itself,
- in general, if `t(N)` is the number of nodes in the tree for `fib(N)`, then `t(N) = t(N-1) + t(N-2) + 1`.

Given $t_0 = t_1 = 1$ and $t_{n+2} = t_{n+1} + t_{n} + 1$, we can see that $t_n$ already grows faster than the Fibonacci numbers themselves, so if we can simplify this and show that the Fibonacci numbers grow exponentially, it will also mean that the number of nodes in the tree grows exponentially. There are many ways to derive the closed-form formula for the Fibonacci numbers; one particularly nice derivation uses generating functions (the same formula can also be derived using linear algebra). The resulting formula is:

$$f_n = \frac{1}{\sqrt{5}} \left( \left( \frac{1 + \sqrt{5}}{2} \right)^n - \left(\frac{1 - \sqrt{5}}{2} \right)^n \right)$$
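For the curious, the closed-form formula can be typed straight into the console (a quick sketch; note it uses floating-point math, so the result is only exact while it fits comfortably in a double, up to roughly `fib(70)`):

```javascript
function fibClosed(n) {
  var sqrt5 = Math.sqrt(5);
  var phi = (1 + sqrt5) / 2; // the golden ratio
  var psi = (1 - sqrt5) / 2; // its conjugate, |psi| < 1
  // Binet's formula; rounding removes the tiny floating-point error
  return Math.round((Math.pow(phi, n) - Math.pow(psi, n)) / sqrt5);
}
```

For example, `[fibClosed(0), fibClosed(1), fibClosed(5), fibClosed(10)]` gives `[0, 1, 5, 55]`.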

At this point we can see that the Fibonacci numbers grow exponentially, and so will the number of nodes in the computation tree for our naive recursive implementation.

This is where the problem comes from: a binary tree of height $n$ can have up to $O(2^n)$ nodes, meaning our complexity is exponential (the true growth is closer to $O(1.6^n)$, but it is still exponential).
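You can verify this growth empirically by instrumenting the naive implementation with a call counter (a quick sketch; the counts follow the node-count recurrence from above):

```javascript
var calls = 0;
function fib(n) {
  calls++; // count every invocation, including the repeated ones
  if (n < 2) {
    return n;
  }
  return fib(n - 1) + fib(n - 2);
}

var result = fib(20);
console.log(result, calls); // 6765, reached via 21891 calls
```

A single `fib(20)` already triggers over twenty thousand calls, and each increment of `n` multiplies that by roughly the golden ratio.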

Using that loose $2^n$ bound, `fib(40)` would take roughly $10^{12}$ operations, `fib(50)` roughly $10^{15}$, and so on. Even if we get a very fast computer, we wouldn't be able to get anywhere near `fib(100)`.

We can already see that this algorithm is clearly bad and wouldn't be very practical in real life (if you ever needed Fibonacci numbers in real life), so let's try to improve it.

It feels as if the algorithm should be linear. I bet that if someone asked you to calculate the first 20 Fibonacci numbers on paper, you'd start with $0$ and $1$, and then just iterate forward.

## 2. Recursive implementation with dynamic programming

Ideally we'd like to keep our simple recursive implementation while improving the performance to a point where it's comparable to an iterative solution (in terms of speed). Earlier we established that the main bottleneck lies in the repetitive computations.

We'll use dynamic programming to fix this, which basically introduces a cache (or a memoization mechanism, also called a dynamic programming table) used to store the intermediate results. Once we compute the number for a specific parameter, we'll store it in the table and never compute it again. This way we only need to compute each number once, landing at linear time complexity.

```
var table = [0, 1];
function fib(n) {
  if (typeof table[n] !== "undefined") {
    return table[n];
  }
  table[n] = fib(n - 1) + fib(n - 2);
  return table[n];
}
```

Note that we don't need to resize the array to fit the values. This is only possible thanks to JavaScript's array implementation, which behaves a lot more like a hash map than like an array.
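A quick illustration of that hash-map-like behavior: assigning far past the end of an array just works, and the skipped indices simply stay empty.

```javascript
var a = [0, 1];
a[10] = 55;            // no explicit resizing needed
console.log(a.length); // 11
console.log(a[5]);     // undefined, the gap was never filled
```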

While we did speed up the algorithm significantly, we also traded memory for speed, as computing `fib(N)` will require a table of size `N` to store the intermediate results. Before optimizing this further, we can look at one more dynamic programming solution.

In general there are two ways to approach dynamic programming. One is top-down, which is what we've done in the previous example, and the second one is bottom-up, which is shown in the next snippet.

```
function fib(n) {
  var table = [0, 1];
  for (var i = 2; i <= n; i++) {
    table[i] = table[i - 1] + table[i - 2];
  }
  return table[n];
}
```

Highlighting the differences between top-down and bottom-up:

- top-down generally starts at the solution, and recursively computes all the dependencies using memoization
- bottom-up builds up bigger solutions from smaller ones, until it reaches the final solution

This is an admittedly simple example, but it is quite useful to know both of these approaches, as some problems are easier to solve top-down, and some are easier to solve using the bottom-up approach.

Regardless of which approach you choose, it still uses $O(N)$ memory. While this is not ideal if you just want to compute a single value, having the table pre-computed might come in very handy if you're calling the function often to get different Fibonacci numbers. I'll leave it as an exercise to the reader to modify the bottom-up approach to remember the cached values between calls, and only compute the needed part of the table, such that calling `fib(10)` and then `fib(15)` would compute the first 10 Fibonacci numbers only once.
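If you want to check your answer afterwards, here's one possible sketch: the table lives outside the function and is only extended past what has already been computed.

```javascript
var table = [0, 1];
function fib(n) {
  // only fill in the indices we haven't computed yet
  for (var i = table.length; i <= n; i++) {
    table[i] = table[i - 1] + table[i - 2];
  }
  return table[n];
}
```

Calling `fib(10)` fills indices 2 through 10; a later `fib(15)` only computes indices 11 through 15.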

## 3. Iteration

Last but not least, we can get rid of the memoization table and only compute the n-th number. This is rather easy: we just modify the bottom-up dynamic programming approach to keep only the last two values.

```
function fib(n) {
  if (n < 2) {
    return n;
  }
  var x = 0;
  var y = 1;
  for (var i = 2; i <= n; i++) {
    var tmp = y;
    y = x + y;
    x = tmp;
  }
  return y;
}
```

(Note the base-case check is `n < 2`, not `n <= 2`: `fib(2)` must go through the loop once to produce `1` rather than returning `n` itself.)
```

The advantage of this approach is that we get the best of both worlds. The algorithm runs in linear time and consumes only a constant amount of memory, and there's no recursion, so we don't have to worry about stack-limit-exceeded types of errors.

The downside is that if you're going to be calculating lots of Fibonacci numbers, it will do the work over and over again, while the dynamic programming approaches could make use of memoization. (Note that the iterative approach could also be classified as bottom-up dynamic programming, but for the sake of illustration, I'm showing dynamic programming with an explicit use of a memoization table.)

## Summary

In conclusion, there's no single best solution, and you should pick one based on your use case. While Fibonacci numbers are a very simple example, you can already see that there are multiple approaches to the same problem, without a clear winner.
