# Dynamic programming with Fibonacci numbers

I think that going through the different ways to calculate a Fibonacci number is a great way to understand dynamic programming, so let's do it!

## Fibonacci numbers

First, let's talk about what a Fibonacci number is.

The eighth Fibonacci number is equal to the seventh Fibonacci number plus the sixth. In code: `fib(8) = fib(7) + fib(6)`.

With this definition, it's not easy to figure out what `fib(8)` is quickly. You have to first figure out `fib(7)` and `fib(6)`. But figuring out those isn't something you can do quickly via mental math either.

If you do go through it, you'll end up running into a hairy situation. What happens when you get down to `fib(2)`? You have to get `fib(1)` and `fib(0)`. But for `fib(1)` and `fib(0)` you'd have to get into `fib(-1)` and `fib(-2)`.

Gross. Instead of doing this, we say that `fib(1)` is defined as 1, and `fib(0)` is defined as 0. So when we try to get `fib(1)`, there's no `fib(0) + fib(-1)` stuff. We just know the answer is 1 because that's how we defined it.

Ok, so we know what `fib(0)` and `fib(1)` are. Great. Now what about `fib(2)`? Well, it's `fib(1) + fib(0)`, which is equal to `1 + 0 = 1`. And what about `fib(3)`? `fib(3) = fib(2) + fib(1) = 1 + 1 = 2`.

You can take these answers and form a sequence out of them. So far we have:

```
0, 1, 1, 2
```

Going through the sequence from left to right like this makes things a whole lot easier than starting from something like `fib(8)`. The next number is `1 + 2 = 3`, and our sequence becomes:

```
0, 1, 1, 2, 3
```

The next number is `2 + 3 = 5`:

```
0, 1, 1, 2, 3, 5
```

The next number is `3 + 5 = 8`:

```
0, 1, 1, 2, 3, 5, 8
```

So on and so forth.

## Approach #1: Basic recursion

How would we calculate a Fibonacci number with code?

```js
function fib(n) {
  // how do we implement this?
}

/*

fib(0) === 0
fib(1) === 1
fib(2) === 1
fib(3) === 2
...

*/
```

Well, think about what the definition of a Fibonacci number is. Honestly, it's easier to express the definition in code than in English:

```js
function fib(n) {
  if (n === 0) {
    return 0;
  } else if (n === 1) {
    return 1;
  }

  return fib(n-1) + fib(n-2);
}
```

So `fib(1)` is going to hit the second if statement and return `1`, but `fib(4)` is going to hit the `return fib(n-1) + fib(n-2);` line and return `fib(3) + fib(2)`.

Let's talk about what happens here. Let's talk about how recursion works.

Consider this code:

```js
function first() {
  second();
  console.log("first");
}

function second() {
  console.log("second");
}

first();
```

How do things work when one function calls another? Well, it uses what's called the call stack. When the code executes `first()`, it pushes `first` onto the call stack like this:

```
first
```

Then when `first` executes `second()`, `second` gets pushed on top of `first`, so the call stack looks like this:

```
second
first
```

When `second` finishes running it gets popped off, so the call stack becomes:

```
first
```

And when `first` finishes running, it gets popped off and the call stack is empty.

I think a great analogy for this is when you're working on something and get interrupted. Imagine that you're watching a movie. You're 1/3 the way through, and you get a phone call. You pause the movie and talk on the phone until you're done. When you're finished, you hang up the phone and pick up where you left off in the movie.

Or imagine that while you're on the phone, your dog starts barking and you need to take him for a walk. So you tell your friend you'll call them back, go walk your dog, call your friend back, and finally return to your movie once you finish talking to your friend.

Here's how the "call stack" would look.

Start watching movie:

```
movie
```

Friend calls:

```
phone
movie
```

Dog barks:

```
dog
phone
movie
```

You finish walking the dog and call your friend back:

```
phone
movie
```

You hang up the phone:

```
movie
```

And once you finish watching your movie, that gets popped off your "call stack" and you can finally take a deep breath and relax. Phew!

Now let's return to the code I wrote for `fib(n)`. Here it is again so you don't have to scroll up:

```js
function fib(n) {
  if (n === 0) {
    return 0;
  } else if (n === 1) {
    return 1;
  }

  return fib(n-1) + fib(n-2);
}
```

What happens when we call `fib(4)`? Well, `fib(4)` calls `fib(3)` and `fib(2)`, but `fib(2)` doesn't get called right away. First `fib(3)` is called and we enter into that function call. So our call stack looks like this:

```
fib(3)
fib(4)
```

Now inside of the `fib(3)` call it hits that same `return fib(n-1) + fib(n-2);` line and calls `fib(2)`. And `fib(2)` gets put on top of our call stack. So our call stack now looks like this, and we enter into the call to `fib(2)`.

```
fib(2)
fib(3)
fib(4)
```

Now `fib(2)` calls `fib(1)`. Again, we put `fib(1)` on top of the call stack and enter into that call.

```
fib(1)
fib(2)
fib(3)
fib(4)
```

Now what happens inside of the call to `fib(1)`? Well, it hits the second if statement and just returns `1`.

```js
else if (n === 1) {
  return 1;
}
```

Cool! Now our computer can take a deep breath and sigh. It's exhausting having so many things put on your plate like this!

This point where your computer sighs, this is called a "base case". It's when the function call doesn't make any additional recursive calls. If you didn't have any base cases, it'd just keep making recursive calls over and over again, infinitely, until the call stack overflows and your program crashes.
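
To see that failure mode concretely, here's a hypothetical `badFib` (my name, not from the article) with the base cases removed. Every call just spawns more calls:

```js
// Hypothetical: fib with the base cases removed. Every call makes
// more recursive calls, forever, until the call stack overflows.
function badFib(n) {
  return badFib(n - 1) + badFib(n - 2); // no base case!
}

// badFib(4); // RangeError: Maximum call stack size exceeded
```

Note that even `badFib(4)` blows up: without a base case, the size of the input doesn't matter.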

After `fib(1)` returns `1`, it's finished, so it gets popped off the call stack.

```
fib(2)
fib(3)
fib(4)
```

And now we're back at the call to `fib(2)`.

But we don't start over! We pick up where we left off. We were previously at this line:

```js
return fib(n-1) + fib(n-2);
```

We made the call to `fib(n-1) = fib(1)` and got `1`. Cool. So now, basically, that line of code looks like this:

```js
return 1 + fib(n-2);
```

So now we have to deal with `fib(n-2)`. `fib(n-2)` is equal to `fib(0)` here, so `fib(0)` gets put on top of the call stack.

```
fib(0)
fib(2)
fib(3)
fib(4)
```

Like `fib(1)`, `fib(0)` just returns `0` and gets popped off.

So now our call stack looks like this again:

```
fib(2)
fib(3)
fib(4)
```

And we're back to that `fib(2)` call. Now that `return 1 + fib(n-2);` line looks like this:

```js
return 1 + 0;
```

So, `fib(2)` returns `1` and gets popped off the call stack.

```
fib(3)
fib(4)
```

Woah, we're making progress! How exciting!

So now, before we so rudely interrupted `fib(3)`'s execution, it was over here:

```js
return fib(n-1) + fib(n-2);
```

Well, `fib(n-1)` just returned `1` to us, so now we're here:

```js
return 1 + fib(n-2);
```

Oh no! We're going backwards again! Since we're in the function call of `fib(3)`, `fib(n-2) = fib(1)`, so the line becomes:

```js
return 1 + fib(1);
```

And we put `fib(1)` on top of the call stack and move into it.

```
fib(1)
fib(3)
fib(4)
```

Ugh, how exhausting! We're going backwards again!

And didn't we already compute `fib(1)`? It seems inefficient to compute it again. Whatever, it's not actually us doing it, it's our computer, and our computer is pretty fast at doing this sort of stuff. Right?

Well, yes and no. It is very fast, but calculating `fib(n)` for a large `n` is going to involve a disgustingly large number of function calls. Check out the call tree for `fib(5)` and notice how often `fib(1)` is called. A lot of repetition right? Just imagine what happens for `fib(100)`, or `fib(1000000)`!
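
Here's a sketch of that call tree for `fib(5)` — notice that `fib(1)` alone gets computed five separate times:

```
fib(5)
├── fib(4)
│   ├── fib(3)
│   │   ├── fib(2)
│   │   │   ├── fib(1)
│   │   │   └── fib(0)
│   │   └── fib(1)
│   └── fib(2)
│       ├── fib(1)
│       └── fib(0)
└── fib(3)
    ├── fib(2)
    │   ├── fib(1)
    │   └── fib(0)
    └── fib(1)
```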

It turns out that the time complexity of this approach is `O(2^n)`. `2^100` is `1.26 x 10^30`. `2^1000000` is so big that the calculator won't even give me an answer.

So even though our computers are fast, they aren't that fast. In practice you can't really get away with `O(2^n)` for larger values of `n`.
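
If you want to see the blowup for yourself, here's a hypothetical helper (the `countCalls` name and function are mine, just for illustration) that counts the total number of calls the naive version makes:

```js
// Count how many function calls the naive recursive fib would make.
// Mirrors the structure of fib itself: one call for this node, plus
// all the calls made by the two recursive children.
function countCalls(n) {
  if (n === 0 || n === 1) {
    return 1; // a base case is a single call
  }
  return 1 + countCalls(n - 1) + countCalls(n - 2);
}

console.log(countCalls(10)); // 177
console.log(countCalls(20)); // 21891
console.log(countCalls(30)); // 2692537
```

Each time `n` grows by 10, the number of calls multiplies by roughly 123 — exponential growth in action.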

But remember that insight we had before when we realized that we had already computed `fib(1)` and it's annoying that we have to do it again. What if we could "remember" that initial result we got for `fib(1)` and use that "remembered" solution?

Well, that's exactly how memoization works, which is the approach we'll talk about next.

## Approach #2: Recursion with memoization

Memoization is a technique where you "remember" previous computations so you don't have to perform them over and over and over again. It's simple, but it could save us a ton of time. Think about how the term "memoize" sounds a lot like "memorize".

The "basic recursion" approach we did before was `O(2^n)`. So `fib(100)` takes somewhere in the ballpark of `2^100 = 1.26 x 10^30` steps when using that approach. With memoization, we're only doing the computation once for each `n`, because subsequent calls just use the "remembered" result and don't involve any computation. So it's `O(n)`. So then, when using memoization, `fib(100)` only takes (roughly) `100` steps. `100` is a lot less than `1.26 x 10^30`.

Ok, cool. So memoization makes this stuff way faster. But how does it work?

First I'll show you the code, and then we'll walk through the execution like we did before.

```js
function fib(n, memo = {}) {
  if (n === 0) {
    return 0;
  } else if (n === 1) {
    return 1;
  } else if (memo[n]) {
    return memo[n];
  }

  const ans = fib(n-1, memo) + fib(n-2, memo);

  memo[n] = ans;

  return ans;
}
```

It looks pretty similar to the original version right? I'll copy-paste the original version below so you can compare them.

```js
function fib(n) {
  if (n === 0) {
    return 0;
  } else if (n === 1) {
    return 1;
  }

  return fib(n-1) + fib(n-2);
}
```

There are three main differences:

1. Before returning, we're keeping track of our result so that we remember it for later with `memo[n] = ans`.
2. We added an `else if` to check if we've computed `fib(n)` before. If we have, then we just use this "remembered" result.
3. We have `memo = {}` as a parameter: `function fib(n, memo = {})`. And we're passing in `memo` as an argument to the recursive calls: `fib(n-1, memo) + fib(n-2, memo)`.

So at a high level, our function is:

• Returning the hardcoded values for the base cases of 0 and 1.
• Returning the "remembered" value if there is one.
• Otherwise doing the recursive calls. And "remembering" the result before returning.

But to really understand how this all fits together, I think we need to spend some time tracing the execution, so let's give that a shot.

Let's try tracing `fib(4)` again. If you're new to this, I encourage you to actually step through it in your debugger, in addition to reading through the article and practicing the trace by hand.

Ok, let's get started. We call `fib(4)`, it gets pushed onto the empty call stack, and execution begins.

```
fib(4)
```

`memo` wasn't passed in as the second argument so it gets initialized to `{}`. We move past the 0 and 1 base cases. We also move past `else if (memo[n])`: we've never calculated `fib(4)` before so it doesn't "remember" the answer to `fib(4)`.

Then we move on to `const ans = fib(n-1, memo) + fib(n-2, memo)`. There's a lot to unpack here. How does this get evaluated?

• Well, we start off at the right-hand side (RHS) of the equals sign and evaluate that expression: `fib(n-1, memo) + fib(n-2, memo)`. Once we finish, we'll assign the result to `ans`.
• To evaluate that expression, we first have to evaluate the LHS of the `+` operator.
• The LHS of the `+` operator is `fib(n-1, memo)`.
• To evaluate that, we have to evaluate `n-1`.
• `n-1` is `3`, so we have `fib(3, memo)`.
• Now we evaluate `fib(3, memo)`.
• `fib(3, memo)` gets pushed on top of the call stack, and we begin its execution.

So now the call stack looks like this:

```
fib(3, memo)
fib(4)
```

How does the execution of `fib(3, memo)` go? I'll speed things up this time. It's not a base case and we don't remember a previous answer to `fib(3)`, so we hit `const ans = fib(n-1, memo) + fib(n-2, memo)` and move along to `fib(2, memo)`.

```
fib(2, memo)
fib(3, memo)
fib(4)
```

I want to point out one thing though: `memo` is being passed along by reference to all of the different function calls on the call stack. Which means that `memo` refers to the same object in `fib(4)`, `fib(3, memo)`, and `fib(2, memo)`. That's a crucial point. We need this to happen because we need to "remember" everything that has been previously computed.
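
A minimal demonstration of that pass-by-reference behavior (the `remember` helper is a made-up name, just for illustration):

```js
// Objects are passed by reference in JavaScript: a function that
// mutates a memo parameter is mutating the caller's object too.
function remember(memo) {
  memo[2] = 1;
}

const memo = {};
remember(memo);
console.log(memo[2]); // 1 — the caller sees the mutation
```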

Ok, so how does `fib(2, memo)` go? We end up making a recursive call to `fib(1, memo)`.

```
fib(1, memo)
fib(2, memo)
fib(3, memo)
fib(4)
```

`fib(1, memo)` is a base case, so it just returns `1`, and we return to the `fib(2, memo)` call.

```
fib(2, memo)
fib(3, memo)
fib(4)
```

What does that mean, exactly? "Return to the `fib(2, memo)` call"? Well, it means that we "pick up where we left off". Sorta like how in the movie-phone-dog example from before, we were 1/3 of the way through the movie, so when we finish our phone call we pick up from where we left off. We don't start from the beginning. That would be silly.

So then, where did we leave off in the `fib(2, memo)` call? To answer that question, consider that overly-detailed trace I was doing with `fib(4)`.

• We were on the `const ans = fib(n-1, memo) + fib(n-2, memo)` line.
• We have to evaluate the RHS of the `=`.
• We have to evaluate the LHS of the `+`.
• We just finished evaluating the LHS of the `+`! `fib(n-1, memo)` got transformed into `fib(1, memo)`, we executed it, and it gave us back `1`. So now we have `const ans = 1 + fib(n-2, memo)`.
• The next step is to evaluate the RHS of the `+`. This is where we "left off".

Ok, so how do we proceed?

• `1 + fib(n-2, memo)` becomes `1 + fib(0, memo)`.
• `fib(0, memo)` gets pushed onto the stack, executed, popped off the stack, and returns `0`.
• `const ans = 1 + 0` does its thing and assigns `1` to `ans`.

Now here's where the memoization begins!

• `memo[n] = ans` happens — with `n = 2` and `ans = 1`, that means `memo[2] = 1`.
• Since all of the calls on the call stack reference the same `memo`, from here on forward, they all can "remember" that `fib(2) = 1`. Cool!
• Next, we return `1` and pop `fib(2, memo)` off the call stack.

And here's where we're at:

```
fib(3, memo)
fib(4)
```

So now we go back to the call to `fib(3, memo)` and pick up where we left off. Which was `const ans = 1 + fib(n-2, memo)`. So we end up executing `fib(n-2, memo) = fib(1, memo)`, which is a base case and gives us `1`.

Now, we move on to `memo[n] = ans`, which becomes `memo[3] = 2`. So we are "remembering" that `fib(3)` is `2`, and our `memo` object now looks like this:

```js
{
  2: 1,
  3: 2
}
```

After `memo[n] = ans`, we return `2`, pop `fib(3, memo)` off the call stack, and return to the `fib(4)` call, picking up where we left off.

And where did we leave off? Here:

```js
const ans = fib(n-1, memo) + fib(n-2, memo);
```

Well, we just finished the LHS of the `+`, so we're really here:

```js
const ans = 2 + fib(2, memo);
```

So we proceed by pushing `fib(2, memo)` to the top of the call stack and begin its execution.

```
fib(2, memo)
fib(4)
```

And now something cool happens. Instead of proceeding to the `const ans = 1 + fib(n-2, memo)` line like we had been doing, we hit `else if (memo[n])`! So instead of having to dig further into the recursive calls, we can just return `memo[n]`, which is `1`!

So `fib(2, memo)` just gets popped off the stack, returning `1`.

```
fib(4)
```

And `fib(4)` picks up here:

```js
const ans = 2 + fib(2, memo);
```

`fib(2, memo)` returned `1` to us, so we have:

```js
const ans = 2 + 1;
```

And it's just smooth sailing from there: we "remember" that `fib(4)` is `3`, return `3` as our answer, pop `fib(4)` off the call stack, and breathe a slightly smaller sigh of relief than we did last time. Because last time, when we hit `fib(2)` the second time, we had to make more recursive calls. This time we were able to remember the previous result.

## Interlude: Stack overflows

You've heard of Y Combinator right? Well, the name "Y Combinator" isn't some random name. A "y combinator" is a thing from lambda calculus.

Same thing with Google. "Googol" is the number `10^100`.

And same thing with Stack Overflow.

You know that call stack we've been talking about? Well, there's only so many elements that it can hold. For example, if the call stack can hold 32,000 elements and you try to add a 32,001st element, you'll get a stack overflow error.

And how big is this call stack? How many elements can it hold? The answer depends on the language and execution environment. For JavaScript running in web browsers, check out this post.

So this is something to think about. If you're writing a recursive function, you have to make sure that it won't recurse so deeply that it overflows the call stack. For `fib(n)`, that means a recursive approach won't work once `n` exceeds the maximum call stack depth. We'll talk about an alternative approach you can use soon.
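
If you're curious how deep your environment's call stack goes, here's a rough, engine-dependent way to estimate it (the `depth` helper is mine; the exact number varies by engine, and even from run to run):

```js
// Recurse until the engine throws a RangeError, then report how deep
// we got. This is only an estimate: the real limit depends on frame
// size, which varies with the function involved.
function depth(n) {
  try {
    return depth(n + 1);
  } catch (e) {
    return n;
  }
}

console.log(depth(0)); // e.g. somewhere in the low tens of thousands in Node
```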

## Interlude: What data structure to choose for your memo

In the previous example, I used an object as my `memo` data store. But I didn't have to. I could have chosen an array, or a map. Or even something funky like a trie. The key is being able to look up the "remembered" value in constant time, and you can accomplish that goal with different data structures.
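
For instance, here's a sketch of the same memoized `fib` with a `Map` as the store instead of a plain object (the behavior is equivalent for our purposes):

```js
// Same memoization idea, different data structure: a Map gives us
// constant-time has/get/set lookups, just like the plain object did.
function fib(n, memo = new Map()) {
  if (n === 0) return 0;
  if (n === 1) return 1;
  if (memo.has(n)) return memo.get(n);

  const ans = fib(n - 1, memo) + fib(n - 2, memo);
  memo.set(n, ans);
  return ans;
}

console.log(fib(10)); // 55
```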

But there are tradeoffs. If we used an array, we'd be able to benefit from spatial locality and get some small performance gains. So I think an array would have made for a better choice here. The reason I chose an object was actually because...

```js
{
  2: 1,
  3: 2
}
```

...makes it clearer that we're remembering "`fib(2)` is `1` and `fib(3)` is `2`" than this:

```js
[
  undefined,
  undefined,
  1,
  2
]
```

So it was just for the purpose of making the article easier to read.

If we looked at `fib(10)` or something, there wouldn't be any wasted space in our `memo` array, except for those first two slots, which correspond to the base cases of 0 and 1.

```js
[
  undefined,
  undefined,
  1,
  2,
  3,
  5,
  8,
  13,
  21,
  34,
  55
]
```

But for a different problem, our memo array might end up looking something like this and have a bunch of wasted space:

```js
[
  undefined,
  undefined,
  17,
  undefined,
  undefined,
  undefined,
  undefined,
  undefined,
  undefined,
  undefined,
  undefined,
  undefined,
  undefined,
  ...
  undefined,
  4,
]
```

In which case:

1. We're not even benefiting from spatial locality anymore.
2. We're wasting space. Which may or may not be a big deal, depending on how much space we're actually wasting and how much space we can afford to waste.

In this problem we're memoizing along one dimension. But in other problems, we might be memoizing along two dimensions.

In this problem, we want to remember that `fib(3)` is `2`. But in other problems, we might want to remember that `foo(124, 276)` is `773`.

In which case, we have two inputs and one output instead of one input and one output.

So, the question becomes, how do we map two inputs to one output? What data structure should we use?

Well, we could transform the two inputs into one input and use an object, like this:

```js
memo[`${a}:${b}`] = c;
```

So our `memo` would look like this:

```js
{
  "124:276": 773
}
```

And we know that for `"124:276"`, the LHS of the `:` is our first input and the RHS is our second input. And if we ever need the inputs back as numbers, we have to cast them from strings to `Number`s.

That's what I did for a while when I had two-dimensional dynamic programming problems. I had this idea in my head that "we use a hash for our memo". It took me a while to realize that this isn't necessarily the case.

In the "memoize along two dimensions" example, a more elegant approach is probably to use a two-dimensional array. But not necessarily. It's really up to you to evaluate all of the tradeoffs and decide which data structure makes the most sense in your situation.
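
Here's what that might look like as a sketch. Everything here is hypothetical — `foo`, its inputs, and the `a + b` placeholder computation aren't from a real problem — it just shows the shape of a nested-array memo:

```js
// A two-dimensional memo: memo[a][b] remembers the answer for the
// input pair (a, b). Assumes a and b are small non-negative integers.
function foo(a, b, memo = []) {
  if (memo[a] && memo[a][b] !== undefined) {
    return memo[a][b]; // remembered!
  }

  const ans = a + b; // placeholder for the real subproblem work

  memo[a] = memo[a] || []; // lazily create the row
  memo[a][b] = ans;

  return ans;
}

console.log(foo(124, 276)); // 400 (with the placeholder computation)
```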

## Approach #3: Bottom-up solution

Let's think back to the first section of this perhaps-too-long article. We tried to understand the definition of a Fibonacci number by saying that `fib(8) = fib(7) + fib(6)`, but we realized that this definition is pretty annoying. Imagine if I asked you to quickly tell me what `fib(8)` is. You'd have to figure out what `fib(7)` is. But to do that you'd have to figure out `fib(6)`, and `fib(5)`, and... yeah, you see how this is annoying.

Then we realized that Fibonacci numbers form a sequence, and it's pretty easy to construct the sequence:

```
0 1 1 2 3 5 8 13 21 34 55 89...
```

So to get `fib(8)`, we could just go through this sequence (counting from zero!) until we hit the eighth element. If someone asked me to get `fib(8)` in real life, this is how I would do it.

More generally, this approach of "starting from the beginning" is known as the "bottom-up" approach to dynamic programming problems. We'll unpack that statement some more in the following sections, but for now, let's understand how we'd write the code for a bottom-up solution.

```js
function fib(n) {
  if (n === 0) {
    return 0;
  } else if (n === 1) {
    return 1;
  }

  let prevPrev = 0;
  let prev = 1;
  let curr;
  let i = 2;

  while (i <= n) {
    curr = prevPrev + prev;
    prevPrev = prev;
    prev = curr;
    i++;
  }

  return curr;
}
```

We're basically just doing what we did with the sequence. Except we're starting at 2. Here's how it works.

Say we're trying to get `fib(6)`. We start the sequence off at:

```
0 1
```

Now we look at the previous two values, add them together, get 1, add 1 to the sequence, and update what the previous two values are.

```
0 1 1
```

And we do the same thing again. We look at the previous two values, add them together, get 2, add 2 to the sequence, and update what the previous two values are.

```
0 1 1 2
```

Actually, that's not quite right. We're not actually keeping track of the full sequence. We only need to keep track of the last two values, so that's all we're doing here. So our "sequence" really looks like this:

```
1 2
```

Continuing, we take those last two values, add them up, get 3, and update stuff.

```
2 3
```

The `while` loop just continues this process until we get up to 6. And that's how the bottom-up solution works.

Something to note here is that while the time complexity is the same as approach two — `O(n)` — the space complexity has improved.

In the second approach, the space complexity is `O(n)`. Why? Because of the call stack (the `memo` object also grows to `n` entries, which is `O(n)` space on its own). There's a point where the call stack has `n` items on it, and each of those calls takes up space. Take a look at the call tree again, and review the trace of approach two if you don't see this. But with the bottom-up approach, we only use a constant amount of extra space, so the space complexity is `O(1)`. Let's dive into that some more.

We do use some extra space in the bottom-up solution, namely with the four variables `prevPrev`, `prev`, `curr` and `i`. But think about the following questions. How much extra space do we use when `n` is `10`? What about when `n` is `1000`? What about when it's `1000000` or `1000000000`? The answer is always the same: 4 "units".

So it's not that we're not using up any extra space. We are. It's just that the amount of extra space doesn't grow with `n`. It's a constant amount of extra space. And that almost always means that it isn't a lot of extra space. It's possible that it uses 100GB of extra space no matter what the size of `n` is. And in that case, the space complexity would be bad. But that isn't what happens in real life. In real life, extra space only becomes a problem when it scales with `n`.

## Recap

Let's recap where we've been and where we ended up.

With approach #1 we came up with a solution, but this solution took `O(2^n)` time. The code was really clean and readable, but `O(2^n)` time is just too long, so we pursued a better solution.

With approach #2 we added memoization on to the previous approach and got the runtime all the way down to `O(n)`! Nice! Now we have something that is performant, and the code is only slightly more hairy. But we are still using up `O(n)` memory.

With approach #3 we were able to keep the `O(n)` runtime and only use up `O(1)` memory. Nice! However, some would say that the bottom-up solution is a little less elegant and readable than the top-down solution, so you might count that as a downside.

There is a narrative developing where we've continuously made improvements to our solution. I.e., we started off with approach #1, improved it into approach #2, and then improved approach #2 into approach #3.

I don't like to think about it that way though. Each approach has trade-offs. I like to consider the trade-offs I face and think about each approach as its own special snowflake.

On the other hand, some people like to tell approach #1 that it's an ugly duckling and that it will never find true love. That's a little unfair, IMO. I believe that there is someone out there for you, approach #1. Sure, your `O(2^n)` runtime is a bit of an eyesore. But some people may actually find it kinda cute. And some people are just incredibly seduced by the elegance of your brevity.

Woah! How did we get here? This analogy is getting a little bit out of hand. Let's return to the real world before this turns into Fifty Shades of Fibonacci.

I can imagine situations where you know that `n` will always be pretty small and differences in microseconds just aren't going to matter to your users. Or maybe you're running something in the background and there's no user waiting for you and tapping their foot impatiently. And maybe you're working with developers who don't understand memoization, so maybe then it'd make sense to use the first approach and optimize for developer experience. Someone should invent a big-O notation for that.

I'm sure you can imagine other situations where approach #2 makes the most sense, or where approach #3 makes the most sense.

I acknowledge that it is probably pretty rare for approach #1 to make the most sense, in practice. And that there's some value in being aware of this. You could probably just adopt approach #3 as "the best practice" and get away with it.

But in doing so, you are making a very dangerous shift in your mindset, IMHO. You're veering away from thinking from first principles. That might work out for you in some situations, but eventually I believe that it will come back to bite you. So then, we should seek to make our decisions based on the trade-offs that we face. With perhaps some awareness of and bias towards best practices sprinkled in.

## "Dynamic programming"?

"Dynamic programming" is one of those terms that everyone throws around, but few really understand ("abstraction" is another, IMO). To be fair, I don't think that having an incomplete understanding of terms like "dynamic programming" (and "abstraction") get in the way of things very much, in practice. But it always makes me uncomfortable when I hear or use the term and don't quite know what it means. So let's spend a little time talking about what it means.

Many people use the term "dynamic programming" to mean "recursion with memoization". That's not quite right. But in practice, "recursion with memoization" is usually the "flavor" of dynamic programming that is used.

Let's build off of this "recursion with memoization" definition. "Recursion with memoization" is one way to do dynamic programming, but a second way is to use the bottom-up approach we used with approach #3. This bottom-up approach is also known as tabulation. And "recursion with memoization" is also known as "top-down dynamic programming".

So we currently have "recursion with memoization, or tabulation" as our definition. That definition probably encompasses 99% of the real world usages of dynamic programming, and few people have understandings that go further than this definition. But if you want to, you can continue to explore how far the rabbit hole goes.

This Stack Overflow answer does a great job of clarifying what the term "dynamic programming" really means. In short, it means that you have a problem that, by its nature, consists of subproblems, and that you solve this problem in such a way that you avoid duplicate work. So if your problem doesn't naturally consist of subproblems, then it wouldn't be solved with dynamic programming. And if your solution involves duplicate work, like our approach #1 to `fib(n)`, it isn't dynamic programming. Memoization and tabulation are the two main ways to do dynamic programming, but there are other possibilities, and it is also possible to mix and match different approaches.

## "Dynamic"?

If you're crazy like I am, this answer still probably isn't satisfying to you. Here are the questions I find myself asking:

I understand what the definition is saying, but I still don't get why it's called "dynamic programming"?

What is dynamic about it? I know what programming is. And I know what dynamic means. Something static is something that stays the same. Something dynamic is something that can change. So what about this is dynamic?

The fact that each recursive call is doing something different? But then approach #1 should be considered "dynamic programming" too.

And it just seems like way too broad a definition. If you have a function that logs to the console each key the user presses, well, that seems to also fit the idea of code that is dynamic. The code that runs changes at runtime based on what the user's input is. If they press "a", `console.log("a")` runs. If they press "b", `console.log("b")` runs. So shouldn't that also be considered "dynamic programming" since the code is dynamic/changing?

Well, I did some digging, and it turns out that the term was chosen because it sounded cool and it would help the guy who coined it (Richard Bellman) get research grants. There's no deeper meaning that addresses my above questions. At least from what I could tell.

If you have any thoughts, I'd love to discuss them over email: adamzerner@protonmail.com.
