Dynamic programming: explained with examples

Are you trying to learn dynamic programming? Do you find it hard to even pin down what it is? Don’t panic: here at algo.monster I’ll explain it to you in plain English.

First of all, here’s an easy example that anyone, even a little kid, can understand.

Get a sheet of paper and a pencil; let’s do a simple grade-school math problem.

A: “Input: 1+1+1+1+1+1+1+1+1+1”, “Asking: What’s that equal to?”

B: “Output: Ten”, “Counting: It is ten”

A: “Input: add one more +1 on the right”, “Asking: How much is it now?”

B: “Output: It will be eleven”, “Answering without hesitation: You’ve got eleven now.”

A: “How did you come to the correct answer so quickly?”

B: “I didn’t count it a second time. You just added one to the previous total.”

A: “So to avoid doing it all over again, you remembered the answer. What a brilliant idea!”

Dynamic programming remembers the past

Do you want to add up the whole string of ones again when one more +1 is tacked on? Certainly not. Why would anyone bother to recompute something when they can reuse the existing result? Those who can’t remember the past are doomed to redo the same work in the future. Isn’t that right?

What is dynamic programming?

So what is dynamic programming? Dynamic programming (DP) is an algorithmic technique that solves a complex problem by splitting it into subproblems. Those subproblems tend to show up again and again. To speed up the computation, DP stores the results of the subproblems, so you don’t have to do the same work repeatedly.

For example, to compute the 100th Fibonacci number, Fib(100), you add Fib(99) and Fib(98). Likewise, to compute Fib(99) you need Fib(98) and Fib(97). It’s not hard to see that along the way you’ll run into the same subproblems over and over, and dynamic programming is exactly the tool that keeps you from solving each of them more than once.
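To make the waste concrete, here is a minimal Python sketch of the plain recursive version, assuming the usual convention Fib(0) = 0 and Fib(1) = 1. Nothing is cached, so the same values get recomputed again and again:

```python
def fib(n):
    """Naive recursive Fibonacci: no caching, so subproblems are recomputed."""
    if n < 2:                      # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(10))   # 55 -- fine for small n, but fib(100) would run for ages
```

Each call branches into two more calls, so the amount of work roughly doubles with every level of the recursion.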

Dynamic programming is a practical strategy for solving a wide range of problems. However, a problem is only a good fit for DP when it has two key properties: optimal substructure and overlapping subproblems.

Two features of DP

1. Optimal substructure

The given problem follows a recurrence relation. In the example above, the answer to Fib(100) is built from Fib(99) and Fib(98), which in turn are built from smaller values, and so on. The results of these subproblems combine to form the overall solution to the original problem.

2. Overlapping subproblems

While computing the solution to a bigger problem, the same subproblems keep reappearing, so we say the subproblems overlap. You can regard each subproblem as a smaller version of the given problem, and you’ll find yourself computing the same ones two or more times. In the previous example, Fib(98) is evaluated twice.

Let’s walk through the Fibonacci example to get a clear picture of these two properties of dynamic programming.

[Figure: recursion tree for Fibonacci numbers]

In this recursion tree for Fib(4), here is what we can see:

1. To solve Fib(4), we split it into subproblems such as Fib(3), Fib(2), and so on.

2. Fib(2) and Fib(1) clearly show up more than once. That is the overlapping-subproblems pattern we mentioned.

3. The tree follows the recurrence Fib(n) = Fib(n-1) + Fib(n-2), so we say the problem has optimal substructure.

From these three points we can conclude that the Fib(4) problem has both properties of DP and is a good candidate for a DP solution. You can even verify the overlap by counting how often each subproblem is visited, as the quick sketch below shows.
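This is just the naive version from before, instrumented with a counter to confirm the picture in the recursion tree:

```python
from collections import Counter

calls = Counter()

def fib(n):
    calls[n] += 1                  # record every visit to the subproblem fib(n)
    if n < 2:                      # base cases: fib(0) = 0, fib(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)

fib(4)
print(calls)   # fib(2) is visited twice and fib(1) three times for a single fib(4)
```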

DP approaches

  • Top-down approach

You calculate solutions to smaller subproblems in order to solve the bigger ones. The previously computed results are stored so that you don’t end up recalculating them, since the same subproblems appear over and over again.

In this approach, the technique of caching the results of solved subproblems is called memoization.
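As a rough sketch of memoization (again assuming Fib(0) = 0 and Fib(1) = 1), the cache below is just a dictionary that remembers every subproblem already solved:

```python
def fib_memo(n, cache=None):
    """Top-down Fibonacci: each fib(k) is computed once and then cached."""
    if cache is None:
        cache = {}
    if n < 2:                      # base cases need no caching
        return n
    if n not in cache:             # solve the subproblem only on first encounter
        cache[n] = fib_memo(n - 1, cache) + fib_memo(n - 2, cache)
    return cache[n]                # every later call is a plain dictionary lookup

print(fib_memo(100))   # 354224848179261915075, computed almost instantly
```

In Python you could get the same effect with functools.lru_cache, but the explicit dictionary keeps the memoization visible.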

  • Bottom-up approach

The bottom-up approach avoids recursion. You start from the smallest subproblems and work your way up, filling in a table as you go, which is why this method is called tabulation. The solution to the given problem is then read off from the results you’ve saved in the table.
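Here is a small sketch of the bottom-up version: a one-dimensional table is filled from the base cases upward, with no recursion involved:

```python
def fib_table(n):
    """Bottom-up Fibonacci: fill table[0..n] in order, then read off the answer."""
    if n < 2:
        return n
    table = [0] * (n + 1)          # table[i] will hold fib(i)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # reuse the two saved results
    return table[n]

print(fib_table(100))   # same answer as the top-down version
```

Since each entry depends only on the previous two, you could even shrink the table down to two variables.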

Dynamic programming examples

How does dynamic programming work? In plain English: your target problem is complex, but it can be divided into smaller problems that you solve one by one. The same subproblems occur more than once, so you cache their results, which improves efficiency by avoiding repeated calculations. Finally, you get the solution to your big problem by combining the solutions to the subproblems.

An example of how DP works

Say a mother wanted her two daughters to practice some math at home. She brought home two bags of fruit containing pitayas, bananas, and oranges, and gave each daughter a bag. Assume fruits of the same kind are identical in size and that both bags hold the same quantities.

Daughter A picked up her bag of fruit, rushed into her room, got herself a stopwatch, grabbed a fruit, and started peeling.

Daughter B thought that would be tiresome, and it would also be wasteful to peel all the fruit at once and not be able to eat it. She realized that the bag contained only three repeating kinds of fruit, which means that on average she would need about the same time to peel each fruit of a given kind. So she found a pen, a stopwatch, and a sheet of paper, drew a table, timed how long it took to peel one orange, one banana, and one pitaya, and wrote the results down.

Fruit  | Peeling time for each | Fruit quantity | Total time
Pitaya | 30 seconds            | N1             | X1
Banana | 10 seconds            | N2             | X2
Orange | 15 seconds            | N3             | X3

After peeling just one fruit of each of the three kinds, she could fill in the whole table and get the total peeling time she needed: the time for each kind is the peeling time multiplied by the quantity (for example, X1 = 30 × N1 seconds for the pitayas), and the three kinds added together give the total.

The key idea is that she measured the peeling time of each kind only once and saved the results for reuse whenever she has to peel the same kind of fruit again.

This is one of the simplest illustrations of how the dynamic programming strategy attacks a problem.

Conclusion

Given problem = Bags of fruits

Function calls = Fruits 

Peeling time = Output of the corresponding function call

So when the top-level problem involves repeated function calls, the values you’ve saved in the table give you the answer directly, without computing them again. That’s how dynamic programming solves a problem.
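As a toy sketch of daughter B’s strategy in code (the fruit names and the pre-filled times are just stand-ins for her stopwatch measurements), the dictionary plays the role of her table:

```python
# "Measurements" that in the story come from timing one fruit of each kind.
MEASURED_ONCE = {"pitaya": 30, "banana": 10, "orange": 15}   # seconds

cache = {}

def peel_time(fruit):
    """Time to peel one fruit; a kind is 'measured' only the first time it appears."""
    if fruit not in cache:         # new kind of fruit: measure it and remember it
        cache[fruit] = MEASURED_ONCE[fruit]
    return cache[fruit]            # repeated kind: reuse the cached result

bag = ["pitaya", "banana", "banana", "orange", "pitaya", "orange"]
print(sum(peel_time(f) for f in bag))   # 30 + 10 + 10 + 15 + 30 + 15 = 110 seconds
```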
