Dynamic Programming vs Memoization
Tags: dynamic-programming, memoization
Problem
I am having trouble understanding dynamic programming, mainly because of its name. As far as I can tell, it is just another name for memoization, or for any trick that uses memoization.
Am I understanding this correctly? Or is DP something else?
Solution
Summary: the memoization technique is a routine trick applied in dynamic programming (DP). In contrast, DP is mostly about finding the optimal substructure in overlapping subproblems and establishing recurrence relations.
Warning: a little dose of personal experience is included in this answer.
Reading suggestion: If this answer looks too long to you, just read the text in boldface.
Background and Definitions
Memoization is the optimization technique of storing previously computed results so that they can be reused whenever the same result is needed.
Memoization comes from the word "memoize" or "memorize".
Dynamic programming (DP) means solving a problem recursively by combining the solutions to similar, smaller, overlapping subproblems, usually via some kind of recurrence relation. (Some people may object to the use of "overlapping" here. My definition is roughly taken from Wikipedia and Introduction to Algorithms by CLRS.) I will only talk about its use in writing computer algorithms. Note that an actual implementation of DP may use an iterative procedure.
Why is DP called DP? The word "dynamic" was chosen by its creator, Richard Bellman, to capture the time-varying aspect of the problems, and because it sounded impressive. For the full story, see "How did Bellman name dynamic programming?".
Nowadays I would interpret "dynamic" as "moving from smaller subproblems to bigger subproblems". (The word "programming" refers to using the method to find an optimal program, as in "linear programming", though people like me sometimes read it in the software sense.)
Explanations
Why do some people consider them the same?
It is understandable that dynamic programming (DP) is seen as "just another name for memoization, or any trick utilizing memoization". When the examples and problems first presented have rather obvious subproblems and recurrence relations, the most attractive and important part of DP seems to be the impressive speedup brought by the memoization technique.
In fact, for some time I was inclined to equate DP with the memoization technique applied to recursive algorithms, such as the computation of the Fibonacci sequence, or of how many ways one can go from the bottom-left corner to the top-right corner of a rectangular grid.
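Both of those examples can be sketched in a few lines of Python, with `functools.lru_cache` supplying the memoization (the function names here are my own):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Naive recursion is exponential; the cache makes it linear.
    return n if n < 2 else fib(n - 1) + fib(n - 2)

@lru_cache(maxsize=None)
def grid_paths(rows, cols):
    # Number of paths from the bottom-left to the top-right corner
    # of a rows x cols grid, moving only up or right.
    if rows == 0 or cols == 0:
        return 1
    return grid_paths(rows - 1, cols) + grid_paths(rows, cols - 1)
```

In both cases the recurrence is obvious, so the memoization looks like the whole story.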
How are DP and memoization different?
The memoization technique is an auxiliary routine trick that improves the performance of DP (when it appears in DP). It appears so often and is so effective that some people even claim
that DP is memoization
Let me use a classic simple example of DP, the maximum subarray problem solved by Kadane's algorithm, to make the distinction between DP and memoization clear.
Ever since I was a kid, I had wondered how to find the maximum sum of a contiguous subarray of a given array. My first thought was to group adjacent positive numbers together and adjacent negative numbers together, which could simplify the input. Then I tried combining neighbouring numbers if their sum was positive or, hm, negative. The uncertainty seemed to attack my approach from everywhere. Many years later, when I stumbled upon Kadane's algorithm, I was awe-struck. It is such a beautifully simple algorithm, thanks to the simple but critical observation Kadane made: any solution (i.e., any member of the set of solutions) always has a last element. Trust me, only if you can appreciate the power of such a simple observation in the construction of DP can you fully appreciate the crux of DP.
Please note that there is no (significant) use of memoization in Kadane's algorithm.
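For reference, here is a minimal Python sketch of Kadane's algorithm. Note that it carries only two running values; there is no table of memoized results:

```python
def max_subarray(a):
    # best_ending_here: maximum sum of a subarray that ends at the
    # current element -- Kadane's key observation that every solution
    # has a last element. No cache of past calls is kept.
    best_ending_here = best = a[0]
    for x in a[1:]:
        best_ending_here = max(x, best_ending_here + x)
        best = max(best, best_ending_here)
    return best
```

The DP content is entirely in the recurrence over "subarrays ending here", not in any caching.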
Just in case you might brush off Kadane's algorithm as being trivial, let me present two similar problems.
- Can you efficiently find the maximum sum of two disjoint contiguous subarrays of a given array of numbers?
- Can you efficiently find two disjoint increasing subsequences of a given sequence of numbers such that the sum of their lengths is maximum?
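In case a hint helps: the first problem yields to the same style of reasoning, by running Kadane-style passes from both ends and combining them at a split point. A sketch, with a function name of my own choosing:

```python
def max_sum_two_disjoint(a):
    # left[i]: best single-subarray sum within a[0..i]
    # right[i]: best single-subarray sum within a[i..n-1]
    # Both are filled with Kadane-style recurrences; the answer
    # combines them at every possible split point.
    n = len(a)
    left, right = [0] * n, [0] * n
    cur = best = left[0] = a[0]
    for i in range(1, n):
        cur = max(a[i], cur + a[i])
        best = max(best, cur)
        left[i] = best
    cur = best = right[n - 1] = a[n - 1]
    for i in range(n - 2, -1, -1):
        cur = max(a[i], cur + a[i])
        best = max(best, cur)
        right[i] = best
    return max(left[i] + right[i + 1] for i in range(n - 1))
```

Again, the hard part is recognizing the subproblems ("best subarray in a prefix/suffix") and the recurrences, not caching anything.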
If you can find the solution to these two problems, you will, I believe, be able to appreciate the importance of recognizing the subproblems and recurrence relations more. That might just be the start of a long journey, if you are like me.
By the Wikipedia entry on dynamic programming, the two key attributes that a problem must have for DP to be applicable are optimal substructure and overlapping subproblems. In other words, the crux of dynamic programming is to find the optimal substructure in overlapping subproblems, where it is relatively easy to solve a larger subproblem given the solutions of smaller subproblems.
In summary, here are the differences between DP and memoization.
- DP is a solution strategy that asks you to find similar smaller subproblems so as to solve bigger subproblems. It usually involves recurrence relations and memoization.
- Memoization is a technique to avoid repeated computation of the same subproblems. It is a special form of caching that caches the return values of a function.
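To make the second point concrete, here is a minimal hand-rolled cache (Python's `functools.lru_cache` provides the same service):

```python
def memoize(f):
    # Map each argument tuple to the value f returned for it,
    # and reuse that value on repeated calls.
    cache = {}
    def wrapped(*args):
        if args not in cache:
            cache[args] = f(*args)
        return cache[args]
    return wrapped

@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)
```

Nothing here knows about subproblems or recurrences; the decorator is pure caching, applicable to any function, which is exactly why memoization alone is not DP.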
Context
StackExchange Computer Science Q#99513, answer score: 19