If you are trying to make it big as a coder, you’ve probably been advised by coding gurus (more times than you can count) that mastering Dynamic Programming (DP) is a must. Don’t be intimidated just yet; it isn’t rocket science! It just takes a little patience and dedication on your part to learn and master Dynamic Programming.
And that’s precisely why we’re here to guide you through the concept of DP!
First, let’s start with the fundamental question.
What Is Dynamic Programming?
Developed by Richard Bellman back in the 1950s, Dynamic Programming sits at the intersection of computer programming and mathematical optimization. Essentially, DP breaks an optimization problem down into simpler subproblems and stores the solution to each one. By remembering these partial results, Dynamic Programming helps coders and programmers avoid repeating the same computation within their code. Unlike the naive approach, which can take exponential time, DP lets you solve a wide array of programming problems in O(n²) or O(n³) time.
Now, let’s talk about another crucial aspect of DP – Patterns!
When talking about DP, it is essential that you learn to recognize the common patterns of coding problems so that you can leverage those patterns to solve other similar problems. Let’s take the example of a coin change problem:
You have n types of coin denominations with values C1 < C2 < … < Cn (all integers). Assume C1 = 1, so that change can be made for any amount of money M. Design an algorithm to find the minimum number of coins that make change for an amount of money M.
So, what approach should you take in solving this problem? In four simple steps, we’ll show you how!
- Problem & Subproblem
When a problem is given to you, first you have to check whether or not it is suitable for dynamic programming. And how do you do that? By seeing if it is possible to break down the problem into smaller subproblems.
If you look at the coin change problem closely, you’ll find that it is quite similar to Fibonacci. Here, each subproblem is making change for a smaller value. So, by determining the minimum number of coins needed for every value less than M (1, 2, 3, … M - 1), we can find the answer for M simply by picking the best possible combination of them. In this light, we find that the solutions to the subproblems help us solve the primary problem, and hence dynamic programming is worth a shot here.
- Identify The Formula
Once you’ve recognized that the problem can be broken down into subproblems, you must now determine how the subproblems can help in solving the primary problem by applying the relevant formula.
For the given coin change problem, let’s say F(m) denotes the minimum number of coins required to make change for the amount m. We now need to express F(m) in terms of values smaller than m. Suppose coin C1 is used; then F(m) = F(m - C1) + 1, since we need one coin of value C1 plus however many coins make change for m - C1.
As we do not know exactly which coin value is required (from C1 to Cn), we must iterate over all of them. So, the formula will be as follows:
F(m) = min(F(m – Ci)) + 1, for i = 1 … n
The formula denotes that we need to iterate all the solutions for [m-Ci] to find the minimal value.
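To make the formula concrete, here is a minimal naive recursive sketch in Python (the function name and coin values are illustrative assumptions, not part of the original problem statement):

```python
def min_coins_naive(m, coins):
    """Minimum number of coins from `coins` that sum to m (naive recursion)."""
    if m == 0:
        return 0
    best = float("inf")
    for c in coins:
        if c <= m:
            # F(m) = min(F(m - Ci)) + 1 over all usable denominations Ci
            best = min(best, min_coins_naive(m - c, coins) + 1)
    return best
```

For instance, `min_coins_naive(11, [1, 2, 5])` returns 3 (5 + 5 + 1). As written, this version recomputes the same subproblems over and over, which is exactly the inefficiency the next step addresses.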
- Memoization

Memoization is pivotal in DP. Why so? If you merely implement the formula, you will find that to calculate F(m), the code calculates a bunch of subproblems F(m - Ci), and to calculate each F(m - Ci), it further calculates sub-subproblems. The chain continues, and the code can end up calculating many subproblems and sub-subproblems more than once. This is where memoization helps. By storing the results of the subproblems and sub-subproblems, you can directly reuse them the next time they are needed instead of recomputing them.
To save the result of a subproblem, you must create an identifier for it. Here, you can create an array, memory[m + 1], and for the subproblem F(m - Ci), store its result at memory[m - Ci] so it can be reused when solving future subproblems.
- Implementation

Implementation becomes easier and more convenient when you break it down into smaller steps, and you can then follow the same steps to solve similar questions. Here’s how you should go about the implementation process:
- Initialize the memo by defining the array memory[m + 1].
- Check whether the subproblem has already been solved using the memo. If yes, return the stored result directly.
- Otherwise, apply the formula and save the result to memory.
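Putting the steps above together, a top-down implementation might look like the following sketch (the function and variable names are assumptions made for illustration):

```python
def min_coins(m, coins, memo=None):
    """Minimum number of coins from `coins` that sum to m (top-down DP)."""
    # Step 1: initialize the memo, an array of size m + 1.
    if memo is None:
        memo = [None] * (m + 1)
    if m == 0:
        return 0
    # Step 2: if this subproblem was already solved, return the stored result.
    if memo[m] is not None:
        return memo[m]
    # Step 3: apply the formula F(m) = min(F(m - Ci)) + 1 and save the result.
    best = float("inf")
    for c in coins:
        if c <= m:
            best = min(best, min_coins(m - c, coins, memo) + 1)
    memo[m] = best
    return best
```

Thanks to the memo, each amount from 1 to m is solved at most once, so the running time drops from exponential to roughly O(m × n).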
While these four steps sum up the top-down approach, that is, solving the primary problem by breaking it down into subproblems, there’s another approach to DP: bottom-up. Unlike the top-down approach, the bottom-up approach doesn’t use recursion; it begins with the smallest subproblems and builds up towards the primary problem.
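A bottom-up version of the same coin change solution might look like this sketch: instead of recursing from m downward, it fills a table from 0 up to m (again, the names are illustrative assumptions):

```python
def min_coins_bottom_up(m, coins):
    """Minimum number of coins from `coins` that sum to m (bottom-up DP)."""
    # table[k] holds the minimum number of coins that make the amount k.
    table = [0] + [float("inf")] * m
    for k in range(1, m + 1):  # solve the smallest subproblems first
        for c in coins:
            if c <= k:
                table[k] = min(table[k], table[k - c] + 1)
    return table[m]
```

Both versions compute the same answers; the bottom-up version simply trades recursion for a loop, which also avoids any recursion-depth limits for large m.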
So, that’s Dynamic Programming simplified for you! Have any doubts? Drop them in the comments below!