Dynamic Programming: Everything You Need To Know


What Is Dynamic Programming?


Developed by Richard Bellman back in the 1950s, Dynamic Programming (DP) is a combination of computer programming and mathematical optimization aimed at avoiding redundant computation. Essentially, DP focuses on breaking down an optimization problem into smaller, simpler subproblems and storing the solution of each subproblem. By remembering these partial results, Dynamic Programming helps coders and programmers avoid repeating work within their code. Many problems whose naive solutions take exponential time can be solved with DP in polynomial time, often O(n²) or O(n³).

Now, let’s talk about another crucial aspect of DP – Patterns!

When talking about DP, it is essential that you learn to recognize the common patterns of coding problems so that you can leverage those patterns to solve other similar problems. Let’s take the example of a coin change problem:

You have n types of coin denominations bearing values C1 < C2 < … < Cn (all integers). Assume C1 = 1, so that change can be made for any amount of money M. Create an algorithm that finds the minimum number of coins needed to make change for an amount of money M.

So, what approach should you take in solving this problem? In four simple steps, we’ll show you how!

  1. Problem & Subproblem

When a problem is given to you, first you have to check whether or not it is suitable for dynamic programming. And how do you do that? By seeing if it is possible to break down the problem into smaller subproblems.

If you look at the coin problem closely, you’ll find that it is quite similar to Fibonacci. Here, a subproblem is making change for a smaller amount. So, by determining the minimum number of coins needed for every value less than M (1, 2, 3, …, M - 1), we can find the answer for M simply by determining the best possible combination of them. In this light, the solutions to the subproblems help us find the solution to the primary problem, and hence dynamic programming is worth a shot here.
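For a concrete illustration (with made-up denominations, not from the original problem), take coins {1, 3, 4} and M = 6. The subproblem answers F(5) = 2, F(3) = 1 and F(2) = 2 are already known, so F(6) is simply the best of F(5) + 1, F(3) + 1 and F(2) + 1, which gives F(3) + 1 = 2 coins (two 3-coins).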

  2. Identify The Formula


Once you’ve recognized that the problem can be broken down into subproblems, you must now determine how the subproblems can help in solving the primary problem by applying the relevant formula.

For the given coin change problem, let’s say that F(m) denotes the minimum number of coins required to make the amount m. We now need to express F(m) using values smaller than m. Suppose the last coin used is C1; then F(m) = F(m - C1) + 1, where F(m - C1) tells us how many coins are required to make m - C1.

As we do not know exactly which denomination (from C1 to Cn) should be used, we must try all of them. So, the formula will be as follows:

F(m) = min(F(m - Ci)) + 1, over all i = 1 … n with Ci ≤ m

The formula says that we iterate over the solutions for all the amounts m - Ci, take the smallest, and add one for the coin just used.
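As a minimal sketch (in Python, with illustrative names that are not part of the original problem statement), the formula translates almost word for word into a recursive function:

```python
# Direct, unmemoized translation of F(m) = min(F(m - Ci)) + 1.
# `coins` holds the denominations C1 .. Cn; C1 = 1 is assumed, so every amount is reachable.
def min_coins(m, coins):
    if m == 0:
        return 0                      # zero coins are needed to make the amount 0
    best = float("inf")
    for c in coins:                   # try every denomination Ci
        if c <= m:
            best = min(best, min_coins(m - c, coins) + 1)
    return best
```

Run as is, this recomputes the same amounts over and over, which is exactly the redundancy the next step removes.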

  3. Memoization

Memoization is pivotal in DP. Why so? If you implement the formula directly, you will find that to calculate F(m), the code calculates a bunch of subproblems F(m - Ci), and to calculate each F(m - Ci), it calculates further sub-subproblems. The chain continues, and the code ends up calculating many subproblems or sub-subproblems more than once. This is where memoization is helpful. Through memoization, you can store the results of the subproblems and sub-subproblems, so the next time you need one of them, you can use the stored result directly.

To save the result of a subproblem, you must create an identifier for it. So, you can create an array memory of size m + 1, and for the subproblem F(m - Ci) you store its result at memory[m - Ci] for reuse when the same subproblem comes up again.
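Continuing the hypothetical Python sketch above (names are again illustrative), the same function with a memory array of size m + 1 looks like this:

```python
# Top-down coin change with memoization: memory[k] caches F(k),
# so each subproblem is computed at most once.
def min_coins_memo(m, coins):
    memory = [None] * (m + 1)          # one slot per amount 0 .. m; None = not yet computed
    def solve(k):
        if k == 0:
            return 0
        if memory[k] is not None:      # already solved: reuse the stored result
            return memory[k]
        best = float("inf")
        for c in coins:
            if c <= k:
                best = min(best, solve(k - c) + 1)
        memory[k] = best               # store the result for future lookups
        return best
    return solve(m)
```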

  4. Implementation

Implementation becomes easier and more convenient when you break it down into smaller steps, and the same steps carry over to similar questions. Here’s how you should go about the implementation process (a short usage example follows the list):

  • Initialize memoization by defining the array memory of size m + 1.
  • Check whether the subproblem has already been solved by looking it up in memory. If yes, return the stored result directly.
  • Otherwise, apply the formula and save the result to memory before returning it.
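Putting the three steps together, a call to the hypothetical min_coins_memo sketch from the memoization step might look like this (the denominations and amounts are made up for illustration):

```python
# Hypothetical usage of the min_coins_memo sketch above.
print(min_coins_memo(6, [1, 3, 4]))    # prints 2 (two 3-coins)
print(min_coins_memo(11, [1, 3, 4]))   # prints 3 (e.g. 4 + 4 + 3)
```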

While these four steps basically sum up the top-down approach, that is, solving the primary problem by breaking it down into subproblems, there is another approach to DP: bottom-up. Unlike the top-down approach, the bottom-up approach doesn’t use recursion; it begins with the smallest subproblems and works its way up towards the primary problem.
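A bottom-up sketch of the same problem (again with hypothetical naming, and assuming C1 = 1) fills a table iteratively from amount 0 up to m instead of recursing down from m:

```python
# Bottom-up coin change: table[k] holds F(k), built from the smallest amounts upward.
def min_coins_bottom_up(m, coins):
    table = [0] + [float("inf")] * m       # table[0] = 0; larger amounts start as "unknown"
    for k in range(1, m + 1):              # solve amounts 1 .. m in increasing order
        for c in coins:
            if c <= k:
                table[k] = min(table[k], table[k - c] + 1)
    return table[m]
```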

So, that’s Dynamic Programming simplified for you! Have any doubts? Drop them in the comments below!