Dynamic Programming & algorithms


Introduction

Dynamic programming is an optimisation over plain recursion. A recursive solution performs the same calculation again and again while the call stack grows deeper and deeper; using DP, this problem can be overcome.

What we do in dynamic programming: instead of doing the same calculation repeatedly, we store the result somewhere, so that when it is asked for we can return it directly instead of calculating it again.


Let’s try to understand this with the help of an example. Suppose we have to find 4!. How would we do it recursively?
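The original code figure is not reproduced here, but a minimal recursive sketch in Python (the function name is our own) could look like this:

```python
def factorial(n):
    # Base case: 0! and 1! are both 1
    if n <= 1:
        return 1
    # Recursive case: n! = n * (n-1)!
    return n * factorial(n - 1)

print(factorial(4))  # 24
```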

Here, in the sketch above, you can see that we first set up a base case: when n <= 1 we return 1; otherwise we calculate the factorial recursively. There’s nothing wrong with it, the logic is absolutely fine, but as we say, finding a solution for a problem is not enough.

Our solution should be optimised, computing results quickly even for big inputs, and that’s where Dynamic Programming comes in. One thing to notice above is that if we have to find the factorial of another number, say 5, then we have to recurse all over again until we reach the base case.

Let’s see: if we calculate 5!, we can write it as 5! = 5 * 4!, meaning we just have to multiply 5 with 4!; then 4! = 4 * 3!, and so on. So this gives us a hint that instead of calculating the factorial again for every other input, we can store it somewhere.
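Following that hint, here is a sketch of the memoised version described below; the -1 sentinel and parameter names are our own choices:

```python
def factorial(n, memo=None):
    # Create an array of size n + 1; -1 marks "not computed yet"
    if memo is None:
        memo = [-1] * (n + 1)
    if n <= 1:
        return 1
    # If the result already exists in the array, return it directly
    if memo[n] != -1:
        return memo[n]
    # Otherwise calculate it once and store it in the array
    memo[n] = n * factorial(n - 1, memo)
    return memo[n]

print(factorial(5))  # 120
```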

Here you can see that when we are going to calculate n!, we first create an array of size n + 1 and check for the result in the array, returning it directly if it exists; otherwise we calculate it and store it in the array, so there is no need to calculate it again and we can return the result directly whenever it is needed.

How to identify whether a problem can be solved by dynamic programming, and how to solve it?

Dynamic Programming is a way to solve problems that exhibit a specific structure (optimal substructure), where a problem can be broken down into subproblems that are similar to the original problem. Clearly one can invoke recursion to solve a DP problem.

But it is not necessary.


One can solve a DP problem without recursion. Recognising that a problem can be solved using DP is the first and often the most difficult step in solving it. What you want to ask yourself is whether your problem’s solution can be expressed as a function of solutions to similar, smaller problems.

There are two different ways to store the values so that the result of a sub-problem can be reused. Here, we will discuss both patterns of solving a DP problem:

  • Tabulation (Bottom-Up): Going bottom-up is a way to avoid recursion, saving the memory cost that recursion incurs when it builds up the call stack. Put simply, a bottom-up algorithm “starts from the beginning,” while a recursive algorithm often “starts from the end and works backwards.”
  • Memoisation (Top-Down): Memoisation ensures that a function doesn’t run for the same inputs more than once, by keeping a record of the results for given inputs (usually in a dictionary).

That sounds confusing, doesn’t it? Don’t worry, we’ll try to understand both approaches with some standard problems. Let’s start with the first one:

1. Fibonacci Number Series:

Let’s first look at a simple recursive approach to solve it; later we’ll optimise it.

  • Recursive approach: (Time Complexity of O(2^n))
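A minimal sketch of the naive recursion (the original figure is not shown here):

```python
def fib(n):
    # Base cases: fib(0) = 0, fib(1) = 1
    if n <= 1:
        return n
    # Each call spawns two more calls, so the total work grows as O(2^n)
    return fib(n - 1) + fib(n - 2)

print(fib(10))  # 55
```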

In the recursion tree for this approach, you can see that to calculate a Fibonacci number we have to compute the same subproblems again and again, recursively, for all the cases. But this is a hint that we can optimise it with dynamic programming: as we know, DP is used when a problem has optimal substructure and can be broken into the same set of smaller problems, i.e. overlapping subproblems.

Memoisation: (Time Complexity of O(n))
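A sketch of the memoised version described below, with the array of size n + 1 initialised to -1 and a small helper function:

```python
def fib(n):
    # Array of size n + 1, initialised with -1 ("not computed yet")
    memo = [-1] * (n + 1)

    def helper(k):
        if k <= 1:
            return k
        # If the solution already exists, return it directly
        if memo[k] != -1:
            return memo[k]
        # Otherwise calculate it and store it
        memo[k] = helper(k - 1) + helper(k - 2)
        return memo[k]

    return helper(n)

print(fib(50))  # 12586269025
```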

As you can see, we first create an array of size n + 1 initialised with -1 and then call the helper function. In the helper function we check whether the solution already exists: if yes, we return it; otherwise we calculate it and store it. This is memoisation, which keeps us from solving the same subproblem again and again. It is generally recursive and easy to do: all you have to do is think of a recursive solution and then memoise it.

Dynamic programming (Bottom-up): Time Complexity of O(n)
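A sketch of the bottom-up version, filling the table from the base cases upward:

```python
def fib(n):
    if n <= 1:
        return n
    # dp[i] holds the ith Fibonacci number, filled left to right
    dp = [0] * (n + 1)
    dp[1] = 1
    for i in range(2, n + 1):
        dp[i] = dp[i - 1] + dp[i - 2]
    return dp[n]

print(fib(50))  # 12586269025
```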

The recurrence tree for the bottom-up dynamic programming version is the same as for memoisation; the only difference is in space complexity. Since memoisation is recursive, it builds a call stack and therefore takes extra space compared to the bottom-up approach, while the time complexity of both approaches is the same.

2. Knapsack Problem:

Let’s consider an example to see what this problem is.

Suppose, you are a renowned thief who has recently switched from stealing precious metals to stealing cakes because of the insane profit margins. You end up hitting the jackpot, breaking into the world’s largest privately-owned stock of cakes—the vault of the Queen of England. You brought a duffel bag that can hold limited weight, and you want to make off with the most valuable haul possible.

There are three major types of knapsack problems: 0/1, fractional, and unbounded.

Now your task is to steal cakes, chosen by weight, so that the duffel bag holds the maximum possible monetary value.

Recursive Approach:
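The original code figure is missing, so here is a sketch of the plain 0/1 recursion; the signature (weight and value lists, remaining capacity, number of items still considered) is our own:

```python
def knapsack(weights, values, capacity, n):
    # Base case: no items left or no capacity left
    if n == 0 or capacity == 0:
        return 0
    # If the nth item is too heavy, we can only skip it
    if weights[n - 1] > capacity:
        return knapsack(weights, values, capacity, n - 1)
    # Otherwise take the better of including or excluding the nth item
    include = values[n - 1] + knapsack(weights, values,
                                       capacity - weights[n - 1], n - 1)
    exclude = knapsack(weights, values, capacity, n - 1)
    return max(include, exclude)

print(knapsack([1, 3, 4, 5], [10, 40, 50, 70], 8, 4))  # 110
```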

Memoisation:
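A memoised sketch using the table V[k][w] analysed below (None marks entries not yet computed):

```python
def knapsack(weights, values, capacity):
    n = len(weights)
    # V[k][w]: best value using the first k items with capacity w
    V = [[None] * (capacity + 1) for _ in range(n + 1)]

    def solve(k, w):
        if k == 0 or w == 0:
            return 0
        if V[k][w] is not None:
            return V[k][w]  # entry already computed
        if weights[k - 1] > w:
            V[k][w] = solve(k - 1, w)
        else:
            V[k][w] = max(solve(k - 1, w),
                          values[k - 1] + solve(k - 1, w - weights[k - 1]))
        return V[k][w]

    return solve(n, capacity)

print(knapsack([1, 3, 4, 5], [10, 40, 50, 70], 8))  # 110
```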

Since the time to calculate each entry V[k, w] of the table is constant, the time complexity is Θ(n × W). This is not an in-place algorithm, since the table requires Θ(n × W) cells, and that is not linear in n. But dynamic programming algorithms save time by storing results, so we wouldn’t expect any dynamic programming solution to be in-place.

Maybe you can find the dynamic programming approach for this problem yourself. Give it a try! You only need to write the bottom-up approach.

3. Tower of Hanoi:

Initial Stage:

There are 3 pegs: ‘from’, ‘using’ and ‘to’. Some disks of different sizes are given, which can slide onto any peg. Initially, all of them are on the ‘from’ peg in order of size, with the largest disk at the bottom and the smallest at the top. We have to move all the disks from the ‘from’ peg to the ‘to’ peg. In the end, the ‘to’ peg will have the disks in the same order of size.

  • Only one disk can be moved from one peg to another peg at a time
  • A disk can be placed only on top of a larger one
  • Only the top disk of a peg can be moved

Each step is considered a subproblem, and this is where dynamic programming comes to mind.

Final Stage: Figure 1 shows all these steps: the first move solves the n = 1 problem (peg 0 to peg 1), the first three moves solve the n = 2 problem (peg 0 to peg 2), and the first seven moves solve the n = 3 problem (peg 0 to peg 1).

Recursive Approach:
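A sketch of the recursion described below, printing each move:

```python
def hanoi(n, from_peg, to_peg, using_peg):
    if n == 1:
        print(f"Move disk 1 from {from_peg} to {to_peg}")
        return
    # First recursive call: move n-1 disks 'from' -> 'using', via 'to'
    hanoi(n - 1, from_peg, using_peg, to_peg)
    # Move the largest (nth) disk: 'from' -> 'to'
    print(f"Move disk {n} from {from_peg} to {to_peg}")
    # Second recursive call: move n-1 disks 'using' -> 'to', via 'from'
    hanoi(n - 1, using_peg, to_peg, from_peg)

hanoi(3, "from", "to", "using")
```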

So the first recursive call moves n-1 disks from ‘from’ to ‘using’, using ‘to’ as the auxiliary peg. After that call, n-1 disks are on the ‘using’ peg in order of size, and the ‘from’ peg contains the nth disk, i.e. the largest one. So now move that disk from the ‘from’ peg to the ‘to’ peg. Then, by the second recursive call, move the n-1 disks from the ‘using’ peg to the ‘to’ peg using the ‘from’ peg. After all this, the ‘to’ peg contains all the disks in order of size.

  • Memoisation: This is a similar solution to the one above, but instead of using arrays or strings for the pegs, we just pass in the number of disks and get back how many steps it will take to move them, as in the sketch below. We memoise this solution so that it does not take years to run (trust me, otherwise it will; and if you don’t believe me, try it without the memoisation).
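A sketch of that step-counting version, memoised with a dictionary:

```python
import sys
sys.setrecursionlimit(3000)  # n = 1000 needs a deeper stack than the default

def hanoi_steps(n, memo=None):
    # Number of moves needed to solve Tower of Hanoi with n disks
    if memo is None:
        memo = {}
    if n == 1:
        return 1
    if n in memo:
        return memo[n]
    # Mirrors the move structure: shift n-1 disks, move the largest disk
    # (one step), then shift the n-1 disks again. Without the memo these
    # two recursive calls make the runtime exponential.
    memo[n] = hanoi_steps(n - 1, memo) + 1 + hanoi_steps(n - 1, memo)
    return memo[n]

print(hanoi_steps(1000))  # 2**1000 - 1 moves, computed almost instantly
```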

And it took barely a second to solve this. However, you could not use an input of 1000 with our previous solutions, because they would take forever to complete.

4. Matrix Chain Multiplication:

  • Let A be an n by m matrix and B an m by p matrix; then C = AB is an n by p matrix
  • C = AB can be computed in O(nmp) time using traditional matrix multiplication
  • Suppose I want to compute A1·A2·A3·A4
  • Matrix multiplication is associative, so I can do the multiplication in several different orders


Example:

  • A1 is a 10 by 100 matrix
  • A2 is a 100 by 5 matrix
  • A3 is a 5 by 50 matrix
  • A4 is a 50 by 1 matrix
  • A1A2A3A4 is a 10 by 1 matrix
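To see why the order matters, compare two of the possible parenthesisations using these dimensions: ((A1A2)A3)A4 costs 10×100×5 + 10×5×50 + 10×50×1 = 5000 + 2500 + 500 = 8000 multiplications, whereas A1(A2(A3A4)) costs 5×50×1 + 100×5×1 + 10×100×1 = 250 + 500 + 1000 = 1750. Picking the right order cuts the work by more than four times.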

Let’s now state the problem formally: given a chain <M1, M2, …, Mn> of n two-dimensional matrices, write a program to fully parenthesise the product M1×M2×⋯×Mn in a way that minimises the number of multiplications.

  • Recursive Approach:

To find the minimum number of operations needed to multiply the matrices, we need to derive a formula. Each matrix can only multiply with its adjacent matrix, so for a split at some index k, the prefix runs from A1 to Ak and the suffix from A(k+1) to An.

The resultant dimensions from multiplying two matrices are important for finding the cost. In a sequence of matrices Ai . . . Aj, if we split at a point k, the resultant dimensions of the prefix are ri * ck and those of the suffix are r(k+1) * cj. The cost of multiplying these two resulting matrices is therefore ri * ck * cj.

Base case: when there is only one matrix, the prefix equals the suffix and no operations are performed, so the cost is 0. Suppose we have a function B(i, j) that computes the minimum number of operations required for multiplying a chain of matrices from matrix i to matrix j. In a range i to j, we select from j - i possible split points, from i up to j - 1. Each possibility in turn calls B recursively until it hits the base case, where i = j.

Solution Steps:

  • Create a recursive function B, where B(i, j) returns the minimum number of operations for multiplying the chain of matrices from i to j
  • For each i and j, find the k between i and j with the minimum cost of multiplication (see the sketch below)
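A sketch of this recursion, storing the dimensions in a list p where matrix Ak is p[k-1] × p[k] (a common convention, not taken from the original figure):

```python
def B(i, j, p):
    # Base case: a single matrix needs no multiplications
    if i == j:
        return 0
    best = float("inf")
    # Try every split point k between i and j - 1
    for k in range(i, j):
        cost = B(i, k, p) + B(k + 1, j, p) + p[i - 1] * p[k] * p[j]
        best = min(best, cost)
    return best

# A1: 10x100, A2: 100x5, A3: 5x50, A4: 50x1
print(B(1, 4, [10, 100, 5, 50, 1]))  # 1750
```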
Dynamic Programming: If we draw the recursion tree for an array of length 4, we can find many overlapping subproblems.

The main problem has been broken down into small recurring subproblems (Overlapping Subproblems), which we can piece together to solve the main problem (Optimal Substructure).

1. Define the problem variables and decide the states: Looking at the function B, we find that there are two parameters, i and j, on which the state of the problem depends.

2. Define Table Structure and Size: To store the solution of smaller sub-problems in the bottom-up approach, we need to define the table structure and table size.

  • The table structure is defined by the number of problem variables. Since the number of problem variables in this case is 2, we can construct a two-dimensional array to store the solutions of the sub-problems.


3. Table Initialisation: We can initialise the table by using the base cases from the recursion.

  • The base case is when i = j; then B(i, j) = 0.
  • i cannot exceed j, so the cells where i > j are never used and can be greyed out.

4. Iterative Structure to fill the table: We can define the iterative structure to fill the table by using the recurrence relation of the recursive solution.

Solution Steps:

  • Create a 2D DP array
  • Fill the DP array in a bottom-up manner, as discussed above
  • Return DP[1][n] as the least cost (see the sketch below)
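A bottom-up sketch of those steps, using the same dimension list p as before:

```python
def matrix_chain(p):
    n = len(p) - 1  # number of matrices in the chain
    # dp[i][j]: minimum multiplications for the chain Ai..Aj
    dp = [[0] * (n + 1) for _ in range(n + 1)]
    # Solve chains of increasing length, so smaller results are ready
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            dp[i][j] = float("inf")
            for k in range(i, j):
                cost = dp[i][k] + dp[k + 1][j] + p[i - 1] * p[k] * p[j]
                dp[i][j] = min(dp[i][j], cost)
    # Least cost for the whole chain A1..An
    return dp[1][n]

print(matrix_chain([10, 100, 5, 50, 1]))  # 1750
```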


By Yogesh Kumar