**Introduction**

“Those who cannot remember the past are condemned to repeat it.” — George Santayana

You guessed it right: Dynamic programming!

You may have heard about it while preparing for technical interviews, struggled through it in a data structures course, or come across it while learning to code on your own. Somewhere along the way, someone must have told you that it’s important to understand dynamic programming.

Yes, writing algorithms using dynamic programming is as essential as it is feared. But don’t worry, you are on the right page, and trust me, it will be worth reading.

In this article, we will cover everything from scratch: a brief summary of dynamic programming, its prerequisites, its importance, an ordered way to solve its problems, and the patterns of problems generally asked in interviews.

Let’s get started with the most fundamental question:

**What is Dynamic Programming?**

Dynamic Programming or DP is just an optimization technique. It is a method for solving problems by breaking them down into a collection of simpler subproblems, solving each of those subproblems just once, and storing their solutions.

When the same subproblem occurs, we can simply look up the previously computed solution instead of recomputing its solution.

It saves computation time at the expense of storage space. Okay, wait, have we heard these terms before?

Yes, in recursion. Recursion is a method of solving a problem where the solution depends on solutions to smaller instances of the same problem.

Since DP has always been a pain point for everybody, let me try to ease it a bit and walk you through a classic problem asked by Amazon.

**Climbing The Staircase Problem**


Ninja is climbing a staircase. It takes n steps to reach the top. Each time he can either climb 1 or 2 steps. In how many distinct ways can Ninja climb to the top?

**Note:** This is one of those problems that can be solved in many ways, including recursion, memoisation, dynamic programming, etc.

Suppose the input is 2 (there are 2 stairs in the staircase). Then there are 2 distinct ways to climb to the top: Ninja can climb one step at a time, i.e., {(0,1), (1,2)}, or climb two steps at once, i.e., {(0,2)}.

Similarly, if the input is 3, there are 3 distinct ways to climb to the top.

Ninja can climb one step at a time, i.e., {(0,1), (1,2), (2,3)}, climb two steps first and then one step, i.e., {(0,2), (2,3)}, or climb one step first and then two steps, i.e., {(0,1), (1,3)}.

There are ‘N’ steps to climb, and at every step Ninja has two options: climb one step or climb two steps.

So the number of ways can be recursively defined as:

ways(currStep) = ways(currStep + 1) + ways(currStep + 2)

with base cases ways(N) = 1 and ways(currStep) = 0 for currStep > N, where “currStep” denotes the step Ninja is currently standing on and N represents the destination step.
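This brute-force recursion can be sketched in Python (the function name `climb_ways` is illustrative, not from the original problem statement):

```python
def climb_ways(curr_step, n):
    """Count the distinct ways to climb from curr_step to step n."""
    # Base cases: we reached the top exactly, or we overshot it.
    if curr_step == n:
        return 1
    if curr_step > n:
        return 0
    # Ninja either takes one step or two steps from curr_step.
    return climb_ways(curr_step + 1, n) + climb_ways(curr_step + 2, n)

print(climb_ways(0, 5))  # 8
```

Note that this version recomputes the same subproblems many times, which is exactly the inefficiency the next sections address.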

The recursion tree for N=5 would be like this:

We can observe many redundant calls in the recursive approach, even though there are only ‘N’ distinct subproblems in the recursion tree. So we should avoid the redundant function calls.

To do that, we will store the result at each step, and whenever the same function call occurs again, we can directly return the stored result for that step. Sounds good! This technique is known as memoisation. We will keep the results in an array or a hashmap.
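A memoised version of the recursion might look like this sketch, where the dictionary `memo` (an illustrative name) plays the role of the stored results:

```python
def climb_ways_memo(curr_step, n, memo=None):
    """Count ways to climb from curr_step to n, caching subproblem results."""
    # memo maps a step to the number of ways from that step to the top.
    if memo is None:
        memo = {}
    if curr_step == n:
        return 1
    if curr_step > n:
        return 0
    if curr_step not in memo:
        memo[curr_step] = (climb_ways_memo(curr_step + 1, n, memo)
                           + climb_ways_memo(curr_step + 2, n, memo))
    return memo[curr_step]

print(climb_ways_memo(0, 50))  # answers quickly even for large n
```

Each of the ‘N’ distinct subproblems is now solved only once, so the running time drops from exponential to linear.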

As we have seen, this problem can be broken into subproblems, and many of those subproblems are the same, which is why we used memoisation. But instead of storing the results through recursion, we can also store them iteratively. Our intuition is:

We can reach the “currStep” step by taking one step or two steps:

- We can take the one-step from (currStep-1)th step or
- We can take the two steps from (currStep-2)th step

The base cases: if there are 0 stairs to climb, the number of distinct ways is one (we are already at the Nth stair), and if there is only one stair to climb, the number of distinct ways is also one, i.e., a single step from the beginning.

Hence, the total number of ways to reach “currStep” is the sum of the number of ways to reach the (currStep-1)th step and the number of ways to reach the (currStep-2)th step.

Let dp[currStep] denote the total number of ways to reach “currStep” from the 0th step.

So,

dp[currStep] = dp[currStep - 1] + dp[currStep - 2], with dp[0] = dp[1] = 1.
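Putting the base cases and the recurrence together, a bottom-up solution might look like this sketch (function name is illustrative):

```python
def climb_ways_dp(n):
    """Count the distinct ways to climb n stairs, iteratively."""
    if n <= 1:
        return 1
    dp = [0] * (n + 1)
    dp[0], dp[1] = 1, 1  # base cases
    for curr_step in range(2, n + 1):
        # Reach curr_step from one step below or two steps below.
        dp[curr_step] = dp[curr_step - 1] + dp[curr_step - 2]
    return dp[n]

print(climb_ways_dp(5))  # 8
```

This runs in O(N) time and O(N) space; since each entry depends only on the previous two, the array could even be replaced by two variables for O(1) space.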

From the above example, we can infer that a problem can be optimised using dynamic programming if it has:

- Overlapping subproblems
- Optimal substructure

Now, let’s talk about the basic knowledge one needs before entering the world of dynamic programming.

**Pre-Requisites**

Dynamic programming is just an optimisation over recursion. Thus the prerequisites for entering the world of dynamic programming are:

- Recursion
- Sorting Algorithms – Merge Sort, Quick Sort, etc.
- LinkedList
- Trees
- Time and Space Complexity

**An Ordered Way of Practicing Dynamic Programming Problems**

1. **Recursion:**

The first step in solving any dynamic programming problem is to write the initial brute-force recursive solution.

2. **Memoisation:**

- In memoisation, we solve a problem by recursively breaking it into smaller subproblems, i.e., we start with the given n and recurse until we reach the base case.
- We create a *memo* to store the values returned from solving each subproblem. Then, when the same subproblem occurs again, we retrieve the solution from our memo rather than recomputing it.

3. **Tabular Method:**

The tabular method says, “if you’re going to compute something again and again, better store it somewhere.”

- In the tabular method, **we start with the small solutions and build them up**. First, compute solutions to all the possible base cases, then work up to reach the answer we want.
- It ensures that a method doesn’t run for the same inputs or subproblems more than once by keeping their solutions. We can use an array or a hashmap to store the values we have already computed, so we can look them up quickly later.
- A general Python code structure for the tabulation approach would look something like this.
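A minimal sketch of such a general structure, with the base cases and recurrence supplied by the caller (all names here are illustrative, not a fixed API):

```python
def tabulate(n, base_cases, recurrence):
    """Generic bottom-up tabulation: fill a table from base cases upward.

    base_cases: dict mapping an index to its known value.
    recurrence: function (table, i) -> value of table[i] from earlier entries.
    """
    table = [0] * (n + 1)
    for i in range(n + 1):
        if i in base_cases:
            table[i] = base_cases[i]  # seed the known small solutions
        else:
            table[i] = recurrence(table, i)  # build up from stored values
    return table[n]

# Instantiated for the staircase problem:
ways = tabulate(5, {0: 1, 1: 1}, lambda t, i: t[i - 1] + t[i - 2])
print(ways)  # 8
```

The same skeleton fits many one-dimensional DP problems: only the base cases and the recurrence change.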

Initially, writing all three versions might seem frustrating, and not all recursive problems can be optimised using the tabular method, but most of them can be!

Trust me, by applying structure to solutions and following the ordered steps, we can quickly master dynamic programming. To know in-depth about dynamic programming and types of problems, you can check out the blog on CodeStudio.

**Importance in Interviews**

No doubt, Dynamic Programming or DP has always been a cherry on the cake in interviews. Interviewers love to test candidates on DP because it is perceived as a complicated topic.

Because of that fear, many interviewees struggle during interviews because:

- Either they lack a problem-solving framework for approaching dynamic programming.
- Or fail to decode the pattern in problems.

But don’t worry, we have come a long way and overcame some hurdles already. Now we have a little further to go to master dynamic programming. Let’s just walk you through the problems list to get more familiar with the patterns.

**Basic Problems Asked In Interviews**

Remember, the firm foundation of recursion and memoisation can make dynamic programming *“Easy peasy lemon squeezy topic.*”

To have a strong base, try these fundamental problems:

**1-D and 2-D Array Problems**

After practicing some fundamental problems, now we can move forward to our oldest and most used data structure- Arrays.

An **array** is used to store a fixed-size sequential collection of elements of the same data type. Array-based questions are never-ending, but below are some frequently asked array problems from interviews.

**Strings**

Strings are another popular topic in programming job interviews. I have never participated in a coding interview where no string-based questions were asked.

A **string** is a sequence of characters in computer programming, either a literal constant or a variable. Check the problems below to get a basic idea of the string problems that fall under dynamic programming.

**Counting Problems**

Counting is fun! Some real-world problems ask us to find the finite number of objects that satisfy a particular condition, and they fall under this category. For example:

**Mathematical Problems**

Maths is beautiful. It is an expression of the human mind that reflects the active will and the contemplative reason. Its essential elements are logic and intuition, analysis and construction, generality, and individuality. The same goes for programming. Try your hand at the mathematical dynamic programming problems below.

**Standard Problems**

Apart from data structure-based questions, most programming job interviews also include some standard questions, which are most likely to be asked by top-notch companies. For example:

- Rod Cutting
- Count Palindromic Subsequences
- Longest Palindromic Substring
- Optimal BST
- Distinct Subsequences

This is not the end of practice.

You can try as many problems as possible on our platform, CodeStudio, created by aspiring and creative minds. It provides hassle-free, adaptive, and excellent online courses, practice questions, blogs, mentor support, interview experiences, and everything you need to become the perfect candidate for your dream company!

You can get insights into the Interview-Experiences of other programmers from companies like Amazon, Microsoft, Google, Samsung, Adobe, Codenation, etc.

## Frequently Asked Questions

**What are the features of dynamic programming?**

The main features of dynamic programming are:

1. It solves a problem optimally if the problem has overlapping subproblems and optimal substructure.

2. It explores all the possibilities to achieve an optimal solution.

3. It reduces time complexity.

**Is dynamic programming a mathematical method?**

Dynamic programming is both a mathematical optimization method and a computer programming method.

**What is the basic principle of dynamic programming?**

Dynamic Programming relies on the principle of optimality. It explores all the possibilities, finds the base cases, starts with the small solutions, and then works up to reach the answer we want.

**Is dynamic programming hard?**

Dynamic programming is considered mysterious and counterintuitive among programmers, but practicing many questions can make it easy for you to develop a framework for approaching dynamic programming questions.

**How do you practice dynamic programming?**

The best way to practice dynamic programming is:

1. First, define a brute force recursive solution.

2. Characterise the structure of the recursive solution.

3. Identify the base cases.

4. Store the computed values of overlapping subproblems.

5. Convert Recursive code to Memoised code.

6. Convert Memoised code to Tabular form.

**Is Dijkstra dynamic programming?**

No, Dijkstra is a greedy algorithm.

**Is Floyd-Warshall dynamic programming?**

Yes, Floyd-Warshall is a dynamic programming algorithm. It breaks the problem down into smaller subproblems, then synthesises the final solution from the pre-computed subproblems.

**Is Dijkstra BFS or DFS?**

Dijkstra’s algorithm is BFS with a priority queue.

**What is the Floyd-Warshall algorithm?**

The Floyd-Warshall algorithm finds the shortest paths between all pairs of vertices in a weighted graph. It works for both directed and undirected weighted graphs without negative cycles. Floyd-Warshall is a dynamic programming algorithm: it breaks the problem down into smaller subproblems, then synthesises the final solution from the pre-computed subproblems.

**Key Takeaways**

This article started with the basic idea of dynamic programming, followed by a systematic way of solving dynamic programming problems. Then we walked through a roadmap of the essential questions you should know to master dynamic programming and ace your technical interview.

Perfection comes from consistent and deliberate practice, so don’t stop practicing. And if you feel the need for expert guidance to learn more concepts, check out our DSA courses by starting your free trial today.

**By Aanchal Tiwari**
