## Introduction

While preparing for technical interviews, or perhaps while learning to code on your own, someone has probably told you along the way that it's important to understand Dynamic Programming.

Yes, writing algorithms using dynamic programming is as essential as it is feared. But don't worry: you are on the right page, and trust me, it will be worth reading.

In this article, we will cover everything from scratch: a brief overview of dynamic programming, its prerequisites, why it matters, a systematic way to solve its problems, and the patterns of problems commonly asked in interviews.

Let’s get started with the most fundamental question:

## What is Dynamic Programming?

Dynamic Programming is an optimization technique that improves upon plain recursion. When we encounter a recursive solution that makes repeated calls for the same inputs, we can apply Dynamic Programming to optimize it. The approach involves storing the results of subproblems, which eliminates the need to re-calculate them when required in the future.

The basic idea behind dynamic programming is to reuse already computed values. The intuition is that a function will always return the same value for a specific input, so not recalculating results for the same parameters saves time. Dynamic programming saves computation time at the expense of storage space.
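To make the idea concrete, here is a small illustration of my own (not from the article) using Fibonacci numbers: a plain recursive version makes thousands of calls because it keeps revisiting the same inputs, while a cached version solves each subproblem exactly once.

```python
# Counting how many calls plain recursion makes versus a cached version.
from functools import lru_cache

calls = 0

def fib_naive(n):
    """Plain recursion: recomputes the same subproblems over and over."""
    global calls
    calls += 1
    return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_cached(n):
    """Same recurrence, but each result is stored after the first computation."""
    return n if n < 2 else fib_cached(n - 1) + fib_cached(n - 2)

fib_naive(20)
print(calls)            # tens of thousands of calls just for n = 20
print(fib_cached(20))   # same answer, but only ~21 distinct subproblems solved
```

The only change between the two functions is the cache, yet it turns exponential work into linear work.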

## How does Dynamic Programming work?

Imagine you have a big problem that cannot be solved directly. What you can do is break it into smaller problems, solve those smaller problems, and combine their solutions to solve the big one. This is exactly what Dynamic Programming does.

Dynamic Programming works by breaking a problem down into sub-problems and then solving those sub-problems so that you can find the solution to the main problem. You can picture this as a tree of sub-problems:

Suppose 'DP' is a problem with parameters 'a' and 'b' that you have to solve. The base condition is any condition that signals the problem is solved or the process can stop.

Now suppose the answer X can only be determined by trying all possible cases. The value of a can become (a - 1) or (a + 1), and the value of b can become (b - 1) or (b + 1). In the first step, these 4 cases are created to try all the possibilities, and the same process is repeated on each sub-problem until the base condition is met.

## Approaches of Dynamic Programming

As we have discussed how Dynamic Programming works, let's discuss its approaches. Before looking at the approaches directly, let's understand memoization and tabulation. Memoization is a combination of recursion and caching. Tabulation is a non-recursive technique where the results are stored in a table (often an array or matrix).

There are mainly two approaches to Dynamic Programming, as follows:

### Top-Down Approach

This approach uses the memoization technique and works through recursive functions. The top-down approach uses decomposition: the problem is broken down into smaller parts, and each part is solved on demand.

It may store redundant information, and there is no need for communication between the modules.
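As a sketch of the top-down approach (my own example, not from the article), consider counting the ways to climb n stairs taking 1 or 2 steps at a time. The recursion decomposes the problem, and a memo dictionary caches each sub-result:

```python
def climb_stairs(n, memo=None):
    """Top-down DP: ways to climb n steps taking 1 or 2 steps at a time."""
    if memo is None:
        memo = {}
    if n <= 1:          # base condition: one way to stand still or take the last step
        return 1
    if n not in memo:   # solve each sub-problem only once
        memo[n] = climb_stairs(n - 1, memo) + climb_stairs(n - 2, memo)
    return memo[n]

print(climb_stairs(10))  # 89
```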

### Bottom-Up Approach

This approach uses the tabulation technique and works through non-recursive (iterative) code. The bottom-up approach uses composition: the smallest sub-problems are solved first, and their results are combined to build up the final answer.

It does not contain redundant information, but it requires more communication between the modules.
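The bottom-up approach can be sketched with the same stair-climbing problem (ways to climb n steps taking 1 or 2 at a time; my own illustration): a table is filled from the smallest sub-problem upward, with no recursion involved.

```python
def climb_stairs_tab(n):
    """Bottom-up DP: fill a table from the smallest sub-problem to n."""
    if n <= 1:
        return 1
    table = [0] * (n + 1)
    table[0] = table[1] = 1            # base cases
    for i in range(2, n + 1):          # compose larger answers from smaller ones
        table[i] = table[i - 1] + table[i - 2]
    return table[n]

print(climb_stairs_tab(10))  # 89
```

Both approaches compute the same answer; tabulation avoids recursion-stack overhead at the cost of always filling the whole table.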

## Recursion vs Dynamic Programming

Here's a table highlighting the difference between recursion and dynamic programming on various aspects:

| Aspect | Recursion | Dynamic Programming |
|---|---|---|
| Definition | A technique where a function calls itself to solve a problem by breaking it down into smaller instances of the same problem. | A technique for solving problems by breaking them down into smaller overlapping subproblems and storing their solutions to avoid redundant calculations. |
| Termination | Relies on a base case that determines when the recursive calls should stop. | Typically relies on a well-defined recurrence relation that expresses the solution to a problem in terms of solutions to smaller subproblems. |
| Memory Usage | May use more memory due to the call stack, as each recursive call creates a new stack frame. | Trades memory for speed: stores intermediate results in a table (e.g., an array or matrix) to avoid redundant calculations. |
| Complexity | Easier to implement and understand, especially for problems naturally suited to recursion. | More efficient for problems with overlapping subproblems, often leading to improved time complexity. |
| Examples | Calculating factorials, computing Fibonacci numbers, traversing tree structures (e.g., in-order, pre-order, post-order traversal). | Finding all-pairs shortest paths (e.g., the Floyd-Warshall algorithm), solving knapsack problems, calculating edit distances (e.g., Levenshtein distance). |

## Dynamic Programming Algorithms

Dynamic programming algorithms break down complex problems into smaller, solvable segments. They iteratively find optimal solutions to subproblems and combine them to reach the overall best solution. Several key dynamic programming algorithms include:

### 1. Greedy Algorithms

Greedy algorithms are a related optimization paradigm, usually contrasted with dynamic programming rather than a form of it. They make the choice that seems optimal at each step in the hope of reaching an overall optimal solution. Because those local choices may not pay off later, greedy algorithms do not always guarantee the optimal outcome; they are appropriate only when the problem has the greedy-choice property, i.e., a locally optimal choice always leads to a globally optimal solution. Well-known examples include Dijkstra's shortest-path algorithm and Huffman coding for data compression.
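The contrast can be shown with a small coin-change example of my own (not from the article), with denominations chosen specifically so that the greedy choice fails while a DP formulation, which tries every option, finds the optimum:

```python
def coins_greedy(amount, denoms):
    """Greedy: always take the largest coin that still fits."""
    count = 0
    for coin in sorted(denoms, reverse=True):
        count += amount // coin
        amount %= coin
    return count if amount == 0 else None

def coins_dp(amount, denoms):
    """Bottom-up DP: best[a] = minimum coins needed to make amount a."""
    INF = float("inf")
    best = [0] + [INF] * amount
    for a in range(1, amount + 1):
        for coin in denoms:                      # try every coin, keep the best
            if coin <= a and best[a - coin] + 1 < best[a]:
                best[a] = best[a - coin] + 1
    return best[amount]

print(coins_greedy(6, [1, 3, 4]))  # 3 coins: 4 + 1 + 1
print(coins_dp(6, [1, 3, 4]))      # 2 coins: 3 + 3
```

The greedy choice of the 4-coin looks best locally but forces two extra 1-coins, while DP finds 3 + 3.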

### 2. Floyd-Warshall Algorithm

The Floyd-Warshall algorithm uses DP to find the shortest paths between all pairs of vertices in a weighted graph, whether directed or undirected. Unlike algorithms that calculate the route between two specific nodes, the Floyd-Warshall algorithm computes the shortest distances between all pairs of nodes in a single execution. It adopts a matrix-based approach, iteratively refining distance estimates, and can also reconstruct the paths themselves when needed.

**Handling Negative Cycles:** The algorithm can also detect negative cycles by examining the diagonal of the distance matrix: a negative value there indicates the presence of a negative cycle, which is useful in network analysis.

**Time Complexity:** The Floyd-Warshall algorithm runs in O(n^3) time, where 'n' is the number of vertices in the graph.
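A minimal Floyd-Warshall sketch (my own, assuming an adjacency-matrix input with `math.inf` marking missing edges): `d[i][j]` is repeatedly refined by allowing each vertex k, in turn, to serve as an intermediate stop.

```python
import math

def floyd_warshall(dist):
    """All-pairs shortest paths; dist is an n x n adjacency matrix."""
    n = len(dist)
    d = [row[:] for row in dist]          # copy so the caller's matrix is untouched
    for k in range(n):                    # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

INF = math.inf
graph = [
    [0,   3,   INF, 7],
    [8,   0,   2,   INF],
    [5,   INF, 0,   1],
    [2,   INF, INF, 0],
]
result = floyd_warshall(graph)
print(result[0][2])  # shortest distance from vertex 0 to vertex 2
```

The three nested loops are the source of the O(n^3) running time; a negative value on the diagonal of `result` would indicate a negative cycle.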

### 3. Bellman-Ford Algorithm

The Bellman-Ford algorithm is a dynamic programming method that finds the shortest routes from a starting vertex to all other vertices in a graph with directed, weighted edges. Unlike Dijkstra's algorithm, it can handle graphs containing negative edge weights while still ensuring correctness, although it tends to be slower. The algorithm works by repeatedly improving estimated distances until it converges on the solution: the estimates may start higher than the true distances, but they eventually converge to the shortest paths.

**Relaxation Approach:** Bellman-Ford utilizes relaxation, continually improving approximate distances until an optimal solution is achieved. These approximations initially overestimate the actual distances but gradually converge to the shortest paths.

**Detection of Negative Cycles:** The algorithm can also identify negative cycles, making it valuable for cycle-cancelling techniques in network flow analysis.
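Here is a sketch of Bellman-Ford (my own, assuming an edge-list input): the edges are relaxed |V| - 1 times, and one extra pass that still improves a distance signals a negative cycle.

```python
import math

def bellman_ford(n, edges, source):
    """edges: list of (u, v, weight). Returns distances, or None on a negative cycle."""
    dist = [math.inf] * n
    dist[source] = 0
    for _ in range(n - 1):                # relax every edge n - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    # Negative-cycle check: any further improvement means no finite answer exists.
    for u, v, w in edges:
        if dist[u] + w < dist[v]:
            return None
    return dist

edges = [(0, 1, 4), (0, 2, 5), (1, 2, -2), (2, 3, 3)]
print(bellman_ford(4, edges, 0))  # note the negative edge (1, 2, -2) is handled
```

Note how the negative edge (1, 2, -2) pulls the distance to vertex 2 below the direct weight of 5, something Dijkstra's algorithm cannot do safely.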

**Dynamic Programming Example**

In this section, we will take an example of finding the factorial of a given number n using Dynamic Programming. The factorial of a number is the product of all positive integers less than or equal to that number. For example, the factorial of 4 is 4 * 3 * 2 * 1, which equals 24.
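A sketch of such an implementation in Python (the original article's code is not shown here, so this is my reconstruction; the "ninja" output format simply mirrors the sample output below):

```python
def factorial_memo(n, memo=None):
    """Top-down: recursion plus a cache of already computed factorials."""
    if memo is None:
        memo = {}
    if n <= 1:                        # base condition
        return 1
    if n not in memo:
        memo[n] = n * factorial_memo(n - 1, memo)
    return memo[n]

def factorial_tab(n):
    """Bottom-up: fill a table from 1 up to n."""
    table = [1] * (n + 1)
    for i in range(2, n + 1):
        table[i] = i * table[i - 1]   # reuse the previous entry
    return table[n]

n = 5
print(f"Factorial of {n} (ninja) is: {factorial_tab(n)}")
```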

**The output of the above implementation:**

`Factorial of 5 (ninja) is: 120`

**Frequently Asked Questions**

**What are the features of dynamic programming?**

The main features of dynamic programming are that it solves a problem optimally when the problem has overlapping subproblems and optimal substructure, that it explores all possibilities to achieve an optimal solution, and that it reduces time complexity by caching intermediate results.

**What is dynamic programming with example?**

An example of dynamic programming is finding the factorial of a given number n: if n is 4, then the factorial of 4 is 4 * 3 * 2 * 1, which equals 24. Both the memoization and tabulation methods can be used to solve this problem.

**What is a real life example of dynamic programming?**

Dynamic programming is heavily used in real-life problems: computer networks, routing, graph problems, computer vision, and more. Google Maps' route finding is among the most visible use cases of DP, along with search engines.

**What is the difference between dynamic programming and recursion?**

Dynamic programming and recursion both solve problems by breaking them into subproblems. However, dynamic programming stores intermediate results to avoid redundant computations, thus making it more memory-efficient. Recursion, on the other hand, may recompute subproblems.

**Conclusion**

This article started with the basic idea of dynamic programming, followed by a systematic way of solving dynamic programming problems. We then walked through the essential concepts and algorithms you should know to master dynamic programming and ace your technical interview.

Common Dynamic Programming Problems -

To learn more about Data Structures and Algorithms, you can enroll in our __DSA in C++ Course__.