Dynamic Programming

One easy way to explain dynamic programming is as finding subproblems of a problem that also overlap. Some recursive algorithms are exponential: the recursion can be drawn as a tree of calls, and many of its subtrees are identical, so the return values of those calls can be cached. The cached value can be of any form: a boolean, a string, a list, a number of ways, an index. This part is easy. A good approach is to write the brute-force recursion first and then add the cache. There need not be a constant number of subproblems per problem, so loop over them if necessary, but make sure that every recursive call shrinks the scope of the problem. The base cases are usually straightforward. With practice this becomes routine (a small sketch of this top-down style appears below).

There is another way to visualize it: by looking at tables. There the implementation is always iterative. Any recursive DP can in principle be rewritten iteratively, provided the state variables can be arranged properly; some problems lend themselves to this view more easily than others. The thing with the table approach is that the table does not hold the right state everywhere at once: only the subproblems solved so far are correct, and with each iteration another part of the table becomes correct. The base values have to be set here as well (a bottom-up sketch appears below).

Not every problem is DP alone. Some extra working often has to be added on top, like maintaining two pointers or carrying a double minimum. DP gives the recurrence; the other steps still have to be wired in around it.

There are various kinds of DP problems. Knapsack is about choosing: we compare two alternatives, one where the current item is chosen and one where it is not, and each branches into a smaller subproblem with changed variables. With string problems we build up from substrings, so at any point part of the table already has answers; this is where off-by-one errors creep in, so make sure the empty string is handled as a base case. Sketches of the knapsack and string patterns are also given below.
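A minimal sketch of the top-down, memoized style, assuming a made-up example (counting ways to climb n stairs taking 1 or 2 steps at a time); the name count_ways and the problem itself are only illustrations, not something fixed by these notes:

    from functools import lru_cache

    # Write the brute-force recursion first; the call tree repeats whole
    # subtrees (count_ways(n - 2) appears under both branches), so caching
    # the return values collapses the exponential tree to linear work.
    @lru_cache(maxsize=None)
    def count_ways(n: int) -> int:
        """Number of ways to climb n stairs taking 1 or 2 steps at a time."""
        if n <= 1:                          # base cases are straightforward
            return 1
        # each recursive call shrinks the scope of the problem
        return count_ways(n - 1) + count_ways(n - 2)

    print(count_ways(10))                   # 89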
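The same count written in the table view: an array where one more cell becomes correct with each iteration (again just an illustrative example):

    def count_ways_table(n: int) -> int:
        # ways[i] holds the answer for the smaller problem of size i;
        # the base values have to be set before the loop.
        ways = [1, 1] + [0] * max(0, n - 1)
        for i in range(2, n + 1):
            # only ways[0..i-1] are guaranteed correct at this point
            ways[i] = ways[i - 1] + ways[i - 2]
        return ways[n]

    print(count_ways_table(10))             # 89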
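For the extra bookkeeping beyond the recurrence (two pointers, a double minimum), one common instance of that pattern, offered as an illustration rather than something taken from these notes, is the maximum-product subarray: the recurrence has to carry a running minimum alongside the running maximum because a negative number can flip them.

    def max_product_subarray(nums):
        # best: answer so far; cur_max / cur_min: largest and smallest
        # products of a subarray ending at the current element.
        best = cur_max = cur_min = nums[0]
        for x in nums[1:]:
            candidates = (x, cur_max * x, cur_min * x)
            cur_max, cur_min = max(candidates), min(candidates)
            best = max(best, cur_max)
        return best

    print(max_product_subarray([2, 3, -2, 4]))   # 6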
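A sketch of the knapsack-style branching, where the two alternatives (choose the current item or skip it) each become a smaller subproblem with changed variables; the weights, values, and the helper name best_value are made up for illustration:

    from functools import lru_cache

    def knapsack(weights, values, capacity):
        @lru_cache(maxsize=None)
        def best_value(i, remaining):
            if i == len(weights):            # no items left
                return 0
            # subproblem where item i is not chosen
            skip = best_value(i + 1, remaining)
            # subproblem where item i is chosen (only if it still fits)
            take = 0
            if weights[i] <= remaining:
                take = values[i] + best_value(i + 1, remaining - weights[i])
            return max(skip, take)

        return best_value(0, capacity)

    print(knapsack([2, 3, 4], [3, 4, 5], 5))     # 7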
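For the string-problem shape, a sketch using longest common subsequence as the example (my choice, not a problem named in these notes). The table is one row and one column larger than the strings so that index 0 stands for the empty string; the -1 shifts are exactly where the off-by-one errors tend to hide:

    def lcs_length(a: str, b: str) -> int:
        # dp[i][j] = answer for the prefixes a[:i] and b[:j];
        # row 0 and column 0 represent the empty string.
        dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
        for i in range(1, len(a) + 1):
            for j in range(1, len(b) + 1):
                if a[i - 1] == b[j - 1]:     # the -1: dp is shifted by one
                    dp[i][j] = dp[i - 1][j - 1] + 1
                else:
                    dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
        return dp[len(a)][len(b)]

    print(lcs_length("abcde", "ace"))        # 3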