Subject | Algorithm Design
---|---
NU Year | Set: 2.(b), Marks: 5, Year: 2010

Dynamic Programming is mainly an optimization over plain recursion. Wherever we see a recursive solution that makes repeated calls for the same inputs, we can optimize it using Dynamic Programming. The idea is simply to store the results of subproblems so that we do not have to re-compute them when they are needed later. This simple optimization can reduce the time complexity from exponential to polynomial.
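A minimal sketch of this idea in Python, using the classic Fibonacci example (not taken from the source): the naive recursion recomputes the same subproblems exponentially many times, while the memoized version stores each result so it is computed only once.

```python
from functools import lru_cache

# Plain recursion: fib_naive(n) calls fib_naive(n-2) twice (directly and
# via fib_naive(n-1)), so the call tree grows exponentially.
def fib_naive(n):
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

# Dynamic Programming via memoization: lru_cache stores each subproblem's
# result, so every fib_memo(k) is computed at most once -- O(n) total.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

print(fib_memo(40))  # returns instantly; fib_naive(40) takes noticeably longer
```

Both functions return the same values; only the time complexity differs.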

This makes the most sense to talk about with respect to optimization problems. In an optimization problem, you have constraints that any solution produced by the algorithm must satisfy (think of these as the “rules” of the problem). We call a solution that satisfies all the constraints of the optimization problem **feasible**, and a feasible solution that has the maximum/minimum (depending on the objective/goal) value an **optimal** solution.

So in short, every optimal solution of an optimization problem is a feasible solution, but a feasible solution is usually not an optimal one.

For example, in the minimum spanning tree problem, a feasible solution would be a spanning tree. The optimal solution would be a spanning tree where the sum of the edge weights in the spanning tree is the smallest possible for the given instance.
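The feasible-versus-optimal distinction can be sketched in Python on a small hypothetical graph (the graph and edge weights below are made up for illustration): any spanning tree is feasible, but Kruskal's algorithm finds the one with minimum total weight.

```python
# Hypothetical 4-vertex weighted graph: each edge is (weight, u, v).
edges = [(1, 0, 1), (4, 0, 2), (3, 1, 2), (2, 1, 3), (5, 2, 3)]

def kruskal(n, edges):
    """Return the total weight of a minimum spanning tree (Kruskal's algorithm)."""
    parent = list(range(n))   # union-find forest over the n vertices

    def find(x):
        # Find the root of x's component, with path compression.
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    total = 0
    for w, u, v in sorted(edges):      # consider edges in increasing weight
        ru, rv = find(u), find(v)
        if ru != rv:                   # edge joins two components: keep it
            parent[ru] = rv
            total += w
    return total

# A feasible solution: the spanning tree {0-2, 2-3, 1-3} has weight 4+5+2 = 11.
# The optimal solution picks {0-1, 1-3, 1-2} for weight 1+2+3 = 6:
print(kruskal(4, edges))  # → 6
```

Both trees span all four vertices (feasible), but only the second achieves the minimum weight (optimal).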