In this post, we will understand the differences between the greedy algorithm and dynamic programming methods.

A greedy algorithm is an algorithmic paradigm that builds up a solution piece by piece. At each step, it chooses the option that gives the most obvious and immediate benefit.

- Greedy algorithms suit problems in which choosing locally optimal values leads to a globally optimal solution.
- There is no guarantee that a greedy algorithm will lead to an optimal solution.
- An optimal choice is made at every stage of the problem, i.e., the locally optimal solution.
- It is efficient in terms of memory usage since there is no question of having to go back or change previous solutions/values.
- In general, they are quick in comparison to dynamic programming techniques.
- Example: Dijkstra's shortest path algorithm, which takes O(E log V + V log V) time with a binary heap.
- The solution in a greedy algorithm is computed in a forward manner; previously computed values/solutions are never revisited or changed.
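The points above can be illustrated with a short sketch. The coin-change problem below is not from the original text but is a standard example: the greedy rule "always take the largest coin that fits" moves forward without revisiting earlier choices, and it is optimal for some coin systems but not for others, matching the caveat that greedy gives no guarantee of optimality.

```python
def greedy_coin_change(coins, amount):
    """Pick coins greedily: always take the largest coin that still fits."""
    result = []
    for coin in sorted(coins, reverse=True):  # consider largest coins first
        while amount >= coin:                 # forward-only choices, never undone
            result.append(coin)
            amount -= coin
    return result if amount == 0 else None    # None if greedy gets stuck

# Optimal here: 63 = 25 + 25 + 10 + 1 + 1 + 1
print(greedy_coin_change([25, 10, 5, 1], 63))

# Suboptimal for a non-canonical coin system: greedy picks [4, 1, 1]
# (3 coins) even though [3, 3] (2 coins) is better.
print(greedy_coin_change([4, 3, 1], 6))
```

The second call shows why greediness is a heuristic: once the algorithm commits to the coin 4, it never reconsiders that choice.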

Dynamic programming is an optimization technique that stores the results of sub-problems so that they don't need to be re-computed when needed again in the future; they can simply be looked up from the pre-computed set. This can reduce the time complexity from exponential to polynomial.

- For example, a plain recursive solution can be turned into a dynamic programming solution by caching the results of its recursive calls.
- In this, the decision made at every step considers both the current problem at hand and the solutions to previously solved sub-problems, which are combined to compute the optimal value/solution.
- It is guaranteed that a dynamic programming problem's solution would be an optimal one.
- Here, the chosen optimal solution is a globally optimal one, built from previously calculated and stored state values.
- A dynamic programming table is required for memoization, which increases the memory complexity.
- It is comparatively slower.
- Example: Bellman Ford algorithm that takes O(VE) time.
- Dynamic programming determines the solution using either a bottom-up or a top-down approach, building up from smaller sub-problems that have optimal solutions.
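The two approaches mentioned above can be sketched with the classic Fibonacci example (an illustration chosen here, not one named in the original text): top-down memoization caches recursive results, while bottom-up tabulation fills a table from the smallest sub-problems upward. Both trade extra memory for O(n) time instead of the exponential time of naive recursion.

```python
from functools import lru_cache

# Top-down: memoize the recursive solution so each sub-problem is solved once.
@lru_cache(maxsize=None)
def fib_memo(n):
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)

# Bottom-up: fill a table from the smallest sub-problems upward.
def fib_table(n):
    if n < 2:
        return n
    table = [0, 1]
    for i in range(2, n + 1):
        table.append(table[i - 1] + table[i - 2])
    return table[n]

print(fib_memo(40), fib_table(40))  # both print 102334155
```

Either way, the table/cache is the extra memory cost noted above, and it is what guarantees each sub-problem is computed only once.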
