Literature Review: Greedy Algorithm vs. Dynamic Programming Analysis

Added on 2023/06/07
LITERATURE REVIEW:
GREEDY ALGORITHM VERSUS DYNAMIC PROGRAMMING
ADVANCED ALGORITHM ANALYSIS ()
Name:
Course:
Date of submission:
Abstract
Recently, the availability and use of computers and computerized machines has increased rapidly, an indication that information technology has advanced considerably. Most of these computers are used to make work more efficient, delivering timely and accurate results and thereby increasing productivity. This growing availability has also produced more developers, particularly software developers, since that field requires little capital to enter. Many people now build useful software and algorithms to perform specific tasks, and such an algorithm must be efficient if it is to be marketable. As the data collected in sectors that use computers keeps growing, that data must eventually be analyzed to support decision making, and this analysis raises the demand for more, and more accurate, algorithms.
A good algorithm should not only do one thing well but also maximize benefit at low cost. Various algorithmic techniques, such as greedy algorithms, divide and conquer, sorting and searching, and dynamic programming, are used to solve problems, and each technique has an outline of steps that it follows. Some algorithms consume a great deal of time and space in solving a problem, while others are simple and easy to implement. In this review, we expound the concepts, basic elements, properties, core ideas, solving steps, and difficulties of the different algorithmic techniques, their positive and negative impacts, and the different scenarios in which they are applied to solve different problems.
Table of Contents
Abstract
Introduction
Greedy algorithm
    Applications of greedy algorithm
Dynamic programming
    Optimal substructure
        Example of an optimal substructure
    Overlapping subproblems
    Applications of dynamic programming
Comparisons between Greedy Algorithm and Dynamic programming
Conclusion
Introduction
Algorithms often differ greatly from one another even when their objectives are the same. Because different problems exist and each can be solved in multiple ways, there is always a need to choose the algorithm that is most efficient for a particular problem: a single computational problem can be solved by several algorithms, but efficiency still matters. Several algorithm-design techniques are now applied in different situations; some of them break a large problem into pieces and solve the pieces recursively. Examples are dynamic programming, greedy algorithms, and divide and conquer (Buffa, 2012). These techniques help solve computational problems efficiently and help produce effective algorithms that run in the shortest time. Here, we look at two of these techniques and compare them in terms of their functionality:
Greedy algorithm
Dynamic programming technique
Greedy algorithm
A greedy algorithm is a technique mostly used to find solutions to optimization problems. An optimization problem is one in which a set of input values is given and some objective must be either maximized or minimized under certain constraints (Feldman, 2011).
For example, the knapsack problem, the minimum-spanning-tree problem, and the shortest-path problem are all problems to which the greedy technique may be applied, and for which it can yield optimal solutions. A greedy algorithm does not always give an optimal solution, but for many problems it does.
Greedy algorithms are mostly used on problems with two properties: optimal substructure, meaning that the optimal solution to the problem contains optimal solutions to its subproblems, and the greedy-choice property, whose correctness is often very difficult to prove. If either of these properties does not hold, the greedy algorithm will not work.
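As an illustration of why the greedy-choice property matters, consider making change with coins. The sketch below is a minimal Python illustration with made-up denominations: the same greedy strategy is optimal for one coin system but not for another.

```python
# Illustrative sketch: greedy coin change works for some coin systems but
# not others, which is why the greedy-choice property must be verified
# before the technique is trusted. Denominations are made-up examples.

def greedy_change(amount, coins):
    """Repeatedly take the largest coin that still fits into the amount."""
    used = []
    for coin in sorted(coins, reverse=True):
        while amount >= coin:
            amount -= coin
            used.append(coin)
    return used

# With coins {1, 2, 5}, greedy pays 6 as 5+1, which is optimal.
print(greedy_change(6, [1, 2, 5]))  # [5, 1]

# With coins {1, 3, 4}, greedy pays 6 as 4+1+1 (three coins),
# but the optimal answer is 3+3 (two coins).
print(greedy_change(6, [1, 3, 4]))  # [4, 1, 1]
```

The second coin system lacks the greedy-choice property: the locally best coin (4) does not lead to a globally optimal solution.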
From the set of feasible solutions, a particular solution that satisfies, or comes closest to satisfying, the objective function (which may be a maximum or a minimum) is called an optimal solution.
A greedy algorithm is well known for making the choice that looks best at the moment (Langford, 2008). That is, it makes a locally optimal choice in the hope that this choice will lead to a globally optimal solution. Sometimes, however, the resulting solution is less than optimal. Once a choice is made, it is never reconsidered.
A greedy algorithm usually starts by arranging the input in a specified order. The algorithm then builds a solution to the problem step by step. At each step we have a partial solution; these partial solutions are not generally solutions to subproblems of the original problem, although in some cases they may be. At each step we make some choice, usually to include or exclude some particular element from the solution, and once the choice is made there is no turning back. It is usually easy to see that the algorithm eventually terminates with some answer to the problem, and it is likewise usually easy to reason about its running time; when the running time is hard to analyze, the difficulty tends to come from the data structures used rather than from the greedy nature of the algorithm itself. The key question is whether the algorithm finds an optimal solution, that is, a solution that minimizes or maximizes whatever is supposed to be minimized or maximized (Liu, 2012). We say a greedy algorithm is optimal if it is guaranteed to find an optimal solution for every input. However, most greedy algorithms are not optimal.
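The step-by-step structure described above, first arranging the input and then committing to one choice at a time, can be sketched with the classic activity-selection problem. The Python code below is a minimal illustration; the (start, finish) pairs are made-up sample data.

```python
# Minimal sketch of a greedy algorithm in the form described above:
# sort the input (activities by finish time), then at each step commit
# to the first activity that does not overlap the last one chosen.

def select_activities(activities):
    """Return a maximum-size set of mutually non-overlapping activities.

    activities: list of (start, finish) pairs.
    """
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:       # greedy choice: earliest finish first
            chosen.append((start, finish))
            last_finish = finish       # the choice is never revisited
    return chosen

activities = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(activities))  # [(1, 4), (5, 7), (8, 11)]
```

For this particular problem the earliest-finish-time greedy choice is known to be optimal, which makes it a standard example of a greedy algorithm that does satisfy the greedy-choice property.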
Some of the greedy technique's positive impacts are:
In many cases a greedy algorithm is the first technique considered for a problem because of its characteristic of being direct and to the point.
It is simple, and easy to implement efficiently, compared with the dynamic programming technique.
The running time of a greedy algorithm is easier to analyze than that of other techniques.
But it also has hindrances, such as:
Correctness is difficult to establish. It is generally hard to prove why a greedy algorithm is correct, even when it is, so proving a greedy algorithm correct can take more time and commitment than with other techniques.
A greedy algorithm cannot rely on any future choices or on the solutions to subproblems. It usually progresses in a top-down fashion, making one greedy choice after another and repeatedly reducing each given problem instance to a smaller one; in top-down fashion, you begin assembling the large solution immediately, explaining how it is built from smaller solutions (Pan, 2008).
To decide what is optimal, you must assume an objective function that is to be optimized, either maximized or minimized, at a given point. The greedy algorithm therefore has to make greedy choices at each step to ensure that the objective function is optimized. The greedy algorithm computes the optimal solution only once, so it never goes back and reverses a decision (Wang, 2010).
In any case, for the greedy solution to be optimal, the problem must also exhibit the greedy-choice property; that is, as explained earlier, a globally optimal solution can be reached by making locally optimal choices. A greedy algorithm has no fixed algorithmic framework; the key to designing one is the choice and formulation of the greedy strategy.
Applications of greedy algorithm
1. Scheduling
Activity selection: problems arise in activity selection where the main objective is to choose the maximum number of activities that do not interfere with one another.
Scheduling of unit-time tasks with deadlines on a single processor.
2. Graph algorithms
Minimum spanning trees, using either Prim's algorithm or Kruskal's algorithm (Schuetz, 2008).
Dijkstra's algorithm, to find the shortest path between vertices.
Others
Solving the fractional knapsack problem.
The traveling salesman problem.
Set covering.
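As a sketch of the fractional-knapsack entry above: the greedy strategy takes items in decreasing value-per-weight order and splits the last item if it does not fit whole. The Python code below is a minimal illustration with made-up item data.

```python
# Minimal sketch of the fractional-knapsack greedy strategy: take items
# in decreasing value-per-weight order; a fraction of an item is allowed,
# which is what makes this greedy choice optimal. Items are sample data.

def fractional_knapsack(items, capacity):
    """items: list of (value, weight) pairs. Returns the maximum total value."""
    total = 0.0
    # Greedy choice: best value-to-weight ratio first.
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity >= weight:
            total += value            # take the whole item
            capacity -= weight
        else:
            total += value * capacity / weight   # take only a fraction
            break
    return total

print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```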
Dynamic programming
Richard Bellman coined the term dynamic programming in 1957, and he emphasized that dynamic programming tackles problems by combining the solutions to subproblems that themselves contain common sub-subproblems (Liu, 2014).
Dynamic programming is also a useful mathematical technique for making a sequence of interrelated decisions: it provides a systematic method for determining the optimal combination of decisions. It is a general approach, so there is no standard mathematical formulation of "the" dynamic programming problem. The best way to learn the procedures for solving dynamic programming problems is through exposure to a wide assortment of dynamic programming applications and an examination of the characteristics common to all of these situations (Sakoe, 2013).
In dynamic programming there are two properties that should be taken into consideration, as they help in obtaining the correct solution. These are:
1. Optimal substructure
2. Overlapping subproblems
Optimal substructure
Dynamic programming is also described as a technique for problems with optimal substructure: an optimal solution to a problem contains optimal solutions to its subproblems. This does not necessarily imply that every optimal solution to a subproblem will contribute to the main solution. In dynamic programming, we solve many subproblems and store the results: not all of them will contribute to solving the larger problem (Rahwan, 2008). With greedy algorithms, by contrast, we can simply pick the right subproblem by a greedy choice. Because of optimal substructure, we can be sure that at least some of the subproblems will be useful. Dynamic programming can solve, with an optimal solution, all the problems that can be solved by a greedy algorithm.
Example of an optimal substructure
If a point W lies on the shortest path from a source point S to a destination point D, then the shortest path from S to D is the combination of the shortest path from S to W and the shortest path from W to D.
In dynamic programming we make a choice at each step, but the choice may depend on the solutions to subproblems, so the steps are interrelated. Dynamic programming solves the subproblems first and then uses their solutions to build solutions to larger problems; those subproblem solutions may be stored for future use. Dynamic programming starts with a small piece of the original problem and finds the optimal solution to this smaller problem. It then gradually enlarges the problem, deriving the current optimal solution from the previous one, until the original problem is solved completely.
Overlapping subproblems
Dynamic programming addresses the issue of overlapping subproblems: situations where the same subproblem must be solved more than once. Solving a subproblem repeatedly wastes a great deal of time, so memoization is used to deal with these overlapping subproblems by storing the results in a table; subsequent computations simply do a table lookup to check whether the same instance has already been solved.
Dynamic programming does not require the greedy-choice property to find the optimal solution: it must look at several subproblems and dynamically pick one of them to arrive at a global solution, and the result is normally assembled bottom-up (Grossi, 2013). The solutions of the subproblems are typically reused in dynamic programming; it is typical to reduce L(j) to L(j−1), and this is one reason why plain recursion sometimes does not work well for dynamic programming.
So, the ability of dynamic programming to store subproblem solutions is called memoization, which is better than recomputing the subproblem solutions again. This saves resources such as time, although more memory is required in dynamic programming to store those results. When developing a memoized solution for a problem, start with a backtracking solution that finds the correct answer; the backtracking solution enumerates all the valid answers for the problem and chooses the best one.
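The table-based memoization described above can be sketched in a few lines of Python. Fibonacci numbers are used here purely to illustrate the pattern of storing each subproblem's result in a table and looking it up before recomputing.

```python
# Minimal sketch of memoization: each subproblem's answer is stored in a
# table (a dict), so overlapping subproblems are solved only once instead
# of exponentially many times by the naive recursion.

def fib(n, memo=None):
    """n-th Fibonacci number via memoized recursion."""
    if memo is None:
        memo = {}
    if n in memo:                      # table lookup: already solved?
        return memo[n]
    result = n if n < 2 else fib(n - 1, memo) + fib(n - 2, memo)
    memo[n] = result                   # store the subproblem's answer
    return result

print(fib(40))  # 102334155, computed with only ~40 distinct subproblems
```

Without the table, fib(40) would recompute the same subproblems hundreds of millions of times; with it, each value from 0 to 40 is computed exactly once, at the cost of the extra memory for the table, exactly the time-for-memory trade-off described above.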
Dynamic programming algorithms are preferred for solving optimization problems because they examine the previously solved subproblems and combine their solutions to give the best solution for the given problem, which makes the analysis very efficient.
Most problems that can be solved by a dynamic programming algorithm satisfy a necessary condition: an optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with respect to the state resulting from the first decision. This is also referred to as the principle of optimality. In this way, the optimal next step depends only on the present state and not on how you arrived at it.
Dynamic programming problems are categorized in different ways. In optimization problems you are expected to select a feasible solution so that the value of the required objective is minimized or maximized. In combinatorial problems you are expected to figure out the number of ways in which something can be done, or the probability of some event happening (Kim, 2013).
Dynamic programming problem solving follows this outline:
1. After being presented with a problem, show that it can be broken into smaller problems called subproblems.
2. Most of these subproblems are recursive, so the value of each solution is defined recursively in terms of the optimal solutions to its subproblems.
3. The results, that is, the solutions of the subproblems, are stored after the solutions are computed in a bottom-up fashion.
4. From the computed solutions of the subproblems, an optimal solution to the original problem is constructed.
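The four steps above can be sketched with the longest-increasing-subsequence problem, a standard setting in which a table entry L[j] is built bottom-up from smaller entries (the choice of problem here is illustrative, not taken from the source):

```python
# Bottom-up dynamic programming following the outline above:
# subproblem: L[j] = length of the longest increasing subsequence of seq
# that ends at position j; recurrence: L[j] = 1 + max L[i] over i < j
# with seq[i] < seq[j]; results stored in a table; answer assembled last.

def lis_length(seq):
    if not seq:
        return 0
    L = [1] * len(seq)                 # step 3: table of subproblem results
    for j in range(len(seq)):          # step 2: solve the recurrence bottom-up
        for i in range(j):
            if seq[i] < seq[j]:
                L[j] = max(L[j], L[i] + 1)
    return max(L)                      # step 4: construct the final answer

print(lis_length([5, 2, 8, 6, 3, 6, 9, 7]))  # 4  (e.g. the subsequence 2, 3, 6, 9)
```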
Dynamic programming is much like the divide-and-conquer approach; the main difference is the memory required to store the results. With divide and conquer, overlapping subproblem results have to be computed again and again.
There are three essential attributes of dynamic programming problems:
1. Stages: when solving a problem with the dynamic programming technique, the problem is structured as a sequence of smaller optimization problems called stages, which are solved one at a time.
2. States: at each stage of the problem, the states of the process capture the information required to evaluate the effect that the current decision has upon future actions. Two features should guide the selection of states:
The states ought to contain enough information to make future decisions without looking into how the process reached the present state.
The number of state variables ought to be small, since the computational effort associated with the dynamic programming method becomes extremely costly when more than two state variables are involved in the model formulation.
3. Recursion: a solution to the problem is obtained by first solving a one-stage problem and then successively adding one stage at a time, solving the resulting one-stage problems, until the optimal solution is reached.
Under the dynamic programming approach you can also distinguish other types of problems:
Deterministic problems: problems where the state at the next stage is completely determined by the state and policy decision at the current stage.
Probabilistic problems: problems where there is a probability distribution for what the next state will be.
The two types differ from each other. Probabilistic dynamic programming differs from deterministic dynamic programming in that the state at the next stage is not completely determined by the state and policy decision at the current stage (Simao, 2009); rather, there is a probability distribution for what the next state will be. That probability distribution is still completely determined by the state and policy decision at the current stage.
Moreover, if the decision involves many stages and earlier decisions influence the subsequent decision making, dynamic programming is also worth considering.
Some advantages of dynamic programming are:
It is an extremely valuable procedure for settling a sequence of interrelated decisions. It requires formulating an appropriate recursive relationship for each individual problem, but in return it provides great computational savings over exhaustive enumeration when finding the best combination of decisions, especially for large problems (Wang, 2011).
Dynamic programming is more reliable than a greedy algorithm because of its ability to look back at the previously computed results of the subproblems in order to yield an optimal solution.
Disadvantages of dynamic programming are:
For a problem to be solved with the dynamic programming technique, the problem must first be analyzed; then the subproblems must be divided properly and correctly, the states defined, and the order in which those states follow one another determined. All of these steps are difficult, so dynamic programming tends to be harder to understand completely at a proficient level. As a result, many people opt for easier and more convenient algorithms such as the greedy algorithm.
Although the dynamic programming technique can be widely applied to optimization problems, its efficiency pays off only when the problem contains a large number of repeated subproblems. Moreover, dynamic programming is highly flexible, and because of this there is no uniform standard model of the dynamic programming algorithm. In addition, whether dynamic programming can be used to solve a problem at all depends on the nature of the problem: if the properties of optimal substructure and overlapping subproblems are not satisfied, the dynamic programming algorithm is not applicable.
Applications of dynamic programming
In most cases, areas where a greedy algorithm is applied are also areas where dynamic programming can be applied, solving the same problems in a more precise and accurate way and thus giving more reliable results or solutions, as in the knapsack problem.
A greedy algorithm is unable to solve the 0-1 knapsack problem because it cannot find an optimal solution to such a problem, so dynamic programming is applied to solve 0-1 problems.
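A minimal Python sketch of the 0-1 knapsack dynamic program mentioned above, using an illustrative item list on which a value-per-weight greedy strategy would be suboptimal:

```python
# 0-1 knapsack solved bottom-up: best[c] holds the best value achievable
# with capacity c using the items processed so far. Each item may be used
# at most once, which is why capacities are iterated downward.

def knapsack_01(items, capacity):
    """items: list of (value, weight) pairs; returns the maximum total value."""
    best = [0] * (capacity + 1)        # table of subproblem results
    for value, weight in items:
        # Downward iteration ensures each item is counted at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]

items = [(60, 10), (100, 20), (120, 30)]
print(knapsack_01(items, 50))  # 220: take the 20- and 30-weight items
```

On this data a greedy strategy by value-per-weight ratio would take the weight-10 and weight-20 items for a total value of 160 and then have no room for the weight-30 item, while the dynamic program finds the true optimum of 220, illustrating why 0-1 knapsack needs dynamic programming.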
Comparisons between Greedy Algorithm and Dynamic programming
For the greedy algorithm's solution to be optimal, the problem must also have the greedy-choice property; that is, a globally optimal solution can be reached by making locally optimal choices.