Greedy Algorithm vs Dynamic Programming: Advanced Algorithm Analysis
Added on 2023/06/07
LITERATURE REVIEW: GREEDY ALGORITHM VERSUS DYNAMIC PROGRAMMING
ADVANCED ALGORITHM ANALYSIS
Name:
Course:
Date of submission:
Abstract

The availability and use of computers has grown rapidly in recent years, a clear sign of how far information technology has advanced. Most of these computers are used to make work efficient: they deliver timely, accurate results and thereby raise productivity. This growing availability has drawn in more developers, particularly software developers, since the field requires little capital to enter. Many people now build useful software and algorithms to perform particular tasks, and an algorithm must be efficient to be marketable. As the sectors that use computers collect ever more data, that data must eventually be analyzed to support decision making, and such analysis raises the demand for more, and more accurate, algorithms. A good algorithm should not merely excel at one thing but should maximize benefit at minimal cost. Several algorithmic techniques exist: greedy algorithms, divide and conquer, sorting and searching, and dynamic programming are all used to solve problems, and each follows its own outline of steps. Some techniques consume a great deal of time and space on a given problem, while others are simple and easy to implement. In this review, we expound the concept, basic elements, properties, core ideas, solving steps, and difficulties of the different techniques, their positive and negative aspects, and the different scenarios in which they are applied to solve different problems.
Table of Contents

Abstract
Introduction
Greedy algorithm
    Applications of greedy algorithm
Dynamic programming
    Optimal substructure
        Example of an optimal substructure
    Overlapping subproblems
    Applications of dynamic programming
Comparisons between Greedy Algorithm and Dynamic programming
Conclusion
Introduction

Algorithms are often quite different from one another, even when their objectives are the same. Because many different problems exist and each can be solved in multiple ways, there is always a need to choose the more efficient algorithm for a particular problem: one computational problem can be solved by several algorithms, but efficiency still matters. Various algorithmic techniques and problem-solving skills are now applied in many situations; some of these techniques break a large problem into pieces and solve them recursively. Examples are dynamic programming, the greedy algorithm, and divide and conquer (Buffa, 2012). They assist in solving computational problems efficiently and help produce more effective algorithms that run in the shortest time. Here we examine two techniques and compare them in terms of their functionality: the greedy algorithm and the dynamic programming technique.

Greedy algorithm

A greedy algorithm is a technique mostly used to find solutions to optimization problems. An optimization problem is one in which, given a set of input values, some quantity must be maximized or minimized under certain constraints or circumstances (Feldman, 2011). The knapsack problem, the minimum-spanning-tree problem, and the shortest-path problem are examples of problems to which the greedy technique may be applied, helping to give optimal solutions. It does not always give an optimal solution, but for many problems it does. The greedy algorithm is mostly used on problems that possess optimal substructure, where the optimal result for the problem contains optimal answers for its subproblems, and the greedy-choice property, whose correctness is often very difficult to prove. If these two properties do not hold, the greedy algorithm will not work.
From the set of feasible solutions, the particular result that fulfills (or comes closest to fulfilling) the objective function, whether that objective is a maximum or a minimum, is called the optimal solution. The greedy algorithm is well known for making the selection that looks best at the moment (Langford, 2008): it makes a locally optimal selection in the hope that this selection will lead to a globally optimal solution. Sometimes, however, it finds a solution that is less than optimal because of external factors. Once a selection is rejected, it is never considered again. A greedy algorithm always starts by arranging the input information in some specified order. The algorithm then builds an answer to the problem step by step. At each step we have a partial solution; these partial solutions are not generally solutions to subproblems of the original problem (though in some cases they may be). At each step we make some decision, usually whether to include or exclude a particular element from our answer, and once made it is final. It is usually not difficult to see that the algorithm eventually terminates with some answer to the problem. It is likewise usually not difficult to reason about the running time; when the running time is hard to analyze, the difficulty tends to come from the data structures used rather than from anything involving the greedy nature of the algorithm. The key question is whether the algorithm finds an optimal solution, that is, one that minimizes or maximizes whatever should be minimized or maximized (Liu, 2012). We say a greedy algorithm is optimal if it is guaranteed to find an optimal solution for every input. However, most greedy algorithms are not optimal.
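The step-by-step greedy construction described above can be sketched with the classic activity-selection (interval scheduling) problem: sort the activities by finish time and always take the earliest-finishing activity compatible with what has already been chosen. A minimal sketch (the function name and sample data are ours):

```python
def select_activities(intervals):
    """Greedy activity selection: choose a maximum-size set of
    mutually compatible (non-overlapping) intervals by always
    taking the one that finishes earliest."""
    chosen = []
    last_finish = float("-inf")
    # The greedy ordering: earliest finish time first.
    for start, finish in sorted(intervals, key=lambda iv: iv[1]):
        if start >= last_finish:      # compatible with everything kept so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Eleven overlapping activities; the greedy sweep keeps four of them.
activities = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
              (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
print(select_activities(activities))  # [(1, 4), (5, 7), (8, 11), (12, 16)]
```

Note that each acceptance is final: once an activity is kept, the algorithm never reconsiders it, which is exactly the "no turning back" behavior described above.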
Some of the greedy algorithm's positive aspects are:
- In many cases it is the first technique considered for a problem because of its characteristic of being direct to the point.
- It is simple and easy to implement efficiently compared with the dynamic programming technique.
- Its running-time analysis is easier than that of other techniques.

But it also has hindrances, such as:
- Its correctness issues are difficult to understand. Even when the algorithm is correct, it is generally hard to give a proof of why, so proving a greedy algorithm correct can be time-consuming compared with other techniques.
- A greedy algorithm cannot rely on any future decisions or on the answers to subproblems.

A greedy algorithm usually proceeds in top-down fashion, making one greedy decision after another and repeatedly reducing each given problem to a smaller one; in top-down fashion, you begin assembling the large solution right away by explaining how you build it from smaller solutions (Pan, 2008). To decide what is optimal, you must assume an objective function to be optimized, either maximized or minimized, at a given point. The greedy algorithm therefore makes a greedy decision at each step to ensure that the objective function is optimized. It computes the solution only once, so it never goes back and reverses a decision (Wang, 2010). In any case, for the greedy answer to be optimal, the problem must also exhibit the greedy-choice property: as explained earlier, a globally optimal solution can be reached by making locally optimal decisions. The greedy algorithm has no fixed framework; the key to designing one is the choice and justification of the greedy strategy.

Applications of greedy algorithm

1. Scheduling
   - Activity selection: problems arise in which the main objective is to choose the maximum number of activities that do not interfere with one another.
   - Scheduling of unit-time tasks with deadlines on a single processor.
2. Graph algorithms
   - Minimum spanning trees, using either Prim's algorithm or Kruskal's algorithm (Schuetz, 2008).
   - Dijkstra's algorithm, to find the shortest path between vertices.
3. Others
   - The fractional knapsack problem.
   - The travelling salesman problem.
   - Set covering.

Dynamic programming

Richard Bellman coined the term dynamic programming in 1957, and he highlighted that dynamic programming tackles problems by combining the answers to subproblems that contain common sub-subproblems (Liu, 2014). Dynamic programming is also a helpful mathematical method for making a sequence of interrelated decisions: it gives a systematic way of determining the optimal combination of choices. It is a general approach, so there is no standard mathematical formulation of "the" dynamic programming problem. The best way to learn the procedures for solving dynamic programming problems is through exposure to a wide assortment of dynamic programming applications and an investigation of the characteristics common to all of these situations (Sakoe, 2013). In dynamic programming there are two properties that must be taken into consideration to obtain a correct solution:

1. Optimal substructure
2. Overlapping subproblems

Optimal Substructure

Dynamic programming is also referred to as a technique for problems with optimal substructure: an optimal answer to the problem contains optimal answers to its subproblems. This does not necessarily imply that every optimal answer to a subproblem will contribute to the main solution. In dynamic programming, we solve many subproblems and store the results, and not all of them will contribute to solving the larger problem (Rahwan, 2008). In greedy algorithms, by contrast, we simply pick the right subproblem through a greedy choice. Because of optimal substructure, we can be sure that at least some of the subproblems will be useful. Dynamic programming can solve all the problems that the greedy algorithm can solve with an optimal solution.
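As a concrete contrast with the 0-1 case discussed later, the fractional knapsack problem listed above among the greedy applications yields to a pure greedy rule: take items in decreasing value-per-weight order, splitting the last item if necessary. A minimal sketch (the function name and item data are hypothetical):

```python
def fractional_knapsack(items, capacity):
    """Greedy fractional knapsack: items are (value, weight) pairs.
    Take items in decreasing value/weight (density) order, taking a
    fraction of the last one if needed; returns the maximum value."""
    total = 0.0
    # Greedy choice: best value density first.
    for value, weight in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if capacity <= 0:
            break
        take = min(weight, capacity)        # whole item, or just the remainder
        total += value * (take / weight)    # fractional credit for a partial item
        capacity -= take
    return total

# A standard instance: values (60, 100, 120), weights (10, 20, 30), capacity 50.
print(fractional_knapsack([(60, 10), (100, 20), (120, 30)], 50))  # 240.0
```

Because items may be split, the density-first greedy choice is provably optimal here, unlike in the 0-1 variant where items are indivisible.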
Example of an optimal substructure

If a point W lies on the shortest path from a source point S to a destination point D, then the shortest path from S to D is the combination of the shortest path from S to W and the shortest path from W to D. In dynamic programming we make a choice at each step, but the choice may depend on the solutions to subproblems, so they interrelate. Dynamic programming solves subproblems first and then uses their solutions to come up with solutions to larger problems, and those subproblem results may be stored for future use. Dynamic programming starts with a small piece of the original problem and finds the optimal answer for this smaller instance. It then gradually enlarges the problem, finding the current optimal solution from the previous one, until the original problem is solved completely.

Overlapping subproblems

Dynamic programming deals with the issue of overlapping subproblems: situations in which the same subproblem must be solved more than once, so that solving it repeatedly wastes a great deal of time. Consequently, memoization is used to manage these overlapping subproblems by storing the results in a table, so that subsequent computations simply look up the table to check whether the same instance has already been solved. Dynamic programming does not require the greedy-choice property to find the optimal solution: it must look at several subproblems and dynamically choose among them to arrive at a global solution, and the result is normally assembled bottom-up (Grossi, 2013). The solutions to subproblems are typically reused in dynamic programming; it is typical to reduce L(j) to L(j-1), which is one reason plain recursion sometimes does not work well for dynamic programming. This functionality of dynamic programming, storing subproblem solutions, is called memoization, and it is better than recomputing the subproblem solutions again.
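The memoization idea described above can be shown with the textbook Fibonacci example: the naive recursion solves the same overlapping subproblems exponentially often, while a cache makes each subproblem cost constant after its first computation. A minimal sketch using Python's standard cache decorator:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    """Fibonacci with memoization: each overlapping subproblem
    fib(k) is computed once, then read back from the cache."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Without the cache this call would take on the order of 2**n recursive steps;
# with it, only about n distinct subproblems are ever computed.
print(fib(40))  # 102334155
```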
This saves resources such as time, though more memory is required in dynamic programming to store those results. When developing a memoized solution for a problem, start with a backtracking solution that finds the correct answer: the backtracking solution enumerates all the valid answers for the problem and chooses the best one.
Dynamic programming algorithms are preferred for solving optimization problems because they analyze the previously tackled subproblems and combine their answers to give the best solution to the given problem, which makes them very efficient in analysis. Most problems that can be solved by a dynamic programming algorithm satisfy a necessary condition known as the principle of optimality: an optimal policy has the property that, whatever the initial state and initial decision are, the remaining decisions must constitute an optimal policy with respect to the state resulting from the first decision. In this way, the optimal next step depends only on the present state and not on how you arrived at it. Dynamic programming problems are categorized differently. In optimization problems you are expected to select a feasible solution so that the value of the required objective is minimized or maximized; in combinatorial problems you are expected to figure out the number of ways in which something can be done, or the probability of some event happening (Kim, 2013). Dynamic programming problem solving follows this outline:

1. After being presented with a problem, show that it can be broken into smaller problems, called subproblems.
2. Most of these subproblems are recursive, so the values of their solutions are defined recursively in terms of the optimal answers to smaller subproblems.
3. The results, that is, the solutions of the subproblems, are stored as they are computed, in a bottom-up approach.
4. From the computed solutions of the subproblems, an optimal solution is constructed.
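The four steps above can be sketched with the 0-1 knapsack problem solved bottom-up: each table entry holds the optimal value for a sub-capacity, and the final answer is assembled from those stored sub-solutions (the function name and item data are ours):

```python
def knapsack_01(items, capacity):
    """Bottom-up 0-1 knapsack: items are (value, weight) pairs.
    best[c] holds the optimal value achievable with capacity c
    using the items processed so far (steps 2-4 of the outline)."""
    best = [0] * (capacity + 1)          # step 3: table of stored subproblem results
    for value, weight in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]                # step 4: optimal answer read off the table

# Values (60, 100, 120), weights (10, 20, 30), capacity 50: the optimum is
# 220 (take the second and third items), which density-first greedy misses.
print(knapsack_01([(60, 10), (100, 20), (120, 30)], 50))  # 220
```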
Dynamic programming is much like the divide-and-conquer approach; the only difference is the memory required to store the results. With divide and conquer, you have to compute results again and again. There are three essential attributes of dynamic programming problems:

1. Stages. When a problem is solved using the dynamic programming technique, it is structured as a sequence of optimization subproblems called stages, which are solved one at a time.
2. States. At each stage of the problem you find the states of the process, which reflect the information required to evaluate the consequences that the current decision has on future actions. Features that should guide the selection of states are:
   - The states should contain enough information to make future decisions without regard to how the process reached the present state.
   - The number of state variables should be small, since the computational effort associated with the dynamic programming method becomes very costly when more than two state variables are involved in the model formulation.
3. Recursion. A solution to a problem is obtained by first solving a one-stage problem and then successively adding one stage at a time, solving one-stage problems until the optimal solution is reached.

Within the dynamic programming approach you can also find other types of problems:

- Deterministic problems, in which the state at the following stage is completely determined by the state and policy decision at the present stage.
- Probabilistic problems, in which there is a probability distribution for what the following state will be.

The two types differ from each other. For example, probabilistic dynamic programming differs from deterministic dynamic programming in that the state at the following stage is not completely determined by the state and policy decision at the present stage (Simao, 2009); rather, there is a probability distribution for what the following state will be, and this distribution is still completely determined by the present state and policy decision. Moreover, when a decision has numerous phases and earlier decisions influence subsequent decision making, dynamic programming is also worth considering.

Some advantages of dynamic programming are:

- It is an extremely valuable procedure for settling a sequence of interrelated decisions. It requires defining a fitting recursive relationship for each individual problem, but in return it gives great computational savings over using exhaustive enumeration to locate the best combination of decisions, particularly for large problems (Wang, 2011).
- Dynamic programming is more reliable than the greedy algorithm because of its ability to refer back to the previously computed results of the subproblems in order to yield an optimal solution.

Disadvantages of dynamic programming are:
- For a problem to be solved using the dynamic programming technique, the problem must first be assessed; then the subproblems must be divided properly and correctly, the states defined, and the order in which those states follow one another determined. All of these steps are difficult, so dynamic programming tends to be hard to understand completely at a proficient level. Thus, most people opt for easy and convenient algorithms such as the greedy algorithm.
- Although the dynamic programming algorithm can be widely applied to optimization problems, its efficiency pays off only when the problem contains a large number of repeated subproblems. Moreover, the dynamic programming method is highly flexible, which is why there is no uniform standard model of the dynamic programming algorithm, and whether to use dynamic programming to solve a problem depends on the nature of the problem: if the optimal-substructure and overlapping-subproblem properties are not satisfied, the dynamic programming algorithm is not applicable.

Applications of dynamic programming

In most cases you find that in areas where the greedy algorithm is applied, dynamic programming can also be applied, solving the same problems in a more precise and accurate way and thus giving more reliable results, as in the knapsack problem. The greedy algorithm is unable to solve the 0-1 knapsack problem because it cannot find an optimal solution to it; that is where dynamic programming is applied in solving 0-1 problems.

Comparisons between Greedy Algorithm and Dynamic programming

- For the greedy algorithm's solution to be optimal, the problem must have the greedy-choice property: a globally optimal solution can be reached by making locally optimal greedy choices. By contrast, the dynamic programming strategy is useful for problems that have optimal substructure as well as overlapping subproblems, common subproblems that are solved repeatedly. This implies that a particular subproblem can be reached in various ways; the dynamic programming algorithm computes the value of each subproblem once and can then reuse it each time the algorithm returns to it, as stated in Introduction to Algorithms (Sundstrom, 2009).
- In dynamic programming, computations on subproblems are done just once and stored in memory (memoization) for future retrieval, which saves time; in the greedy technique there is no memory for those subproblem results, so the calls must be made again, wasting time.
- The dynamic programming technique uses a bottom-up approach, assembling a global solution from solutions dynamically chosen among several subproblems. In contrast, the greedy technique uses a top-down approach, building the big solution right away by explaining how it is built from smaller subproblem solutions.
- Dynamic programming is exceptionally thorough and you are guaranteed to find the solution. After each stage, dynamic programming makes decisions based on all the decisions made in the previous stage, and may reconsider previous subproblem solutions from earlier stages of the algorithm, whereas in the greedy algorithm there is no backtracking.
- Dynamic programming is considered more efficient in the analysis of algorithms because of its functionality of storing already-computed solutions of overlapping subproblems: it saves time when another common subproblem is encountered, whereas with the greedy algorithm a lot of time is wasted because it has to recompute again and again (Kamalanathsharma, 2013).
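The contrast drawn above can be made concrete with coin change for the denominations {1, 3, 4}, a standard counterexample: the largest-coin-first greedy rule returns three coins for amount 6 (4 + 1 + 1), while a bottom-up dynamic program finds the optimum of two (3 + 3). A minimal sketch (function names are ours):

```python
def greedy_coins(coins, amount):
    """Largest-coin-first greedy change-making:
    locally optimal at each step, not always globally optimal."""
    count = 0
    for coin in sorted(coins, reverse=True):
        count += amount // coin   # take as many of the biggest coin as fit
        amount %= coin
    return count

def dp_coins(coins, amount):
    """Bottom-up DP change-making: best[a] is the fewest coins
    summing to a, built from stored smaller subproblems."""
    best = [0] + [float("inf")] * amount
    for a in range(1, amount + 1):
        best[a] = min(best[a - c] + 1 for c in coins if c <= a)
    return best[amount]

print(greedy_coins([1, 3, 4], 6))  # 3 coins: 4 + 1 + 1
print(dp_coins([1, 3, 4], 6))      # 2 coins: 3 + 3
```

The greedy rule never revisits its choice of the 4-coin, while the DP table considers every denomination at every sub-amount, which is exactly the difference in reliability described above.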
Basically, greedy algorithms are easier to implement, but you are not guaranteed to reach an optimal solution at the end, since they are based on locally optimal choices at each step that are merely hoped to yield a global optimum. Dynamic programming can be relatively harder to implement than greedy, because it must consider the stored solutions of all subproblems solved so far, but it is guaranteed to provide an optimal solution at the end.
We can make whatever choice seems best at the moment and then solve the subproblems that arise later. The choice made by a greedy algorithm may depend on the choices made so far, but not on future choices or on all the answers to the subproblems. It iteratively makes one greedy choice after another, reducing each given problem to a smaller one; in other words, a greedy algorithm never re-examines its choices. This is the principal difference from dynamic programming, which is exhaustive and is guaranteed to find the solution: after each stage, dynamic programming makes decisions based on all the decisions made in the previous stage, and may re-evaluate the previous stage's path to the solution.

Conclusion

These algorithmic techniques are essential and significant in real life, where time is unavoidable and very crucial in most activities. In the current era of big data, a large amount of data and information needs to be processed in a timely manner. To meet people's need to deal with large volumes of information and to solve various practical problems promptly, computer algorithms have developed rapidly; a large number of operations-research models have been applied to algorithmic techniques such as dynamic programming and the greedy algorithm, which has produced many effective algorithms for solving practical problems in real life. As discussed above, the greedy algorithm, as a kind of operations-research model for computational problems, has unique advantages in many optimization problems and is an effective method for solving problems quickly in the real world, but it is also not always reliable. How to make the greedy algorithm serve real life better is a major issue that still needs to be explored.
As we have seen in the analysis of algorithms, the dynamic programming algorithm is appropriate for optimization problems as well. If a problem with a large number of subproblems is solved only by the ordinary recursive method, the repeated computation of subproblems makes the program run slowly, whereas the dynamic programming algorithm can greatly improve efficiency, reducing the time complexity from the exponential level to the polynomial level. Although the dynamic programming algorithm is an efficient one, not all problems are suitable for it. Furthermore, because the dynamic programming algorithm involves a certain level of difficulty and skill, a simpler algorithm is sometimes more advantageous.
References

Buffa, A., Maday, Y., Patera, A.T., Prud'homme, C. and Turinici, G., 2012. A priori convergence of the greedy algorithm for the parametrized reduced basis method. ESAIM: Mathematical Modelling and Numerical Analysis, 46(3), pp.595-603.

Feldman, M., Naor, J. and Schwartz, R., 2011, October. A unified continuous greedy algorithm for submodular maximization. In Foundations of Computer Science (FOCS), 2011 IEEE 52nd Annual Symposium on (pp. 570-579). IEEE.

Grossi, E., Lops, M. and Venturino, L., 2013. A novel dynamic programming algorithm for track-before-detect in radar systems. IEEE Transactions on Signal Processing, 61(10), pp.2608-2619.

Kamalanathsharma, R.K. and Rakha, H.A., 2013, October. Multi-stage dynamic programming algorithm for eco-speed control at traffic signalized intersections. In Intelligent Transportation Systems (ITSC), 2013 16th International IEEE Conference on (pp. 2094-2099). IEEE.

Kim, Y.J., Ahn, S.J., Hwang, P.I., Pyo, G.C. and Moon, S.I., 2013. Coordinated control of a DG and voltage control devices using a dynamic programming algorithm. IEEE Transactions on Power Systems, 28(1), pp.42-51.

Langford, J. and Zhang, T., 2008. The epoch-greedy algorithm for multi-armed bandits with side information. In Advances in Neural Information Processing Systems (pp. 817-824).

Liu, D. and Wei, Q., 2014. Policy iteration adaptive dynamic programming algorithm for discrete-time nonlinear systems. IEEE Transactions on Neural Networks and Learning Systems, 25(3), pp.621-634.

Liu, E. and Temlyakov, V.N., 2012. The orthogonal super greedy algorithm and applications in compressed sensing. IEEE Transactions on Information Theory, 58(4), p.2040.

Pan, Q.K., Wang, L. and Zhao, B.H., 2008. An improved iterated greedy algorithm for the no-wait flow shop scheduling problem with makespan criterion. The International Journal of Advanced Manufacturing Technology, 38(7-8), pp.778-786.

Rahwan, T. and Jennings, N.R., 2008, May. An improved dynamic programming algorithm for coalition structure generation. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems, Volume 3 (pp. 1417-1420). International Foundation for Autonomous Agents and Multiagent Systems.

Sakoe, H. and Chiba, S., 2013. Dynamic programming algorithm optimization for spoken word recognition. IEEE Transactions on Acoustics, Speech, and Signal Processing, 26(1), pp.43-49.

Schuetz, P. and Caflisch, A., 2008. Efficient modularity optimization by multistep greedy algorithm and vertex mover refinement. Physical Review E, 77(4), p.046112.

Simao, H.P., Day, J., George, A.P., Gifford, T., Nienow, J. and Powell, W.B., 2009. An approximate dynamic programming algorithm for large-scale fleet management: A case application. Transportation Science, 43(2), pp.178-197.

Sundstrom, O. and Guzzella, L., 2009, July. A generic dynamic programming Matlab function. In Control Applications (CCA) & Intelligent Control (ISIC), 2009 IEEE (pp. 1625-1630). IEEE.

Wang, H. and Song, M., 2011. Ckmeans.1d.dp: optimal k-means clustering in one dimension by dynamic programming. The R Journal, 3(2), p.29.

Wang, Y., Cong, G., Song, G. and Xie, K., 2010, July. Community-based greedy algorithm for mining top-k influential nodes in mobile social networks. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1039-1048). ACM.