CS 301: Algorithm Analysis - Big O, Big Theta, and Big Omega

Homework Assignment
1. Suppose you have an algorithm that runs in time T(n) = (1/2)n^3 + 2n^2 lg(n) + n^(1/2) + 1700.
a. Determine and show appropriate expressions for the various notations for algorithms: big-O, big-Theta,
big-Omega, and ~.
big-O
T(n) = lg n + 2T(n^(1/2))
     = lg n + 2(lg(n^(1/2)) + 2T(n^(1/4)))
     = lg n + lg n + 2^2 T(n^(1/4))
     = lg n + lg n + ... + 2^k T(2)
T(n) = (1/2)n^3 + 2n^2 lg(n) + n^(1/2) + 1700
The dominant term is (1/2)n^3, so T(n) = O(n^3).
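As a sketch of the witness argument for this bound (the constants C = 5 and n_0 = 12 below are illustrative choices of mine, not taken from the original solution):

T(n) = \frac{1}{2}n^3 + 2n^2\lg n + n^{1/2} + 1700 \;\le\; \frac{1}{2}n^3 + 2n^3 + n^3 + n^3 = 4.5\,n^3 \;\le\; 5n^3 \quad \text{for all } n \ge 12,

using \lg n \le n, n^{1/2} \le n^3, and 1700 \le n^3 for n \ge 12; hence T(n) = O(n^3) with witnesses C = 5 and n_0 = 12.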
big-Theta
T(n) = (SUM_{i=0}^{(log_2 n) - 1} cn·2^i) + n^2·T(1)
     = cn(2^(log_2 n) - 1) + n^2
     = cn^2 - cn + n^2
     = (c+1)n^2 - cn
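This expansion corresponds to a recursion tree with n^2 leaves and level cost cn·2^i, i.e. a recurrence of the form T(n) = 4T(n/2) + cn with T(1) = 1; that recurrence is my own inference, since the original does not state it. A minimal sketch (the helper names T and closed_form are mine) checking the closed form (c+1)n^2 - cn, which is Θ(n^2), against that assumed recurrence:

# Sketch: compare the closed form (c + 1)*n^2 - c*n with the assumed
# recurrence T(n) = 4*T(n/2) + c*n, T(1) = 1, for powers of two.
def T(n, c):
    if n == 1:
        return 1
    return 4 * T(n // 2, c) + c * n

def closed_form(n, c):
    return (c + 1) * n * n - c * n

for c in (1, 3):
    for k in range(8):
        n = 2 ** k
        assert T(n, c) == closed_form(n, c), (n, c)
print("closed form (c+1)n^2 - cn matches the assumed recurrence")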
big-Omega
f(n) = Ω(g(n)) if f(n) ≥ C·g(n) for some constant C > 0 and all n ≥ n0 ≥ 0.
f(n) = (1/2)n^3 + 2n^2 lg(n) + n^(1/2) + 1700,  g(n) = 2n
f(n) = Ω(g(n)), since
f(n) ≥ C·g(n)
(1/2)n^3 + 2n^2 lg(n) + n^(1/2) + 1700 ≥ C·(2n)
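A concrete choice of witnesses makes this inequality explicit (the particular constants below are my own illustration; the values in the original were garbled):

\frac{1}{2}n^3 + 2n^2\lg n + n^{1/2} + 1700 \;\ge\; \frac{1}{2}n^3 \;\ge\; 2n \quad \text{for all } n \ge 2,

so f(n) ≥ 1·g(n) for n ≥ 2, i.e. f(n) = Ω(2n) with C = 1 and n_0 = 2 (indeed f(n) is also Ω(n^3)).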
b. Give your argument for the big-O expression you gave in part (a) above.
It basically says how complex the algorithm is, or how the time taken scales as the
amount of input data increases. For example, O(n) is linear complexity. Here,
doubling the size of the input data doubles the time taken, up to a constant. So
basically, the time taken will be something like t = k·n + c, where k and c are
some constants.
With O(n^2), the time scales quadratically -- t = a·n^2 + b·n + c -- and thus this is
called quadratic complexity. You can similarly have O(n^3), and so on -- these are all
of polynomial complexity.
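As a small illustration of this scaling (my own sketch, not part of the original solution; the helper names linear_ops and quadratic_ops are purely illustrative), counting basic operations for a single loop versus a doubly nested loop shows the linear and quadratic growth directly:

# Sketch: count basic operations for a linear pass and a nested (quadratic) pass.
def linear_ops(n):
    count = 0
    for _ in range(n):          # one pass over the input -> about n operations
        count += 1
    return count

def quadratic_ops(n):
    count = 0
    for _ in range(n):          # nested passes -> about n^2 operations
        for _ in range(n):
            count += 1
    return count

for n in (100, 200, 400):
    print(n, linear_ops(n), quadratic_ops(n))
# Doubling n doubles linear_ops(n) but quadruples quadratic_ops(n),
# matching t = k*n + c and t = a*n^2 + b*n + c respectively.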
Suppose you have an algorithm that runs in time f(n):
f(n) = n^2 lg(n) + 2n + 100
a. Describe the characteristics of a problem and its input data where such
performance might arise
O(1) is constant complexity, which basically means that it will take the same time
to run the algorithm regardless of the length of the data set (for example, it takes
the same time to find the first element in a sorted data set).
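To make the constant-time point concrete (an illustrative sketch of my own; first_element is just a hypothetical helper), indexing the first element of a list takes the same single operation however long the list is:

# Sketch: accessing the first element is O(1); the work does not grow with len(data).
def first_element(data):
    return data[0]              # a single index operation

small = [3, 5, 9]
large = list(range(1_000_000))
print(first_element(small), first_element(large))  # same amount of work in both calls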
b. Find a simpler polynomial g(n), so that f(n) = O(g(n)). State your function g(n).
1. f(n) = 1 + 2 + 3 + ... + n^2
2. f(n) = √(n^2 + 7n + 10) for n ≥ 0
3. f(n) = log 1 + log 2 + log 3 + ... + log n^2
f(n) = n^(1/2) + n^(5/4) + n^2 - 100 + 1
f(n) = Ω(g(n))
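For reference, and assuming the garbled expressions above read as reconstructed, their standard asymptotic simplifications would be as follows (my addition, not part of the original answer):

1 + 2 + \cdots + n^2 = \frac{n^2(n^2 + 1)}{2} = \Theta(n^4)
\sqrt{n^2 + 7n + 10} = \Theta(n)
\log 1 + \log 2 + \cdots + \log n^2 = \log\big((n^2)!\big) = \Theta(n^2 \log n)
n^{1/2} + n^{5/4} + n^2 - 100 + 1 = \Theta(n^2)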
c. Provide the values of witnesses, c and k, that demonstrate f(n) = O(g(n)).
n = 2^k, k = lg n
d. Given c and k, what are the values for cg(k) and f(k)?
T(n) = n + 2T(n/2)
     = n + 2(n/2 + 2T(n/4))
     = n + n + 4T(n/4)
     = n + n + 4(n/4 + 2T(n/8))
     = n + n + n + 8T(n/8)
T(n) = 3n + 2^3 T(n/2^3)
     = kn + 2^k T(n/2^k)
     = n lg n + n·T(1)
T(n) = O(n lg n)
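As a quick check of this unrolling (a sketch of mine; the base case T(1) = 1 is an assumption the original leaves unstated), the recurrence T(n) = n + 2T(n/2) evaluates to exactly n·lg(n) + n on powers of two:

# Sketch: verify T(n) = n*lg(n) + n*T(1) for T(n) = n + 2*T(n/2), assuming T(1) = 1.
def T(n):
    if n == 1:
        return 1
    return n + 2 * T(n // 2)

for k in range(12):
    n = 2 ** k
    assert T(n) == n * k + n, (n, k)   # n*lg(n) + n, with lg(n) = k
print("T(n) = n*lg(n) + n on the tested powers of two, so T(n) = O(n lg n)")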