I'm attempting to guess and prove the Big O for:

f(n) = n^3 - 7n^2 + nlg(n) + 10

I guess that the big O is O(n^3), since n^3 is the term with the largest order of growth.

However, I'm having trouble proving it. My unsuccessful attempt follows:

```
f(n) <= cg(n)
f(n) <= n^3 - 7n^2 + nlg(n) + 10 <= cn^3
f(n) <= n^3 + (n^3)*lg(n) + 10n^3 <= cn^3
f(n) <= n^3(11 + lg(n)) <= cn^3
so 11 + lg(n) = c
```

But this can't be right because c must be constant. What am I doing wrong?

For any base *b*, there always exists an `n0 > 0` such that

`log(n)/log(b) < n` whenever `n >= n0`.

Thus,

`n^3 - 7n^2 + n*lg(n) + 10 < n^3 - 7n^2 + n^2 + 10` when `n >= n0`.

You can solve from there.
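As a quick numeric sanity check of the inequality above (a sketch, not a substitute for the proof — the loop bound of 10,000 is arbitrary):

```python
import math

# For base b = 2, lg(n) < n already holds for every n >= 1,
# so n*lg(n) < n^2 there, which is exactly the substitution used above.
for n in range(1, 10_000):
    assert math.log2(n) < n
    # the resulting term-by-term bound on f(n):
    assert n**3 - 7*n**2 + n * math.log2(n) + 10 < n**3 - 7*n**2 + n**2 + 10
```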

For your question, the proof of O(n^3) should look something like this:

```
f(n) <= n^3 + 7n^2 + n*lg(n) + 10   for n > 0    (since -7n^2 <= 7n^2)
f(n) <= n^3 + 7n^3 + n*lg(n) + 10   for n > 0    (since n^2 <= n^3)
f(n) <= n^3 + 7n^3 + n*n^2 + 10     for n > 2    (since lg(n) <= n^2)
f(n) <= n^3 + 7n^3 + n^3 + 10       for n > 2
f(n) <= n^3 + 7n^3 + n^3 + n^3      for n > 3    (since 10 <= n^3)
f(n) <= 10n^3                       for n > 3
```

Therefore f(n) is O(n^3), with witnesses c = 10 and n0 = 3: f(n) <= 10n^3 for all n > 3. Your attempt went wrong when you bounded n*lg(n) by n^3*lg(n) — that introduces an unnecessary lg(n) factor that no constant can absorb; bound lg(n) by a polynomial (e.g. lg(n) <= n^2) instead.
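The derived witnesses can be checked empirically (a sanity check over a finite range, not a proof — the range endpoint is arbitrary):

```python
import math

def f(n):
    """f(n) = n^3 - 7n^2 + n*lg(n) + 10 from the question."""
    return n**3 - 7*n**2 + n * math.log2(n) + 10

# Check the bound f(n) <= 10*n^3 for all n > 3 up to some large cutoff.
for n in range(4, 100_000):
    assert f(n) <= 10 * n**3
```

Note that the witnesses need not be tight: any larger c (or larger n0) works just as well, which is why the sloppy-looking over-estimates in each step above are harmless.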
