I'm struggling to understand Big O, and I can't quite reconcile the multiple methods I read about, and watch, for finding the Big O of a function. It seems that every video or post does it a slightly different way.

Some count the operations line by line; others count assignments, multiplications, and additions separately.

I know this question might be downvoted/closed, but I feel I really need answers to these specific questions in order to completely understand it.

In this pseudocode taken from http://i.imgur.com/u0dUTlK.png

```
function sum(list):
    sum = 0
    foreach element in list:
        current = element
        add current to sum
    sum = sum * 1
    return sum
endfunction
```
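For reference, here is how I would translate that pseudocode into runnable Python (my own sketch; I kept the pointless `sum = sum * 1` step so it lines up with the operation counts being discussed):

```python
def sum_list(items):
    # sum = 0 -> one assignment
    total = 0
    for element in items:
        # current = element -> one assignment
        current = element
        # add current to sum -> the step whose operation count is in question
        total = total + current
    # sum = sum * 1 -> a no-op kept only to match the pseudocode
    total = total * 1
    return total

print(sum_list([1, 2, 3, 4]))  # prints 10
```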

The author states that the time is `1 + n * 2 + 2` because "it sets sum to 0, which takes one operation", "it iterates over the list, performing two operations each time", and "then, it performs two more operations".

I don't see how the author gets two operations per iteration instead of three: `current = element` is one operation, and, in my opinion, `add current to sum` is two operations (an addition and an assignment). Likewise, the author counts `sum = sum * 1` as one operation, but I believe it to be two (a multiplication and an assignment), and `return sum` as one more. Thus, the time should be `1 + n * 3 + 3`.

Could someone please tell me whether I am misunderstanding why this should be two operations, or whether it really is three?

Also, from my reading I've noticed that most people don't tend to count comparisons as operations. Is it alright to omit comparisons?

And finally, the page at http://isites.harvard.edu/fs/docs/icb.topic780601.files/time_analysis_matrix_multiplication.pdf seems to be the most helpful reference I've found, but it calculates the Big O in a totally different way: it counts the additions, multiplications, and assignments separately (again omitting comparisons) and then adds them together.

For the code:

```
for ( int i = 0; i < n; i++ ) {
    for ( int j = 0; j < n; j++ ) {
        c[i][j] = 0;
        for ( int k = 0; k < n; k++ ) {
            c[i][j] += a[i][k] * b[k][j];
        }
    }
}
```
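To convince myself of those counts, I rewrote the loop in Python with explicit counters (my own sketch; it counts only the multiplication and the `+=` addition in the inner body, not the loop-counter increments, so it checks the `n^3` terms in particular):

```python
def count_matmul_ops(n):
    # Multiply two n x n matrices of ones, tallying body operations.
    a = [[1] * n for _ in range(n)]
    b = [[1] * n for _ in range(n)]
    c = [[0] * n for _ in range(n)]
    mults = adds = 0
    for i in range(n):
        for j in range(n):
            c[i][j] = 0
            for k in range(n):
                c[i][j] += a[i][k] * b[k][j]
                mults += 1  # a[i][k] * b[k][j]
                adds += 1   # the addition hidden in +=
    return mults, adds

print(count_matmul_ops(4))  # prints (64, 64), i.e. 4**3 of each
```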

It says there are `n^3` multiplications, `2n^3 + n^2 + n` additions, and `n^3 + 2n^2 + n + 1` assignments.

Removing coefficients and all but the largest term:

It says `O(n^3) multiplications + O(n^3) additions + O(n^3) assignments = O(n^3) overall time complexity.`

When adding these, is it technically `n^3 + n^3 + n^3 = 3n^3`, and removing coefficients leaves us with `n^3`? I am asking because I don't see how, mathematically, `n^3 + n^3 + n^3 = n^3`.

The way I've found most helpful is how the Harvard page does it: counting additions, multiplications, and assignments. I was wondering whether there are many valid ways to do it, and whether this is one way that would yield the correct answer every time. I ask because, for the first pseudocode example, the equation we get would be different (before removing coefficients and all but the largest term).

So if I were to analyze the first pseudocode example the way it's done in the Harvard document, I would get:

`Multiplications: sum * 1, happens 1 time.`

`Additions: add current to sum, happens n times.`

`Assignments: sum = 0, happens 1 time; current = element, happens n times; add current to sum, happens n times.`

`1 multiplication + n additions + (1 + n + n) assignments = 2 + 3n`

`Removing coefficients and all but the largest term: O(n)`
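To check that formula, I tried tallying the operations mechanically in Python (my own sketch; it counts exactly the operations listed above under the Harvard-style convention, ignoring comparisons and the return):

```python
def count_sum_ops(n):
    # Operation tally for the sum pseudocode on a list of length n.
    mults = 1            # sum = sum * 1
    adds = n             # add current to sum, once per element
    assigns = 1 + n + n  # sum = 0, current = element, add current to sum
    return mults + adds + assigns

print(count_sum_ops(10))  # prints 32, i.e. 2 + 3*10
```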

Note that I didn't include the return statement in the equation, and I also didn't include the addition that happens n times when the loop variable is incremented. Really, I don't know exactly what I need to keep and what I need to discard.

Could someone please tell me how I would be able to do the above, but be correct? What should I omit, and what should I keep? And what is the easiest way to find the Big O: the way in the first example, the way in the Harvard example, or another way? I feel like I can relate best to the Harvard example; however, I need to know what the rules are when doing it.

Thanks.

O(n^3) doesn't mean n^3 operations: it means the number of operations is less than some unknown constant times n^3. Therefore O(n^3) + O(n^3) = O(n^3): the first is less than some constant times n^3, the second is less than some constant times n^3, and if you add them, the sum is less than some (larger) constant times n^3.
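Concretely (a toy check, not a proof): if you simplify each of the three tallies to its dominant n^3 term, the total is always exactly three times n^3, so it stays within a constant factor of n^3, which is all that O(n^3) requires:

```python
def total_ops(n):
    # multiplications + additions + assignments, each simplified
    # to its dominant n^3 term
    return n**3 + n**3 + n**3

# The ratio to n^3 is a fixed constant (3) for every n, which is
# precisely what O(n^3) + O(n^3) + O(n^3) = O(n^3) expresses.
for n in (10, 100, 1000):
    assert total_ops(n) == 3 * n**3
print(total_ops(10))  # prints 3000
```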
