I'm asked to give big-O estimates for some pieces of code, but I'm having a little bit of trouble.

```
int sum = 0;
for (int i = 0; i < n; i = i + 2) {
    for (int j = 0; j < 10; j++)
        sum = sum + i + j;
}
```

I'm thinking that the worst case would be O(n/2), because the outer loop runs from 0 up to n in steps of 2. However, I'm not sure whether the inner loop affects the big O.

```
int sum = 0;
for (int i = n; i > n/2; i--) {
    for (int j = 0; j < n; j++)
        sum = sum + i + j;
}
```

For this one, I'm thinking it would be O(n^2/2), because the inner loop runs from 0 to n and the outer loop runs from n down to n/2, which gives me n*(n/2).
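To check my reasoning, I also counted the inner-body executions directly with a small helper (my own addition, not part of the original code; the class and method names are made up):

```
// Counts how many times the inner-loop body of the second snippet runs,
// so the n*(n/2) estimate can be checked empirically.
public class CountExample2 {
    static long count(int n) {
        long c = 0;
        for (int i = n; i > n / 2; i--) {
            for (int j = 0; j < n; j++) {
                c++; // one inner-body execution
            }
        }
        return c;
    }

    public static void main(String[] args) {
        // Outer loop runs n - n/2 times; inner loop runs n times each,
        // so for even n the total is exactly (n/2) * n.
        System.out.println(count(100)); // 50 * 100 = 5000
    }
}
```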

```
int sum = 0;
for (int i = n; i > n - 2; i--) {
    for (int j = 0; j < n; j += 5)
        sum = sum + i + j;
}
```

I'm pretty lost on this one. My guess is O((n^2 - 2)/5).

You're NOT running nested loops:

```
for (int i = 0; i < n; i = i + 2);
^----
```

That semicolon is TERMINATING the loop definition, so the `i` loop just counts from 0 to n in steps of 2 without doing anything. The `j` loop is completely independent of the `i` loop; both simply depend on `n` for their execution time.

For the above algorithms, the worst case and best case are the same.

In big-O notation, lower-order terms and the coefficient of the highest-order term can be ignored, since big O describes an asymptotic upper bound.

```
int sum = 0;
for (int i = 0; i < n; i = i + 2) {
    for (int j = 0; j < 10; j++)
        sum = sum + i + j;
}
```

The total number of outer-loop iterations is n/2. For each iteration of the outer loop, the number of inner-loop iterations is 10, so the total number of inner-loop iterations is 10 * n/2 = 5n. So clearly it is O(n). Now think about the remaining two programs and determine their time complexities on your own.
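The 5n figure above can be verified empirically by counting inner-body executions (a sketch of my own, not from the original post; the class and method names are hypothetical):

```
// Counts how many times the inner-loop body of the first snippet runs.
public class CountExample1 {
    static long count(int n) {
        long c = 0;
        for (int i = 0; i < n; i = i + 2) {
            for (int j = 0; j < 10; j++) {
                c++; // one inner-body execution
            }
        }
        return c;
    }

    public static void main(String[] args) {
        // Outer loop runs ceil(n/2) times, inner loop 10 times each:
        // for n = 100 that is 50 * 10 = 500, i.e. 5n.
        System.out.println(count(100)); // prints 500
    }
}
```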

Your running times for the first two examples are correct.

For the first example, the inner loop of course always executes 10 times. So we can say the total running time is O(10n/2).

For the last example, the outer loop only executes twice, and the inner loop runs `n/5` times, so the total running time is O(2n/5).
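As a quick check of the 2 * n/5 count, here is a minimal counting sketch (my own addition; the class and method names are hypothetical):

```
// Counts how many times the inner-loop body of the third snippet runs.
public class CountExample3 {
    static long count(int n) {
        long c = 0;
        for (int i = n; i > n - 2; i--) {
            for (int j = 0; j < n; j += 5) {
                c++; // one inner-body execution
            }
        }
        return c;
    }

    public static void main(String[] args) {
        // Outer loop runs exactly twice; inner loop runs ceil(n/5) times,
        // so for n = 100 the total is 2 * 20 = 40, i.e. 2n/5.
        System.out.println(count(100)); // prints 40
    }
}
```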

Note that, because of the way big-O complexity is defined, constant factors and asymptotically smaller terms are negligible, so your complexities can / should be simplified to:

- O(n)
- O(n^2)
- O(n)

If you were to take into account constant factors (using something other than big-O notation, of course; perhaps ~-notation), you may have to be explicit about what constitutes a unit of work: perhaps `sum = sum + i + j` constitutes 2 units of work instead of just 1, since there are 2 addition operations.
