I have a few problems that I'm trying to find the Big O for. The ones that are confusing me are the (N*N) problems:

```
for (i = 1, sum = 0; i <= N; i++) {
    for (j = 1; j <= N*N; j++) {
        sum++;
    }
}
```

I'm guessing that it's O(N^3), since the `N*N` bound could be representing two loops.

```
for (i = 1, sum = 0; i <= N; i++) {
    for (j = 1; j <= N*N; j++) {
        for (k = 1; k <= j; k++) {
            sum++;
        }
    }
}
```

If so, then this one would be O(N^4)?

```
for (i = 1, sum = 0; i <= N; i++) {     // loop over i
    for (j = 1; j <= N*N; j++) {        // loop over j, no dependency
        sum++;
    }
}
```

The inner loop over `j` is independent of `i` and has complexity `O(N*N)`. The outer loop does this `N` times, hence `O(N^3)` in total.

```
for (i = 1, sum = 0; i <= N; i++) {     // loop over i
    for (j = 1; j <= N*N; j++) {        // loop over j, no dependency
        for (k = 1; k <= j; k++) {      // loop over k, dependent on j
            sum++;
        }
    }
}
```

The loop over `k` is dependent on `j`, while `j` loops independently over the integers up to `N*N`. The sum `1 + 2 + ... + N*N` is equal to `(N*N + 1) * N*N / 2`, which is `O(N^4)`. Now again, the loop over `i` repeats that `N` times, hence the total complexity is `O(N^5)`.

For calculating big O, always start from the innermost loop and take care of dependencies!

I think it's O(N^5).

The first (external) loop is O(N).

The second one is O(N^2).

The last one is also O(N^2), since `k` runs up to `j`, which is at most `N^2`.

You can proceed methodically and formally using Sigma notation:
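For the triple-loop snippet, the sum works out as follows (my reconstruction of the derivation the colon introduces):

```latex
\sum_{i=1}^{N} \sum_{j=1}^{N^2} \sum_{k=1}^{j} 1
  = \sum_{i=1}^{N} \sum_{j=1}^{N^2} j
  = \sum_{i=1}^{N} \frac{N^2 (N^2 + 1)}{2}
  = \frac{N^3 (N^2 + 1)}{2}
  = O(N^5)
```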
