I'm learning programming using YouTube and whatever I can find. I stumbled upon some Big O problems and I'm quite confused by one of them.

```
for (int i = 0; i < n; i++)
    for (int j = 0; j < 2; j++)
        cout << i << " " << j << endl;
```

My first thought was that it is O(n^2) because there are two for loops. Then I took a closer look and noticed that the inner for loop only goes from 0 to 2.

From what I understand, Big O looks at the worst-case scenario. My thought was that if `n = 2` then it would result in O(n^2); however, all other cases result in O(n), which is what is really confusing me.

Could someone please explain why this is O(n^2) or O(n)?

The only "handle" you have on this "algorithm" is `n`, and `n` is only used in the outer loop.

The inner loop runs in constant time, so overall you have O(n).

It's `O(n)`. When a function `f(n)` is `O(n)`, it basically means that there are some constants `A` and `B` such that for each `n` (at least for large-enough `n`):

```
f(n) <= A * n + B
```
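For the loop in the question, each of the `n` outer iterations performs 2 inner iterations, so counting the inner body as one step gives f(n) = 2n, and the definition above is satisfied with, for example, A = 2 and B = 0:

```
f(n) = 2n <= 2 * n + 0    for every n >= 0
```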

In other words, constants (multiplicative and additive) do not affect big-O notation. And since `2` is a constant (it does not depend on `n`), it is not reflected in the big-O notation.

In your case, the function `f` is the time complexity - i.e. how many constant-time steps the program takes for an input of size `n`.

The inner loop always performs two iterations no matter the input. Therefore it is **constant**. Now the outer loop performs the operations of the inner loop `n` times, so its complexity is n times the complexity of the inner loop. Overall you perform a constant number of operations n times, so your complexity is `O(n)`.

There is an experiment you can do: if the complexity of your algorithm is `O(n^2)`, then the number of operations it performs for `2 * n` should be about 4 times the number of operations performed for `n`. So try this: if your algorithm performs K operations for `n = 100`, how many will it perform for `n = 200`?

**O-Notation (Upper Bound)**:

This notation gives an upper bound for a function to within a constant factor. We write `f(n) = O(g(n))` if there are positive constants `n0` and `c` such that to the right of `n0`, the value of `f(n)` always lies on or below `c g(n)`:

```
O(g(n)) = {f(n): there exist positive constants c and n0
           such that 0 <= f(n) <= c g(n) for all n >= n0}
```

This means that saying your function is O(n^2) or even O(n^3) is true, because each is an upper bound.

But your program depends only on `n` (constant factors have no effect for **large n**, so `2n` is equivalent to `n` here), therefore O(n) is the tightest upper bound, and hence you can say its time complexity is O(n).

Technically, this is both, because Big O is an upper bound and everything that's O(n) is also O(n^2).

But yes, this is O(n). Θ(n), in fact. (See this question for the distinction.)

Big O is not "the worst case scenario". Some algorithms have "average case" and "worst case" time complexities that can be very different. Quicksort, for instance, is O(n log n) average case and O(n^2) worst case. No one usually calls quicksort an O(n^2) algorithm, though.

Big O is a ceiling on how much the complexity of an algorithm grows as `n` increases. The complexity of an O(n) algorithm grows at most linearly. In your case, as the other answers explained, the inner loop is constant time, so the overall time complexity is a linear function of `n` - hence, O(n).
