What would the big O notation of the function foo be?

```
int foo(char *s1, char *s2)
{
    int c = 0, s, p, found;
    /* For each character of s1, scan s2 for a match. */
    for (s = 0; s1[s] != '\0'; s++)
    {
        for (p = 0, found = 0; s2[p] != '\0'; p++)
        {
            if (s2[p] == s1[s])
            {
                found = 1;
                break;
            }
        }
        /* Count the characters of s1 that never appear in s2. */
        if (!found) c++;
    }
    return c;
}
```

What is the efficiency of the function foo?

a) O(n!)

b) O(n^2)

c) O(n lg(base2) n )

d) O(n)

I would have said O(MN)...?

It is `O(n²)`, where n = max(length(s1), length(s2)) (which can be determined in less than quadratic time - see below). Let's take a look at a textbook definition:

f(n) ∈ O(g(n)) if there exist a positive real number c and a positive integer N such that f(n) <= c·g(n) for all n >= N
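For instance, applying this definition to a simple linear function (the constants here are just one valid choice of witnesses):

```
f(n) = 3n + 5 \in O(n), \text{ since } 3n + 5 \le 4n \text{ for all } n \ge 5 \quad (c = 4,\ N = 5)
```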

By this definition we see that n represents a number - in this case that number is the length of the string passed in. However, there is an apparent discrepancy, since this definition provides only for a single-variable function `f(n)`, and here we clearly pass in 2 strings with independent lengths. So we search for a multivariable definition of Big O. However, as demonstrated by Howell in "On Asymptotic Notation with Multiple Variables":

"it is impossible to define big-O notation for multi-variable functions in a way that implies all of these [commonly-assumed] properties."

There is actually a formal definition of Big O with multiple variables; however, it requires extra constraints beyond those of single-variable Big O, and it is beyond the scope of most (if not all) algorithms courses. For typical algorithm analysis we can effectively reduce our function to a single variable by bounding all variables with a limiting variable `n`. In this case the variables (specifically, length(s1) and length(s2)) are clearly independent, but it is possible to bound them:

**Method 1**

```
Let x1 = length(s1)
Let x2 = length(s2)
```

The worst case for this function occurs when there are no matches; in that case we perform x1 * x2 inner-loop iterations.

Because multiplication is commutative, the worst case of foo(s1, s2) equals the worst case of foo(s2, s1). We can therefore assume, without loss of generality, that x1 >= x2. (If x1 < x2, we could get the same result by passing the arguments in the reverse order.)

**Method 2** (in case you don't like the first method)

For the worst case scenario (in which s1 and s2 contain no common characters), we can determine length(s1) and length(s2) before iterating through the loops (in .NET and Java, determining the length of a string is O(1); in C, as here, it is O(n)), assigning the greater to x1 and the lesser to x2. Here it is clear that x1 >= x2.
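Method 2 can be sketched in C as follows. The helper name `limiting_n` is hypothetical, not part of the original code; it just binds the limiting variable n to the larger of the two lengths up front:

```
#include <string.h>

/* Measure both strings before the nested loops (each strlen is O(n) in C),
 * then take n = max(x1, x2) so that x1 * x2 <= n * n. */
size_t limiting_n(const char *s1, const char *s2)
{
    size_t x1 = strlen(s1);   /* O(length(s1)) */
    size_t x2 = strlen(s2);   /* O(length(s2)) */
    return x1 >= x2 ? x1 : x2;
}
```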

For this scenario, the extra calculations to determine x1 and x2 make this O(n² + 2n). We use the following standard simplification rule to reduce it to O(n²):

If f(x) is a sum of several terms, the one with the largest growth rate is kept, and all others omitted.
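As a quick check of that rule against this case, the dropped term is indeed absorbed by a constant factor:

```
n^2 + 2n \le n^2 + n \cdot n = 2n^2 \quad \text{for all } n \ge 2,
```

so by the textbook definition above, n² + 2n ∈ O(n²) with c = 2 and N = 2.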

**Conclusion**

For `n = x1` (our limiting variable), such that `x1 >= x2`, the worst case scenario is `x1 = x2`. Therefore: `f(x1) ∈ O(n²)`.

**Extra Hint**

For all homework problems posted to SO related to Big O notation, if the answer is not one of:

```
O(1)
O(log log n)
O(log n)
O(n^c), 0<c<1
O(n)
O(n log n) = O(log n!)
O(n^2)
O(n^c)
O(c^n)
O(n!)
```

Then the question is *probably* better off being posted to http://math.stackexchange.com/

**O(n^2)**

The relevant part of the function, in terms of complexity, is the nested loops. The maximum number of iterations is the length of s1 times the length of s2, both of which are linear factors, so the worst-case computing time is O(n^2), i.e. the square of a linear factor. As Ethan said, O(mn) and O(n^2) are effectively the same thing.

Think of it this way:

There are two inputs. If the function simply returned, then its performance would be unrelated to the arguments. This would be O(1).

If the function looped over one string, then the performance is linearly related to the length of that string. Therefore O(N).

But the function has a loop within a loop. The performance is related to the length of s1 and the length of s2. Multiply those lengths together and you get the number of loop iterations. It's no longer linear; it follows a quadratic curve. This is O(N^2).

In big-O notation, we always have to define what the occurring variables mean. `O(n)` doesn't mean anything unless we define what `n` is. Often we can omit this information because it is clear from context. For example, if we say that some sorting algorithm is `O(n log(n))`, `n` always denotes the number of items to sort, so we don't have to state this every time.

Another important thing about big-O notation is that it only gives an upper bound -- every algorithm in `O(n)` is also in `O(n^2)`. The notation is often used as meaning "the algorithm has the exact asymptotic complexity given by the expression (up to a constant factor)", but its actual definition is "the complexity of the algorithm is bounded by the given expression (up to a constant factor)".

In the example you gave, you took `m` and `n` to be the respective lengths of the two strings. With this definition, the algorithm is indeed `O(m n)`. If we instead define `n` to be the length of the longer of the two strings, we can also write this as `O(n^2)` -- this is also an upper bound for the complexity of the algorithm. And with the same definition of `n`, the algorithm is also `O(n!)`, but not `O(n)` or `O(n log(n))`.
