Suppose `f(n)` is the runtime of an algorithm. According to the formal definition of big-O, if `f(n) <= c*g(n)` for all `n >= n0`, then `f(n) = O(g(n))`.

What is the range of values the constant `c` can take?
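The definition can be turned into a direct numeric check. This is a minimal sketch; the helper name `witnesses_big_o` and the sample functions are illustrative, not part of the question:

```python
def witnesses_big_o(f, g, c, n0, n_max=10_000):
    """Check that f(n) <= c*g(n) holds for every n in [n0, n_max]."""
    return all(f(n) <= c * g(n) for n in range(n0, n_max + 1))

# f(n) = 3n + 7 is O(n): c = 4, n0 = 7 works, since 3n + 7 <= 4n once n >= 7.
print(witnesses_big_o(lambda n: 3 * n + 7, lambda n: n, c=4, n0=7))  # True
```

Checking up to a finite `n_max` is of course not a proof, only a sanity check of a candidate `(c, n0)` pair.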

`c` can be anything (above zero, obviously). It doesn't matter whether it's `0.1`, `1`, or `1,000,000`. The only requirement: it **must be a constant** - i.e. it may be defined once and for all, and it must not depend on `n`. Of course, `c` will affect total algorithm performance - but the purpose of big-O is to estimate performance, not to calculate it precisely (which follows from the definition).

By definition (e.g. here), any positive number, as long as it's constant.

For instance, `n^2` is *not* in `O(n)` because there is no positive constant `c` such that `n^2 <= cn` for all `n`; that inequality simplifies to `c >= n`, but by definition `n` is not a constant, so no fixed `c` can work for all `n`.
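This can be seen numerically: the smallest `c` making `n^2 <= c*n` hold is `c = n` itself, which grows without bound. A sketch, not part of the original answer:

```python
# For n^2 <= c*n, dividing both sides by n shows the smallest workable c is n itself.
for n in (10, 100, 1000):
    c_needed = (n * n) / n  # = n, so the "constant" would have to keep growing
    print(n, c_needed)
```

Since no fixed number exceeds every `n`, no constant `c` exists, and `n^2` is not `O(n)`.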

Any range. `c` being a constant means that it does not depend on the size `n` of the problem: whenever the value of `n` changes, `c` remains the same.

It can be any positive number. If it is 0 you do nothing, and if it is negative you break something.

Simply put, `c` is a constant consisting of two halves:

- The algorithm half. For example, your algorithm has to iterate 5 times through the entire input collection, so the constant will be 5. If another algorithm iterates 3 times, its constant will be 3 - but both algorithms have complexity O(n).
- The hardware half. This is the time needed to compute one operation on your machine. If you run an app implementing your algorithm on the same collection on a Pentium 1 and on a modern Xeon, the Xeon will obviously compute the result much faster.
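The "algorithm half" can be sketched like this. The pass counts 5 and 3 come from the answer above; the functions themselves are hypothetical examples:

```python
def five_passes(items):
    """Scans the collection 5 times: roughly 5n operations, still O(n)."""
    total = 0
    for _ in range(5):
        for x in items:
            total += x
    return total

def three_passes(items):
    """Scans the collection 3 times: roughly 3n operations, also O(n)."""
    total = 0
    for _ in range(3):
        for x in items:
            total += x
    return total

data = list(range(10))
print(five_passes(data), three_passes(data))  # 225 135
```

Both functions do work proportional to `n`; only the hidden constant (5 vs 3) differs, which big-O deliberately ignores.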

While `c` is independent of `n`, one is allowed to exclude some special cases for small `n` and determine the `c` only for `n ≥ n0`.

For instance,

```
n ≤ n^2 for n ≥ 1
```

but also

```
n ≤ 0.01*n^2 for n ≥ 100
```
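Both inequalities above can be spot-checked numerically (a sketch; the upper range limit is arbitrary):

```python
# c = 1 works from n0 = 1; the smaller c = 0.01 only works once n >= 100.
assert all(n <= n * n for n in range(1, 1000))
assert all(n <= 0.01 * n * n for n in range(100, 1000))
assert not (99 <= 0.01 * 99 * 99)  # 99 <= 98.01 is false: below n0 the bound fails
print("ok")
```

Raising `n0` is exactly what lets a smaller constant `c` qualify.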

Suppose `f(n) = n+1` and `g(n) = n^2`. We try to prove that `f(n) = O(g(n))`.

For `n=1`: `f(n)=2`, `g(n)=1`, so `f(n) <= c*g(n)` if `c=2`, and now `c > n`.

For `n=2`: `f(n)=3`, `g(n)=4`, so `f(n) <= c*g(n)` if `c=1`, and now `c < n`.

So a comparison between `c` and `n` is senseless. **NOTE**: `c` is a constant (its value can never change).

So we can say `f(n) = O(g(n))` for `n >= 1` if `c = 2`, but also `f(n) = O(g(n))` for `n >= 2` if `c = 1`.
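The two `(c, n0)` witness pairs from this example can be verified directly (a sketch over a finite range):

```python
f = lambda n: n + 1
g = lambda n: n * n

# Both witness pairs satisfy f(n) <= c*g(n) on their respective ranges.
assert all(f(n) <= 2 * g(n) for n in range(1, 10_000))  # c = 2, n0 = 1
assert all(f(n) <= 1 * g(n) for n in range(2, 10_000))  # c = 1, n0 = 2
assert not f(1) <= 1 * g(1)  # c = 1 fails at n = 1: 2 <= 1 is false
print("both witnesses check out")
```

Either pair is enough to conclude `f(n) = O(g(n))`; the definition only asks that *some* constant `c` and threshold `n0` exist.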
