I'm trying to understand a particular aspect of Big O analysis in the context of running programs on a PC.

Suppose I have an algorithm with a performance of O(n + 2). Here, if n gets really large, the 2 becomes insignificant. In this case it's perfectly clear that the real performance is O(n).

However, say another algorithm has an average performance of O(n^2/2). The book where I saw this example says the real performance is O(n^2). I'm not sure I get why; I mean, the 2 in this case doesn't seem completely insignificant. So I was looking for a nice clear explanation from the book. The book explains it this way:

"Consider though what the 1/2 means. The actual time to check each value is highly dependent on the machine instruction that the code translates to and then on the speed at which the CPU can execute the instructions. Therefore the 1/2 doesn't mean very much."

And my reaction is... Huh??? I literally have no clue what that says, or more precisely, what that statement has to do with their conclusion. Can somebody spell it out for me, please?

Thanks for any help.

Big-O notation doesn't care about constants because big-O notation only describes the long-term growth rate of functions, rather than their absolute magnitudes. Multiplying a function by a constant only influences its growth rate by a constant amount, so linear functions still grow linearly, logarithmic functions still grow logarithmically, exponential functions still grow exponentially, etc. Since these categories aren't affected by constants, it doesn't matter that we drop the constants.

That said, those constants are *absolutely* significant! A function whose runtime is 10^{100}n will be *way* slower than a function whose runtime is just n. A function whose runtime is n^{2} / 2 will be faster than a function whose runtime is just n^{2}. The fact that the first two functions are both O(n) and the second two are both O(n^{2}) doesn't change the fact that they don't run in the same amount of time, since that's not what big-O notation is designed for. Big-O notation is good for determining whether *in the long term* one function will be bigger than another. Even though 10^{100}n is a colossally huge value for any n > 0, that function is O(n), so for large enough n it will eventually beat the function whose runtime is n^{2} / 2, because that function is O(n^{2}).

In summary - since big-O only talks about relative classes of growth rates, it ignores the constant factor. However, those constants are absolutely significant; they just aren't relevant to an asymptotic analysis.
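A quick sketch of that crossover in Python. The constant 1,000,000 below is a stand-in for a large constant factor (10^{100} would overflow any practical loop), so take the specific numbers as illustrative only:

```python
# Compare f(n) = 1_000_000 * n  (O(n), huge constant)
# with    g(n) = n**2 / 2       (O(n^2), small constant).
# Despite the enormous constant, the O(n) function is eventually smaller.

def f(n):
    return 1_000_000 * n

def g(n):
    return n ** 2 / 2

# Solving 1_000_000 * n = n^2 / 2 gives a crossover at n = 2_000_000;
# sample n in steps of 1000 to find the first point past it.
crossover = next(n for n in range(1, 10_000_000, 1000) if f(n) < g(n))
print(crossover)  # -> 2000001
```

Before that point the "slower" O(n^2) function is actually faster, which is exactly the sense in which the constants matter in practice but not asymptotically.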

Hope this helps!

Big O without constants is enough for algorithm analysis.

First, the actual running time depends not only on how many instructions execute but also on how long each instruction takes, which is closely tied to the platform where the code runs. That goes beyond theoretical analysis, so the constant is unnecessary in most cases.

Second, Big O is mainly used to measure how the running time grows as the problem gets larger, or how the running time decreases as the hardware improves.

Third, in situations of high-performance optimization, constants are also taken into consideration.
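One machine-independent way to see the second point in code: count basic operations (instead of wall-clock time, which varies by platform) at input sizes n and 2n, and look at the ratio. For an O(n^2) loop the ratio is 4 no matter what the constant factor is:

```python
def count_ops(n):
    """Count the basic operations of a simple O(n^2) double loop."""
    ops = 0
    for i in range(n):
        for j in range(n):
            ops += 1  # one "basic operation" per inner iteration
    return ops

# Doubling the input size quadruples the work. That 4x ratio is what
# O(n^2) predicts, and it is independent of any constant factor.
print(count_ops(1000) / count_ops(500))  # -> 4.0
```

This is exactly the question Big O answers: how the work scales as the problem grows, not how long one run takes on one machine.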

You are completely right that constants matter. In comparing many different algorithms for the same problem, the O numbers without constants give you an overview of how they compare to each other. If you then have two algorithms in the same O class, you would compare them using the constants involved.

But even across different O classes the constants are important. For instance, for multidigit or big-integer multiplication, the naive algorithm is O(n^2), Karatsuba is O(n^log_2(3)), Toom-Cook is O(n^log_3(5)) and Schönhage-Strassen is O(n*log(n)*log(log(n))). However, each of the faster algorithms has an increasingly large overhead, reflected in large constants. So to get approximate cross-over points, one needs valid estimates of those constants. One gets, as a rough guess (SWAG), that up to n=16 the naive multiplication is fastest, up to n=50 Karatsuba is, and the cross-over from Toom-Cook to Schönhage-Strassen happens around n=200.

In reality, the cross-over points not only depend on the constants, but also on processor-caching and other hardware-related issues.
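As a concrete sketch of where that overhead comes from, here is a minimal Karatsuba multiplication in Python. Each call pays for splitting, extra additions and recombination (the "constant") that a single naive multiplication avoids, which is why naive multiplication wins for small n. The cutoff of 16 digits below is illustrative, not a tuned threshold:

```python
def karatsuba(x, y, cutoff=16):
    """Multiply non-negative integers x and y via Karatsuba's
    O(n^log2(3)) scheme. Below `cutoff` digits, fall back to the
    builtin multiplication, mirroring how real bignum libraries
    switch to the naive method for small operands.
    """
    if x < 10 ** cutoff or y < 10 ** cutoff:
        return x * y
    m = max(len(str(x)), len(str(y))) // 2
    high_x, low_x = divmod(x, 10 ** m)
    high_y, low_y = divmod(y, 10 ** m)
    z0 = karatsuba(low_x, low_y, cutoff)    # low halves
    z2 = karatsuba(high_x, high_y, cutoff)  # high halves
    # One extra multiplication replaces two, at the cost of additions:
    z1 = karatsuba(low_x + high_x, low_y + high_y, cutoff) - z0 - z2
    return z2 * 10 ** (2 * m) + z1 * 10 ** m + z0

a, b = 12345678901234567890, 98765432109876543210
print(karatsuba(a, b) == a * b)  # True
```

Three recursive multiplications instead of four is what buys the better exponent; the splitting and addition work is what the large hidden constant pays for.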

Big-O notation only describes the growth rate of algorithms in terms of mathematical functions, rather than the actual running time of algorithms on some machine.

Mathematically: let f(x) and g(x) be positive for x sufficiently large. We say that f(x) and g(x) grow at the same rate as x tends to infinity if

lim(x->infinity) f(x)/g(x) = c, for some finite constant c > 0.

Now let f(x) = x^2 and g(x) = x^2/2. Then lim(x->infinity) f(x)/g(x) = 2, so x^2 and x^2/2 have the same growth rate, and we can say O(x^2/2) = O(x^2).
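A quick numeric check of that limit, using the same f and g:

```python
# f(x) = x^2 and g(x) = x^2 / 2: the ratio f(x)/g(x) is exactly 2
# for every x > 0, so the limit as x -> infinity is 2 -- a finite,
# nonzero constant, which is why both share the same growth rate.

for x in (10, 1_000, 1_000_000):
    ratio = x ** 2 / (x ** 2 / 2)
    print(x, ratio)  # ratio is 2.0 at every x
```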

As templatetypedef said, hidden constants in asymptotic notations are absolutely significant. As an example: merge sort runs in O(n log n) worst-case time and insertion sort runs in O(n^2) worst-case time. But because the hidden constant factors in insertion sort are smaller than those of merge sort, in practice insertion sort can be faster than merge sort for small problem sizes on many machines.
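A sketch of that comparison in Python, counting comparisons instead of wall-clock time so the result is machine-independent. The exact counts depend on the input, so this only illustrates the small-n effect rather than proving it:

```python
def insertion_sort(a):
    """Return (sorted copy, number of key comparisons)."""
    a, comps = list(a), 0
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0:
            comps += 1
            if a[j] <= key:
                break
            a[j + 1] = a[j]  # shift larger element right
            j -= 1
        a[j + 1] = key
    return a, comps

def merge_sort(a):
    """Return (sorted copy, number of key comparisons)."""
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, c1 = merge_sort(a[:mid])
    right, c2 = merge_sort(a[mid:])
    merged, comps, i, j = [], c1 + c2, 0, 0
    while i < len(left) and j < len(right):
        comps += 1
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return merged, comps

data = [5, 2, 4, 6, 1, 3]
print(insertion_sort(data))
print(merge_sort(data))
```

On small or nearly-sorted inputs, insertion sort's comparison count can come in below merge sort's, even though its worst case is quadratic; real library sorts exploit exactly this by switching to insertion sort for tiny subarrays.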

On today's computers, a particular task does not take a large amount of time unless the input value is very large.

Suppose we want to multiply two matrices of size 10*10: we will have no problem **unless** we want to do this operation **multiple times**, and that is where the role of **asymptotic notations becomes prevalent**. When the value of n becomes very big, the **constants don't really make any difference to the answer and are almost negligible**, so we tend to leave them out while calculating the complexity.
