I am having trouble figuring out how to prove that

```
t(n) = sqrt(31n + 12n log n + 57)
```

is

```
O(sqrt(n) log n)
```

I haven't had to deal with square roots in big O notation yet, so I am having lots of trouble with this! Any help is greatly appreciated :)

Start by finding the largest term inside the `sqrt()`, which here is `12n log n`. The dominant term makes the other terms irrelevant in big O terms, so the expression becomes `O(sqrt(12n log n))`. A constant factor is also irrelevant, so it becomes `O(sqrt(n log n))`. Then I suppose you can make the argument that this is equal to `O(sqrt(n) * sqrt(log n))`, or `O(sqrt(n) * (log n)^(1/2))`, and eliminate the power on `log n` to get `O(sqrt(n) log n)`. But I don't know what the technical justification would be for that last step, because if you can turn `sqrt(log n)` into `log n`, why can't you turn `sqrt(n)` into `n`?
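One possible justification for that last step (my addition, not part of the answer above): `sqrt(x) <= x` whenever `x >= 1`, and `log n >= 1` once `n >= 2` (taking logs base 2). So enlarging `sqrt(log n)` to `log n` preserves the upper bound. You *could* also enlarge `sqrt(n)` to `n`, and `O(n log n)` would still be a true upper bound; it would just be a weaker statement than `O(sqrt(n) log n)`. A quick sanity check in Python (a sketch, assuming base-2 logs):

```python
import math

# NOTE: "log" here means log base 2, so that log n >= 1 for all n >= 2.
# The point: sqrt(x) <= x whenever x >= 1.  Applied with x = log n, that
# gives sqrt(log n) <= log n, so O(sqrt(n) * sqrt(log n)) is contained
# in O(sqrt(n) * log n) -- big O only needs an upper bound.
for n in [2, 10, 1000, 10**6, 10**9]:
    assert math.sqrt(math.log2(n)) <= math.log2(n)

# The same move on sqrt(n) would ALSO be valid (sqrt(n) <= n for n >= 1);
# it would just produce the looser, less useful bound O(n log n).
for n in [2, 10, 1000, 10**6, 10**9]:
    assert math.sqrt(n) <= n

print("sqrt(log n) <= log n and sqrt(n) <= n hold on all samples")
```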

Hint: Consider the leading terms of the expansion of sqrt(1 + x) for |x| < 1.
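Unpacking that hint (my reading of it, not the hint author's words): factor the dominant term out of the root and apply the binomial series `sqrt(1 + x) = 1 + x/2 - x^2/8 + ...` for `|x| < 1`:

```
sqrt(31n + 12n log n + 57)
  = sqrt(12n log n) * sqrt(1 + (31n + 57) / (12n log n))
  = sqrt(12n log n) * (1 + x/2 - x^2/8 + ...)   where x = (31n + 57) / (12n log n) -> 0
```

Since `x -> 0` as `n` grows, the correction factor tends to 1, so `t(n)` is in fact `Theta(sqrt(n log n))`, which in particular is `O(sqrt(n) log n)`.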

Big O notation is about how algorithm characteristics (clock time taken, memory use, processing time) grow with the size of the problem.

Constant factors get discarded because they don't affect *how* the value scales.

Minor terms also get discarded because they end up having next to no effect.
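To see why numerically (a sketch, my addition, reading `log` as base 2): divide the expression under the root by its dominant term; the ratio tends to 1, so the smaller terms really do become negligible:

```python
import math

# (31n + 12n log n + 57) / (12n log n) -> 1, so the minor terms and the
# constant factor change nothing about the growth class.
for n in [10, 10**3, 10**6, 10**9]:
    ratio = (31 * n + 12 * n * math.log2(n) + 57) / (12 * n * math.log2(n))
    print(n, round(ratio, 4))
# the printed ratios shrink toward 1 as n grows
```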

So your original expression

```
sqrt(31n + 12n log n + 57)
```

immediately simplifies to

```
sqrt(n log n)
```

Square roots distribute over multiplication and division, so this can be straightforwardly converted to:

```
sqrt(n) sqrt(log n)
```

Since logs convert multiplication into addition (this is why slide rules work), this becomes:

```
sqrt(n) log (n/2)
```

Again, we discard constants, because we're only interested in the class of behaviour:

```
sqrt(n) log n
```

And we have the answer.

**Update**

As has been correctly pointed out,

```
sqrt(n) sqrt(log n)
```

does not become

```
sqrt(n) log (n/2)
```

So the end of my derivation is wrong.
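The derivation can still be finished without that step by going back to the definition: exhibit a constant `C` with `sqrt(31n + 12n log n + 57) <= C * sqrt(n) * log n` for all `n >= 2`. A numerical sketch (my addition; it reads `log` as base 2 and uses `C = 9`, chosen from the value of the ratio at `n = 2`):

```python
import math

def t(n):
    # the original function, reading "log" as log base 2
    return math.sqrt(31 * n + 12 * n * math.log2(n) + 57)

C = 9  # one witness constant; the ratio actually shrinks toward 0
prev = float("inf")
for n in [2, 10, 100, 10**4, 10**6, 10**9]:
    ratio = t(n) / (math.sqrt(n) * math.log2(n))
    assert ratio <= C
    assert ratio <= prev  # the ratio is decreasing, so C = 9 covers all n >= 2
    prev = ratio
print("t(n) <= 9 * sqrt(n) * log n on all sampled n")
```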
