I am revising for an exam and found this problem online, and I was wondering how to go about solving it.

(With base 2 logs)

Prove that log(2n) is a member of O(log n).

I have given it a go but am not sure if I am right as no answer has been provided. Could you please help?

Here is my attempt:

log 2n - c log n ≤ 0

log 2 + log n - c log n ≤ 0

1 + (1 - c) log n ≤ 0   (using log 2 = 1, since the logs are base 2)

(I then divided through by log n.)

Example: with *n* = 8 and *c* = 10, the left-hand side is 1 + (1 - 10)·3 = -26, which is less than zero. Therefore the claim is **true**.
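The arithmetic in the example can be sanity-checked numerically; this is just an illustrative sketch (the helper name `inequality_holds` is mine, not part of the attempt):

```python
import math

def inequality_holds(n, c):
    # 1 + (1 - c) * log2(n) <= 0  is equivalent to  log2(2n) <= c * log2(n)
    return 1 + (1 - c) * math.log2(n) <= 0

print(inequality_holds(8, 10))  # prints True: 1 + (1 - 10) * 3 = -26 <= 0
```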

My questions are:

Am I doing this right?

Can my answer be simplified further?

`lg(2n) = lg(2) + lg(n).`

lg(2) is a constant (in fact, with base-2 logs, lg(2) = 1). See Wikipedia, Logarithmic identities.

The long answer is that

```
                 log(2n)         log(2) + log(n)         log(2)
lim n->infinity  -------  = lim  ---------------  = lim  ------ + 1  =  0 + 1  =  1
                 log(n)              log(n)              log(n)
```

Because the limit of the ratio of the two functions exists and is a finite, nonzero constant, the two functions have the same asymptotic growth rate; in particular, log(2n) is O(log n).
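You can also watch the ratio converge numerically; a small Python sketch (base 2 is used here to match the question, but the base does not affect the limit):

```python
import math

# The ratio log2(2n) / log2(n) approaches 1 as n grows.
for n in (10, 1_000, 1_000_000, 10**12):
    ratio = math.log2(2 * n) / math.log2(n)
    print(f"n = {n:>13}: ratio = {ratio:.4f}")
```

Each successive ratio is closer to 1, matching the limit computed above.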

In the same way, to prove that n^2 is not O(n), you would compute

```
lim n->infinity (n^2 / n) = lim n->infinity n = infinity
```

Showing that n is not O(log n) requires more work, because

```
lim n->infinity (n / log n)
```

needs to be handled somehow. The trick is l'Hôpital's rule: when both numerator and denominator tend to infinity (an indeterminate form), the limit of the ratio equals the limit of the ratio of the derivatives. The derivative of n is 1, and the derivative of log n is 1/n, after which

```
lim n->infinity (1 / (1 / n)) = lim n->infinity n = infinity
```
