I have to determine the time complexity (big O) of the following function:

```
void BET::makeEmpty(BinaryNode* &n)
{
    if (n != NULL)
    {
        makeEmpty(n->left);
        makeEmpty(n->right);
        delete n;
    }
    n = NULL;
}
```

I am familiar with time complexity for simple functions (for loops, nested loops, etc) but I am unsure how to determine the big O for a recursive function.

Thank you!

Well, this one is easier than you think: `makeEmpty` performs a constant (`O(1)`) amount of work per call (excluding the recursive calls, of course). It runs exactly once on each node in the tree, so its time complexity is `O(n)`, where `n` is the number of nodes in the tree.

I believe the **recurrence relation** of your algorithm should look like the following:

T(n) = T(n_L) + T(n_R) + c

where `n_L` and `n_R` are the sizes of the left and right subtrees, and `n_L + n_R = n - 1`.

Are you able to solve it?
