I need help finding the big-O of this algorithm. It is a divide-and-conquer search over a boolean array `a` of size n, looking for the first occurrence of false.

```
n = a.length;
i = 0;
while (a[i]) {
    i += n / 2;
    n = n / 2;
}
i -= n;
while (a[i])
    i++;
```

Hopefully, `i` stops at the first occurrence of false.

If you are searching through an array that contains just boolean true/false values, those values cannot in general be sorted in any way that would allow the binary search you are describing. If the array contained a sorted list of values, then you would be able to use the divide and conquer of binary search in O(log n). To find the first occurrence of a false, you need to check each value of the array, with the worst case being O(n).

```
int i = 0;
for (; i < a.length; i++) {
    if (a[i] == false) {
        break;
    }
}
return i;
```
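Wrapped in a runnable form (the class and method names here are my own, not from the question), the linear scan looks like this:

```java
public class FirstFalse {
    // Linear scan: returns the index of the first false,
    // or a.length if the array contains no false at all.
    static int firstFalse(boolean[] a) {
        int i = 0;
        for (; i < a.length; i++) {
            if (!a[i]) {
                break;
            }
        }
        return i;
    }

    public static void main(String[] args) {
        boolean[] a = {true, true, true, false, false};
        System.out.println(firstFalse(a)); // prints 3
    }
}
```

Note that returning `a.length` for an all-true array doubles as a safe "not found" signal, which the original code lacks.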

This first part:

```
n = a.length;
i = 0;
while (a[i]) {
    i += n / 2;
    n = n / 2;
}
```

executes in O(lg N) iterations: `i` takes the values n − n, n − n/2, n − n/4, ...

But it might have a problem. Let us suppose that N=63. Then:

```
i = 0  and n = 63, so n/2 = 31 (integer division truncates 31.5)
i = 31 and n = 31, so n/2 = 15
i = 46 and n = 15, so n/2 = 7
i = 53 and n = 7,  so n/2 = 3
i = 56 and n = 3,  so n/2 = 1
i = 57 and n = 1,  so n/2 = 0
```

And now, if `a[57]` is `true`, the cycle will never end, because adding n = 0 to the index leaves it unchanged.
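The trace above can be replayed mechanically. This sketch (names are mine) simulates only the index arithmetic of the first loop; it terminates because it is driven by `n` rather than by `a[i]`, which is exactly what the original loop cannot guarantee once n reaches 0:

```java
public class HalvingTrace {
    // Replays the index arithmetic of the first loop for a given n.
    // In the original code, reaching n == 0 while a[i] is still true
    // means i += 0 on every subsequent iteration: an infinite loop.
    static int finalIndex(int n) {
        int i = 0;
        while (n > 0) {
            i += n / 2;
            n = n / 2;
            System.out.println("i = " + i + ", n = " + n);
        }
        return i;
    }

    public static void main(String[] args) {
        // For n = 63 the index gets stuck at 57, matching the trace.
        System.out.println("stuck at i = " + finalIndex(63));
    }
}
```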

If you exit the cycle with some nonzero `n`, you are at `n/k` for some k, and start incrementing `i`.

```
i -= n;
while (a[i])
    i++;
```

Here you add `n/k` complexity, which is O(N) in the worst case and O(1) in the best; but in both cases you will access illegal memory as soon as `i` overshoots the array boundary, and probably coredump. You should do something like:

```
i -= n;
while ((i < a.length) && a[i])  // bound by the array length, not the halved n
    i++;
```

Otherwise, your algorithm might be anywhere between O(lg N) and O(N), but chances are that it will either never terminate, or terminate abnormally.
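For completeness, here is a sketch (my own code, not from the question) of a bounds-checked O(lg N) boundary search. It assumes the array really is partitioned, i.e. every true precedes every false; without that invariant, only the linear scan is correct:

```java
public class FirstFalseBinary {
    // Binary search for the first false in a partitioned boolean array
    // (all trues, then all falses). Returns a.length if there is no false.
    static int firstFalse(boolean[] a) {
        int lo = 0, hi = a.length; // the answer lies in [lo, hi]
        while (lo < hi) {
            int mid = lo + (hi - lo) / 2;
            if (a[mid]) {
                lo = mid + 1; // first false must be to the right of mid
            } else {
                hi = mid;     // a[mid] is false, so the answer is <= mid
            }
        }
        return lo;
    }

    public static void main(String[] args) {
        boolean[] a = {true, true, true, true, false, false};
        System.out.println(firstFalse(a)); // prints 4
    }
}
```

Because the interval `[lo, hi]` shrinks by half every iteration and never depends on `n/2` rounding to zero, this version cannot get stuck the way the original first loop can.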
