Subject: Big O Notation In Computer Science
Author: Alex_Raj
In response to: Definition -- small omega
Posted on: 12/14/2013 09:41:22 PM
In computer science, Big O notation is used to describe the efficiency or complexity of an algorithm, specifically the execution time required or the space used, as a function of the input size. The measurement is written as:
f(n) = O(g(n)), where n is the size of the problem (the input data).
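As a quick illustration of the definition (the functions f(n) = 2n + 3 and g(n) = n are my own example, not from the post above): f(n) = O(n) because 2n + 3 <= 5n for all n >= 1, so C = 5 and n0 = 1 witness the bound. A minimal Java check of that inequality:

```java
public class BigODemo {
    // Checks that f(n) = 2n + 3 stays below C*g(n) = 5n for all 1 <= n <= upTo.
    public static boolean boundHolds(int upTo) {
        for (int n = 1; n <= upTo; n++) {
            if (2 * n + 3 > 5 * n) {
                return false; // bound violated
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(boundHolds(1_000_000)); // prints true
    }
}
```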
Algorithm A: Count the number of data items in the input

public static int count(int[] a) {
    return a.length;  // a single operation, regardless of the array size
}
Algorithm B: Count the number of data items in the input

public static int count(int[] a) {
    int count = 0;
    for (int i = 0; i < a.length; i++) {
        count++;  // one increment per element
    }
    return count;
}
As we can see, the execution time of algorithm B is proportional to the size of the array: the larger the array, the more time consumed. The execution time of algorithm A, by contrast, is independent of the array size and remains constant. Therefore, we can conclude:
Algorithm A is O(1)
Algorithm B is O(n)
Algorithm A is more efficient than B
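Putting the two algorithms side by side (the class and method names below are my own, added so both versions can live in one file), a small sketch that runs both counting strategies on the same array and confirms they agree:

```java
public class CountCompare {
    // Algorithm A: O(1) -- reads the array's stored length field directly.
    public static int countA(int[] a) {
        return a.length;
    }

    // Algorithm B: O(n) -- touches every element once.
    public static int countB(int[] a) {
        int count = 0;
        for (int i = 0; i < a.length; i++) {
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int[] data = new int[100_000];
        System.out.println(countA(data) == countB(data)); // prints true
    }
}
```

Both return the same answer; Big O only describes how the cost of getting that answer scales with the input size.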
> On 12/14/2013 09:36:09 PM Alex_Raj wrote:
> Function f(x) dominates function g(x) asymptotically, written f(x) = omega(g(x)), namely,
> for every positive real number C, there exists a real number x0 such that
> |f(x)| > C|g(x)| for all x > x0.
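To make the quoted small-omega definition concrete (the example functions are my own): f(x) = x^2 is omega(x), since for any C we can take x0 = C, and then x^2 > Cx whenever x > x0. A quick Java check for a couple of values of C:

```java
public class OmegaDemo {
    // For f(x) = x^2 and g(x) = x, verifies x^2 > c*x for every integer x in (c, limit].
    public static boolean dominates(double c, int limit) {
        for (int x = (int) c + 1; x <= limit; x++) {
            if ((double) x * x <= c * x) {
                return false; // domination fails for this x
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(dominates(10.0, 10_000) && dominates(1000.0, 10_000)); // prints true
    }
}
```

No matter how large C is chosen, x^2 eventually pulls ahead of Cx; that is exactly what small omega captures.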