The most common metric for estimating the time complexity of an algorithm is Big O notation. It removes all constant factors so that the running time can be estimated in relation to N as N approaches infinity.
In general you can think of it like this:
statement;
It is constant: the statement runs a fixed number of times, so its execution time does not change in relation to N.
for (i=1; i<=N; i++)
statement;
It is linear. The execution time of the loop is directly proportional to N: when N doubles, so does the execution time.
for (i=1; i<=N; i++) {
for (j=1; j<=N; j++) {
statement;
}
}
It is quadratic. The execution time of the two nested loops is proportional to the square of N: when N doubles, the execution time increases by a factor of four.
while ( low <= high ) {
mid = ( low + high ) / 2;
if ( target < list[mid] )
high = mid - 1;
else if ( target > list[mid] )
low = mid + 1;
else break;
}
It is logarithmic. The execution time of the algorithm is proportional to the number of times N can be divided by 2. This is because the algorithm splits the working area in half with each loop.
void quicksort ( int lst[], int left, int right )
{
if ( left >= right )
return; /* base case: zero or one element */
int pivot = partition ( lst, left, right );
quicksort ( lst, left, pivot - 1 );
quicksort ( lst, pivot + 1, right );
}
It is N * log ( N ). Each level of recursion halves the working area (logarithmic), but the partitioning at each level touches all N elements (linear), so the algorithm is a combination of linear and logarithmic.
Doing something with every item in one dimension is linear, doing something with every item in two dimensions is quadratic, and splitting the working area in half each step is logarithmic. There are other Big O measures such as cubic, exponential, and square root, but they are not as common. Big O notation is written as O ( f ( N ) ), where f ( N ) is the measure. The quicksort algorithm would be written as O ( N * log ( N ) ).
Note that none of this has taken best, average, and worst cases into account; each would have its own Big O notation. Also note that this is a very simplified explanation. Big O is the most common notation, but it is more complex than shown here. There are also other notations such as big omega, big theta, and little o. You likely won't come across them outside of an algorithm analysis course.
Now, let's take an example to see the step-by-step computation of complexity. I'm choosing selection sort this time; you can try any other algorithm yourself.
The array on which selection sort will be performed is as follows,
[5, 6, 2, 1, 3, 4]
First, let's trace the iterations of the sort.
The sorted sublist at each step is shown in brackets.
5 6 2 1 3 4
[1] 6 2 5 3 4
[1 2] 6 5 3 4
[1 2 3] 5 6 4
[1 2 3 4] 6 5
[1 2 3 4 5 6]
Here is the pseudo code for selection sort, from which we can count the number of executions of each line.
for (i=0; i ≤ n-2; i++)
{
min = i;
for (j=i+1; j ≤ n-1; j++)
{
if (a[j] < a[min])
{
min = j;
}
}
swap (a[i], a[min]);
}
Now we count the number of executions of each statement. The outer loop runs n-1 times; for outer index i the inner comparison runs n-1-i times, so the total number of comparisons is (n-1) + (n-2) + ... + 1 = n(n-1)/2. Dropping constants and lower-order terms gives O(n^2).
Now let's try one more algorithm. I'll choose a decimal-to-binary conversion algorithm:
find(x) {
if (x == 1)
return x;
else
return find(x/2);
}
Let the number of executions be as follows:
T(1) = d
T(n) = T(n/2) + C
T(n/2) = T(n/4) + C
...
T(2) = T(1) + C
Substituting back, after k = log2(n) halving steps:
T(n) = T(1) + k.C = d + C.log2(n)
So the answer is O(log n), and you have seen how it can be worked out.
Here are some well-known algorithms and their corresponding best, worst, and average cases. Try to derive them using the two approaches and rules shown above.
Algorithm            Best          Worst         Average
Selection Sort       O(n^2)        O(n^2)        O(n^2)
Insertion Sort       O(n)          O(n^2)        O(n^2)
Bubble Sort          O(n)          O(n^2)        O(n^2)
Quick Sort           O(n log n)    O(n^2)        O(n log n)
Merge Sort           O(n log n)    O(n log n)    O(n log n)
Heap Sort            O(n log n)    O(n log n)    O(n log n)
Binary Search Tree   O(1)          O(n)          O(log n)