Big O notation is a theoretical approach to analyzing an algorithm's performance. It estimates how fast an algorithm's execution time increases as the input size grows, so you can compare two algorithms by examining their growth rates.
An input that results in the shortest execution time is called the best-case input, and an input that results in the longest execution time is called the worst-case input. Best-case and worst-case analyses are not representative, but worst-case analysis is very useful: it shows that the algorithm can never be slower than the worst case.
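For example, with linear search (a minimal Java sketch follows), an array whose first element equals the key is a best-case input, while an array that does not contain the key at all is a worst-case input.

```java
public class LinearSearchDemo {
    // Returns the index of key in list, or -1 if key is not present.
    public static int linearSearch(int[] list, int key) {
        for (int i = 0; i < list.length; i++) {
            if (list[i] == key)
                return i; // best case: key is at index 0, one comparison
        }
        return -1; // worst case: key is absent, list.length comparisons
    }

    public static void main(String[] args) {
        int[] list = {4, 2, 9, 7, 5};
        System.out.println(linearSearch(list, 4)); // best-case input: prints 0
        System.out.println(linearSearch(list, 8)); // worst-case input: prints -1
    }
}
```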
An average-case analysis attempts to determine the average amount of time over all possible inputs of the same size. Average-case analysis is ideal, but difficult to perform, because for many problems it is hard to determine the relative probabilities and distributions of the various input instances.
If the execution time is not related to the input size, the algorithm is said to take constant time, denoted O(1).
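A minimal sketch of a constant-time operation: retrieving an element at a given index of an array takes the same time regardless of how many elements the array holds.

```java
public class ConstantTimeDemo {
    // Accessing an array element by index takes constant time,
    // no matter how large the array is.
    public static int getElement(int[] list, int index) {
        return list[index]; // O(1): one operation regardless of list.length
    }

    public static void main(String[] args) {
        int[] small = {1, 2, 3};
        int[] large = new int[1_000_000];
        System.out.println(getElement(small, 2));   // same cost...
        System.out.println(getElement(large, 999)); // ...as this call
    }
}
```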
Linear search takes O(n) time. An algorithm with the O(n) time complexity is called a linear algorithm. Binary search takes O(log n) time. An algorithm with the O(log n) time complexity is called a logarithmic algorithm.
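A minimal binary-search sketch: each iteration halves the portion of the sorted array still under consideration, which is why the running time grows logarithmically.

```java
public class BinarySearchDemo {
    // Searches a sorted array; each iteration discards half of the
    // remaining elements, so at most about log2(n) iterations run.
    public static int binarySearch(int[] list, int key) {
        int low = 0;
        int high = list.length - 1;
        while (low <= high) {
            int mid = (low + high) / 2;
            if (key < list[mid])
                high = mid - 1;
            else if (key == list[mid])
                return mid;
            else
                low = mid + 1;
        }
        return -1; // key is not in the array
    }

    public static void main(String[] args) {
        int[] list = {2, 4, 7, 10, 11, 45, 50, 59, 60, 66, 69, 70, 79};
        System.out.println(binarySearch(list, 11)); // 4
        System.out.println(binarySearch(list, 5));  // -1
    }
}
```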
The worst-case time complexity for selection sort, insertion sort, bubble sort, and quick sort is O(n²). An algorithm with the O(n²) time complexity is called a quadratic algorithm.
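A selection sort sketch makes the quadratic growth visible: the nested loops perform roughly n²/2 comparisons.

```java
public class SelectionSortDemo {
    // The outer loop runs n-1 times and the inner loop scans the unsorted
    // tail, giving (n-1) + (n-2) + ... + 1, which is about n^2/2 comparisons.
    public static void selectionSort(int[] list) {
        for (int i = 0; i < list.length - 1; i++) {
            int minIndex = i;
            for (int j = i + 1; j < list.length; j++) {
                if (list[j] < list[minIndex])
                    minIndex = j;
            }
            int temp = list[i];       // swap the smallest remaining
            list[i] = list[minIndex]; // element into position i
            list[minIndex] = temp;
        }
    }

    public static void main(String[] args) {
        int[] list = {5, 2, 9, 1, 7};
        selectionSort(list);
        System.out.println(java.util.Arrays.toString(list)); // [1, 2, 5, 7, 9]
    }
}
```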
The average-case and worst-case time complexity for merge sort and heap sort is O(n log n). The average time for quick sort is also O(n log n). An algorithm with the O(n log n) time complexity is called a log-linear algorithm.
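A minimal merge-sort sketch: the array is split about log n times, and each level of merging touches all n elements, which gives O(n log n).

```java
import java.util.Arrays;

public class MergeSortDemo {
    // Recursively splits the array in half (about log2(n) levels) and
    // merges the sorted halves; each level does O(n) work in merge().
    public static void mergeSort(int[] list) {
        if (list.length <= 1) return;
        int[] firstHalf  = Arrays.copyOfRange(list, 0, list.length / 2);
        int[] secondHalf = Arrays.copyOfRange(list, list.length / 2, list.length);
        mergeSort(firstHalf);
        mergeSort(secondHalf);
        merge(firstHalf, secondHalf, list);
    }

    // Merges two sorted arrays into result in linear time.
    private static void merge(int[] a, int[] b, int[] result) {
        int i = 0, j = 0, k = 0;
        while (i < a.length && j < b.length)
            result[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
        while (i < a.length) result[k++] = a[i++];
        while (j < b.length) result[k++] = b[j++];
    }

    public static void main(String[] args) {
        int[] list = {8, 3, 5, 1, 9, 2};
        mergeSort(list);
        System.out.println(Arrays.toString(list)); // [1, 2, 3, 5, 8, 9]
    }
}
```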
A variation of merge sort can be applied to sort large amounts of data stored in external files.
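A minimal sketch of the idea, assuming the data has already been split into sorted runs stored in two hypothetical temporary files (run1.txt and run2.txt, one integer per line): the runs are merged sequentially into a single sorted output file without loading either run entirely into memory.

```java
import java.io.File;
import java.io.IOException;
import java.io.PrintWriter;
import java.util.Scanner;

public class ExternalMergeDemo {
    // Merges two sorted runs stored on disk into one sorted output file,
    // reading each run sequentially so memory use stays constant.
    public static void mergeRuns(File run1, File run2, File output) throws IOException {
        try (Scanner in1 = new Scanner(run1);
             Scanner in2 = new Scanner(run2);
             PrintWriter out = new PrintWriter(output)) {
            Integer a = in1.hasNextInt() ? in1.nextInt() : null;
            Integer b = in2.hasNextInt() ? in2.nextInt() : null;
            while (a != null && b != null) {
                if (a <= b) {
                    out.println(a);
                    a = in1.hasNextInt() ? in1.nextInt() : null;
                } else {
                    out.println(b);
                    b = in2.hasNextInt() ? in2.nextInt() : null;
                }
            }
            while (a != null) { out.println(a); a = in1.hasNextInt() ? in1.nextInt() : null; }
            while (b != null) { out.println(b); b = in2.hasNextInt() ? in2.nextInt() : null; }
        }
    }

    public static void main(String[] args) throws IOException {
        // run1.txt and run2.txt are hypothetical, already-sorted input files.
        mergeRuns(new File("run1.txt"), new File("run2.txt"), new File("sorted.txt"));
    }
}
```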