Data Structures and Complexity Analysis

Developers and programmers often wonder about the two most important aspects of efficient code:

  • How much time will the algorithm implemented in the code consume?
  • How much space (memory or other resources) will the algorithm implemented in the code consume?

Well, the answers are pretty straightforward if we dive into computational complexity analysis. This topic is often neglected by developers early on but becomes one of the most important over time.

Computational complexity analysis not only provides a broad overview of how an algorithm will consume the available time and resources, but also gives us a view of how the application will scale in the future. Let’s look at the answer below.

BIG O NOTATION

The heading above not only gives us a new topic to look into but is also the answer to the two questions we discussed at the beginning of the blog.

Big O notation is the standard way of describing the time and space consumption of an algorithm. It gives an upper bound on the complexity, taking the worst case into consideration, and helps quantify how the algorithm’s performance changes as the input size grows.

Big O notation only cares about the worst case, thus predicting the approximate time and space complexity of an algorithm. The table below shows the common complexity classes from smallest to largest, where n is the size of the input.
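
For readers who want the precise statement behind “upper bound”, the standard textbook definition of Big O (stated here only for reference) is:

f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 \ge 1 \ \text{such that}\ f(n) \le c \cdot g(n) \ \text{for all}\ n \ge n_0

In words: beyond some input size n_0, f(n) never exceeds a constant multiple of g(n).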

Complexity class        Big O
Constant time           O(1)
Logarithmic time        O(log(n))
Linear time             O(n)
Linearithmic time       O(n log(n))
Quadratic time          O(n^2)
Cubic time              O(n^3)
Exponential time        O(z^n), z > 1
Factorial time          O(n!)

Big O worst-case complexities, from smallest (fastest) to largest (slowest)

Let’s look into some Big O properties. Please note – the points below consider ‘n’ as the input size and ‘c’ as a constant.

  • O(n + c) = O(n)
  • O(cn) = O(n), c > 0

Consider an example. Assume a function:

f(n) = 80n^2 + 17log(n^3) + 12n^3 – 89

The function above looks a bit complex, but its complexity works out to O(f(n)) = O(n^3).

The primary rule of complexity analysis is to keep only the dominant term: we find the component of the function that grows fastest as n increases, which is the third one (12n^3), and drop the constant factors and lower-order terms. Thus the complexity is O(n^3).
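
To tie this back to the formal definition above, here is one possible (by no means unique) choice of constants. For n \ge 1 we have \log(n^3) = 3\log(n) \le 3n \le 3n^3, so

80n^2 + 17\log(n^3) + 12n^3 - 89 \;\le\; 80n^3 + 51n^3 + 12n^3 \;=\; 143\,n^3,

and the definition is satisfied with c = 143 and n_0 = 1, confirming f(n) = O(n^3).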

Let’s consider a few other examples.

  • For normal operations like addition, subtraction, multiplication etc. the complexity is O(1).
  • For a single loop the complexity is O(n), where n is the size of the input.
  • For nested loops the complexity is O(n^k), where n is the size of the input and k is the number of nested loops.
  • For the binary search algorithm the complexity is O(log(n)).
  • For the linear search algorithm the complexity is O(n).
  • For finding all subsets of a set the complexity is O(2^n) (see the sketch after this list).
  • For finding all permutations of a string the complexity is O(n!).
  • For sorting an array using merge sort the complexity is O(n log(n)).
  • For iterating over a matrix of dimension n*m the complexity is O(nm).
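
As a quick illustration of the O(2^n) entry above, here is a minimal C++ sketch that enumerates every subset of a small set using bit masks; the function name printSubsets and the sample set {1, 2, 3} are placeholders for this example, not part of any particular library.

#include <iostream>
#include <vector>
using namespace std;

// Prints every subset of the given items.
// There are 2^n bit masks and each mask is scanned in O(n),
// so the total work is O(n * 2^n) -- the exponential factor dominates.
void printSubsets(const vector<int>& items)
{
    int n = (int)items.size();
    for (int mask = 0; mask < (1 << n); mask++)   // 2^n masks
    {
        cout << "{ ";
        for (int i = 0; i < n; i++)               // n bits per mask
        {
            if (mask & (1 << i))
                cout << items[i] << " ";
        }
        cout << "}" << endl;
    }
}

int main()
{
    printSubsets({1, 2, 3});   // prints all 8 subsets of {1, 2, 3}
    return 0;
}

The nested structure mirrors the table above: an exponential number of masks times a linear scan per mask, which is why exhaustive subset enumeration becomes impractical so quickly.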

Some live examples:

For i = 1 to n
    For j = 1 to n
        print i*j
    next
next

Complexity is O(n^2)

i = 0
while i < n
    i++

Complexity is O(n)

i = 0
while i < n
    i += 3

Complexity is O(n/3) ~ O(n)

// Binary search over the index range [lo, hi)
while (lo < hi)
{
    int mid = (lo + hi) / 2;   // halve the remaining range each iteration
    if (check(mid))
    {
        hi = mid;              // keep mid as a candidate
    }
    else
    {
        lo = mid + 1;          // discard mid and everything before it
    }
}

Complexity is O(log(n))
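
Why logarithmic? Each iteration halves the remaining range [lo, hi), so after k iterations roughly n / 2^k candidates remain, and the loop stops once that count drops to 1:

\frac{n}{2^k} \le 1 \;\Longrightarrow\; 2^k \ge n \;\Longrightarrow\; k \ge \log_2(n)

so about \log_2(n) iterations suffice, i.e. O(log(n)).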

Happy coding 🙂
