Complete Guide On Complexity Analysis – Data Structure and Algorithms Tutorial

Complexity analysis is a technique to characterise the time taken by an algorithm with respect to input size, independent of the machine, language, and compiler. It is used to evaluate how the execution time of different algorithms varies as the input grows.

What is the need for Complexity Analysis?

  • Complexity analysis determines the amount of time and space resources required to execute an algorithm.
  • It is used for comparing different algorithms on different input sizes.
  • Complexity helps to determine the difficulty of a problem, which is often measured by how much time and space (memory) it takes to solve.

Things to learn about Complexity Analysis

  • What is Complexity Analysis?
  • What is the need for Complexity Analysis?
  • Asymptotic Notations
  • How to measure complexity?
    • 1. Time Complexity
    • 2. Space Complexity
    • 3. Auxiliary Space
  • How does Complexity affect any algorithm?
    • How to optimize the time and space complexity of an Algorithm?
  • Different types of Complexity exist in the program:
    • 1. Constant Complexity
    • 2. Logarithmic Complexity
    • 3. Linear Complexity
    • 4. Quadratic Complexity
    • 5. Factorial Complexity
    • 6. Exponential Complexity
  • Worst Case time complexity of different data structures for different operations
  • Complexity Analysis Of Popular Algorithms
  • Practice some questions on Complexity Analysis
  • Practice with a Quiz
  • Conclusion

Asymptotic Notations in Complexity Analysis:

1. Big O Notation

Big-O notation represents the upper bound of the running time of an algorithm, so it gives the worst-case complexity. Using Big-O notation, we asymptotically bound the growth of the running time from above, to within a constant factor, for all sufficiently large inputs. It is the most commonly used model for quantifying algorithm performance.
 

Graphical Representation

Mathematical Representation of Big-O Notation:

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ f(n) ≤ cg(n) for all n ≥ n0 }
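For example, if f(n) = 3n + 2 then f(n) = O(n): taking c = 4 and n0 = 2 gives 0 ≤ 3n + 2 ≤ 4n for all n ≥ 2.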

2. Omega Notation

Omega notation represents the lower bound of the running time of an algorithm. Thus, it provides the best-case complexity of an algorithm: the minimum amount of time the algorithm requires for an input of size n.

Graphical Representation

Mathematical Representation of Omega Notation:

Ω(g(n)) = { f(n): there exist positive constants c and n0 such that 0 ≤ cg(n) ≤ f(n) for all n ≥ n0 }
Note: Ω(g(n)) is a set of functions.
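For example, if f(n) = 3n + 2 then f(n) = Ω(n): taking c = 3 and n0 = 1 gives 0 ≤ 3n ≤ 3n + 2 for all n ≥ 1.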

3. Theta Notation

Theta notation encloses the function from above and below. Since it represents both the upper and the lower bound of the running time of an algorithm, it describes the exact asymptotic growth rate and is commonly used for analyzing the average-case complexity of an algorithm. For all sufficiently large inputs, the running time is bounded both above and below by constant multiples of g(n).

Graphical Representation

Mathematical Representation:

Θ (g(n)) = {f(n): there exist positive constants c1, c2 and n0 such that 0 ≤ c1 * g(n) ≤ f(n) ≤ c2 * g(n) for all n ≥ n0}
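For example, if f(n) = 3n + 2 then f(n) = Θ(n): taking c1 = 3, c2 = 4 and n0 = 2 gives 0 ≤ 3n ≤ 3n + 2 ≤ 4n for all n ≥ 2.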

4. Little ο asymptotic notation

Big-O is used as an upper bound on the growth of an algorithm's effort (this effort is described by the function f(n)); as written above, the bound may or may not be tight. Little-o notation (o()) is used to describe an upper bound that cannot be tight, i.e. a strict upper bound.
 

Graphical Representation

Mathematical Representation:

f(n) = o(g(n)) means that lim f(n)/g(n) = 0 as n → ∞

5. Little ω asymptotic notation

Let f(n) and g(n) be functions that map positive integers to positive real numbers. We say that f(n) is ω(g(n)) (or f(n) ∈ ω(g(n))) if for any real constant c > 0, there exists an integer constant n0 ≥ 1 such that f(n) > c * g(n) ≥ 0 for every integer n ≥ n0. 

Mathematical Representation:

if f(n) ∈ ω(g(n)), then lim f(n)/g(n) = ∞ as n → ∞

Note: In most algorithm analysis we use Big-O notation, as it describes the worst-case complexity.

How to measure complexity?

The complexity of an algorithm can be measured in three ways:

1. Time Complexity

The time complexity of an algorithm is the amount of time it takes to run, expressed as a function of the length of the input. Note that this is a function of the input size, not the actual execution time on the machine the algorithm happens to run on.

How is Time complexity computed?

To estimate the time complexity, we need to consider the cost of each fundamental instruction and the number of times the instruction is executed.

  • Statements with basic operations such as comparisons, return statements, assignments, and reading a variable can each be assumed to take constant time, O(1).
Statement 1: int a = 5;               // reading a variable
Statement 2: if (a == 5) return true; // return statement
Statement 3: int x = 4 > 5 ? 1 : 0;   // comparison
Statement 4: bool flag = true;        // assignment

The overall time complexity is then the sum of the times of the individual statements:

total time = time(statement1) + time(statement2) + ... + time(statementN)

Assuming that n is the size of the input, let’s use T(n) to represent the overall time and t to represent the amount of time that a statement or collection of statements takes to execute.

T(n) = t(statement1) + t(statement2) + ... + t(statementN); 

Overall, T(n)= O(1), which means constant complexity.

  • For any loop, we find the runtime of the block inside it and multiply it by the number of times the loop repeats.

for (int i = 0; i < n; i++) {
    cout << "GeeksForGeeks" << endl;
}

For the above example, the loop executes n times and prints "GeeksForGeeks" n times, so the time taken to run this program is:

T(n) = n * t(cout statement)
     = n * O(1)
     = O(n), linear complexity.
  • For 2D arrays we use nested loops, i.e. a loop inside a loop.

for (int i = 0; i < n; i++) {
    for (int j = 0; j < m; j++) {
        cout << "GeeksForGeeks" << endl;
    }
}

For the above example, the cout statement executes n*m times, printing "GeeksForGeeks" n*m times, so the time taken to run this program is:

T(n) = n * m * t(cout statement)
     = n * m * O(1)
     = O(n*m), which is quadratic when n = m.

2. Space Complexity:

The amount of memory required by an algorithm to solve a given problem is called its space complexity. Solving a problem on a computer requires memory to hold temporary data or the final result while the program is executing.

How is space complexity computed?

The space complexity of an algorithm is the total space it uses, expressed with respect to the input size. It includes both auxiliary space and the space used by the input.
Space complexity is a parallel concept to time complexity. If we need to create an array of size n, this requires O(n) space; a two-dimensional array of size n*n requires O(n^2) space.
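
As a minimal sketch of these two cases (the function names below are illustrative, not from the original article):

#include <vector>

// Allocates a one-dimensional array of n ints: O(n) space
std::vector<int> buildRow(int n) {
    return std::vector<int>(n, 0);
}

// Allocates an n x n grid of ints: O(n^2) space
std::vector<std::vector<int>> buildGrid(int n) {
    return std::vector<std::vector<int>>(n, std::vector<int>(n, 0));
}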

In recursive calls, the stack space also counts.

Example:

int add(int n) {
    if (n <= 0) {
        return 0;
    }
    return n + add(n - 1);
}

Here each call adds a level to the stack:

1.  add(4)
2.    -> add(3)
3.      -> add(2)
4.        -> add(1)
5.          -> add(0)

Each of these calls is added to the call stack and takes up actual memory, so the recursion takes O(n) space.

However, just because you have n calls total doesn’t mean it takes O(n) space.

Look at the below function :

int addSequence (int n){
    int sum = 0;
    for (int i = 0; i < n; i++){
        sum += pairSum(i, i+1);
    }
    return sum;
}

int pairSum(int x, int y){
    return x + y;
}

There will be roughly O(n) calls to pairSum. However, those calls do not exist simultaneously on the call stack, so you only need O(1) space.

3. Auxiliary Space:

The temporary space needed by an algorithm, such as temporary arrays and pointers, is referred to as auxiliary space.
Auxiliary space is the preferable measure when comparing, for example, sorting algorithms.
For instance, sorting algorithms operate on an input array of size n, so their total space is O(n), but an in-place sort such as insertion sort uses only O(1) auxiliary space.
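
As an illustration (a minimal sketch, not from the original article), the in-place reversal below uses O(1) auxiliary space, while the copy-based version uses O(n) auxiliary space for its temporary vector:

#include <utility>
#include <vector>

// In-place reversal: O(1) auxiliary space
void reverseInPlace(std::vector<int>& a) {
    int l = 0, r = (int)a.size() - 1;
    while (l < r) {
        std::swap(a[l++], a[r--]);
    }
}

// Copy-based reversal: O(n) auxiliary space for the temporary vector
std::vector<int> reverseCopy(const std::vector<int>& a) {
    return std::vector<int>(a.rbegin(), a.rend());
}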

How does Complexity affect any algorithm?

The time complexity of an algorithm quantifies the amount of time it takes to run as a function of the length of the input, while the space complexity quantifies the amount of space or memory it takes as a function of the length of the input.

How to optimize the time and space complexity of an Algorithm?

Optimization means improving on the brute-force approach to a problem. The goal is to derive the best possible solution so that it takes less time and space. We can optimize a program either by limiting the search space at each step or by using less memory from the start.

We can optimize a solution for time, for space, or for both. To optimize a program (a small sketch follows the list),

  1. we can reduce the time taken to run the program at the cost of more space;
  2. we can reduce the memory usage of the program at the cost of a longer total run time; or
  3. we can reduce both time and space complexity by deploying a more suitable algorithm.
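
A minimal sketch of trade-off (1), illustrative and not from the original article: checking whether any pair in an array sums to a target value can be done in O(n^2) time with O(1) auxiliary space, or in O(n) average time by spending O(n) auxiliary space on a hash set.

#include <unordered_set>
#include <vector>

// Brute force: O(n^2) time, O(1) auxiliary space
bool hasPairBrute(const std::vector<int>& a, int target) {
    for (size_t i = 0; i < a.size(); i++)
        for (size_t j = i + 1; j < a.size(); j++)
            if (a[i] + a[j] == target)
                return true;
    return false;
}

// Hash set: O(n) average time, O(n) auxiliary space
bool hasPairHashed(const std::vector<int>& a, int target) {
    std::unordered_set<int> seen;
    for (int x : a) {
        if (seen.count(target - x))
            return true;
        seen.insert(x);
    }
    return false;
}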

Different types of Complexity exist in the program:

1. Constant Complexity

If a function or method of the program takes an amount of time that does not depend on the input size, it is considered to have constant complexity, O(1).

Example: The program below takes a constant amount of time.

C




// C program for the above approach
#include <stdio.h>
 
// Function to check if a
// number is even or odd
void checkEvenOdd(int N)
{
    // Find remainder
    int r = N % 2;
 
    // Condition for even
    if (r == 0) {
        printf("Even");
    }
 
    // Otherwise
    else {
        printf("Odd");
    }
}
 
// Driver Code
int main()
{
    // Given number N
    int N = 101;
 
    // Function Call
    checkEvenOdd(N);
 
    return 0;
}


Java




// Java program for the above approach
 
public class GFG {
    // Function to check if a number is even or odd
    public static void checkEvenOdd(int N)
    {
        // Find remainder
        int r = N % 2;
 
        // Condition for even
        if (r == 0) {
            System.out.println("Even");
        }
        // Otherwise
        else {
            System.out.println("Odd");
        }
    }
 
    // Driver code
    public static void main(String[] args)
    {
        // Given number N
        int N = 101;
 
        // Function call
        checkEvenOdd(N);
    }
}


C#




// C# program for the above approach
 
using System;
 
public class GFG {
 
    // Function to check if a number is even or odd
    static void CheckEvenOdd(int N)
    {
        // Find remainder
        int r = N % 2;
 
        // Condition for even
        if (r == 0) {
            Console.WriteLine("Even");
        }
        // Otherwise
        else {
            Console.WriteLine("Odd");
        }
    }
 
    static public void Main()
    {
 
        // Code
        // Given number N
        int N = 101;
 
        // Function call
        CheckEvenOdd(N);
    }
}


Python3




# Python3 program for the above approach
 
# Function to check if a
# number is even or odd
def checkEvenOdd(N):
  # Find remainder
  r = N % 2
  # Condition for even
  if (r == 0):
      print("Even")
  # Otherwise
  else:
      print("Odd")
 
# Driver Code
if __name__ == '__main__':
  # Given number N
  N = 101
  # Function Call
  checkEvenOdd(N)


Output

Odd

Constant Complexity Graph

2. Logarithmic Complexity:

It imposes a complexity of O(log N): the algorithm executes on the order of log(N) steps. When performing operations on N elements, the logarithm is usually taken with base 2, because each step typically halves the remaining input.

Example: The program below has logarithmic complexity.

C++




// C++ program to implement recursive Binary Search
#include <bits/stdc++.h>
using namespace std;
 
// A recursive binary search function. It returns
// the index of x in the given array arr[l..r] if x
// is present, otherwise -1
int binarySearch(int arr[], int l, int r, int x)
{
    if (r >= l) {
        int mid = l + (r - l) / 2;
 
        // If the element is present at the middle
        // itself
        if (arr[mid] == x)
            return mid;
 
        // If element is smaller than mid, then
        // it can only be present in left subarray
        if (arr[mid] > x)
            return binarySearch(arr, l, mid - 1, x);
 
        // Else the element can only be present
        // in right subarray
        return binarySearch(arr, mid + 1, r, x);
    }
 
    // We reach here when element is not
    // present in array
    return -1;
}
 
int main(void)
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int n = sizeof(arr) / sizeof(arr[0]);
    int result = binarySearch(arr, 0, n - 1, x);
    (result == -1)
        ? cout << "Element is not present in array"
        : cout << "Element is present at index " << result;
    return 0;
}


Output

Element is present at index 3

Logarithmic Complexity Graph

3. Linear Complexity:

It imposes a complexity of O(N): the algorithm takes a number of steps proportional to the total number of elements, N, to perform an operation on the N elements.

Example: The program below has linear complexity.

C++




// C++ code to linearly search x in arr[]. If x
// is present then return its location, otherwise
// return -1
 
#include <iostream>
using namespace std;
 
int search(int arr[], int N, int x)
{
    int i;
    for (i = 0; i < N; i++)
        if (arr[i] == x)
            return i;
    return -1;
}
 
// Driver's code
int main(void)
{
    int arr[] = { 2, 3, 4, 10, 40 };
    int x = 10;
    int N = sizeof(arr) / sizeof(arr[0]);
 
    // Function call
    int result = search(arr, N, x);
    (result == -1)
        ? cout << "Element is not present in array"
        : cout << "Element is present at index " << result;
    return 0;
}


Java




// Java code to linearly search x in arr[]. If x
// is present then return its location, otherwise
// return -1
class GFG {
    static int search(int[] arr, int N, int x)
    {
        for (int i = 0; i < N; i++) {
            if (arr[i] == x) {
                return i;
            }
        }
        return -1;
    }
 
    // Driver's code
    public static void main(String[] args)
    {
        int[] arr = { 2, 3, 4, 10, 40 };
        int x = 10;
        int N = arr.length;
 
        // Function call
        int result = search(arr, N, x);
        if (result == -1) {
            System.out.println(
                "Element is not present in array");
        }
        else {
            System.out.println(
                "Element is present at index " + result);
        }
    }
}
// This code is contributed by prasad264


Output

Element is present at index 3

Linear Complexity Graph

4. Quadratic Complexity: 

It imposes a complexity of O(N^2). For an input of size N, it performs on the order of N^2 operations on the N elements to solve the given problem.

Example: The program below has quadratic complexity.

C++




// C++ program for the above approach
#include <bits/stdc++.h>
 
using namespace std;
 
// Function to find and print pair
bool chkPair(int A[], int size, int x)
{
    for (int i = 0; i < (size - 1); i++) {
        for (int j = (i + 1); j < size; j++) {
            if (A[i] + A[j] == x) {
                return 1;
            }
        }
    }
 
    return 0;
}
 
// Driver code
int main()
{
    int A[] = { 0, -1, 2, -3, 1 };
    int x = -2;
    int size = sizeof(A) / sizeof(A[0]);
 
    if (chkPair(A, size, x)) {
        cout << "Yes" << endl;
    }
    else {
        cout << "No" << x << endl;
    }
 
    return 0;
}
 
// This code is contributed by Samim Hossain Mondal.


Output

Yes

Quadratic Complexity Graph

5. Factorial Complexity: 

It imposes a complexity of O(N!). For an input of size N, it executes on the order of N! steps on the N elements to solve the given problem.

Example: The program below has factorial complexity.

C++




// C++ program to print all
// permutations with duplicates allowed
#include <bits/stdc++.h>
using namespace std;
 
// Function to print permutations of string
// This function takes three parameters:
// 1. String
// 2. Starting index of the string
// 3. Ending index of the string.
void permute(string& a, int l, int r)
{
    // Base case
    if (l == r)
        cout << a << endl;
    else {
        // Permutations made
        for (int i = l; i <= r; i++) {
 
            // Swapping done
            swap(a[l], a[i]);
 
            // Recursion called
            permute(a, l + 1, r);
 
            // backtrack
            swap(a[l], a[i]);
        }
    }
}
 
// Driver Code
int main()
{
    string str = "ABC";
    int n = str.size();
 
    // Function call
    permute(str, 0, n - 1);
    return 0;
}
 
// This code is contributed by rathbhupendra


Output

ABC
ACB
BAC
BCA
CBA
CAB

Factorial Complexity Graph

6. Exponential Complexity: 

It imposes a complexity of O(2^N) or, more generally, O(c^N) for a constant c > 1. For an input of size N, the number of operations executed grows exponentially with the input size.

Example: The program below has exponential complexity.

C++




// A recursive solution for subset sum problem
#include <iostream>
using namespace std;
 
// Returns true if there is a subset
// of set[] with sum equal to given sum
bool isSubsetSum(int set[], int n, int sum)
{
 
    // Base Cases
    if (sum == 0)
        return true;
    if (n == 0)
        return false;
 
    // If last element is greater than sum,
    // then ignore it
    if (set[n - 1] > sum)
        return isSubsetSum(set, n - 1, sum);
 
    /* else, check if sum can be obtained by any
of the following:
    (a) including the last element
    (b) excluding the last element */
    return isSubsetSum(set, n - 1, sum)
        || isSubsetSum(set, n - 1, sum - set[n - 1]);
}
 
// Driver code
int main()
{
    int set[] = { 3, 34, 4, 12, 5, 2 };
    int sum = 9;
    int n = sizeof(set) / sizeof(set[0]);
    if (isSubsetSum(set, n, sum) == true)
        cout <<"Found a subset with given sum";
    else
        cout <<"No subset with given sum";
    return 0;
}
 
// This code is contributed by shivanisinghss2110


Output

Found a subset with given sum

Exponential Complexity Graph 

Worst Case time complexity of different data structures for different operations

Data structure      | Access   | Search   | Insertion | Deletion
Array               | O(1)     | O(N)     | O(N)      | O(N)
Stack               | O(N)     | O(N)     | O(1)      | O(1)
Queue               | O(N)     | O(N)     | O(1)      | O(1)
Singly Linked List  | O(N)     | O(N)     | O(N)      | O(N)
Doubly Linked List  | O(N)     | O(N)     | O(1)      | O(1)
Hash Table          | O(N)     | O(N)     | O(N)      | O(N)
Binary Search Tree  | O(N)     | O(N)     | O(N)      | O(N)
AVL Tree            | O(log N) | O(log N) | O(log N)  | O(log N)
Binary Tree         | O(N)     | O(N)     | O(N)      | O(N)
Red-Black Tree      | O(log N) | O(log N) | O(log N)  | O(log N)

Complexity Analysis Of Popular Algorithms:

Algorithm                    | Complexity
1.  Linear Search            | O(N)
2.  Binary Search            | O(log N)
3.  Bubble Sort              | O(N^2)
4.  Insertion Sort           | O(N^2)
5.  Selection Sort           | O(N^2)
6.  QuickSort                | O(N log N) on average
7.  Merge Sort               | O(N log N)
8.  Counting Sort            | O(N)
9.  Radix Sort               | O((N + b) * log_b(k))
10. Sieve of Eratosthenes    | O(N * log(log(N)))
11. KMP Algorithm            | O(N + M)
12. Z Algorithm              | O(N + M)
13. Rabin-Karp Algorithm     | O(N * M)
14. Johnson's Algorithm      | O(V^2 * log V + V * E)
15. Prim's Algorithm         | O(V^2)
16. Kruskal's Algorithm      | O(E log V)
17. 0/1 Knapsack             | O(N * W)
18. Floyd-Warshall Algorithm | O(V^3)
19. Breadth-First Search     | O(V + E)
20. Depth-First Search       | O(V + E)

Practice some questions on Complexity Analysis:

Practice with a Quiz:

Conclusion:

Complexity analysis is a very important technique for analyzing any problem, and interviewers often test your ideas and coding skills by asking you to write code under restrictions on its time or space complexity. By solving more and more problems, anyone can improve their logical thinking day by day. In coding contests, only sufficiently optimized solutions are accepted; a naive approach can give TLE (Time Limit Exceeded).

