
Understanding Big O Notation With Examples


Asymptotic Notations are used to describe the performance of algorithms. Specifically, they describe how fast the runtime grows relative to the input size.

Let’s think about searching for a name in a phone book as an example. If the phone book has 10 names, it might take 1 second to find a name; with 1,000 names, it might take 60 seconds. See how the time grows as the phone book gets bigger? Asymptotic Notations formalize this growth rate.

Asymptotic Notations are like speedometers for algorithms. Just as a speedometer shows how fast a car is going, these notations give us an idea of how fast or slow an algorithm runs, especially when dealing with a lot of data. Instead of getting lost in detailed steps of the algorithm, we use these notations to get a bird’s-eye view of its performance.

Figure: Growth rates of common Big O notations.

Types of Asymptotic Notations:

  1. Big O Notation (O)
  2. Omega Notation (Ω)
  3. Theta Notation (Θ)
  4. Little o Notation (o)
  5. Little omega Notation (ω)

In this article, we will mainly focus on understanding Big O Notation with examples. So, let’s get started.

What is Big O Notation In Data Structure?

Big O notation in data structure describes an upper bound on an algorithm’s runtime. It tells you, in the worst case, how long an algorithm will take to run.

For example, let’s say you had an algorithm with a runtime of O(n). This means its worst-case runtime grows linearly with the size of the input. So if n is the number of elements:

  • If n is 10, the algorithm might take at most 10 seconds.
  • If n is 100, the algorithm might take at most 100 seconds.
  • If n is 1000, the algorithm might take at most 1000 seconds.

The key word here is “might”. Big O only tells you the upper bound, which means the algorithm will take at most that much time. The actual runtime could be better! For that O(n) algorithm:

  • If n is 10, the algorithm could run in 5 seconds.
  • If n is 100, the algorithm could run in 50 seconds.

Because Big O describes the worst case, the actual runtime will be the same or better: it gives you a guarantee on the maximum time. This makes Big O useful for comparing algorithms. You can use it to understand which algorithm will perform better as the input grows, even without knowing the exact runtimes.

Formal Definition Of Big O Notation

Let f(n) and g(n) be two functions defined on the positive integers. We say that f(n) is O(g(n)) if there exist constants c > 0 and n0 > 0 such that for all n >= n0:

f(n) <= c * g(n)

This means there is some constant factor c such that f grows no faster than cg(n) for sufficiently large n; in other words, f is asymptotically bounded above by cg(n).

So, the mathematical definition of Big O is

O(g(n)) = { f(n): there exist positive constants c and n0 such that 0 <= f(n) <= cg(n) for all n >= n0 }

Intuitively, f(n) = O(g(n)) means that g(n) provides an upper bound on the growth rate of f(n). As n increases, f(n) grows no faster than g(n) up to constant factors.
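As a worked example of the definition, take f(n) = 3n^2 + 5n + 2. For all n >= 1,

3n^2 + 5n + 2 <= 3n^2 + 5n^2 + 2n^2 = 10n^2

so choosing c = 10 and n0 = 1 satisfies the definition, and f(n) = O(n^2).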


How To Calculate Time Complexity With Big O Notation?

Let’s now see the steps to calculate the time complexity of a given algorithm with Big O notation. We’ll use the following code as a running example:

// Count how many swaps are needed to order every out-of-place pair
int minOperations(int[] arr) {
  int count = 0;

  // Compare each element with every element after it
  for (int i = 0; i < arr.length; i++) {
    for (int j = i + 1; j < arr.length; j++) {
      if (arr[i] > arr[j]) {
        // Swap the out-of-order pair and count the operation
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
        count++;
      }
    }
  }

  return count;
}

1. Assume the worst case: Since Big O describes the worst-case scenario for an algorithm, always consider the maximum number of operations that could be executed for any input. For loops, assume they iterate all the way to their end condition; for recursive functions, assume the maximum recursion depth.

For the above code example, call the array length l. Considering the worst case, the outer loop runs l times, and the inner loop runs a total of l*(l-1)/2 times ((l-1) + (l-2) + … + 1). Assuming the if condition is always true, the swap operation therefore also runs l*(l-1)/2 times.

2. Remove constants: The exact number of operations doesn’t matter; only the growth rate does. Therefore, omit any constant factors in the operation count. This simplifies the expression to focus just on the growth.

3. Use different terms for inputs: Substitute simple variable names for input parameters; conventionally, n denotes the size of a data structure such as the array length. This allows generalizing beyond specific input values.

So, for the above example, the array length l can be substituted with n. Therefore, the swap operation runs n*(n-1)/2 times.

4. Drop non-dominant terms: Focus only on the fastest-growing component with respect to the input, and drop any lower-order terms as n increases. This keeps only the most significant term for large n.

In the above example, n*(n-1)/2 = n^2/2 - n/2, and the n^2 term dominates the growth.

Therefore, the time complexity of the above code is O(n*(n-1)/2) = O(n^2).
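To sanity-check this, here is a small C sketch (the harness and the name count_ops are illustrative, not from the original code) that counts how many times the inner loop body runs; the counts match n*(n-1)/2 exactly:

#include <stdio.h>

// Count how many times the inner loop body executes for input size n
long count_ops(int n) {
    long count = 0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j < n; j++)
            count++; // one comparison per inner iteration
    return count;
}

int main(void) {
    for (int n = 10; n <= 1000; n *= 10)
        printf("n=%d ops=%ld expected=%ld\n", n, count_ops(n), (long)n * (n - 1) / 2);
    return 0;
}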

The Master theorem can also be used to solve the time complexity of recurrence relations of the form:

T(n) = aT(n/b) + f(n)
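For example, merge sort’s recurrence T(n) = 2T(n/2) + n has a = 2, b = 2, and f(n) = n. Since f(n) matches n^(log_b a) = n^(log_2 2) = n, the Master theorem gives T(n) = O(n log n).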

How To Calculate Space Complexity With Big O Notation?

void duplicateArray(int[] arr) {
  int n = arr.length;
  int[] copy = new int[n]; // Allocate new array
  
  for (int i = 0; i < n; i++) {
    copy[i] = arr[i]; 
  }
}

Let’s now walk through the steps to calculate the space complexity of the code above with Big O notation:

1. Determine all storage used by the algorithm: Identify every input and output data structure created (arrays, strings), every variable or reference used to store values, program stack space for recursion, and so on. For the above code example, we have one variable storing the array length, one copy array, and the input array.

2. Assume the worst case for dynamic memory allocation: Assume arrays and collections reach their maximum sizes and recursion reaches its maximum depth. Considering the worst case here, the copy array is allocated at the full array length.

3. Remove constants: Since Big O is an approximation focusing only on growth as the input size increases, exact storage sizes don’t matter. So a space usage of roughly 2 * arrayLength (the input plus the copy) simplifies to arrayLength.

4. Use different terms for input sizes: Substitute variable names for input parameters; the general convention is to use n. Replacing arrayLength with n for the above code example, the space complexity can be denoted as O(n).

5. Focus on dominant storage: Look at the space usage that grows fastest with input size. For the above example, the copy array dominates, growing as n.

Space Complexity for the above code is, therefore, O(n).
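For contrast, consider a sketch of an in-place operation (this example is illustrative, not from the article). Reversing an array in place uses only a fixed number of variables no matter how large n gets, so its space complexity is O(1):

// Reverse arr in place: no extra array is allocated,
// so the auxiliary space is O(1)
void reverseArray(int arr[], int n) {
    for (int i = 0, j = n - 1; i < j; i++, j--) {
        int temp = arr[i];
        arr[i] = arr[j];
        arr[j] = temp;
    }
}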

Big O Notation With Examples

The most common Big O notations and their time complexities are:

Big O Notation | Name | Description | Example
O(1) | Constant time | Runtime is constant regardless of input size. | Accessing an array element by index
O(log n) | Logarithmic time | Doubling the input size increases the runtime by only a constant amount. | Binary search on a sorted array
O(n) | Linear time | Doubling the input size doubles the runtime. | Sequential search on an array
O(n log n) | Log-linear (quasilinear) time | Doubling the input size slightly more than doubles the runtime. | Efficient sorting (merge sort), Fast Fourier Transform
O(n^2) | Quadratic time | Doubling the input size quadruples the runtime. | Nested loops
O(n^3) | Cubic time | Doubling the input size increases the runtime by 8x. | Triple nested loops
O(n^c) | Polynomial time | Grows faster than linear but slower than exponential (for a constant c > 1). | Matrix multiplication
O(2^n) | Exponential time | Adding one element to the input doubles the runtime. | Recursive Fibonacci
O(c^n) | Exponential time | Grows extremely fast; for c > 2, even faster than 2^n. | Brute-force combinatorics
O(n!) | Factorial time | Runtime grows enormously even for small inputs. | Brute-force travelling salesman (TSP)

Let’s look at code examples for some of these Big O notations.

O(1) – Constant Time

// Accessing an element in an array by index.
int get_element(int arr[], int index) {
    return arr[index];
}

The above code has constant time complexity: no matter how large the input grows, accessing an array element by its index always takes the same amount of time.

O(log n) – Logarithmic Time

// Binary Search

int binary_search(int arr[], int size, int x) {
    int low = 0;
    int high = size - 1;
    while (low <= high) {
        int mid = low + (high - low) / 2; // avoids overflow for large low and high
        if (arr[mid] == x) {
            return mid;
        } else if (arr[mid] < x) {
            low = mid + 1;
        } else {
            high = mid - 1;
        }
    }
    return -1;
}

The above code implements binary search, which works by repeatedly dividing the search interval in half until it narrows down to the desired value or becomes empty. Starting from n elements, the interval can be halved only about log2(n) times before it is empty (for n = 16: 16 → 8 → 4 → 2 → 1, i.e. 4 halvings), so the while loop executes O(log n) times.

O(n) – Linear Time

// Linear search

int linear_search(int arr[], int size, int x) {
    for (int i = 0; i < size; i++) {
        if (arr[i] == x) {
            return i;
        }
    }
    return -1;
}

The linear search algorithm works by iterating through each element of the array one by one until it finds the desired value or reaches the end of the array. In the worst case, the desired value is the last element of the array or not in the array at all; in that case, n comparisons are made, where n is the size of the array. So the time complexity is O(n).

O(n log n) – Log-linear Time

// Merge Sort

// Merge two subarrays of arr[]
// First subarray is arr[l..m]
// Second subarray is arr[m+1..r]
void merge(int arr[], int l, int m, int r) {
    int i, j, k;
    int n1 = m - l + 1;
    int n2 = r - m;

    // Create temporary arrays
    int L[n1], R[n2];

    // Copy data to temporary arrays L[] and R[]
    for (i = 0; i < n1; i++)
        L[i] = arr[l + i];
    for (j = 0; j < n2; j++)
        R[j] = arr[m + 1 + j];

    // Merge the temporary arrays back into arr[l..r]
    i = 0; // Initial index of first subarray
    j = 0; // Initial index of second subarray
    k = l; // Initial index of merged subarray
    while (i < n1 && j < n2) {
        if (L[i] <= R[j]) {
            arr[k] = L[i];
            i++;
        } else {
            arr[k] = R[j];
            j++;
        }
        k++;
    }

    // Copy the remaining elements of L[], if there are any
    while (i < n1) {
        arr[k] = L[i];
        i++;
        k++;
    }

    // Copy the remaining elements of R[], if there are any
    while (j < n2) {
        arr[k] = R[j];
        j++;
        k++;
    }
}

// Main merge sort function
void mergeSort(int arr[], int l, int r) {
    if (l < r) {
        // Same as (l+r)/2, but avoids overflow for large l and r
        int m = l + (r - l) / 2;

        // Sort first and second halves
        mergeSort(arr, l, m);
        mergeSort(arr, m + 1, r);

        // Merge the sorted halves
        merge(arr, l, m, r);
    }
}

The code demonstrates the Merge Sort algorithm, which has a time complexity of O(n*log(n)). Here’s why:

  1. Merge Sort follows a divide-and-conquer strategy. It recursively divides the input array into two halves until each sub-array contains only one element. Because the array is halved at every step, this produces O(log(n)) levels of recursion.
  2. At each level of recursion, merging the sorted sub-arrays takes O(n) time in total, as it involves comparing and copying every element once. With O(n) work at each of the O(log(n)) levels, the total time is O(n*log(n)). Equivalently, the recurrence T(n) = 2T(n/2) + O(n) solves to O(n*log(n)) by the Master theorem.
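A minimal usage sketch (this driver is illustrative and not part of the original listing):

#include <stdio.h>

int main(void) {
    int arr[] = {12, 11, 13, 5, 6, 7};
    int n = sizeof(arr) / sizeof(arr[0]);

    mergeSort(arr, 0, n - 1); // sort the whole array

    for (int i = 0; i < n; i++)
        printf("%d ", arr[i]); // prints: 5 6 7 11 12 13
    printf("\n");
    return 0;
}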

O(2^n) – Exponential Time

// Fibonacci 

// Recursive function to calculate Fibonacci numbers
int fibonacci(int n) {
    if (n <= 1) {
        return n;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}

The recursive Fibonacci function has a branching factor of 2 because each call makes two further recursive calls, fibonacci(n - 1) and fibonacci(n - 2). The number of calls therefore roughly doubles with each additional level of recursion, so the total number of function calls grows exponentially with the input n, bounded above by O(2^n).
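For contrast, here is a sketch of a memoized variant (the names fib_memo and fib_helper are illustrative, not from the article). Caching each result so that every subproblem is computed only once brings the time complexity down to O(n), at the cost of O(n) extra space:

// Memoized variant: each fibonacci(i) is computed once, so time is O(n)
static int fib_helper(int n, int memo[]) {
    if (n <= 1)
        return n;
    if (memo[n] != -1)
        return memo[n]; // already computed
    memo[n] = fib_helper(n - 1, memo) + fib_helper(n - 2, memo);
    return memo[n];
}

int fib_memo(int n) {
    int memo[n + 1]; // one slot per subproblem: O(n) extra space
    for (int i = 0; i <= n; i++)
        memo[i] = -1; // -1 marks "not yet computed"
    return fib_helper(n, memo);
}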


Key Takeaways

To conclude:

  1. Asymptotic notations like Big O are used to describe the performance of algorithms, specifically the growth rate of runtime related to input size.
  2. Big O notation provides an asymptotic upper bound on the worst case runtime of an algorithm, expressed in terms of input size n.
  3. The formal definition of Big O is that f(n) = O(g(n)) if f(n) grows no faster than g(n) asymptotically, up to constant factors.
  4. Calculating Big O involves determining the fastest growing operation in an algorithm, simplifying with rules like dropping constants and non-dominant terms.
  5. Common Big O runtimes include O(1), O(log n), O(n), O(n log n), O(n^2), and O(2^n), based on growth rates like constant, logarithmic, linear, log-linear, quadratic, and exponential.

Check out more C tutorials here.

I hope you liked the post. For more such posts, subscribe to our newsletter. Try out our free resume checker service, where our industry experts will help you by providing a resume score based on the key criteria that recruiters and hiring managers are looking for.

