What Are Asymptotic Notations?

Introduction

Almost everyone enjoys a tasty slice of cake every now and then, but have you ever thought of baking one?

To bake a cake, we will first need a recipe. In this recipe, the proper measurements of each ingredient are given, and it is crucial to adhere to them. But why? 

Without the proper amount of ingredients, the cake will not turn out to be as it is supposed to. Therefore, an appropriate amount of ingredients is an essential part of baking. 

Similarly, the mathematical analysis of complexity is an integral part of designing algorithms, and it is done with the help of asymptotic notations.

Also, by analysing the time and effort required for a particular recipe, we can develop new recipes to increase our productivity.

Similarly, in the case of complexity, we can compare different algorithms through asymptotic analysis and develop better ones. Asymptotic notations therefore help us analyse the complexity of an algorithm and choose the best one, which is an essential part of being a good programmer.

So, in this article, we will discuss the different asymptotic notations used to express the complexity of an algorithm.

What are Asymptotic Notations?

As we already know, asymptotic notations help us analyse the complexity of an algorithm. But how exactly do they do that?

To understand that, we will first have to understand the concept of asymptotic analysis. For a particular algorithm, there is a variation in the complexity due to the different input cases. 

For example, consider the program to create a 2-D array as follows:

#include <iostream>
#include <vector>
using namespace std;

int main()
{
    int M, N, sum = 0;
    cout << "Enter the dimensions of the matrix: ";
    cin >> M >> N;
    vector<vector<int>> arr(M, vector<int>(N)); // avoids non-standard variable-length arrays
    for (int i = 0; i < M; i++) // outer loop: runs M times
    {
        for (int j = 0; j < N; j++) // inner loop: runs N times for each i
        {
            cout << "Enter the element in position " << i << "," << j << ": ";
            cin >> arr[i][j];
            sum += arr[i][j];
        }
    }
    cout << "The sum of the elements is " << sum;
    return 0;
}

Here, the running time varies with the dimensions the user enters: the two loops execute M × N iterations in total. If the user gives the dimensions as 1,1, the loop body runs only once, which is the simplest case, while for larger values the work grows as M × N.

This kind of reasoning is what is called asymptotic analysis.

Therefore, to define it: analysing the best-case and worst-case performance of an algorithm as a function of its input is known as asymptotic analysis.

Now, after analysing the performance of the algorithm, we must represent it using some mathematical function. Asymptotic Notations such as Big O, Big Ω and Big Θ are used to accomplish it.

To understand this, let us consider a quadratic function. By definition, a quadratic function is a sum of algebraic terms in which the variable appears with powers 0, 1 or 2.

But this doesn’t sound very clear.


So we represent it mathematically as a function of the variable as follows:

f(x) = x² + 2x + 1

Similarly, Asymptotic notations represent the complexity as a function of the input variable. As you may have heard, some algorithms have an O(N) time complexity. Stay tuned as we will go over it in detail in the next section.

Therefore, we can define Asymptotic notation as follows:

Asymptotic notations are the mathematical representation of the complexity of an algorithm as a function of the input variable, giving us upper and lower bounds to the run-time. 

Best, Average, and Worst-Case Complexity

Now, we already know what asymptotic notations are, but you may have heard of best-case, worst-case and average-case scenarios for an algorithm. So let’s find out what those are before proceeding any further. 

Best-Case Complexity

To understand this, let us consider the example of linear search. Do you know what Linear Search is? Take a look here.

Suppose we want to find the number 5 in the following array:

Arr = {5,2,7,1,3,8,4,10,9,6}

As you can see, linear search will find 5 in the very first iteration of the loop. This is the best-case scenario, since the loop runs only once and the run-time is the least possible.

The best-case complexity of linear search is therefore constant, represented as O(1). (Note that the best case is not O(1) for every algorithm; merge sort, for instance, still takes time proportional to n log n in its best case.)
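To make this concrete, here is a minimal sketch of linear search in C++. The function name and signature are my own, written purely for illustration; the comments mark where the best and worst cases come from.

#include <iostream>
using namespace std;

// Returns the index of key in arr[0..n-1], or -1 if it is absent.
int linearSearch(int arr[], int n, int key)
{
    for (int i = 0; i < n; i++) // best case: key is at index 0 (1 iteration)
    {                           // worst case: key is last or absent (n iterations)
        if (arr[i] == key)
            return i;
    }
    return -1;
}

int main()
{
    int arr[] = {5, 2, 7, 1, 3, 8, 4, 10, 9, 6};
    cout << linearSearch(arr, 10, 5); // prints 0: found on the first iteration
    return 0;
}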

Worst-Case Complexity

To understand this, let us again consider our previous example. Again we want to find 5 using linear search, but the elements in the array will be in a different order.

Arr = {2,7,1,3,8,4,10,9,6,5}

Here, 5 is the last element, and we will find it in the last iteration using linear search. So, this is the worst-case scenario where the loop will run the maximum number of times. The run-time will also be the maximum value possible.

The worst-case complexity varies with different algorithms.

Average-Case Complexity

Let us again consider the same example but with the array elements arranged differently.

Arr = {2,7,1,3,5,8,4,10,9,6}

Here, the element we are searching for, 5, is present precisely in the middle, so the loop runs about half the maximum number of iterations. The run-time falls between the best and worst cases, which is the average-case scenario.

The average-case complexity also varies with algorithms, but have you noticed anything interesting about the example above?

If not, let’s find out what it is.

For the given array, finding the number 5 by linear search is an average-case scenario, but only for the linear search algorithm. If the array were sorted and we used binary search instead, 5 would lie exactly in the middle and would be found on the very first comparison: binary search's best-case scenario, with a constant time complexity of O(1) (in Big O notation).

Let us now see the different kinds of Asymptotic notations used to represent the best, worst and average-case complexity of an algorithm.

Types of Asymptotic Notations

In this article so far, you may have noticed that I have used only Big O notation to represent the complexities.

So, is that the only kind of asymptotic notation?

1. Big O Notation

Big O notation represents the worst-case complexity of an algorithm. In other words, it provides an upper bound on the complexity of an algorithm. It is defined as:

f(n) = O(g(n)) if there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀. This function is represented graphically as follows:

Image Source: Programiz

As you can see from the graph, f(n) does not exceed c·g(n) for any value of n ≥ n₀. Therefore, the complexity is defined such that it has an upper bound.
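To see how the definition is used, take a hypothetical running time f(n) = 3n + 2 (an assumed example, not tied to any particular algorithm). Choosing c = 4 and n₀ = 2 gives

0 ≤ 3n + 2 ≤ 4n for all n ≥ 2,

so f(n) = O(n). Any constants that make the inequality hold from some point onwards will do; the bound only needs to apply for sufficiently large n.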

Big O Notation Examples

Before considering the example of an algorithm, let’s consider a general example. If I have ten candies in my pocket, I can say that I have less than 100 candies, providing an upper bound.

In the case of, let’s say, binary search, the maximum number of iterations (worst-case scenario) will be log n, so the complexity in Big O notation is O(log n).
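As a sketch of why that bound is logarithmic (assuming a sorted array; the function below is my own illustration, not a prescribed implementation), note that each iteration halves the remaining search range:

#include <iostream>
using namespace std;

// Returns the index of key in the sorted array arr[0..n-1], or -1 if absent.
int binarySearch(int arr[], int n, int key)
{
    int lo = 0, hi = n - 1;
    while (lo <= hi) // each iteration halves the range, so at most about log2(n) iterations
    {
        int mid = lo + (hi - lo) / 2;
        if (arr[mid] == key)
            return mid;
        else if (arr[mid] < key)
            lo = mid + 1; // discard the left half
        else
            hi = mid - 1; // discard the right half
    }
    return -1;
}

int main()
{
    int arr[] = {1, 2, 3, 4, 5, 6, 7, 8, 9, 10};
    cout << binarySearch(arr, 10, 5); // prints 4
    return 0;
}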

2. Big Ω Notation

Big Ω notation is the counterpart of Big O notation. It represents the best-case complexity of an algorithm, thereby providing a lower bound on the complexity. It is defined as follows:

f(n) = Ω(g(n)) if there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀.

This is graphically represented as follows:

Image Source: Programiz

In the graph above, the complexity f(n) is never less than c·g(n) for any value of n ≥ n₀. Therefore, the complexity is defined with a lower bound.

Big Ω Notation Examples

According to Big Ω notation, if I have ten candies in my pocket, I can always say I have more than one candy. 

Considering binary search again, the complexity can never be less than the best-case complexity; therefore, the complexity in Big Ω notation is Ω(1). In fact, Ω(1) is a valid, if not very informative, lower bound for virtually every algorithm.

3. Big Θ Notation

The last notation we will discuss is Big Θ notation, which is commonly associated with the average-case complexity. As you can guess, it gives both an upper and a lower bound on the complexity of an algorithm.

Mathematically, it is represented as follows:

f(n) = Θ(g(n)) if there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀.

Graphically, the function is:

Image Source: Programiz

It is evident from the graph that the complexity f(n) is bounded by an upper limit c₂·g(n) and a lower limit c₁·g(n).
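As with Big O, the definition can be checked with concrete constants. For a hypothetical running time f(n) = 2n + 3 (again, an assumed example), choosing c₁ = 2, c₂ = 3 and n₀ = 3 gives

2n ≤ 2n + 3 ≤ 3n for all n ≥ 3,

so f(n) = Θ(n): the function is squeezed between two multiples of n.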

Big Θ Notation Examples

In Big Θ notation, if I have ten candies in my pocket, I can say I have less than 100 candies, but more than one candy. 

Let us now consider a code snippet:

#include <iostream>
using namespace std;

// Prints every element of arr, separated by spaces.
void print(int arr[], int n)
{
    for (int i = 0; i < n; i++) // runs exactly n times: no more, no fewer
    {
        cout << arr[i] << " ";
    }
}

The for loop in this function runs at least n times, but it also runs at most n times; in other words, it runs exactly n times, so the function's complexity is Θ(n).

It is like saying a ≤ b and b ≤ a, which implies that a = b (here, a and b are two numbers). Therefore, Big-Theta is either the algorithm’s exact performance value or a good range between narrow upper and lower bounds.

Now that we know the different kinds of asymptotic notations, let us see the differences between them at a glance.

Differences Between Big O, Big Ω and Big Θ

Big O vs Big Omega

| Big O | Big Ω |
| --- | --- |
| It gives the worst-case complexity. | It gives the best-case complexity. |
| It provides an upper bound to the complexity. | It provides a lower bound to the complexity. |

Big Theta vs Big O

| Big Θ | Big O |
| --- | --- |
| It gives the average-case complexity. | It gives the worst-case complexity. |
| It provides both a lower and an upper bound to the complexity. | It provides an upper bound to the complexity. |

Functions in Asymptotic Notations

There are many functions in asymptotic notations. Let us understand them by considering a simple example.  

Suppose a boy has lost his glasses somewhere in his classroom. To find them, he decides to ask his classmates. We will consider different ways of asking, from which we'll get to know the various functions in asymptotic notations.

Note: Although all the functions are mentioned in the Big O notation (to consider the worst-case scenario), they exist in the other notations too. 

  1. O(1): Suppose the boy asked his friend sitting next to him and found his glasses. This would be the best-case scenario, since he had to ask only one person.
  2. O(n): Suppose the boy asked every other student in the class about his glasses. If the number of students in his class is n, he would have to ask n times, and hence the complexity is a function of O(n).
  3. O(n²): Suppose the boy asks each of his classmates and then asks them to ask the other classmates too. Then each of the n students asks n times, so the complexity is a function of O(n²).
  4. O(log n): Suppose the boy divides the class into two halves based on their seating arrangement and asks each half whether they have seen his glasses. If one group says yes, he divides that group into two halves again and continues until he finds his glasses. If the strength of the class is n and he halves the group, say, m times until only one student is left, then:

n/2ᵐ = 1, i.e. 2ᵐ = n, which gives m = log₂ n.

Therefore, the complexity here is a function of O(log n).
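The same halving idea can be seen in a few lines of code; this is my own illustrative sketch, simply counting how many halvings it takes to get from n down to 1:

#include <iostream>
using namespace std;

int main()
{
    int n = 64, steps = 0;
    while (n > 1) // halve the group each time, as in the classroom example
    {
        n /= 2;
        steps++;
    }
    cout << steps; // prints 6, since log2(64) = 6
    return 0;
}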

5. O(n!): Suppose the boy asked all his classmates to stand in a line facing the front of the classroom. Now, each person can ask everybody standing behind them about the glasses, but not anyone standing in front of them. So, the person at the front of the line will ask n number of people standing behind him. The second person will ask (n-1) people standing behind him, and it goes on like this. 

So, the maximum number of questions asked will be approximately:

n × (n − 1) × (n − 2) × … × 1 = n!

Therefore, the complexity here will be a function of O(n!). The graph below shows the variation of the different functions of complexity with a varying number of items. 

Image Source: Digital Ocean

As you can see, O(1) has the minimum complexity and is the best-case scenario, while O(n!) happens to be the most complex. 

The functions mentioned above are most commonly encountered while calculating the complexity.

Apart from these, there is also the linear-log function (n log n), the exponential function (2ⁿ), the cubic function (n³), and so on. Need a recap of Time and Space Complexity? No worries, you can go through a blog on it here.

Frequently Asked Questions

What are the three basic asymptotic notations?

The three basic asymptotic notations are Big O, Big Ω (omega) and Big Θ (theta).

Which asymptotic notation is best?

While analysing the complexity of an algorithm asymptotically, it is generally safest to consider the worst-case scenario, which is why Big O notation is used the most.

What is the significance of asymptotic notation?

Asymptotic notations help us to analyse an algorithm and choose one over the other based on their complexity.

Is Big O notation the worst case?

Yes, Big O notation considers the worst-case scenario and is a measure of the worst-case complexity.

How is Big O complexity calculated?

Big O complexity is calculated by considering the worst-case scenario and expressing the complexity as a function of the input variable, considering the maximum run-time possible.

How do you calculate the worst case and best case complexities?

The worst-case complexity is calculated by considering the maximum run-time of a particular algorithm (worst-case scenario). In contrast, the best case complexity is calculated by considering the minimum run-time of the algorithm (best-case scenario).

What is Big theta notation?

Big Θ notation is used to find the average-case complexity, thereby providing both an upper and a lower bound to the complexity.

What is the difference between Big O, Big Omega, and Big Theta notations?

| Big O | Big Omega | Big Theta |
| --- | --- | --- |
| It gives the worst-case complexity. | It gives the best-case complexity. | It gives the average-case complexity. |
| The actual complexity is at most the Big O bound. | The actual complexity is at least the Big Ω bound. | The actual complexity is often exactly the Big Θ bound. |
| It provides an upper bound. | It provides a lower bound. | It provides both an upper and a lower bound. |
| Mathematically: f(n) = O(g(n)) if there exist positive constants c and n₀ such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n₀. | Mathematically: f(n) = Ω(g(n)) if there exist positive constants c and n₀ such that 0 ≤ c·g(n) ≤ f(n) for all n ≥ n₀. | Mathematically: f(n) = Θ(g(n)) if there exist positive constants c₁, c₂ and n₀ such that 0 ≤ c₁·g(n) ≤ f(n) ≤ c₂·g(n) for all n ≥ n₀. |

Key Takeaways

In this article, we learned what asymptotic notations are, their different types and how to calculate them. We also saw various functions used to express the complexity. 

Now that we know how to analyse the complexity of an algorithm mathematically, we can easily compare algorithms and choose the best one. It will also help us develop good coding practices.

Do you want to learn about different types of algorithms? Remember to read our blog on Algorithm Types and Applications.

But as always, practice is the key. So, don’t forget to practice DSA problems from CodeStudio. This will help you keep interview questions at your fingertips, with the bonus of interview experiences of scholars in big product-based companies.

By Neelakshi Lahiri