Big O Notation: Definition and Explanation

Understanding Big O notation

Introduction

Big O notation expresses the time or space complexity of an algorithm; it comes into play when you want to estimate how much time or space your algorithm consumes.

We all know that finding a solution to a problem is not enough; solving it in the minimum time and space possible matters too.

You may not care much about how your algorithm performs until you work in a competitive environment or on real-life projects, because those are the places where you are required not only to find a solution but also to optimise it wherever possible in terms of space and time.

In competitive programming, you want your algorithm to run faster than everyone else’s, otherwise you won’t be able to outperform them.

On the other hand, when you’re working on real-life projects as a software developer, you want your application to load and execute tasks quickly, and to stay efficient in terms of space as well.


In other words, you know you’re on the right track in learning data structures and algorithms, or in your development work, when you start looking at the complexity of your algorithms.

So far we’ve seen what complexity is and why we study it, but what is Big O? Big O establishes a worst-case running time.

I have a nephew named Jay and he has a billion toys!

Kidding, of course, but he has lots of toys and he’s a mess at keeping them in place. Because of this, when his friends come over to play, it takes him forever to find a particular toy. He wants to find it as fast as possible, so he came to me. I asked him to clean his place and organise his toys properly, but he wouldn’t listen. You know kids!

So basically he wants a search algorithm for finding a toy. He’s confused between simple search and binary search (don’t worry if you don’t know about these).

The trick is that the algorithm should be fast as well as correct. On one hand, binary search is faster, and Jay has about 20 seconds before his friend gets bored looking for a particular toy.

On the other hand, simple search is easier to write, with less chance of errors. He came to me for help, and I couldn’t be the one whose solution fails in front of a kid; that would be embarrassing.

But I wanted him to understand both approaches so he wouldn’t be embarrassed in front of his friends. So we started with 100 toys.

Let’s say it takes 1 millisecond to check one toy.

With simple search, Jay has to check up to 100 toys one by one until he finds the right one, so this approach takes him up to 100 ms.

With binary search, on the other hand, he only has to check about 7 toys. How? Because log2(100) is roughly 7 (don’t worry, the math isn’t the main part of this article).


So binary search is faster, right? It’s way more than that as the number of toys grows. With 1 billion toys, binary search takes about 30 checks (log2(1,000,000,000) is almost 30), which makes it roughly 33 million times faster than simple search.


One thing we learn from comparing linear search and binary search is how the running time of an algorithm grows as the number of toys (list items) increases.

That’s where Big O comes into play.

Big O tells you how fast your algorithm is.

Consider a case in which you have a list of size n. Simple search has to check each element, so it takes n operations, and the run time in Big O notation is O(n).


Where are the seconds?

There are none; Big O doesn’t tell you the speed of an algorithm in seconds. Big O lets you compare the number of operations.

How will you compare algorithms?

Let’s take an example. For solving the same problem, you have two functions:

f(n) = 4n^2 + 2n + 4 and g(n) = 4n + 4

For f(n) = 4n^2 + 2n + 4:

f(1) = 4 + 2 + 4 = 10
f(2) = 16 + 4 + 4 = 24
f(3) = 36 + 6 + 4 = 46
f(4) = 64 + 8 + 4 = 76

As you can see, the n^2 term is quadratic, so its contribution grows rapidly as n increases. When n becomes very large, the n^2 term accounts for nearly all of the value of f(n).

So the lower-order terms can be ignored, as they are relatively insignificant: in this f(n), we can ignore 2n and 4 in comparison to n^2 (and the constant multiplier 4 as well, since constant factors don’t affect the rate of growth).

So,

4n^2 + 2n + 4 ——–> n^2. So here n^2 gives the highest rate of growth, i.e. f(n) = O(n^2).
For g(n) = 4n + 4:

g(1) = 4 + 4 = 8
g(2) = 8 + 4 = 12
g(3) = 12 + 4 = 16
g(4) = 16 + 4 = 20

As you can see, the contribution of the n term grows as we increase the value of n. So when n becomes very large, the n term accounts for nearly all of the value of g(n).

Therefore, we can ignore the lower-order terms as relatively insignificant. In this g(n), the constant 4 and the constant multiplier 4 can both be ignored, since constant factors have no effect on the rate of growth, so 4n + 4 ——–> n.

So here n is the highest rate of growth.

Point to be noted:

We discard the terms that grow slowly and have little effect compared to the others, and keep the one that grows the fastest.

  • Time Complexity:

    Time complexity describes how the run time of a function increases or decreases as the size of the input increases or decreases.

There are several common classes of time complexity, for example:

  • Linear Time —-> Already discussed in the scenario above, where we saved my nephew from embarrassment in front of his friends.
  • Constant Time —-> The execution time of the function remains constant even as n grows very large.
  • Quadratic Time —-> The execution time of the function grows in proportion to the square of n.

Asymptotic Notation:

  • Asymptotic notations are mathematical notations used to represent the running time complexity of an algorithm.
  • They help in analysing a program’s running time based on the size of the given input.
  • There are three types of asymptotic notations used in time complexity: Big O (upper bound), Big Omega (lower bound) and Big Theta (tight bound).

But our main concern here is to understand Big O.

Upper Bound :

if f(n) <= c*g(n) for all n > n0
then f(n) = O(g(n))

It helps in describing the performance or complexity of our algorithm.

Big O captures the worst-case scenario, i.e. the longest time the program can take to execute.

f(n) = O(g(n)) means that there exist positive constants c and n0 such that 0 ≤ f(n) ≤ c·g(n) for all n ≥ n0. The values of c and n0 are independent of n.

Example: writing the condition f(n) <= c*g(n) with f(n) = 4n+3 and g(n) = 5n:
The condition holds from n0 = 3 onward, i.e. 4n+3 <= 5n for all n >= 3 with c = 5, so 4n+3 = O(n).

Common Asymptotic Notations:

(Image: xkcd’s “Travelling Salesman Problem” comic.)

If you don’t understand the meme, you probably want to read about the very famous Travelling Salesman Problem (Dynamic Programming).

Here are five Big O run times that you’ll encounter a lot, sorted from fastest to slowest:

  • O(log n), also known as log time. Example: Binary search.
  • O(n), also known as linear time. Example: Simple search.
  • O(n * log n). Example: A fast sorting algorithm, like quicksort.
  • O(n^2). Example: A slow sorting algorithm, like selection sort.
  • O(n!). Example: A really slow algorithm, like travelling salesperson.

Let’s understand Big O notation thoroughly by walking through C++ examples of the common orders of growth.

1. O(1) – Constant Time Algorithms

An O(1), or constant-time, algorithm always executes in the same time regardless of the input size. For example, whether the input array has 1 item or 100 items, the method needs only one step. Note: O(1) is the best time complexity an algorithm can have.
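The original snippet appeared here as an image; below is a minimal C++ sketch of the idea (the function name getFirst and the array values are illustrative):

    #include <iostream>

    // Returns the first element of the array. This takes a single
    // step no matter how many elements the array holds: O(1).
    int getFirst(const int arr[]) {
        return arr[0];
    }

    int main() {
        int arr[] = {10, 20, 30, 40, 50};
        std::cout << getFirst(arr) << std::endl;  // prints 10
        return 0;
    }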

2. O(LOG N) – Logarithmic Time Algorithms

In an O(log n) function, the number of operations grows only logarithmically as the input size increases: doubling the input adds just one extra step.

Example: Binary Search.

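The original code appeared here as an image; below is a minimal C++ sketch of binary search on a sorted array (the array values are illustrative):

    #include <iostream>

    // Iterative binary search on a sorted array. Each iteration
    // halves the search range, so at most about log2(n) + 1
    // iterations are needed: O(log n).
    int binarySearch(const int arr[], int n, int target) {
        int low = 0, high = n - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;
            if (arr[mid] == target) return mid;
            if (arr[mid] < target) low = mid + 1;
            else high = mid - 1;
        }
        return -1;  // not found
    }

    int main() {
        int arr[] = {2, 5, 8, 12, 16, 23, 38, 56, 72, 91};
        std::cout << binarySearch(arr, 10, 23) << std::endl;  // prints 5
        return 0;
    }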
3. O(N LOG N) – Linear Logarithmic Time Algorithms

An O(n log n) function falls between the linear and quadratic functions (i.e. O(n) and O(n^2)). This complexity mainly shows up in sorting algorithms with good time complexity, for example merge sort and quicksort.
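The original snippet was an image; here is a minimal C++ sketch of an n log n loop pattern (a stand-in for the structure of such algorithms, not merge sort itself, with an illustrative step counter):

    #include <iostream>

    int main() {
        int n = 8;  // illustrative input size
        int steps = 0;
        // The outer loop runs n times; the inner loop doubles j on
        // each pass, so it runs about log2(n) times per outer pass.
        for (int i = 0; i < n; i++) {
            for (int j = 1; j < n; j *= 2) {
                steps++;
            }
        }
        std::cout << steps << std::endl;  // prints 24 (= 8 * log2(8))
        return 0;
    }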

For example, if n is 8, the sketch above runs 8 * log2(8) = 8 * 3 = 24 times. Whether the for loop uses strict inequality or not is irrelevant for the sake of Big O notation.

Refer to this to understand the space complexity of merge sort: Space Complexity – YouTube.

4. O(N) – Linear Time Algorithms

An O(n), or linear-time, algorithm takes time in direct proportion to the number of inputs. For example, if the array has 6 items, it will print 6 times. Note: in O(n), as the number of elements increases, the number of steps increases at the same rate.
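The original code was an image; a minimal C++ sketch that prints each array element once:

    #include <iostream>

    int main() {
        int arr[] = {1, 2, 3, 4, 5, 6};
        int n = sizeof(arr) / sizeof(arr[0]);
        // One pass over the array: the loop body runs once per
        // element, so 6 items mean 6 prints — O(n).
        for (int i = 0; i < n; i++) {
            std::cout << arr[i] << std::endl;
        }
        return 0;
    }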
5. O(N^2) – Quadratic (Polynomial) Time Algorithms

An O(n^2), or quadratic-time, algorithm takes time directly proportional to the square of the input size. For example, if the array has 3 items, it will print nine times. Note: in O(n^2) the number of steps grows with the square of the number of elements, so it slows down quickly on large inputs.
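The original code was an image; a minimal C++ sketch with two nested loops over the same array:

    #include <iostream>

    int main() {
        int arr[] = {1, 2, 3};
        int n = sizeof(arr) / sizeof(arr[0]);
        // Two nested loops over the same array: the body runs
        // n * n times, so 3 items mean 9 prints — O(n^2).
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                std::cout << arr[i] << ", " << arr[j] << std::endl;
            }
        }
        return 0;
    }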

Note, if we were to nest another for loop, this would become an O(n^3) algorithm.

6. O(2^N) – Exponential Time Algorithms

  • Algorithms with complexity O(2^N) are called exponential time algorithms.
  • These algorithms grow in proportion to some constant factor raised to the power of the input size.
  • For example, O(2^N) algorithms double with every additional input: if n = 2 they run four times, and if n = 3 they run eight times (in a sense, the opposite of logarithmic time algorithms).
  • O(3^N) algorithms triple with every additional input, and in general O(k^N) algorithms get k times bigger with every additional input. See the sketch below.
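This section’s code was not preserved; here is a minimal C++ sketch of exponential growth, assuming a function that simply calls itself twice per step (the name grow is illustrative):

    #include <iostream>

    // Each call spawns two further calls, so the number of calls
    // roughly doubles with every increase in n: O(2^n).
    long long grow(int n) {
        if (n <= 0) return 1;           // one leaf call
        return grow(n - 1) + grow(n - 1);
    }

    int main() {
        std::cout << grow(7) << std::endl;  // prints 128 (leaf calls)
        return 0;
    }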

For n = 7, this sketch makes 2^7 = 128 leaf calls.

7. O(N!) – Factorial Time Algorithms

  • This class of algorithms has a run time proportional to the factorial of the input size.
  • A classic example is solving the travelling salesman problem with a brute-force approach. See the sketch below.
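The original snippet was an image; here is a minimal C++ sketch in its spirit, assuming a loop driven by a recursive factorial:

    #include <iostream>

    // Recursive factorial: n! = n * (n-1) * ... * 1
    long long factorial(int n) {
        if (n <= 1) return 1;
        return n * factorial(n - 1);
    }

    int main() {
        int n = 5;
        long long limit = factorial(n);  // 5! = 120
        // The loop body runs factorial(n) times: O(n!).
        for (long long i = 0; i < limit; i++) {
            std::cout << i << std::endl;
        }
        return 0;
    }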

Here factorial(n) simply calculates n!. If n is 5, this algorithm will run 5! = 120 times.

Having trouble understanding the factorial code using recursion? Don’t worry, you can master recursion here: How to Master Recursion? – YouTube.

Rules of thumb for calculating the complexity of an algorithm: simple programs can be analysed by counting the number of loops or iterations.

COMPLEXITY OF CONSECUTIVE STATEMENTS: O(1).

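The original example was an image; a minimal C++ sketch (the values are illustrative):

    #include <iostream>

    int main() {
        // Each statement executes exactly once, regardless of
        // any input size, so the total cost is constant: O(1).
        int x = 10;
        int y = 20;
        int sum = x + y;
        std::cout << sum << std::endl;  // prints 30
        return 0;
    }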

COMPLEXITY OF A SIMPLE LOOP: O(N).

The time complexity of a loop is the running time of the statements inside the loop multiplied by the total number of iterations.

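Again the original code was an image; a minimal C++ sketch:

    #include <iostream>

    int main() {
        int n = 100;
        long long sum = 0;
        // The loop runs n times and each iteration does O(1)
        // work, so the total is n * O(1) = O(n).
        for (int i = 0; i < n; i++) {
            sum += i;
        }
        std::cout << sum << std::endl;  // prints 4950
        return 0;
    }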

COMPLEXITY OF A NESTED LOOP: O(N^2)

It is the product of the iteration counts of the loops.

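The original code was an image; a minimal C++ sketch with an illustrative counter:

    #include <iostream>

    int main() {
        int n = 10;
        int count = 0;
        // The inner loop runs n times for each of the n outer
        // iterations: n * n steps in total, i.e. O(n^2).
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < n; j++) {
                count++;
            }
        }
        std::cout << count << std::endl;  // prints 100
        return 0;
    }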

COMPLEXITY OF IF AND ELSE BLOCK:

When you have an if and else statement, the time complexity is that of whichever branch is larger.

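The original example was an image; a minimal C++ sketch, assuming one O(n) branch and one O(1) branch:

    #include <iostream>

    int main() {
        int n = 10;
        if (n % 2 == 0) {
            // O(n) branch: a loop over all n values.
            for (int i = 0; i < n; i++) {
                std::cout << i << std::endl;
            }
        } else {
            // O(1) branch: a single statement.
            std::cout << n << std::endl;
        }
        // The worst case is the larger branch, so the
        // whole block counts as O(n).
        return 0;
    }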

Example of Space Complexity:

There may be times when you need to optimise the space complexity of your algorithm, whether you’re doing competitive programming or working on real projects in development.

Remember the factorial algorithm we discussed above? It uses recursion, and recursion always consumes call-stack space with every nested call. Such algorithms can often be rewritten iteratively, or optimised with techniques like dynamic programming, to improve their space complexity.

Now let’s have a look at a simple program and try to understand the space complexity.

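The program in question was shown as an image; a minimal C++ sketch of the idea:

    #include <iostream>

    int main() {
        int n = 1000;
        long long sum = 0;
        // Only a fixed handful of variables (n, sum, i) are
        // used no matter how large n gets: O(1) extra space.
        for (int i = 1; i <= n; i++) {
            sum += i;
        }
        std::cout << sum << std::endl;  // prints 500500
        return 0;
    }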

Here I am not allocating any memory that grows with the input, only a fixed handful of variables, so the space complexity of my program is O(1). On the other hand, suppose I write another program like the one below.

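Again a minimal C++ sketch in place of the original image:

    #include <iostream>
    #include <vector>

    int main() {
        int n = 1000;
        // An array of n elements is allocated, so the extra
        // memory grows linearly with the input: O(n) space.
        std::vector<int> arr(n);
        for (int i = 0; i < n; i++) {
            arr[i] = 2 * i;
        }
        std::cout << arr[n - 1] << std::endl;  // prints 1998
        return 0;
    }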

Here you can see I have made an array of size n, which makes the space complexity O(n).

Also study stack and heap (memory management) so you can better understand space complexity: Memory Management in Java | Coding Ninjas Blog.

Conclusion:

Here are the main takeaways:

  • Algorithm speed isn’t measured in seconds; it is measured by the growth in the number of operations.
  • Instead, we talk about how quickly the run time of an algorithm increases or decreases as the size of the input increases or decreases.
  • Big O notation is helpful for expressing the run time of algorithms.
  • O(log n) is faster than O(n), and its advantage grows dramatically as the number of items in the list increases.

I hope this article was helpful in bringing more clarity about Big O notation. You can also get the best time-based analysis of problems through this Time Complexity webinar: Get the best time-based analysis of problems through this Time Complexity webinar – YouTube. Still not confident enough to solve time-complexity problems? Don’t worry, we’ve got your back; practise with problems here: Time Complexity of Code (codingninjas.com).

To explore more about our blogs and courses, you can visit our website.

By Yogesh Kumar