Time and Space Complexity in Algorithms
Introduction
Countless times, we have heard our Maths teachers specify different methods for solving a particular problem. The goal is to use short and easy techniques instead of lengthy, complicated ones.
Likewise, while writing code, it is a good practice to use minimum resources for execution. Here, resources indicate the time and memory requirements of an algorithm.
Imagine you searched something on Google, and it took more than an hour to display the result. Would you ever use Google search again? Probably not.
Google’s search algorithm is just one example. Algorithms turn up all over the place—pretty much anywhere a computer is involved.
Therefore, companies hire skilled programmers who can write efficient code, because understanding the performance of an algorithm helps a business scale far beyond its expectations.
This article will discuss the primary definition of time and space complexity, followed by various examples to demonstrate how to evaluate them.
What is Time Complexity?
Before diving into the bookish definition of time complexity, let's look at two C++ programs.
CODE 1
#include <iostream>
using namespace std;
int main()
{
    cout<<"Hello!\n";
    return 0;
}
OUTPUT
Hello!
CODE 2
#include <iostream>
using namespace std;
int main()
{
    int N;
    cout<<"Enter the value of N: ";
    cin>>N;
    for(int i=0;i<N;i++)
    {
        cout<<"Hello!\n";
    }
    return 0;
}
OUTPUT (For input N=5)
Hello!
Hello!
Hello!
Hello!
Hello!
In the two codes above:
- Code 1 prints Hello! once and stops the execution.
- In contrast, Code 2 prints Hello! N times, where N is the number entered by the user (5 in this example), and then stops the execution.
- When you run them on a compiler or an online IDE, you may see a message like "The code was executed in 0.2 seconds." Many of us think that figure is the time complexity, but it is not; it is only the run-time of one execution on one particular machine.
Instead, we can infer from the above example that the time a program takes to finish grows with the total number of times its statements are executed, which in turn depends on the input N.
More precisely, time complexity deals with the amount of time required by the algorithm or program to execute to completion.
You may be wondering if we use some precise numbers on the clock to express time complexity.
No. The time required for execution is not measured with a clock; time complexity is expressed mathematically, as a function of the input size, using asymptotic notations like Big O, Big Θ and Big Ω.
If you are unfamiliar with asymptotic notation, do not forget to read about it.
How to Find Time Complexity
The steps involved in finding the time complexity of an algorithm are:
- Find the number of statements with constant time complexity (O(1)).
- Find the number of statements with higher orders of complexity like O(N), O(N^{2}), O(log N), etc.
- Express the total time complexity as the sum of all these terms.
- Drop the non-dominant terms: anything expressed as O(N² + N) can be written as O(N²), keeping only the highest-order term.
- Ignore constant factors, as they do not affect the overall time complexity.
For example, O(3N) is simply O(N).
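To see the last two rules in action, here is a small hypothetical sketch (our own illustration, not one of the examples below): a function containing one simple loop followed by a nested loop does N + N² units of work, which simplifies to O(N²).

#include <iostream>
using namespace std;

// Hypothetical sketch: one O(N) loop followed by an O(N x N) nested loop.
// The operation count grows as N + N*N, which simplifies to O(N^2).
long long countWork(int N)
{
    long long operations = 0;
    for(int i = 0; i < N; i++)         // O(N) part
        operations++;
    for(int i = 0; i < N; i++)         // O(N^2) part dominates for large N
        for(int j = 0; j < N; j++)
            operations++;
    return operations;                 // equals N + N*N
}

int main()
{
    cout<<countWork(10)<<"\n";         // prints 110 = 10 + 100
    return 0;
}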
Now that we know how to find an algorithm's time complexity, let us work through a few examples to understand the evaluation better.
Time Complexity Calculation Examples
#include <iostream>
using namespace std;
int main()
{
    int a,b,c;
    cout<<"Enter two numbers: ";
    cin>>a>>b;
    c=a+b;
    cout<<a<<" + "<<b<<" = "<<c;
    return 0;
}
Output:
Enter two numbers: 2 3
2 + 3 = 5
- The code above contains only initialisation, input and output statements, an arithmetic operation, and a return statement. All of these have a time complexity of O(1).
- So the time complexity of the entire code is O(1).
#include <iostream>
using namespace std;
int main()
{
    int N,sum=0;
    cout<<"Enter the length of the array: ";
    cin>>N;
    int arr[N];
    for(int i=0;i<N;i++)   // this loop runs N times: O(N)
    {
        cout<<"Enter the element in position "<<i<<": ";
        cin>>arr[i];
        sum+=arr[i];
    }
    cout<<"The sum of the elements is "<<sum;
    return 0;
}
Output:
Enter the length of the array: 5
Enter the element in position 0: 1
Enter the element in position 1: 2
Enter the element in position 2: 3
Enter the element in position 3: 4
Enter the element in position 4: 5
The sum of the elements is 15
- In the code above, there is a for loop in addition to initialisation, input and output statements, an arithmetic operation and a return statement.
- Apart from the loop, the rest of the lines have a time complexity O(1).
- The for loop will run N times, so its time complexity is O(N).
- Thus, the entire code has a time complexity of O(N + 9), where 9 accounts for the O(1) statements outside the loop. The constant is neglected for large values of N, so the time complexity is O(N), also known as linear time.
#include <iostream>
using namespace std;
int main()
{
    int N,sum=0;
    cout<<"Enter the dimensions of the square matrix: ";
    cin>>N;
    int arr[N][N];
    for(int i=0;i<N;i++)          // outer loop: runs N times
    {
        for(int j=0;j<N;j++)      // inner loop: runs N times per outer iteration
        {
            cout<<"Enter the element in position "<<i<<","<<j<<": ";
            cin>>arr[i][j];
            sum+=arr[i][j];
        }
    }
    cout<<"The sum of the elements is "<<sum;
    return 0;
}
Output:
Enter the dimensions of the square matrix: 2
Enter the element in position 0,0: 1
Enter the element in position 0,1: 2
Enter the element in position 1,0: 3
Enter the element in position 1,1: 4
The sum of the elements is 10
- In the above two examples, we already discussed the time complexity of for loop and other statements.
- But here, the loops are nested, which implies that the inner loop will run N times for one iteration of the outer loop.
- Therefore, the inner body runs N ✕ N times, so the nested loops have a time complexity of O(N²). Thus, the entire code has a time complexity of O(N² + 9), where 9 is the sum of the O(1) complexities of the remaining statements.
- As we already know, for large values of N, the constant is neglected, and hence the time complexity is O(N²).
- Selection sort and insertion sort are classic examples of O(N²) algorithms; a sketch of selection sort is shown below.
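Here is a minimal selection sort sketch (our own illustration, assuming a plain integer array); the nested loops are what make it O(N²).

#include <iostream>
#include <algorithm>
using namespace std;

// Minimal selection sort sketch: the nested loops perform on the order of
// N*N comparisons, which is why the time complexity is O(N^2).
void selectionSort(int arr[], int n)
{
    for(int i = 0; i < n - 1; i++)
    {
        int minIndex = i;
        for(int j = i + 1; j < n; j++)   // scan the unsorted part for the minimum
            if(arr[j] < arr[minIndex])
                minIndex = j;
        swap(arr[i], arr[minIndex]);     // place the smallest remaining element at i
    }
}

int main()
{
    int arr[] = {5, 2, 4, 1, 3};
    selectionSort(arr, 5);
    for(int x : arr)
        cout<<x<<" ";                    // prints 1 2 3 4 5
    return 0;
}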
#include <iostream>
using namespace std;
int main()
{
    int N;
    cout<<"Enter a number: ";
    cin>>N;
    while(N>0)        // N is halved in every iteration
    {
        cout<<N<<" ";
        N=N/2;
    }
    return 0;
}
Output:
Enter a number: 8
8 4 2 1
- In the code above, there is a while loop in addition to initialisation, input and output statements, an arithmetic operation and a return statement.
- Let us examine mathematically how this while loop works here.
- Suppose N=8; then we get the series 8 4 2 1.
- Likewise, for an input like 10, we get 10 5 2 1 (integer division halves the number each time).
- In each iteration, the number is halved, until the loop condition N>0 is violated.
- Algorithms that repeatedly break a number or a set of values into halves like this have logarithmic complexity, O(log N).
If you are still unsure, try larger inputs and count the iterations; you will find the loop runs roughly log₂(N) times.
Also, you can go through the binary search algorithm, which takes O(log N) time, to understand logarithmic complexity in depth. The list does not end here; there is also log-linear time O(N log N), cubic time O(N³), exponential time O(2ᴺ), factorial time O(N!) and many more.
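For reference, here is a minimal iterative binary search sketch (our own illustration on a sorted integer array); the search range is halved on every iteration, which is exactly where the O(log N) behaviour comes from.

#include <iostream>
using namespace std;

// Minimal iterative binary search sketch on a sorted array.
// The search range is halved each iteration, so the time complexity is O(log N).
int binarySearch(int arr[], int n, int key)
{
    int low=0, high=n-1;
    while(low<=high)
    {
        int mid=low+(high-low)/2;
        if(arr[mid]==key)
            return mid;        // found: return the index
        else if(arr[mid]<key)
            low=mid+1;         // discard the left half
        else
            high=mid-1;        // discard the right half
    }
    return -1;                 // key not present
}

int main()
{
    int arr[]={1,3,5,7,9,11};
    cout<<binarySearch(arr,6,7);   // prints 3
    return 0;
}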
You can also refer to the Time Complexity Analysis lecture by Ankush Singla, the co-founder of Coding Ninjas.
What is Space Complexity?
Space complexity deals with the amount of memory required in a system for a particular algorithm. Now, before we dive deeper into the concept, we must know what auxiliary space is.
Auxiliary space is the extra space used by an algorithm over and above the input; adding it to the space taken by the input values gives us the space complexity.
Space complexity = Auxiliary Space + Space taken by input values
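For instance (a hypothetical illustration, not one of the Fibonacci snippets below), reversing an array in place uses the n input elements plus only a couple of extra variables, so the auxiliary space is O(1) and the overall space complexity is O(n).

#include <iostream>
using namespace std;

// Hypothetical illustration: in-place array reversal.
// The input occupies O(n) space; the extra variables i, j and temp occupy O(1).
// Space complexity = O(n) input + O(1) auxiliary = O(n).
void reverseInPlace(int arr[], int n)
{
    int i=0, j=n-1;
    while(i<j)
    {
        int temp=arr[i];   // only constant extra storage is used
        arr[i]=arr[j];
        arr[j]=temp;
        i++;
        j--;
    }
}

int main()
{
    int arr[]={1,2,3,4,5};
    reverseInPlace(arr,5);
    for(int x:arr)
        cout<<x<<" ";      // prints 5 4 3 2 1
    return 0;
}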
Note: Space complexity isn’t the direct sum of the auxiliary space and the space taken by the input values. It is the maximum possible value of the sum for a particular algorithm.
To understand space complexity better, let us consider the simple example of printing the Fibonacci series with and without recursion.
Without Recursion
#include<iostream>
using namespace std;
int main()
{
    int a = 0, b = 1, c, i, num;
    cout<<"Enter the number of terms of series: ";
    cin>>num;
    cout<<a<<" "<<b<<" ";
    for(i=2;i<num;i++)
    {
        c = a + b;       // the next term is the sum of the previous two
        cout<<c<<" ";
        a=b;
        b=c;
    }
    return 0;
}
Output:
Enter the number of terms of series: 5
0 1 1 2 3
With Recursion
#include<iostream>
using namespace std;
int Fibonacci(int n)
{
    if((n==1)||(n==0))
        return n;
    else
        return Fibonacci(n-1)+Fibonacci(n-2);
}
int main()
{
    int num, i=0;
    cout<<"Enter the number of terms of series: ";
    cin>>num;
    while(i < num)
    {
        cout<<Fibonacci(i)<<" ";
        i++;
    }
    return 0;
}
Output:
Enter the number of terms of series: 5
0 1 1 2 3
Although the outputs are identical in the two code snippets shown above, the code with recursion consumes more memory than the one without recursion.
The reason for the higher memory usage is the recursion stack: every pending call to Fibonacci occupies a stack frame, and the stack can grow up to about n frames deep, so the recursive version needs O(n) auxiliary space.
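As a rough sketch (the depth counters are our own instrumentation, not part of the snippet above), you can measure how deep the recursion goes; the maximum depth comes out to about n, which is exactly the extra memory the call stack consumes.

#include <iostream>
#include <algorithm>
using namespace std;

int currentDepth = 0, maxDepth = 0;   // instrumentation: track the recursion stack depth

int Fibonacci(int n)
{
    currentDepth++;                           // one more frame on the call stack
    maxDepth = max(maxDepth, currentDepth);
    int result;
    if(n==0 || n==1)
        result = n;
    else
        result = Fibonacci(n-1) + Fibonacci(n-2);
    currentDepth--;                           // the frame is popped off the stack
    return result;
}

int main()
{
    cout<<"Fibonacci(10) = "<<Fibonacci(10)<<"\n";        // prints 55
    cout<<"Maximum recursion depth: "<<maxDepth<<"\n";    // prints 10, i.e. about n
    return 0;
}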
Still wondering how this recursion works?
Don’t worry; CodingNinjas has plenty of blogs, video lectures and practice problems to help you out.
How to Find Space Complexity
The following steps describe how to find the space complexity of an algorithm:
- Find the number of variables of different data types initialised in the algorithm.
- Find the memory required by each data type; this can be done with the sizeof() operator (a short sketch follows this list).
- Multiply the memory required by each data type by the number of variables of that type. This gives the total memory used by the variables of that data type.
- Add the memory required by each data type.
- Ignore the constant terms in the sum, as explained for time complexity, to obtain the space complexity in Big O notation.
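Here is a small sketch of the sizeof() step (the exact sizes depend on the compiler and platform, so treat the printed values as typical rather than guaranteed).

#include <iostream>
using namespace std;

int main()
{
    // sizeof() reports the memory occupied by each data type on this platform.
    cout<<"int:    "<<sizeof(int)<<" bytes\n";     // typically 4
    cout<<"char:   "<<sizeof(char)<<" byte\n";     // always 1
    cout<<"float:  "<<sizeof(float)<<" bytes\n";   // typically 4
    cout<<"double: "<<sizeof(double)<<" bytes\n";  // typically 8
    return 0;
}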
Now that we know how to find the space complexity, let us see a few examples to understand it better.
Space Complexity Calculation Examples
To understand how to calculate the space complexity of an algorithm, let us consider a few examples.
#include<iostream>
using namespace std;
int main()
{
    int a, b, c;
    cout<<"Enter the value of a and b ";
    cin>>a>>b;
    c = a + b;
    cout<<"The sum is "<<c;
    return 0;
}
- In the code shown above, three integer-type variables are used. Now, the size of integer type variables is usually 2 or 4 bytes depending on the compiler.
- If we assume it to be 4 bytes in our case, the total memory required for the code is 4 x 3 = 12 bytes, which is a constant. So the space complexity is O(1).
#include<iostream>
using namespace std;
int main()
{
    int n, sum = 0;
    cout<<"Enter the number of elements ";
    cin>>n;
    int arr[n];
    for(int i = 0; i < n; i++)
    {
        cout<<"Enter the element in position "<<i<<" ";
        cin>>arr[i];
        sum = sum + arr[i];
    }
    cout<<sum;
    return 0;
}
- Here, the array contains n elements, and there are three more integer-type variables (n, sum, and i).
- So the total memory required here is [(4 x n) + (4 x 3)] = (4n + 12) bytes. In asymptotic notation, the space complexity is O(n).
As practice, try working out the space complexity of more involved algorithms, such as merge sort, whose auxiliary space is O(n).
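As a hint for that exercise, here is a sketch of just the merge step (our own illustration using std::vector); the temporary buffer of up to n elements is what makes merge sort's auxiliary space O(n).

#include <iostream>
#include <vector>
using namespace std;

// Sketch of the merge step only: merging arr[left..mid] and arr[mid+1..right]
// needs a temporary buffer of up to n elements, which is why merge sort's
// auxiliary space is O(n).
void merge(vector<int>& arr, int left, int mid, int right)
{
    vector<int> temp;                      // the extra O(n) space lives here
    int i = left, j = mid + 1;
    while(i <= mid && j <= right)
        temp.push_back(arr[i] <= arr[j] ? arr[i++] : arr[j++]);
    while(i <= mid)
        temp.push_back(arr[i++]);
    while(j <= right)
        temp.push_back(arr[j++]);
    for(int k = 0; k < (int)temp.size(); k++)
        arr[left + k] = temp[k];           // copy the merged result back
}

int main()
{
    vector<int> arr = {1, 4, 7, 2, 3, 9};  // two sorted halves: {1,4,7} and {2,3,9}
    merge(arr, 0, 2, 5);
    for(int x : arr)
        cout<<x<<" ";                      // prints 1 2 3 4 7 9
    return 0;
}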
Importance of Time and Space Complexity in Data Structures
Developers of real-world programs are constrained by the physical memory of the platforms they target. This is where space complexity comes into play: we never want a function or process to consume more memory than the system has available. Equally, we don't want our operations to take so long that they bog down and slow our applications.
Let’s take a look at worst-case Time And Space Complexity for Common Data Structure Operations below:
| Data Structure | Access | Search | Insertion | Deletion | Space Complexity |
| --- | --- | --- | --- | --- | --- |
| Array | O(1) | O(N) | O(N) | O(N) | O(N) |
| Stack | O(N) | O(N) | O(1) | O(1) | O(N) |
| Queue | O(N) | O(N) | O(1) | O(1) | O(N) |
| Singly Linked List | O(N) | O(N) | O(1) | O(1) | O(N) |
| Doubly Linked List | O(N) | O(N) | O(1) | O(1) | O(N) |
| Hash Table | O(N) | O(N) | O(N) | O(N) | O(N) |
| Binary Search Tree | O(N) | O(N) | O(N) | O(N) | O(N) |
| Binary Tree | O(N) | O(N) | O(N) | O(N) | O(N) |
Frequently Asked Questions
What is meant by the time complexity of an algorithm?
Time complexity deals with the amount of time required by the algorithm or program to execute to completion.
How do you calculate time complexity and space complexity?
We calculate them by counting how the number of operations and the amount of memory grow as the input size grows, and expressing that growth in asymptotic notation such as Big O.
Which space complexity is best?
O(1) space complexity is best.
What is the best time complexity?
The best time complexity is O(1).
For example, in the case of linear search, the best case would be if the element is found in the first iteration, i.e., the time complexity would be O(1).
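To make that concrete, here is a minimal linear search sketch (our own illustration): if the key happens to be at index 0, the loop stops after a single comparison (best case O(1)), while in the worst case it scans all N elements (O(N)).

#include <iostream>
using namespace std;

// Minimal linear search sketch: best case O(1) (key at index 0),
// worst case O(N) (key at the end or absent).
int linearSearch(int arr[], int n, int key)
{
    for(int i = 0; i < n; i++)
        if(arr[i] == key)
            return i;      // found after i+1 comparisons
    return -1;             // not found after scanning all N elements
}

int main()
{
    int arr[] = {7, 3, 5, 9};
    cout<<linearSearch(arr, 4, 7)<<"\n";   // best case: found at index 0
    cout<<linearSearch(arr, 4, 9)<<"\n";   // worst case: found at index 3
    return 0;
}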
What is Big O notation in an algorithm?
In Big O notation, the time complexity is defined using an asymptotic upper bound, i.e., considering the worst case where n tends to infinity.
Examples of Big O notation are given in the sections above.
Does space complexity include input?
Yes, space complexity includes both the auxiliary space and the space used by the input.
Is there any difference between time complexity and run-time?
| Time Complexity | Run-time |
| --- | --- |
| A theoretical concept that describes the asymptotic behaviour of an algorithm. | The actual time a program takes to execute. |
| Depends only on the algorithm. | Depends on the programming language, libraries used, and factors like processor speed and memory. |
Conclusion
Here we talked about how time complexity deals with the amount of time required to execute a program, while space complexity deals with the memory requirements.
We learnt to compute both time and space complexity, and saw how important it is to keep them in mind when writing good code.
The time and space complexities of common data structures were also discussed, given how central they are to writing efficient code.
As the saying goes, practice makes perfect. So, don't forget to practise the wide variety of DSA problems frequently asked in interview rounds, readily available on CodeStudio. This will help you master efficient coding techniques, with the bonus of interview experiences from candidates at big product-based companies.