Algorithm complexity refers to the computational resources (typically time and space) required by an algorithm to solve a problem. It is a measure of how efficient an algorithm is in terms of its resource usage as the input size grows.
Two Types of Algorithm Complexity
Time complexity: This measures the amount of time an algorithm takes to execute as a function of the input size. It is typically expressed using Big O notation, which provides an upper bound on the growth rate of the algorithm's running time.
Space complexity: This measures the amount of memory an algorithm uses as a function of the input size. It is also expressed using Big O notation, indicating the maximum amount of space required by the algorithm.
Need for Algorithm Analysis
Performance prediction: Algorithm analysis helps to predict the performance of an algorithm for different input sizes.
Algorithm comparison: It allows for comparing the efficiency of different algorithms for the same problem.
Resource optimization: Algorithm analysis can identify potential bottlenecks and areas where resource usage can be optimized.
Problem-solving insights: Understanding the complexity of an algorithm can provide insights into the nature of the problem and potential solution approaches.
Algorithm design: Algorithm analysis can guide the design of efficient algorithms.
Theoretical foundations: It forms the theoretical foundation of computer science.
Performance tuning: Algorithm analysis can help in tuning the performance of existing algorithms.
Decision making: It can assist in making informed decisions about which algorithm to use for a particular application.
Algorithm Complexities
Best case: This refers to the minimum amount of time or space required by the algorithm for a given input size. It is often a theoretical bound and may not be representative of typical performance.
Worst case: This refers to the maximum amount of time or space required by the algorithm for a given input size. It is a useful measure for determining the worst-case performance guarantees of an algorithm.
Average case: This refers to the expected amount of time or space required by the algorithm for a given input size, assuming a random distribution of inputs. It is often a more realistic measure of performance than the best or worst case.
Big O notation
Big O notation is a mathematical way to describe how an algorithm's performance (usually its runtime) grows as the input size increases. It focuses on the worst-case scenario and ignores constant factors and lower-order terms, giving us a simplified way to compare algorithms. Common notations include O(1) for constant time, O(log n) for logarithmic time, O(n) for linear time, O(n log n) for linearithmic time, O(n^2) for quadratic time, and O(2^n) for exponential time.
Algorithm Time and Space Complexities
Algorithm complexity is a measure of how efficient an algorithm is in terms of its resource usage (time and space). It is typically expressed using Big O notation, which provides an upper bound on the growth rate of the algorithm's running time or space usage as the input size increases.
Common Time Complexities
Constant time (O(1)): The algorithm's running time remains constant regardless of the input size. Examples include accessing an element in an array by its index or performing a simple arithmetic operation.
Logarithmic time (O(log n)): The algorithm's running time increases logarithmically with the input size. Examples include binary search and finding the height of a balanced binary tree.
Linear time (O(n)): The algorithm's running time increases linearly with the input size. Examples include linear search, traversing an array, and calculating the sum of elements in an array.
Linearithmic time (O(n log n)): The algorithm's running time grows in proportion to n multiplied by log n. Examples include merge sort and heapsort; quicksort also runs in O(n log n) on average, though its worst case is O(n^2).
Quadratic time (O(n^2)): The algorithm's running time increases quadratically with the input size. Examples include nested loops, brute-force algorithms for certain problems, and selection sort.
Cubic time (O(n^3)): The algorithm's running time increases cubically with the input size. Examples include naive matrix multiplication and certain graph algorithms such as Floyd-Warshall.
Exponential time (O(2^n)): The algorithm's running time increases exponentially with the input size. Examples include brute-force algorithms for problems like the traveling salesman problem.
Common Space Complexities
Constant space (O(1)): The algorithm's space usage remains constant regardless of the input size. Examples include algorithms that use a fixed number of variables or data structures.
Logarithmic space (O(log n)): The algorithm's space usage increases logarithmically with the input size. Examples include recursive divide-and-conquer algorithms whose recursion depth is logarithmic, such as a recursive implementation of binary search.
Linear space (O(n)): The algorithm's space usage increases linearly with the input size. Examples include algorithms that store a copy of the input data or use data structures like arrays or linked lists.
Quadratic space (O(n^2)): The algorithm's space usage increases quadratically with the input size. Examples include algorithms that store intermediate results in a two-dimensional matrix or array.
Calculating Time Complexity
Identify the Input Size (n):
Determine what represents the input size for the specific algorithm.
This is usually the size of the input data structure, such as the length of an array, the number of nodes in a tree, or the number of characters in a string.
Identify the Basic Operations:
Pinpoint the fundamental operations within the algorithm, such as:
Arithmetic operations (+, -, *, /)
Comparisons (<, >, ==, !=)
Assignments (=)
Function calls
Loop iterations
Analyze the Loop Structure:
Nested Loops: Multiply the number of iterations of each loop to get the total number of operations.
Single Loops: Count the number of iterations based on the loop condition and the increment/decrement.
Determine the Dominant Term:
Identify the term that grows the fastest as the input size n increases.
This term will dominate the overall time complexity.
Apply Big O Notation:
Ignore constant factors and lower-order terms.
Express the dominant term using Big O notation.
Example: Bubble Sort
Let's analyze the time complexity of the bubble sort algorithm:
void bubbleSort(int arr[], int n) {
    for (int i = 0; i < n - 1; i++) {
        for (int j = 0; j < n - i - 1; j++) {
            if (arr[j] > arr[j + 1]) {
                std::swap(arr[j], arr[j + 1]);  // requires <utility>
            }
        }
    }
}
Input Size: n (the number of elements in the array)
Basic Operations: Comparisons and swaps.
Loop Analysis:
Outer loop: n-1 iterations
Inner loop: n-i-1 iterations in the i-th outer loop iteration
Dominant Term: The nested loops contribute to a time complexity of O(n^2).
Big O Notation: The time complexity is O(n^2).
This means that as the input size n increases, the runtime of bubble sort grows quadratically.
Analyzing Space Complexity
Space complexity measures the amount of extra memory an algorithm uses as a function of the input size.
Steps to Analyze Space Complexity:
Identify Auxiliary Space:
Determine the extra space used by the algorithm beyond the input data.
This includes variables, data structures, and function call stacks.
Express in Big O Notation:
Represent the auxiliary space as a function of the input size.
Example: Bubble Sort
For bubble sort, the auxiliary space used is constant, regardless of the input size. This includes:
Loop variables: i and j
Temporary variable: for swapping
Therefore, the space complexity of bubble sort is O(1).
Other Examples:
Recursive Algorithms: Consider the maximum depth of the recursion tree. Each recursive call adds to the function call stack.
Dynamic Programming: Analyze the size of the dynamic programming table or memoization table.
Greedy Algorithms: Consider the data structures used to store intermediate results.
Key Points:
Input Space: The input data itself is not typically considered in space complexity analysis, as it's part of the problem input.
Constant Space: Algorithms that use a fixed amount of extra space, regardless of the input size, have O(1) space complexity.
Linear Space: Algorithms that use extra space proportional to the input size have O(n) space complexity.
Polynomial Space: Algorithms that use extra space proportional to a polynomial function of the input size have O(n^k) space complexity.
Exponential Space: Algorithms that use extra space exponential to the input size have O(2^n) or higher space complexity.
EXAMPLES:
Time Complexity Questions and Answers
Question: What is the time complexity of linear search?
Answer: O(n). In the worst case, the algorithm needs to iterate through all n elements of the array.
Question: What is the time complexity of binary search?
Answer: O(log n). Binary search repeatedly halves the search space, leading to logarithmic time complexity.
Question: What is the time complexity of merge sort?
Answer: O(n log n). Merge sort divides the array into two halves, recursively sorts them, and then merges them in linear time. This leads to a time complexity of O(n log n).
Space Complexity Questions and Answers
Question: What is the space complexity of iterative bubble sort?
Answer: O(1). Iterative bubble sort only uses a constant amount of extra space for variables like i, j, and a temporary variable for swapping.
Question: What is the space complexity of recursive Fibonacci?
Answer: O(n). Each recursive call adds to the function call stack, leading to linear space complexity.
Question: What is the space complexity of merge sort?
Answer: O(n). Merge sort requires extra space for the temporary array used to merge the sorted halves. This extra space is proportional to the input size.
1. What is the time complexity of the following code snippet?
int linearSearch(int arr[], int n, int x) {
    for (int i = 0; i < n; i++) {
        if (arr[i] == x) {
            return i;  // found: return the index
        }
    }
    return -1;  // not found
}
Answer
Outer Loop: The outer loop iterates n times.
Operations: Inside the loop, a constant number of operations are performed.
Time Complexity: Therefore, the time complexity is O(n).
2. Analyze the time complexity of the following recursive function:
int factorial(int n) {
    if (n == 0) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}
Answer:
The factorial function calculates the factorial of a given integer n using recursion.
The base case is when n is 0, in which case the function returns 1 (since 0! = 1).
Otherwise, the function returns n multiplied by the factorial of n-1, which is calculated by recursively calling the factorial function itself.
Example Usage:
#include <iostream>

int main() {
    int num = 5;
    int result = factorial(num);
    std::cout << "The factorial of " << num << " is: " << result << std::endl; // Output: 120
    return 0;
}
Each recursive call: reduces the problem size by 1.
The number of recursive calls: is proportional to n.
Therefore, the time complexity: is O(n).
3. What is the time complexity of the following nested loop?
void nestedLoop(int n) {
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < i; j++) {
            // constant-time work here
        }
    }
}
Answer
The outer loop: iterates n times.
The inner loop: iterates i times for each outer loop iteration.
The total number of iterations: is 0 + 1 + ... + (n-1) = n(n-1)/2, which is approximately n^2/2.
Ignoring the constant factor: the time complexity is O(n^2).
4. What is the space complexity of the following function:
int factorial(int n) {
    if (n == 0) {
        return 1;
    } else {
        return n * factorial(n - 1);
    }
}
Answer
Space Complexity: O(n) due to the recursion stack.
Worst Case: In the worst case, the recursion depth can be n.