LEARNING OUTCOME 3
Algorithm Complexity
Algorithm complexity refers to the computational resources (typically time and space) required by an algorithm to solve a problem. It is a measure of how efficient an algorithm is in terms of its resource usage as the input size grows.
Two Types of Algorithm Complexity
- Time complexity: This measures the amount of time an algorithm takes to execute as a function of the input size. It is typically expressed using Big O notation, which provides an upper bound on the growth rate of the algorithm's running time.
- Space complexity: This measures the amount of memory an algorithm uses as a function of the input size. It is also expressed using Big O notation, indicating the maximum amount of space required by the algorithm.
Need for Algorithm Analysis
- Performance prediction: Algorithm analysis helps to predict the performance of an algorithm for different input sizes.
- Algorithm comparison: It allows for comparing the efficiency of different algorithms for the same problem.
- Resource optimization: Algorithm analysis can identify potential bottlenecks and areas where resource usage can be optimized.
- Problem-solving insights: Understanding the complexity of an algorithm can provide insights into the nature of the problem and potential solution approaches.
- Algorithm design: Algorithm analysis can guide the design of efficient algorithms.
- Theoretical foundations: Complexity analysis is part of the theoretical foundation of computer science.
- Performance tuning: Algorithm analysis can help in tuning the performance of existing algorithms.
- Decision making: It can assist in making informed decisions about which algorithm to use for a particular application.
Algorithm Complexities (Cases)
- Best case: This refers to the minimum amount of time or space required by the algorithm for a given input size. It is often a theoretical bound and may not be representative of typical performance.
- Worst case: This refers to the maximum amount of time or space required by the algorithm for a given input size. It is a useful measure for determining the worst-case performance guarantees of an algorithm.
- Average case: This refers to the expected amount of time or space required by the algorithm for a given input size, assuming a random distribution of inputs. It is often a more realistic measure of performance than the best or worst case.
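The three cases can be made concrete with linear search. In this illustrative Python sketch (the function name and data are ours, not from any particular library), the best case finds the target immediately, while the worst case scans every element:

```python
def linear_search(items, target):
    """Return the index of target in items, or -1 if absent."""
    for i, value in enumerate(items):
        if value == target:       # one comparison per iteration
            return i
    return -1

data = [4, 8, 15, 16, 23, 42]

linear_search(data, 4)    # best case: first element, 1 comparison
linear_search(data, 42)   # worst case: last element, n comparisons
linear_search(data, 99)   # worst case: absent, n comparisons
```

On average, a present target is found about halfway through the list, which is still O(n) once the constant factor 1/2 is dropped.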
Big O notation
Big O notation is a mathematical way to describe how an algorithm's performance (usually its runtime) grows as the input size increases. It is most often used to describe the worst-case scenario, and it ignores constant factors and lower-order terms, giving us a simplified way to compare algorithms. For example, an algorithm that performs 3n^2 + 5n + 2 operations is simply O(n^2), because the constant factor 3 and the lower-order terms 5n + 2 are dropped. Common notations include O(1) for constant time, O(log n) for logarithmic time, O(n) for linear time, O(n log n) for linearithmic time, O(n^2) for quadratic time, and O(2^n) for exponential time.
Algorithm Time and Space Complexities
Algorithm complexity is a measure of how efficient an algorithm is in terms of its resource usage (time and space). It is typically expressed using Big O notation, which provides an upper bound on the growth rate of the algorithm's running time or space usage as the input size increases.
Common Time Complexities
- Constant time (O(1)): The algorithm's running time remains constant regardless of the input size. Examples include accessing an element in an array by its index or performing a simple arithmetic operation.
- Logarithmic time (O(log n)): The algorithm's running time increases logarithmically with the input size. Examples include binary search and finding the height of a balanced binary tree.
- Linear time (O(n)): The algorithm's running time increases linearly with the input size. Examples include linear search, traversing an array, and calculating the sum of elements in an array.
- Linearithmic time (O(n log n)): The algorithm's running time grows in proportion to n log n, the input size multiplied by its logarithm. Examples include merge sort and the average case of quicksort.
- Quadratic time (O(n^2)): The algorithm's running time increases quadratically with the input size. Examples include nested loops, brute-force algorithms for certain problems, and selection sort.
- Cubic time (O(n^3)): The algorithm's running time increases cubically with the input size. Examples include naive matrix multiplication and the Floyd-Warshall all-pairs shortest-path algorithm.
- Exponential time (O(2^n)): The algorithm's running time roughly doubles with each additional input element. Examples include brute-force algorithms that examine every subset of the input, such as naive solutions to the subset-sum problem.
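A few of these growth rates can be illustrated with short Python functions (the function names are illustrative sketches; `bisect_left` is from Python's standard library):

```python
from bisect import bisect_left

def get_first(items):
    # O(1): a single index access, regardless of list length
    return items[0]

def total(items):
    # O(n): touches each element exactly once
    s = 0
    for x in items:
        s += x
    return s

def contains_sorted(items, x):
    # O(log n): binary search halves the search space each step
    i = bisect_left(items, x)
    return i < len(items) and items[i] == x

def has_duplicate(items):
    # O(n^2): nested loops compare every pair of elements
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```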
Common Space Complexities
- Constant space (O(1)): The algorithm's space usage remains constant regardless of the input size. Examples include algorithms that use a fixed number of variables or data structures.
- Logarithmic space (O(log n)): The algorithm's space usage increases logarithmically with the input size. Examples include recursive binary search and other divide-and-conquer algorithms whose call-stack depth is proportional to log n.
- Linear space (O(n)): The algorithm's space usage increases linearly with the input size. Examples include algorithms that store a copy of the input data or use data structures like arrays or linked lists.
- Quadratic space (O(n^2)): The algorithm's space usage increases quadratically with the input size. Examples include algorithms that store intermediate results in a two-dimensional matrix or array.
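The constant, linear, and quadratic space classes can be sketched in Python (function names are ours, chosen for illustration):

```python
def sum_in_place(items):
    # O(1) auxiliary space: one fixed accumulator, no copy of the input
    total = 0
    for x in items:
        total += x
    return total

def reversed_copy(items):
    # O(n) auxiliary space: builds a second list the size of the input
    return list(reversed(items))

def multiplication_table(n):
    # O(n^2) auxiliary space: an n-by-n matrix of intermediate results
    return [[i * j for j in range(1, n + 1)] for i in range(1, n + 1)]
```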
Calculating Time Complexity
- Identify the Input Size (n):
- Determine what represents the input size for the specific algorithm.
- This is usually the size of the input data structure, such as the length of an array, the number of nodes in a tree, or the number of characters in a string.
- Identify the Basic Operations:
- Pinpoint the fundamental operations within the algorithm, such as arithmetic operations, comparisons, assignments, function calls, and loop iterations.
- Analyze the Loop Structure:
- Nested Loops: Multiply the number of iterations of each loop to get the total number of operations.
- Single Loops: Count the number of iterations based on the loop condition and the increment/decrement.
- Determine the Dominant Term:
- Identify the term that grows the fastest as the input size n increases. This term will dominate the overall time complexity.
- Apply Big O Notation:
- Ignore constant factors and lower-order terms. Express the dominant term using Big O notation.
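The steps above can be sketched by literally counting the basic operations in two tiny loops (the counter functions are illustrative, written only to make the counts visible):

```python
def count_single_loop(n):
    # Single loop: the body runs n times -> dominant term n -> O(n)
    ops = 0
    for _ in range(n):
        ops += 1
    return ops

def count_nested_loop(n):
    # Nested loops: the inner body runs n * n times -> dominant term n^2 -> O(n^2)
    ops = 0
    for _ in range(n):
        for _ in range(n):
            ops += 1
    return ops

count_single_loop(10)   # 10 operations: grows linearly
count_nested_loop(10)   # 100 operations: grows quadratically
```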
Example: Bubble Sort
Let's analyze the time complexity of the bubble sort algorithm:
```python
def bubble_sort(arr):
    n = len(arr)
    for i in range(n - 1):            # outer loop: n - 1 passes
        for j in range(n - 1 - i):    # inner loop: compare adjacent pairs
            if arr[j] > arr[j + 1]:   # comparison (basic operation)
                arr[j], arr[j + 1] = arr[j + 1], arr[j]  # swap (basic operation)
    return arr
```
- Input Size: n (the number of elements in the array)
- Basic Operations: Comparisons and swaps.
- Loop Analysis: The outer loop makes n - 1 passes, and the inner loop performs up to n - 1 comparisons per pass, for roughly n(n - 1)/2 comparisons in total.
- Big O Notation: Dropping the constant factor and the lower-order term leaves the dominant n² term, so the final time complexity is O(n²).
This means that as the input size n increases, the runtime of bubble sort grows quadratically.
Analyzing Space Complexity
Space complexity measures the amount of extra memory an algorithm uses as a function of the input size.
Steps to Analyze Space Complexity:
- Identify Auxiliary Space: Determine the extra space used by the algorithm beyond the input data. This includes variables, data structures, and function call stacks.
- Express in Big O Notation: Represent the auxiliary space as a function of the input size.
Example: Bubble Sort (Space)
For bubble sort, the auxiliary space used is constant, regardless of the input size. This includes loop variables (i and j) and a temporary variable for swapping. Therefore, the space complexity of bubble sort is O(1).
EXAMPLES:
Time Complexity Q&A
- Question: What is the time complexity of linear search?
- Answer: O(n). In the worst case, the algorithm needs to iterate through all n elements of the array.
- Question: What is the time complexity of binary search?
- Answer: O(log n). Binary search repeatedly halves the search space, leading to logarithmic time complexity.
- Question: What is the time complexity of merge sort?
- Answer: O(n log n). Merge sort divides the array into two halves, recursively sorts them, and then merges them in linear time.
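The binary search answer can be sketched as follows (an illustrative iterative implementation, not taken from any specific library):

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent.

    Each iteration halves the remaining search space, so the loop
    runs at most about log2(n) times: O(log n) time, O(1) space.
    """
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1        # discard the lower half
        else:
            hi = mid - 1        # discard the upper half
    return -1
```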
Space Complexity Q&A
- Question: What is the space complexity of iterative bubble sort?
- Answer: O(1). Iterative bubble sort only uses a constant amount of extra space for variables.
- Question: What is the space complexity of recursive Fibonacci?
- Answer: O(n). Each recursive call adds to the function call stack, leading to linear space complexity.
- Question: What is the space complexity of merge sort?
- Answer: O(n). Merge sort requires extra space for the temporary array used to merge the sorted halves.
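The recursive-Fibonacci answer can be made visible by tracking the deepest call-stack frame reached (an illustrative sketch; the depth parameter exists only for measurement and is not part of the usual algorithm):

```python
def fib_depth(n, depth=1):
    """Return (fib(n), deepest stack frame reached).

    The running time is exponential in n, but the call stack never
    holds more than n frames at once, so auxiliary space is O(n).
    """
    if n < 2:
        return n, depth
    a, d1 = fib_depth(n - 1, depth + 1)
    b, d2 = fib_depth(n - 2, depth + 1)
    return a + b, max(d1, d2)

fib_depth(10)   # -> (55, 10): fib(10) = 55, and the stack reached depth 10
```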