Introduction
Big O notation is a mathematical notation that describes an upper bound on the time or space requirements of an algorithm as its input size grows. It is crucial for assessing algorithm efficiency in the worst-case scenario.
Key Concepts
Definition
Big O notation provides a high-level understanding of an algorithm's complexity without specifying the exact execution time.
It measures the worst-case scenario to ensure an algorithm can handle the largest input size efficiently.
Although primarily used to describe time complexity, Big O can also be applied to space complexity.
Importance
Big O is foundational for optimizing algorithms.
It helps developers choose the right algorithm for a problem based on expected input sizes and constraints.
Mathematical Behavior
The notation describes how an algorithm scales with input size using different functions:
Constant (O(1)): Time/space does not increase with input size.
Linear (O(n)): Time/space increases linearly with input size.
Logarithmic (O(log n)): Time/space increases logarithmically with input size; doubling the input adds only a constant amount of work, as when each step halves the remaining search range.
Quasilinear (O(n log n)): Time/space increases faster than linear but slower than quadratic.
Quadratic (O(n^2)): Time/space increases quadratically with input size.
Exponential (O(2^n)): Time/space doubles with each additional input element.
Factorial (O(n!)): Time/space increases factorially, used in algorithms that generate all permutations of the input.
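The growth rates above can be made concrete with small sketches. These are illustrative examples (the function names are hypothetical, not from the original text):

```python
def first_item(items):
    # O(1): one operation, regardless of input size.
    return items[0]

def contains(items, target):
    # O(n): in the worst case, every item is inspected once.
    for item in items:
        if item == target:
            return True
    return False

def binary_search(sorted_items, target):
    # O(log n): each comparison halves the remaining search range.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def all_pairs(items):
    # O(n^2): nested loops over the same input produce n * n pairs.
    return [(a, b) for a in items for b in items]
```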
Calculation Guidelines
Drop constants and lower-order terms: Keep only the term that dominates the scaling as input size increases (e.g., O(2n + 10) simplifies to O(n)).
Dominant term: When an algorithm has multiple phases with different complexities, the highest-complexity phase dictates the overall Big O.
Meaningful variables: Clearly define what each variable in the notation represents to avoid confusion. Don't just toss around "n" everywhere.
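A minimal sketch of the first two guidelines (the function names here are hypothetical):

```python
def sum_and_pairs(items):
    total = 0
    for x in items:            # linear phase: n iterations
        total += x
    count = 0
    for a in items:            # quadratic phase: n * n iterations
        for b in items:
            count += 1
    # Overall O(n + n^2) simplifies to O(n^2): the quadratic term dominates.
    return total, count

def pair_users_with_tags(users, tags):
    # Two independent inputs deserve separate variables:
    # this is O(u * t) with u = len(users), t = len(tags) -- not "O(n^2)".
    return [(u, t) for u in users for t in tags]
```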
Use Cases and Problem-Solving Tips
Common Algorithm Complexities
Linear search: O(n)
Binary search: O(log n)
Common sorting algorithms: O(n log n)
Brute force (recursion, backtracking): O(2^n), often optimized with dynamic programming
Permutations: O(n!)
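The brute-force-to-dynamic-programming optimization mentioned above can be sketched with the classic Fibonacci example (an illustration, not from the original text):

```python
from functools import lru_cache

def fib_naive(n):
    # Brute-force recursion: O(2^n), since each call branches into two more.
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_memo(n):
    # Dynamic programming via memoization: each subproblem is computed
    # once and cached, reducing the complexity to O(n).
    if n < 2:
        return n
    return fib_memo(n - 1) + fib_memo(n - 2)
```

`fib_memo(50)` returns instantly, while `fib_naive(50)` would take hours: same answer, vastly different growth rate.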
Problem-Solving Based on Constraints
When given a time limit (typically ~1 s) and input-size constraints, select an algorithm whose complexity can finish within that budget.
The table below shows the worst-case complexity you can afford for each constraint: anything faster is also fine, but anything slower will likely exceed the time limit.
| Constraint | Worst-Case Time Complexity |
| --- | --- |
| n > 10^8 | O(log n) |
| n <= 10^8 | O(n) |
| n <= 10^6 | O(n log n) |
| n <= 10^4 | O(n^2) |
| n <= 500 | O(n^3) |
| n <= 25 | O(2^n) |
| n <= 12 | O(n!) |
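The table can be turned into a quick lookup. This is a hypothetical helper that mirrors the thresholds above, assuming roughly 10^8 simple operations fit in a ~1 s time limit:

```python
def suggest_complexity(n):
    # Hypothetical helper: returns the worst complexity that should still
    # fit a ~1 s time limit for input size n, per the table above.
    if n <= 12:
        return "O(n!)"
    if n <= 25:
        return "O(2^n)"
    if n <= 500:
        return "O(n^3)"
    if n <= 10**4:
        return "O(n^2)"
    if n <= 10**6:
        return "O(n log n)"
    if n <= 10**8:
        return "O(n)"
    return "O(log n)"
```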
Conclusion
Understanding and applying Big O notation is essential for developing efficient algorithms that can handle large inputs effectively. By using Big O to evaluate algorithm performance, developers can ensure that their solutions are both optimal and scalable.
Over to you: Which complexity have you used the most so far? Comment down below.