Nuts and Bolts of Big-O

Janu Sung
5 min read · Oct 1, 2020

An essential part of developing a ‘good’ solution to an algorithm problem is determining how efficiently the code will complete its ‘work’. How will it handle large amounts of data? How much memory does it use? How quickly will it run? What would happen in the worst-case scenario?

These are the questions that Big-O Analysis attempts to clear up.

When working on algorithms that may be foundational to your application, a strong understanding of how the code you write will impact the application’s functionality and efficiency as a whole will not only make your code more efficient, but also give you insight into what the limitations of your application may be.

So let’s dive in.

Here’s a general overview of Big-O:

  • Big-O favors the worst-case performance scenario; it will always assume the upper limit where the algorithm will perform the max number of iterations to find the answer
  • Big-O is typically broken down into two descriptors: time complexity (how long an algorithm takes to run) and space complexity (how much space an algorithm requires when it runs, aka the Memory Footprint of the algorithm). Together they form the Space-Time Complexity (sounds super cool right?)
  • Time complexity and space complexity often trade off against each other: speeding an algorithm up frequently costs extra memory, and saving memory frequently costs extra time. In other words: there is a sweet spot between time complexity and space complexity that can have a major impact on your algorithm’s overall efficiency (see the sketch after this list).
  • The complexity of both time and space for an algorithm is described in Big-O notation.
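
To make that trade-off concrete, here is a minimal Java sketch (the method names are made up for illustration) of two ways to check an array for duplicates: one uses almost no extra memory but quadratic time, while the other spends linear extra memory to get linear time.

```java
import java.util.HashSet;
import java.util.Set;

public class DuplicateCheck {
    // Time-heavy, space-light: compare every pair. O(n^2) time, O(1) extra space.
    static boolean hasDuplicateSlow(int[] arr) {
        for (int i = 0; i < arr.length; i++) {
            for (int j = i + 1; j < arr.length; j++) {
                if (arr[i] == arr[j]) return true;
            }
        }
        return false;
    }

    // Space-heavy, time-light: remember what we have seen. O(n) time, O(n) extra space.
    static boolean hasDuplicateFast(int[] arr) {
        Set<Integer> seen = new HashSet<>();
        for (int value : arr) {
            if (!seen.add(value)) return true; // add() returns false if the value was already present
        }
        return false;
    }

    public static void main(String[] args) {
        int[] data = {3, 1, 4, 1, 5};
        System.out.println(hasDuplicateSlow(data)); // true
        System.out.println(hasDuplicateFast(data)); // true
    }
}
```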

Here is a helpful list with algorithm examples of each notation’s complexity, and here is a chart (created by Daniel Miessler) illustrating how each of them grows with input size.

To recap, Big-O analysis takes two things into consideration: the number of operations required relative to (n), the size of the input (Time Complexity), and the amount of memory used to implement the algorithm, again relative to (n) (Space Complexity). Both are taken into account when determining the Space-Time Complexity.

For determining Time Complexity, we can use Big-O notation to describe the algorithm based on how it behaves.

Here is a breakdown of each behavior:

Constant: O(1)

Algorithms with O(1) performance do not scale with the input size. That is, they will always execute in the same time/space regardless of the size of the input data set.
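
As a minimal illustration (the method name is hypothetical), a Java method that grabs an element by index does the same amount of work whether the array holds 5 items or 5 million:

```java
public class ConstantTime {
    // One operation, regardless of arr.length: O(1).
    static int firstElement(int[] arr) {
        return arr[0];
    }

    public static void main(String[] args) {
        System.out.println(firstElement(new int[]{7, 8, 9})); // 7
    }
}
```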

Logarithmic: O(log n)

Algorithms with O(log n) performance cut the size of the input data set in half with every iteration. This describes binary search, where on each iteration the (sorted) input is split at its midpoint and only the half that could contain the target value is kept for the next pass. Because the working input shrinks so quickly, the runtime stays small even for very large data sets. You can see this illustrated in the chart above: O(log n) produces a growth curve that rises quickly at the onset and then flattens out as the input size increases.
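
Here is a sketch of the classic binary search in Java (assuming the input array is already sorted); each pass discards half of the remaining elements, so the number of passes grows with log₂(n):

```java
public class BinarySearchDemo {
    // Returns the index of target in a sorted array, or -1 if it is not present.
    static int binarySearch(int[] sorted, int target) {
        int low = 0;
        int high = sorted.length - 1;
        while (low <= high) {
            int mid = low + (high - low) / 2;        // midpoint of the current window
            if (sorted[mid] == target) return mid;
            if (sorted[mid] < target) low = mid + 1; // keep the upper half
            else high = mid - 1;                     // keep the lower half
        }
        return -1;
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9, 11};
        System.out.println(binarySearch(sorted, 9)); // 4
        System.out.println(binarySearch(sorted, 2)); // -1
    }
}
```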

Linear: O(n)

Algorithms with O(n) performance grow linearly, in direct proportion to the size of the input data set. This describes simple loops: the longer the array, the more iterations the algorithm has to perform in order to touch each element within it.
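
For example, a simple summing loop in Java (illustrative only) does work in direct proportion to the array’s length; double the input and you double the iterations:

```java
public class LinearTime {
    // One pass over the array: O(n).
    static int sum(int[] arr) {
        int total = 0;
        for (int value : arr) {
            total += value; // touches every element exactly once
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sum(new int[]{1, 2, 3, 4, 5})); // 15
    }
}
```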

Linearithmic: O(n log n)

Algorithms with O(n log n) performance typically perform a logarithmic-time operation (think binary search) once for every element of the input; efficient sorting algorithms such as merge sort also run in O(n log n). Graphically, the curve looks similar to linear performance, but its slope keeps increasing as the input grows.
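
One way this shows up in code is running a logarithmic-time search once per element. The Java sketch below (hypothetical method names, using the standard Arrays.binarySearch) performs n binary searches, for roughly n log n operations overall:

```java
import java.util.Arrays;

public class LinearithmicDemo {
    // For each of the n queries we run an O(log n) binary search: O(n log n) in total.
    static int countFound(int[] sorted, int[] queries) {
        int found = 0;
        for (int q : queries) {                        // n iterations...
            if (Arrays.binarySearch(sorted, q) >= 0) { // ...each costing O(log n)
                found++;
            }
        }
        return found;
    }

    public static void main(String[] args) {
        int[] sorted = {1, 3, 5, 7, 9};
        int[] queries = {3, 4, 9};
        System.out.println(countFound(sorted, queries)); // 2
    }
}
```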

Quadratic: O(n²)

Algorithms with O(n²) performance are quadratic: the work is directly proportional to the square of the size of the input data set. This implies a loop nested inside another loop, where the inner loop runs in linear time (touches every element within the array on each iteration of the outer loop). Every additional level of nesting raises the power: a single nested loop gives n², a loop nested inside that gives n³, and so on and so forth.
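
A minimal illustrative example in Java, with one loop nested inside another over the same array:

```java
public class QuadraticTime {
    // n iterations of the outer loop, each doing n iterations of the inner loop: O(n^2).
    static int countPairsSummingToZero(int[] arr) {
        int count = 0;
        for (int i = 0; i < arr.length; i++) {
            for (int j = 0; j < arr.length; j++) {
                if (i != j && arr[i] + arr[j] == 0) count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countPairsSummingToZero(new int[]{1, -1, 2, 3})); // 2 (the two orderings of 1 and -1)
    }
}
```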

Exponential: O(2^n)

Algorithms with O(2^n) performance have a growth rate that doubles with each additional element in the data set. This is the slowest runtime on this list and is best avoided!
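
The textbook illustration is the naive recursive Fibonacci, sketched here in Java: each call fans out into two more calls, so the amount of work roughly doubles every time n grows by one.

```java
public class ExponentialTime {
    // Two recursive calls per invocation: roughly 2^n calls in total.
    static long fib(int n) {
        if (n < 2) return n;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        System.out.println(fib(10)); // 55, instant
        System.out.println(fib(40)); // correct, but takes noticeably longer
    }
}
```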

Calculate the Time-Complexity:

Determine how many loops and nested loops are within the algorithm, and see how each loop behaves relative to the input size. Sequential loops add their costs together, nested loops multiply them, and the dominant term (with constant factors dropped) determines which Big-O the algorithm falls under.
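
Here is a hedged example of that counting process, with the analysis written as comments (the method and its contents are made up purely for illustration):

```java
public class AnalyzeExample {
    static void process(int[] arr) {
        // A single loop over the input: O(n)
        for (int value : arr) {
            System.out.println(value);
        }

        // A loop nested inside a loop over the same input: O(n^2)
        for (int i = 0; i < arr.length; i++) {
            for (int j = 0; j < arr.length; j++) {
                // constant-time work here
            }
        }

        // Total: O(n) + O(n^2). The n^2 term dominates as n grows,
        // so the whole method is described as O(n^2).
    }

    public static void main(String[] args) {
        process(new int[]{1, 2, 3});
    }
}
```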

Calculate the Space-Complexity:

Space complexity is calculated by summing the Auxiliary space and the space required by the input values (it gets more complex when taking calling functions and the call stack into account, but we will just do a general overview here). Auxiliary space is temporary space or extra space that the algorithm requires to run.

Auxiliary space can be determined by counting the number of data structures within the algorithm and then multiplying each data structure by the number of bytes it requires. So for instance, if you had 2 arrays within an algorithm, take the count of 2 and multiply it by the number of bytes an array requires ((4n) bytes, with n representing the number of elements within the array) → 2 x (4n) = 2(4n).

Take this value and add the amount of memory the elements take up.

To determine how much space the elements within the array take up, find the total number of elements within the array and multiply that number by the amount of space the elements’ datatype requires.

So if we had an array of integers:

[1,2,3,4,5]

The number of elements is 5, and each integer requires 4 bytes, so we can calculate the space required for the elements by multiplying the number of elements by the byte count:

5 x 4 = 20 bytes

So the total Space complexity for the single array above would be:

(4x5) + (5x4) → 40 bytes

But we would generally describe this as 4n + 20, which grows linearly with the size of the input, so the array has O(n) space complexity.
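
To tie this back to code, here is an illustrative Java sketch (hypothetical method names) contrasting constant auxiliary space with auxiliary space that grows with the input, using the same simplified 4-bytes-per-int model:

```java
public class SpaceExample {
    // O(1) auxiliary space: a fixed number of extra variables, no matter how big arr is.
    static int sum(int[] arr) {
        int total = 0; // one extra int regardless of input size
        for (int value : arr) total += value;
        return total;
    }

    // O(n) auxiliary space: the copy grows in direct proportion to the input
    // (roughly 4n bytes for an int array under the simplified model above).
    static int[] doubled(int[] arr) {
        int[] copy = new int[arr.length]; // auxiliary array of n elements
        for (int i = 0; i < arr.length; i++) {
            copy[i] = arr[i] * 2;
        }
        return copy;
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4, 5};
        System.out.println(sum(data));            // 15, O(1) extra space
        System.out.println(doubled(data).length); // 5,  O(n) extra space
    }
}
```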
