Time Complexity Analysis

Understanding Big-O and why it is useful

Brandon Galloway
8 min read · Mar 14, 2021

Big-O Notation

Big-O notation gives us an approximation of how fast an algorithm runs. I say “approximation” because Big-O does not give us the exact runtime, but rather how the number of statement executions grows. If you run the same function on different machines, the runtimes will almost always differ due to the properties of each system. Big-O gives us a machine-independent way to compare the efficiency of algorithms. It describes an upper bound on the runtime, meaning an algorithm could always run better than our approximation.

Big-O approximates the cost of your function as the size of the inputs grows larger and larger. If you’re performing an operation one time, no matter how large the size of your input grows, you won’t have to worry too much about the cost growing with it. However, if you are performing more and more operations as your input size grows, your cost grows with it. For example, if you have a function that pushes an item into an array, as seen below, no matter how large your input grows, only one operation is performed per function call.

A function named ‘pushMe’ that takes in an array and a value as arguments and pushes that value into the array. It is an example of a function that runs in O(1) or constant time.
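The original snippet was a screenshot, so here is a minimal sketch of what a function like the one in the caption could look like (the name pushMe comes from the caption; the body is reconstructed):

```javascript
// Reconstructed sketch: one push per call, so the work does not grow
// with the size of myArray — this is O(1), constant time.
function pushMe(myArray, value) {
  myArray.push(value);
  return myArray;
}

pushMe([1, 2, 3], 4); // → [1, 2, 3, 4]
```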

Constant Time — O(1)

The above function runs in constant time. No matter how large myArray grows, the pushMe function only performs one operation. When the number of operations is fixed and does not depend on the input size, your function runs in O(1), or constant time. Examples of constant-time operations include pushing an item onto an array, popping an item from an array, or looking up a value in an array by its index.

Linear Time — O(n)

A function named ‘doubleMe’ that takes in an array as an argument and multiplies each element in the array by two. It is an example of a function that runs in O(n) time.
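The original code was an image; a sketch of what the doubleMe function described in the caption could look like (the name comes from the caption, the body is reconstructed):

```javascript
// Reconstructed sketch: one multiplication per element, so the work
// grows linearly with the array's length — O(n).
function doubleMe(myArray) {
  for (let i = 0; i < myArray.length; i++) {
    myArray[i] = myArray[i] * 2;
  }
  return myArray;
}

doubleMe([1, 2, 3, 4, 5]); // → [2, 4, 6, 8, 10]
```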

The function above is an example of the number of operations growing as the size of the input grows. doubleMe iterates over each element in the array and doubles it. The key word here is “each”: our array currently has five elements, so we perform five operations in total. But if our array had 10 values, our function would need to perform 10 operations. We call this linear time, represented in Big-O as O(n), where n is the size of our input. We usually see linear time when we are performing looping operations.

Logarithmic Time — O(log n)

A function named binarySearch that takes in a sorted array and a target value as arguments and checks if that target value is in the array. It is an example of a function that runs in O(log n) or logarithmic time.

If you have a sorted array and you are looking to find a target value in less than O(n) time, it’s possible! In the function above we are performing a binary search: we split the array in half on each iteration of the while loop, until we have found the target or until our upper bound is no longer greater than our lower bound. We check three conditions on each iteration:

  1. If our middle value (midVal) is equal to the target, return true.
  2. If our middle value (midVal) is greater than our target, set a new upper bound equal to the middle index (midIndex) minus one.
  3. If our middle value (midVal) is less than our target, set a new lower bound equal to the middle index (midIndex) plus one.
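The original binarySearch code was an image; here is a sketch that follows the three conditions above (midVal and midIndex come from the article, the rest is reconstructed):

```javascript
// Reconstructed sketch of the binary search described above.
function binarySearch(sortedArray, target) {
  let lowerBound = 0;
  let upperBound = sortedArray.length - 1;

  while (lowerBound <= upperBound) {
    const midIndex = Math.floor((lowerBound + upperBound) / 2);
    const midVal = sortedArray[midIndex];

    if (midVal === target) return true; // condition 1
    if (midVal > target) {
      upperBound = midIndex - 1;        // condition 2
    } else {
      lowerBound = midIndex + 1;        // condition 3
    }
  }
  return false; // bounds crossed without finding the target
}

binarySearch([1, 3, 5, 7, 9, 11], 7); // → true
```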
A tabled comparison of the O(n) and O(log n) time complexities.

Since half of the remaining values are thrown out each time we loop, our time complexity can be written as O(log₂ n). More commonly you’ll see it written simply as O(log n).

To truly understand the power of logarithmic time, see the comparison above. As the size of the input grows, the result of the O(log n) function barely changes. If you are searching for a name in a phonebook of a million pages and can only perform linear or binary search, which one are you choosing?

Quadratic Time — O(n²)

A function named twoSum which takes in an array and a target value as arguments. The function loops over each element in the array and for each iteration, it loops over the entire array again. It is an example of a function that runs in O(n²) or quadratic time.
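The original code was shown as an image; a brute-force sketch matching the caption’s description (the name twoSum comes from the caption, the body is reconstructed):

```javascript
// Reconstructed sketch: for every element, scan the whole array again
// looking for a pair that sums to the target — n * n = n² comparisons.
function twoSum(myArray, target) {
  for (let i = 0; i < myArray.length; i++) {
    for (let j = 0; j < myArray.length; j++) {
      if (i !== j && myArray[i] + myArray[j] === target) {
        return [i, j]; // indices of the matching pair
      }
    }
  }
  return null; // no pair sums to the target
}

twoSum([2, 7, 11, 15], 9); // → [0, 1]
```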

When your function has a loop nested within another loop, and the number of items you’re iterating over is not shrinking, you will most likely have an O(n²) time complexity, meaning that the time is directly proportional to the square of the input size. Nested for-loops run in quadratic time because you are performing a linear operation within another linear operation. The more you nest, the greater the exponent on your input (n): deeper nesting could result in O(n³), O(n⁴), etc. You want to avoid nesting loops because as your input size grows larger, your function experiences quadratic growth, which is costly.

Exponential Time — O(2^n)

A graph where the green line represents exponential time. You can see that it grows significantly faster than the other two functions.

If you have very little idea of what the solution to your problem is and you have to try every single combination and permutation, you usually end up in exponential time. Maybe you have to guess a password and must try every possible combination. These algorithms grow in proportion to some constant raised to the power of the input size. They are almost the opposite of logarithmic algorithms, because their runtime roughly doubles with every addition to the input size. If your algorithm runs in exponential time, you have almost certainly done something wrong: these are very expensive and inefficient algorithms.
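The article doesn’t include code for this case, but a classic illustration (not from the original) is naive recursive Fibonacci:

```javascript
// Naive Fibonacci: each call branches into two more calls, so the
// total number of calls grows roughly like 2^n.
function fib(n) {
  if (n < 2) return n;
  return fib(n - 1) + fib(n - 2);
}

fib(10); // → 55, computed with 177 function calls
```

A memoized or iterative version computes the same value with only n steps, which is why recognizing exponential growth matters.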

The Dominant Term

When we determine the Big-O of our algorithm, we only care about the dominant term: the term that grows the fastest as the input size increases. For example, if one section of our function has a nested for-loop, it runs in O(n²) time. But say another section of our function has an additional for-loop that is not nested; this runs in O(n) time. Technically, our Big-O would come out to O(n² + n). However, we only care about the term that grows the fastest, which is n², so we drop the lower-order term (n) and say our Big-O is O(n²). If you have any background in calculus, this may sound similar to finding limits.
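As a sketch (not from the article), here is a function with both a nested loop and a standalone loop, counting its own operations:

```javascript
// One nested loop (n * n operations) plus one standalone loop (n more):
// O(n² + n) in total, which simplifies to O(n²).
function countOperations(myArray) {
  let count = 0;

  for (let i = 0; i < myArray.length; i++) {
    for (let j = 0; j < myArray.length; j++) {
      count++; // runs n² times
    }
  }

  for (let i = 0; i < myArray.length; i++) {
    count++; // runs n more times
  }

  return count;
}

countOperations([1, 2, 3]); // → 12, i.e. 3² + 3
```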

Which Functions Grow the Fastest?

Big-O Complexity chart: Displays the increase in operations as the number of elements (input) grows larger.

If you’re new to Big-O and time complexity, this chart will be your best friend! It gives you a good idea of which complexities to shoot for and which to avoid. For example, if you’re getting O(2^n) for your time complexity, there is most likely a way to optimize your function. Avoid the worst complexities (the red region of the chart) at all costs to ensure your functions aren’t running too slowly.

Why should I care about Big-O?

If you ask some software engineers why it is important to learn about Big-O and the efficiency of algorithms, they might simply say, “Because companies will want you to know it for interviews.” While that’s true, I believe it also makes you a better developer. It will allow you to find bottlenecks in your code and improve them. Whether you are working on the frontend or the backend, time complexities matter. If you’re building an application and expect the number of users to grow over time, you will need to think about Big-O and the expensive algorithms that might slow your application down. Large companies want you to know about Big-O because they are building systems at scale! They are dealing with millions of people interacting with their software and need the most efficient algorithms to handle the load. Big-O only becomes more important as the size of the input grows larger and larger.

Application: A Book Report

O(1): You have to do a book report for your Literature class but you aren’t much of a reader. You go into a library that’s full of thousands of books to find any book to write about. But you don’t care to search for one, so you just grab the first one you see and use that for the report. No matter how many books there are in the library, you’re picking the first one you see. You only perform one operation.

O(n): Another student who also has to do a book report, is looking for a particular book that they heard was good. However, this is their first time visiting the library and the books aren’t sectioned or labeled, so this student has to search each book until they find the one that they’re looking for. In the worst case, this student would have to search through every single book in the library (brutal). The number of operations this student would need to perform would depend on the number of books in the library.

O(n²): Another student who finished their book report had a lot of free time on their hands. They wanted to know if there were any duplicate books in the library. So they took each book and compared it to every other book in the library, looking for a match. So not only did they search each book, but they also compared it to every other book looking to find the one that was the same. The number of operations this student would need to perform would depend on the number of books in the library squared.

O(log n): Another student, Steve, overhears that his classmate Lisa has already finished reading her book and writing her report. Steve decides to get the same book as Lisa, in hopes that she can point him to a page with some good content he can write about. Lisa says she won’t give Steve the exact page, but she will let him guess and then tell him if he’s too high, too low, or on the right page. Steve starts halfway through the book, splitting the number of pages in half. Lisa says he’s too high, so Steve rips every page higher than the middle page out of the book. Steve again chooses the middle of the pages that remain and repeats the process until Lisa confirms he’s on the correct page. In each iteration of the search, he’s cutting the number of pages he’s searching through in half.

Thanks for reading and I hope you can utilize this time complexity overview during your coding journey. If you enjoyed this or learned anything from it, please leave claps and a comment. If you observe something incorrect or that you disagree with, please leave a comment. Stay safe and happy coding!


Brandon Galloway

Web developer with a passion for science and technology.