
Algorithmic Complexity, Big O Notation


The order of an algorithm is a theoretical measure that helps us evaluate its efficiency in terms of the number of operations it performs as a function of the input size.

In the previous article, we defined what an algorithm is: a series of steps to solve a problem. But of course, for the same problem there are hundreds of possible solutions, and not all are equally efficient.

For example, imagine we want to search for a name in a printed directory with a bunch of alphabetically ordered pages.

  1. Option A: You start on the first page, read the first name, then the second… until you find it.
  2. Option B: You open it in the middle. If the name comes before, you keep the first half. If it comes after, you keep the second half. You repeat the process, dividing and discarding.

Both are valid algorithms. Both will find the name. But if the directory has 1,000,000 pages, Option A may have to read up to a million names, while Option B needs only about 20 halvings (since 2²⁰ ≈ 1,000,000) to find it.

This is where Algorithmic Complexity, expressed in Big O notation (O(...)), comes in: a performance metric that evaluates how the execution time grows as the amount of data increases.

Big O notation is not exclusive to programming; it is actually a concept inherited from mathematics, specifically from asymptotic analysis.

However, nowadays it is better known for its use in programming, due to the enormous reach of the latter (after all, there are more programmers than mathematicians).

Why don’t we measure in seconds?

Why complicate things like this? Why isn’t “My algorithm is fast because it takes 0.1 seconds” enough?

First, because time is relative to the hardware. The same code might take 0.1s on your computer, but take 2s on an old phone. Therefore, seconds don’t just measure code quality, but also machine power.

Second, and more importantly, because we care about relative growth. An algorithm might be very fast for 5 elements (doing 20 operations), but skyrocket to 10 million operations with just 5000 elements.

Big O Notation (O(N))

Technically, Big O notation describes the Asymptotic Upper Bound, i.e., it defines the limit or ceiling of an algorithm’s growth.

It basically tells us:

In the worst case, if I feed it N data items, the number of operations will grow at most this fast.

Since, simplifying a bit, execution time generally grows in step with the number of operations performed, Big O also reflects how the time increases as the input size N grows.

N represents the size of the input. If we sort a list of 10 numbers, N = 10. If we process all Facebook users, N = several billion.
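
For the mathematically inclined (an optional aside from asymptotic analysis, not needed to follow the rest of the article), the formal definition of this upper bound is:

f(n) = O(g(n))  ⟺  there exist constants c > 0 and n₀ such that f(n) ≤ c · g(n) for every n ≥ n₀

In plain words: beyond some input size n₀, the real cost f(n) never exceeds g(n) scaled by a fixed constant c.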

Let’s look at the most common types, ordered from best to worst, all the way down to “catastrophic”.

O(1) - Constant Complexity

This is the ideal scenario. It indicates that the execution time is independent of the input size. The algorithm takes exactly the same time processing 1 element as it does 1 million.

Example: Accessing the first element of an array. Knowing what’s at position 0 of a list costs the same whether the list is small or huge. It’s a direct memory operation.

function getFirst(list) {
    return list[0]; // Constant time, independent of list.length
}

O(log n) - Logarithmic Complexity

Characteristic of divide and conquer algorithms: it appears when, at each step, the algorithm discards half of the remaining data.

Its growth is very slow (which is exactly what we want), making these algorithms very scalable.

  • For N = 1,000 data ≈ 10 operations
  • For N = 1,000,000 data ≈ 20 operations 👍

It’s the basis of Binary Search or operations on balanced trees (like AVL or Red-Black Trees).
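
As a minimal sketch (assuming the list is already sorted in ascending order), this is what Binary Search looks like: each iteration discards half of the remaining range.

// Binary Search sketch: requires a list sorted in ascending order.
function binarySearch(sortedList, target) {
    let low = 0;
    let high = sortedList.length - 1;

    while (low <= high) {
        const middle = Math.floor((low + high) / 2);
        if (sortedList[middle] === target) return middle; // Found it
        if (sortedList[middle] < target) low = middle + 1; // Discard the left half
        else high = middle - 1;                            // Discard the right half
    }
    return -1; // Not found
}

binarySearch([2, 5, 8, 13, 21, 34], 13); // 3, after at most ~log2(n) iterations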

O(n) - Linear Complexity

This is the “normal” case. The execution time grows proportionally to the input. If you double the data, the time doubles.

This happens, for example, when we need to read or process each element at least once.

Example: A simple for loop to find the largest number in an unsorted list. You have to look at all of them.

function findMax(list) {
    let max = list[0];
    for (let i = 1; i < list.length; i++) {
        if (list[i] > max) max = list[i]; // One comparison per element: n operations in total
    }
    return max;
}

O(n · log n) - Linearithmic Complexity

This is the efficiency standard for sorting algorithms. Mathematically, it’s equivalent to performing a logarithmic operation n times.

Most efficient sorting algorithms (Merge Sort, Quick Sort, Heap Sort) fall into this category. It’s significantly faster than n², but slower than n.

If you use the native .sort() method in your language (JavaScript, C#, Python), you are almost certainly using an algorithm with O(n · log n) complexity.
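
As a rough sketch of the idea (illustrative code, not tied to any particular library’s implementation), here is a classic Merge Sort: the list is halved about log n times, and each level of halving requires merging on the order of n elements, which is where the n · log n comes from.

// Merge Sort sketch: ~log n levels of splitting, ~n work of merging per level.
function mergeSort(list) {
    if (list.length <= 1) return list; // Base case: already sorted

    const middle = Math.floor(list.length / 2);
    const left = mergeSort(list.slice(0, middle));  // Sort the left half
    const right = mergeSort(list.slice(middle));    // Sort the right half

    // Merge the two sorted halves: linear in the number of elements
    const merged = [];
    let i = 0, j = 0;
    while (i < left.length && j < right.length) {
        merged.push(left[i] <= right[j] ? left[i++] : right[j++]);
    }
    return merged.concat(left.slice(i)).concat(right.slice(j));
}

mergeSort([5, 3, 8, 1, 9, 2]); // [1, 2, 3, 5, 8, 9]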

O(n²) - Quadratic Complexity

Here is where the real problems begin. The execution time grows proportionally to the square of the input.

It occurs, for example, when we have a loop inside another loop (nested loops). For each element, you traverse all the others.

  • If N = 10, operations ≈ 100
  • If N = 1,000, operations ≈ 1,000,000 😱

Example: Bubble Sort algorithm or “brute force” duplicate checking.

// Check for duplicates (Naive approach)
function hasDuplicates(list) {
    for (let i = 0; i < list.length; i++) {         // Outer loop (n)
        for (let j = 0; j < list.length; j++) {     // Inner loop (n)
            if (i !== j && list[i] === list[j]) return true;
        }
    }
    return false;
}


Avoid O(n²) algorithms whenever you work with medium or large datasets. They are a very common cause of bottlenecks and timeouts in production.
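
For example, as a sketch of one common alternative (not the only possible fix), the duplicate check above can usually be brought down to roughly O(n) by trading a bit of memory for time with a Set:

// Check for duplicates in roughly O(n): one pass, with constant-time average lookups in a Set.
function hasDuplicatesLinear(list) {
    const seen = new Set();
    for (const item of list) {
        if (seen.has(item)) return true; // Already seen: duplicate found
        seen.add(item);
    }
    return false;
}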

Comparison in time units

To show you the importance of this metric, and the magnitude of the tragedy, let’s look at it in actual times (yes, I said we don’t measure in time, but it makes things easier to grasp).

Suppose, on a given computer, each operation takes 1 millisecond, and we have to process 10,000 data points.

Notation        Name            Approx. Operations    Estimated Time
O(1)            Constant        1                     1 millisecond
O(log n)        Logarithmic     14                    14 milliseconds
O(n)            Linear          10,000                10 seconds
O(n · log n)    Linearithmic    140,000               2 minutes
O(n²)           Quadratic       100,000,000           27 hours

Look closely. In this scenario, the difference between using a linear O(n) algorithm and a quadratic O(n²) one is the difference between waiting 10 seconds or waiting more than a day.

Here it’s easy to understand why we talk about “the order of” an algorithm. Whether the stopwatch reads 1 or 5 milliseconds, or 22 or 27 hours, doesn’t really matter.

The point is that the relative difference is so enormous that even if one took 5ms and the other 22 hours, it wouldn’t matter.

Slower than “very slow” (> n²)

Is there anything worse than O(n²)? Yes, of course there is 😊. Beyond quadratic complexity we enter what could be called the “Intractable Zone”:

  • Higher Polynomial (O(n³), O(n⁴), etc.): Three or more nested loops.
  • Exponential (O(2ⁿ)): Typical of unoptimized recursion (see the sketch after this list).
  • Factorial (O(n!)): Generation of all possible permutations.
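
As an illustration of unoptimized recursion (the classic textbook example, not taken from the sections above), the naive recursive Fibonacci branches into two calls at every step, giving roughly exponential growth:

// Naive recursive Fibonacci: each call spawns two more, so the number of calls grows exponentially.
function fib(n) {
    if (n <= 1) return n;
    return fib(n - 1) + fib(n - 2); // Recomputes the same values over and over
}

// fib(40) already takes noticeable time, and each +1 multiplies the work by roughly 1.6.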

These are orders of magnitude much worse. If with the previous example (N = 10,000) the quadratic algorithm took 27 hours, these algorithms could take centuries or millennia to finish.

In general, we consider these algorithms infeasible. You should only use them if your data size is tiny, or if you are tackling very specific problems where no more efficient solution is known (like exact solutions to the traveling salesman problem) or where the high cost is deliberate (as in cryptography).