Understanding Algorithms and Big O Notation

An algorithm, in the most general sense, is a series of steps that you take to accomplish a task. You use algorithms all the time in your daily life, from following a recipe to sorting a deck of cards. In computer science, algorithms are the procedures or formulas we use to solve problems.

However, not all algorithms are created equal. Some are efficient and fast, while others are slow and resource-intensive. This is where Big O notation comes in.

Big O notation is a way to communicate the efficiency of an algorithm – more specifically, the worst-case amount of time or space an algorithm requires. When we use Big O notation, we express an upper bound on the time complexity – in other words, the maximum amount of time it could take.

Now, let’s talk about O(n), also known as linear time complexity. If an algorithm has a time complexity of O(n), it means that as the size of the input grows, the time it takes to run the algorithm grows linearly. In other words, if you double the size of the input, the time it takes to run the algorithm will roughly double as well.

Imagine you’re at a party and you’re trying to find a friend named Alice. You decide to do this by going through each person one by one and asking, “Are you Alice?” In the worst-case scenario, Alice is the last person you ask. If there were 10 people at the party, you’d have to ask 10 times. If there were 100 people, you’d have to ask 100 times. This is a perfect example of O(n) – the time it takes to find Alice grows linearly with the size of the party.

To provide a bit more context, consider this Python code:

def findAlice(names):
    for name in names:
        if name == "Alice":
            return True
    return False

The function findAlice has a time complexity of O(n) because in the worst-case scenario (Alice isn’t in the list), it has to go through every name in the list once.

Now, keep in mind that O(n) is not the best or the worst time complexity. Some algorithms can have better time complexity (like O(log n) or O(1)) and some can have worse (like O(n^2) or O(2^n)). The goal is to strive for the most efficient algorithm possible given your specific constraints and needs.
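To make O(log n) concrete, here is a sketch of binary search (the classic O(log n) algorithm; the function name and list contents here are illustrative). Note that it only works on a sorted list, and each step halves the remaining search range:

```python
def binary_search(sorted_names, target):
    # Each iteration halves the search range, so the loop runs
    # at most about log2(n) times -- O(log n) in the worst case.
    low, high = 0, len(sorted_names) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_names[mid] == target:
            return True
        elif sorted_names[mid] < target:
            low = mid + 1
        else:
            high = mid - 1
    return False
```

Compare this with findAlice above: with 1,000 names, the linear scan may need up to 1,000 comparisons, while binary search needs only about 10.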

So, why is this important? Understanding the time complexity of your algorithms and how to optimize it can be the difference between an application that scales smoothly and provides a great user experience, and an application that becomes sluggish or even unresponsive under heavy load.


Example 1: Constant Time – O(1)

This function runs in O(1) time (or “constant time”) relative to its input. The input list could be 1 item or 1,000 items, but this function would still just require one “step.”

def print_first_element(myList):
    print(myList[0])

Example 2: Linear Time Complexity – O(n)
Here’s an example of a function with linear time complexity:
Here’s an example of a function with linear space complexity:

def print_all_items(items):
    for item in items:
        print(item)

This function runs in O(n) time (or “linear time”), where n is the number of items in the list. If the list has 10 items, we have to print 10 times. If it has 1,000 items, we have to print 1,000 times.

Example 3: Quadratic Time – O(n^2)

def print_all_possible_ordered_pairs(items):
    for first_item in items:
        for second_item in items:
            print(first_item, second_item)

Here we’re nesting two loops. If our list has n items, our outer loop runs n times, and our inner loop runs n times for each iteration of the outer loop, giving us n * n = n^2 total prints. Thus this function runs in O(n^2) time (or “quadratic time”). If the list has 10 items, we have to print 100 times. If it has 1,000 items, we have to print 1,000,000 times.

Calculating Time Complexity
  1. Identify the Basic Operations: In your code, look for loops, recursive calls, and other operations that change with input size.
  2. Count the Operations: Estimate how many times these operations are executed in terms of input size.
  3. Formulate Time Complexity: Express the number of operations in Big O notation (e.g., O(n), O(log n), O(n^2)).
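The three steps above can be sketched on a small example (the function and the counts in the comments are illustrative, not from a specific library):

```python
def count_duplicate_pairs(items):
    # Step 1 -- identify the basic operation: the comparison
    # inside the nested loops.
    # Step 2 -- count it: the outer loop runs n times, and for each
    # outer iteration the inner loop runs n times, so the comparison
    # executes n * n times.
    # Step 3 -- formulate: n * n operations -> O(n^2) time complexity.
    count = 0
    n = len(items)
    for i in range(n):
        for j in range(n):
            if i != j and items[i] == items[j]:
                count += 1
    return count
```

Walking through your own code this way, loop by loop, is usually enough to read off the Big O class without any formal math.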

Conclusion:
Algorithm speed isn’t measured in seconds but in the growth of the number of operations.
We talk about how quickly the run time of an algorithm increases as the size of the input increases.
The run time of an algorithm is expressed in Big O notation.
O(log n) is faster than O(n), and its advantage grows as the list of items you’re searching grows.

Hello, I’m Anuj. I make and teach software.
