Before jumping into the different types of data structures, which are different ways of storing data so that it is easy to access and manipulate, we need to understand how their efficiency is evaluated. When storing a large set of data in a data structure, we want to choose one that lets us efficiently retrieve, search, insert, or delete an element. The efficiency of an algorithm is measured by a concept called Big O Notation.
What is Big O?
Big O Notation is one of the scariest 😱 concepts for new developers (especially those from a non-traditional background). As stated above, Big O measures the complexity/efficiency of the code or algorithm we write, and gives us a common language for discussing that complexity with other engineers. Big O measures the rate of growth of a program by counting the number of operations the computer needs to perform before the program completes, given the worst-case scenario. Since we cannot predict the input in advance, we want to know how poorly our algorithm will perform in the worst possible case, so that we can decide whether a better alternative exists.
It mainly helps us answer one question about our code: “Can we optimize it?”
The table below summarizes the time complexity equations from most efficient to least efficient:

O(1): Constant
O(log n): Logarithmic
O(n): Linear
O(n log n): Linearithmic
O(n^2): Quadratic
O(2^n): Exponential
O(n!): Factorial
The complexity equations can also be visualized as curves on a graph of operations against input size: the more efficient equations stay nearly flat as the input grows, while the less efficient ones climb steeply.
We will only be discussing the top four time complexity equations:
Constant Time Complexity O(1)
Let’s say we have a function that checks whether a number is even. In JavaScript, the function is defined as follows:
const isEven = num => num % 2 === 0;
No matter what value we pass to isEven, it only needs to perform one operation to check whether the number is even or odd; hence the constant time complexity, O(1).
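For example:

isEven(4);  // true
isEven(7);  // false

Either way, it is a single modulo check.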
Let’s look at another example: checking what time of day it is:
const checkTime = num => {
  if (num < 12) {
    return 'Morning';
  } else if (num >= 12 && num < 18) {
    return 'Afternoon';
  } else {
    return 'Evening';
  }
}
Our worst-case scenario here is still O(1). Why? Well, although we check multiple conditions, the rate of growth is still constant and does not depend on the input. Plus, we know the upper bound, or worst-case scenario: checking a maximum of 3 conditions. So, no matter what the input is, that upper bound will not change.
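To see it in action:

checkTime(9);   // 'Morning'
checkTime(14);  // 'Afternoon'
checkTime(20);  // 'Evening'

Each call performs at most 3 comparisons, no matter which number we pass in.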
Constant time complexity is like the Michael Jordan of Big O Notation
Logarithmic Time Complexity O(log n)
The second-best time complexity we can achieve is O(log n). Basically, as the size of the input increases, the number of operations it takes to complete the task grows in proportion to the logarithm of the input size. Let’s look at an example to better understand this logarithmic stuff.
The classic algorithm used to illustrate logarithmic complexity is **Binary Search**. Binary search is an algorithm used to search for an element in an ordered list. It works by first checking the value at the center of the list. If the value it is looking for is smaller than the one found, it repeats the process on the left half of the list; if it is bigger, it checks the right half. This process is repeated until the element is found. Ok, more on Binary Search later …
Let’s say we are given the following sorted array:
const numbers = [2, 3, 4, 6, 9, 13, 23, 25, 35, 55]
and we are asked to find the position of the number 55.
Well, our first instinct might be the so-called brute-force approach: looping over every element of the array and checking whether the element at that position is 55. In code, this looks as follows:
const bruteSearch = (array, num) => {
  // check every element in turn
  for (let i = 0; i < array.length; i++) {
    if (array[i] === num) {
      return `The position of ${num} in the list is ${i}`;
    }
  }
  return false; // the number is not in the list
}
Well, this is not logarithmic time complexity but linear, O(n), which will be discussed in the next section. Let’s look at the O(log n) solution:
const binarySearch = (array, num) => {
  let startIndex = 0;
  let endIndex = array.length - 1;
  while (startIndex <= endIndex) {
    let pivot = Math.floor((startIndex + endIndex) / 2);
    if (array[pivot] === num) {
      return `The position of ${num} in the list is ${pivot}`;
    } else if (array[pivot] < num) {
      startIndex = pivot + 1;
    } else {
      endIndex = pivot - 1;
    }
  }
  return false;
}
As stated in the logic of binary search, we start by looking at the number in the middle of the array, then move left or right depending on whether the number at the pivot position is greater or less than the target number. At each iteration, we halve the remaining input; thus the logarithmic time.
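Running both solutions on our sorted array confirms the position:

binarySearch(numbers, 55);  // 'The position of 55 in the list is 9'

binarySearch finds 55 after checking only 4 pivots (9, 25, 35, then 55), whereas bruteSearch has to check all 10 elements.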
Linear Time Complexity O(n)
In O(n), the runtime of the algorithm grows linearly, in direct proportion to the size of the input data set. The first solution (the brute-force approach) from the previous section is an example of linear time complexity: to find the target number, we had to check every element in the given array. Thus, the linear time complexity.
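As another minimal O(n) sketch (findMax is a hypothetical helper, not taken from the examples above), finding the largest value in an array touches every element exactly once:

const findMax = array => {
  let max = array[0];
  // one pass over the input: n elements means n comparisons
  for (let i = 1; i < array.length; i++) {
    if (array[i] > max) max = array[i];
  }
  return max;
};

findMax(numbers);  // 55

Doubling the length of the array doubles the number of comparisons.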
Quadratic Time Complexity O(n^2)
Quadratic time complexity sits at the opposite end of the spectrum from logarithmic complexity: with quadratic complexity, execution time increases at an increasing rate as the input grows.
Quadratic time means that the function’s runtime is proportional to the square of the input size. Good examples of quadratic time are a constant-time operation inside two nested for-loops, comparing two integer lists against each other, and bubble sort.
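Here is a minimal sketch of that nested-loop pattern (findCommon is a hypothetical helper, not from the original examples), comparing two integer lists against each other:

const findCommon = (listA, listB) => {
  const common = [];
  for (let i = 0; i < listA.length; i++) {
    // the inner loop runs n times for every pass of the outer loop
    for (let j = 0; j < listB.length; j++) {
      if (listA[i] === listB[j]) {
        common.push(listA[i]);
      }
    }
  }
  return common;
};

findCommon([1, 2, 3], [2, 3, 4]);  // [2, 3]

For inputs of size n, the inner comparison runs n × n = n^2 times.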
That’s all I have in store for Big O. I hope you had a great time learning with me.
Until next time,
Happy Coding!!