# A Small Introduction to Big O

If you’re learning to code or working with computer programming then it’s likely that you’ve come across ‘Big O’, ‘time complexity’, ‘O(n)’, or the ‘order of complexity’.

I often hear this concept described as the time it takes for a function to run. And while this is true in a sense, it doesn’t fully capture what we’re trying to represent when we talk about big O.

That’s because actual runtime can vary from machine to machine, but the number of steps taken to complete a process is the same no matter how fast the machine. That might help us better grasp what we’re trying to measure.

First, let’s get the technical definition down and out of the way. Wikipedia defines it as follows:

> Big O notation is a mathematical notation that describes the limiting behavior of a function when the argument tends towards a particular value or infinity.

If you want to geek out on that a little, look up asymptotes; but the point of this article is to gain a conversational, working understanding of the concept.

I promise that if you have hyped up big O as some complex, in-the-weeds, CS-major-type concept, then you are overthinking it. You can breathe easy, because it’s not.

We’ll go through three cases and when we’re through we won’t feel out of place the next time big O comes up in a conversation about code.

Let’s start with the simplest case, O(1), otherwise known as constant time.

Constant time is an appropriate name because no matter how large your input data set is, the runtime will remain the same.

Consider the following JavaScript example:

```javascript
function logFirstItem(items) {
  console.log(items[0]);
}
```

This example assumes the `items` input is an array and we are logging the first item of the array to the console.

No matter how large or small that array is, the steps (or step, in this case) involved will only happen once: O(1). The time complexity of this function is constant.
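To make that concrete, here’s a minimal sketch that counts the steps taken. The `firstItemStepCount` helper is a name introduced here for illustration, not part of the example above:

```javascript
// Hypothetical helper: counts the steps logFirstItem would take.
// The count is always exactly 1, regardless of the input's length.
function firstItemStepCount(items) {
  let steps = 0;
  console.log(items[0]); // one step: a single array access and log
  steps += 1;
  return steps;
}

firstItemStepCount([1, 2, 3]);          // 1 step
firstItemStepCount(new Array(1000000)); // still 1 step
```

Whether the array holds three items or a million, the work done is identical.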

Linear time, or O(n), is also aptly named: the time complexity grows along with the size of the input data. The steps involved in our function are carried out once for each of the n items in the input.

So let’s look at an example of this in action:

```javascript
function logAllItems(items) {
  items.forEach(item => {
    console.log(item);
  });
}
```

Again we can assume the `items` input in our example is an array, and we can see that for each `item` in the array we log that `item` to the console. If our `items` array is five items long, that’s not a problem, but we should always consider the worst-case scenario: what about 1,000,000 items? What about a number approaching infinity?

According to our function, we are going to need to log each item to the console no matter how large that array might be. But our time complexity stays the same, O(n), and that is important and can be counterintuitive: no matter how large the input is, the time complexity will not change as long as the function does not change. In fact, it’s best to assume the worst-case scenario of large data sets from the start.

Let’s look at one more Linear time example that can trip us up when we’re new to this:

```javascript
function logAllItems(items) {
  items.forEach(item => {
    console.log(item);
  });
  items.forEach(item => {
    console.log(item);
  });
}
```

This example is still linear and could be written O(2n), though in big O notation we drop constant factors, so it simplifies to O(n). Remember, all we care about is how quickly the runtime grows relative to the input.

So let’s check that runtime against the input. If we have 1 item in our array, we log 2 outputs. If we have 2 items, we log 4 outputs. If we have 3 items, we log 6 outputs. Notice that the output continues to grow at the same rate, and it will keep growing at that rate all the way up to infinity. Of course, I’m saying ‘output’ in place of runtime because that makes for an easy comparison for our purposes here.

So the runtime grows steadily with the input, and that is exactly what time complexity/big O is all about: we are expressing that steadily increasing rate.
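We can verify that 2n growth directly with a small sketch. The `countLogs` helper is introduced here for illustration; it mirrors the doubled loop above but counts calls instead of logging:

```javascript
// Hypothetical helper: counts how many times the doubled loop would
// log. Two passes over the array means exactly 2 * n counts.
function countLogs(items) {
  let count = 0;
  items.forEach(() => { count += 1; }); // first pass: n steps
  items.forEach(() => { count += 1; }); // second pass: n more steps
  return count;
}

console.log(countLogs([1]));       // 2
console.log(countLogs([1, 2]));    // 4
console.log(countLogs([1, 2, 3])); // 6
```

Doubling the input always doubles the count, which is the definition of linear growth.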

Since we first understood constant time, linear time felt like a natural progression. Now we’re going one step further and it will feel just as natural.

If linear time can be illustrated with a loop (`forEach`), then we might guess how to illustrate quadratic time, or O(n²): with nested loops.

When we check out this example it might help to see one possible input along with the example:

```javascript
let possibleInput = [0, 2, 5, 7];

function logAllPossibleOrderedPairs(items) {
  items.forEach(firstItem => {
    items.forEach(secondItem => {
      console.log(firstItem, secondItem);
    });
  });
}

logAllPossibleOrderedPairs(possibleInput);
// output => 0 0, 0 2, 0 5, 0 7, 2 0, 2 2, 2 5, 2 7,
//           5 0, 5 2, 5 5, 5 7, 7 0, 7 2, 7 5, 7 7
```

So you can probably tell that this function moves through the input `items`, and for each `item` it logs 4 pairs, one for each item in the array. Count the total number of pairs logged: 16, which is the square of the length of our input. This will continue to hold all the way up to infinity. Again we’ve been able to define that steady change. We have found the time complexity of this function.
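As before, we can count the work instead of logging it. The `countPairs` helper below is introduced here for illustration; it mirrors the nested loops and returns n² for an input of length n:

```javascript
// Hypothetical helper: counts the ordered pairs the nested loops
// would log. For an input of length n, the inner body runs n * n times.
function countPairs(items) {
  let count = 0;
  items.forEach(() => {
    items.forEach(() => { count += 1; });
  });
  return count;
}

console.log(countPairs([0, 2, 5, 7])); // 16, the square of 4
console.log(countPairs([1, 2, 3]));    // 9, the square of 3
```

Adding one item to the input adds a whole extra row and column of pairs, which is why quadratic growth outpaces linear growth so quickly.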

I hope you have a much better grasp on the concept of time complexity now, and that you won’t shy away from questions regarding this topic.

While yes, we used easily recognizable patterns in our examples, the fundamentals of big O will always be present, and as we practice noticing its patterns we will spot them more quickly and be able to write faster, more efficient code.