A Simple Introduction to Big O Notation

Crystal McClure
2 min read · Dec 19, 2020


One of the most important terms you will come across as a data scientist or developer is "Big O notation." Big O is a notation used to express an algorithm's complexity in terms of time and space; in other words, it describes how an algorithm scales with the size of its input.
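To make this concrete, here is a minimal sketch (with illustrative function names of my own, not taken from the article) contrasting three common complexities on the same list input:

```python
def get_first(items):
    # O(1): runtime is constant, regardless of input size.
    return items[0]

def contains(items, target):
    # O(n): in the worst case, every element is inspected once,
    # so runtime grows linearly with the input size.
    for item in items:
        if item == target:
            return True
    return False

def has_duplicate(items):
    # O(n^2): the nested loops compare every pair of elements,
    # so runtime grows quadratically with the input size.
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if items[i] == items[j]:
                return True
    return False
```

Doubling the input roughly doubles the work for `contains`, but quadruples it for `has_duplicate`; that difference in growth rate is exactly what Big O captures.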

This is particularly essential for data science applications. Many of the datasets we use to train and develop machine learning models are moderate in size, so it is quite important for a data scientist to fully understand how a model's behavior will change when it is applied to larger datasets.

The best and most efficient way to reason about this behavior is Big O notation. If you're getting into data science from a technical background — having studied computer science, engineering, or a related field — then you may already be familiar with Big O notation. However, if you're switching to data science from a non-technical field, you might find Big O notation somewhat complex.

The good news is, even those with technical backgrounds sometimes find Big O confusing. That's not because it's difficult to understand, but because applying it in practice is not always straightforward.

This article will provide a simple introduction to Big O and show how you can derive the Big O of your code. To explain the different complexities, I will use Python code. However, the same logic can be implemented in any other programming language.
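As a preview of that approach, here is a small sketch (my own illustration, using only Python's standard `timeit` module) of how linear, O(n), growth can be observed empirically: doubling the input size roughly doubles the runtime.

```python
import timeit

def linear_sum(items):
    # One pass over the input: O(n) time.
    total = 0
    for x in items:
        total += x
    return total

# Time the function on successively doubled input sizes.
for n in (1_000, 2_000, 4_000):
    data = list(range(n))
    t = timeit.timeit(lambda: linear_sum(data), number=1_000)
    print(f"n={n:>5}: {t:.4f}s")
```

Measurements like this complement the notation: Big O tells you the expected growth rate analytically, and a quick timing loop lets you sanity-check it on real inputs.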
