This blog post explains how the simple linear regression algorithm works. It is part of the blog post series Understanding AI Algorithms.

If you use AI in marketing or elsewhere, it is useful to have basic knowledge of some of the algorithms used in machine learning and predictive analytics.

Read my blog post Understanding AI algorithms for an introduction and background. Now, let’s see how the simple linear regression algorithm works! This is one of the simplest algorithms in use, and it should be very easy to understand.

Simple linear regression is a method for predicting the value of one variable from the value of another. While it’s often too simple for real-life data science, it’s an important stepping stone to understanding more complex algorithms.

This method assumes that the relationship between the dependent and the independent variable is linear, meaning that it can be illustrated as a straight line.

Imagine climbing a mountain 1000 feet high with a constant grade or steepness. There is a linear relationship between the time you have spent walking and your elevation, assuming you walk at a constant speed.

Therefore, if we know the time spent walking, we can predict the elevation that will be reached.
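The mountain example can be sketched in a few lines of code. The climb rate below is an invented number chosen so that the 1000-foot summit is reached in 100 minutes; the point is only that a constant rate makes elevation a linear function of time:

```python
# Hypothetical mountain hike: with a constant grade and walking speed,
# elevation is a linear function of time. The rate is made up for
# illustration.
climb_rate = 10.0  # feet gained per minute (assumed constant)

def elevation(minutes):
    """Predict elevation (in feet) after walking for the given minutes."""
    return climb_rate * minutes

print(elevation(50))   # halfway up the 1000-foot mountain
print(elevation(100))  # at the summit
```

Knowing one variable (time) lets us predict the other (elevation), which is exactly what simple linear regression does with data.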

Let’s look at another example.

Assume that the figure above illustrates work experience in years on the horizontal axis (x-axis) and salary on the vertical axis (y-axis).

A person’s work experience accumulates on its own and does not depend on salary, so it is the independent variable. Conversely, salary is tied to work experience, so it is the dependent variable.

Each point on the graph is one person’s salary at a specific number of years of experience.

The data points vary because people work in different positions, but there is a linear relationship between the two variables: salary increases with the number of years of experience.

Simple linear regression summarizes the relationship between one dependent and one independent variable by fitting a line through the scattered observations. This is called a regression line.

This allows us to interpret how much the dependent variable will increase or decrease when the independent variable changes, or to predict the dependent variable when we only know the value of the independent variable.

In other words, by using a model to understand all the data points, we can extrapolate where your salary is expected to be after a certain number of years.
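The fit-and-extrapolate idea above can be sketched with a small least-squares fit. The experience and salary numbers below are invented for illustration, and NumPy’s `polyfit` is just one convenient way to fit a straight line:

```python
import numpy as np

# Hypothetical data: years of experience (x) and salary (y).
experience = np.array([1, 2, 3, 5, 7, 10], dtype=float)
salary = np.array([40_000, 45_000, 52_000, 60_000, 71_000, 85_000],
                  dtype=float)

# A degree-1 polynomial fit is the least-squares regression line
# y = slope * x + intercept.
slope, intercept = np.polyfit(experience, salary, 1)

# Extrapolate: expected salary after 12 years of experience.
predicted = slope * 12 + intercept
print(f"salary ~ {slope:.0f} * years + {intercept:.0f}")
print(f"expected salary at 12 years: {predicted:.0f}")
```

With these made-up numbers the slope comes out around 5,000, i.e. roughly 5,000 in extra salary per year of experience.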

When fitting the line through the data points, the goal is to explain as much of the variation in the dependent variable as possible. In other words, we try to minimize the vertical distance between the data points and the fitted line.
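This minimization can be checked directly. In the sketch below (with invented data points), the fitted line has the smallest sum of squared vertical distances: nudging its slope or intercept in any direction only increases the error.

```python
import numpy as np

# Small invented data set for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([2.1, 3.9, 6.2, 7.8])

# Least-squares fit of a straight line.
slope, intercept = np.polyfit(x, y, 1)

def sse(m, b):
    """Sum of squared vertical distances for the line y = m*x + b."""
    return float(np.sum((y - (m * x + b)) ** 2))

best = sse(slope, intercept)

# The fitted line beats any slightly perturbed line.
assert best <= sse(slope + 0.1, intercept)
assert best <= sse(slope - 0.1, intercept)
assert best <= sse(slope, intercept + 0.1)
```

This "sum of squared distances" criterion is why the method is called least squares.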

This is why we have to handle outliers. If someone starts with an extremely high salary after only one year of experience, it would flatten out the slope of the line and reduce the model’s accuracy.

This is because the line would be skewed by the combination of an extremely high salary and short work experience, an uncommon exception in a group of otherwise typical data points.

The advantage of linear regression is that it is easy to understand. This model is also easily updated with new data.

However, linear regression also has some disadvantages.

Because of this simplicity, problems can arise when the assumption of a linear relationship is incorrect. It is possible to transform the variables to meet the requirements of a linear relationship, but this complicates the otherwise simple interpretation of the results.

Linear regression is useful, but it also risks oversimplification. As mentioned, another disadvantage is that the linear model is sensitive to extreme observations (outliers) that may not be appropriate to take into account when trying to explain a larger data set.

To turn a prediction model based on linear regression into a continuously learning and adaptive prediction system, the model must ingest an updated data set and re-fit the regression line, taking the new data into account.

If the updated data set includes points that cause the new regression line to have a different slope, the prediction model will give different predictions. If this retraining process is repeated continuously as new data points become available, the model will change its behavior (predictions), thus creating a learning machine.
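The retraining loop described above can be sketched as follows; all data values are invented, and re-fitting on every new point is the simplest possible update strategy:

```python
import numpy as np

# Initial hypothetical training data.
experience = [1.0, 3.0, 5.0, 7.0]
salary = [42_000.0, 51_000.0, 61_000.0, 70_000.0]

def fit(xs, ys):
    """Re-fit the regression line on the current data."""
    slope, intercept = np.polyfit(np.array(xs), np.array(ys), 1)
    return slope, intercept

def predict(model, years):
    slope, intercept = model
    return slope * years + intercept

model = fit(experience, salary)

# A new observation arrives: append it and re-fit. The new line may
# have a different slope, so predictions change.
experience.append(10.0)
salary.append(95_000.0)
model = fit(experience, salary)

print(round(predict(model, 6)))
```

Repeating the append-and-re-fit step whenever new data arrives is what turns the static line into an adaptive prediction system.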

If you want to read all the related articles on the topic of AI algorithms, here is the list of all blog posts in this article series: