In the Machine Learning Fundamentals course, we walked through the full machine learning workflow using the k-nearest neighbors algorithm. K-nearest neighbors works by finding, for each instance in the test set, the most similar labeled examples in the training set and using their labels to make a prediction.

K-nearest neighbors is known as an instance-based learning algorithm because it relies entirely on stored training instances to make predictions. It doesn't try to learn or capture the relationship between the feature columns and the target column. Now we're going to dig into a different way of making predictions with machine learning: linear regression.
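To make the idea of instance-based learning concrete, here is a minimal sketch of a 1-D k-nearest neighbors regressor in plain Python. The toy data and the `predict_knn` name are our own illustration, not part of the lesson:

```python
# Illustrative sketch of instance-based learning: a 1-D k-nearest
# neighbors regressor that stores the training set and predicts by
# averaging the labels of the k closest training examples.

def predict_knn(train, x, k):
    """train: list of (feature, label) pairs; x: query feature value."""
    # Rank all training examples by distance to the query point.
    neighbors = sorted(train, key=lambda pair: abs(pair[0] - x))[:k]
    # The prediction is just the mean label of those k neighbors --
    # no model of the feature/target relationship is ever built.
    return sum(label for _, label in neighbors) / k

train = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0), (10.0, 20.0)]
print(predict_knn(train, 2.5, k=2))  # averages the labels 4.0 and 6.0 -> 5.0
```

Notice that every prediction requires scanning the stored training set; nothing is summarized or generalized ahead of time.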

In this lesson, we'll provide an overview of how we use a linear regression model to make predictions. We'll use scikit-learn for the model training process, so we can focus on gaining intuition for the model-based learning approach to machine learning. In later lessons in this course, we'll dive into the math behind how a model is fit to the dataset, how to select and transform features, and more.
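As a preview of what scikit-learn automates during model training, here is a minimal least-squares fit for simple linear regression in plain Python. This is an intuition-building sketch with toy data of our own; the lesson itself uses scikit-learn:

```python
# Closed-form least-squares solution for y = intercept + slope * x:
#   slope = cov(x, y) / var(x)
#   intercept = mean(y) - slope * mean(x)
# (Toy data and function name are illustrative, not from the lesson.)

def fit_simple_linear(xs, ys):
    n = len(xs)
    x_bar = sum(xs) / n
    y_bar = sum(ys) / n
    # Least squares picks the line that minimizes the sum of
    # squared residuals between predictions and actual labels.
    slope = sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys)) \
        / sum((x - x_bar) ** 2 for x in xs)
    intercept = y_bar - slope * x_bar
    return slope, intercept

xs, ys = [1, 2, 3, 4], [3, 5, 7, 9]   # data lying exactly on y = 2x + 1
slope, intercept = fit_simple_linear(xs, ys)
print(slope, intercept)               # -> 2.0 1.0
print(intercept + slope * 5)          # model-based: generalizes to new x -> 11.0
```

Unlike k-nearest neighbors, once the two parameters are fit, predictions no longer need the training data at all; this is the model-based approach the lesson explores.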

As always on Dataquest, this lesson features our interactive code-running system so you can write, run, and check your code all from within your web browser.

#### Objectives

#### Lesson Outline

1. Instance-Based Learning Vs. Model-Based Learning

2. Introduction To The Data

3. Simple Linear Regression

4. Least Squares

5. Using Scikit-Learn To Train And Predict

6. Making Predictions

7. Multiple Linear Regression

8. Next Steps

9. Takeaways