#10 Machine Learning Specialization [Course 1, Week 1, Lesson 3]

**The Basics of Supervised Learning**

In supervised learning, we're given both the input features and the output targets. The goal is to learn from this data by creating a model that can make predictions on new, unseen data. This process involves feeding the training set into our machine learning algorithm, which produces a function, commonly referred to as a hypothesis or simply a function f. The job of this function is to take a new input X and output an estimate or prediction, denoted as Y hat.

**The Role of the Model**

In machine learning, the convention is that y hat denotes the model's estimate or prediction, while plain y denotes the actual true value, that is, the target in the training set. In other words, y hat is an estimate that may or may not equal the true value. For example, if you're helping your client sell their house, the true price of the house is unknown until it sells, so your model f takes the size of the house as input and outputs an estimate of what the true price will be.

**The Function F**

For now, let's stick with a simple model in which f is a straight line. The function can be written as f_{w,b}(x) = wx + b. Here, w and b are numbers whose values determine the prediction y hat based on the input feature x. The notation f_{w,b}(x) means that f is a function that takes x as input and outputs a prediction y hat.
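This model can be sketched in Python in a few lines. The parameter values w = 200 and b = 100 and the input size are illustrative placeholders, not values from the lecture:

```python
def f_wb(x, w, b):
    """Linear model f_{w,b}(x) = w*x + b: return the prediction y-hat for input x."""
    return w * x + b

# Illustrative parameters and input (e.g., house size in 1000s of square feet).
w, b = 200, 100
x = 1.2
y_hat = f_wb(x, w, b)
print(y_hat)  # 200 * 1.2 + 100 = 340.0
```

Note that the function is fully determined once w and b are fixed; choosing good values for them is exactly what the learning algorithm does.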

**Plotting the Training Set**

To visualize the linear function f_{w,b}(x) = wx + b, or more simply f(x) = wx + b, plot the training set on a graph: the input feature x goes on the horizontal axis, and the output target y goes on the vertical axis. The straight line produced by the learning algorithm is then the best-fit line through this data.
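One way to produce this plot is with matplotlib. The training data and parameter values below are made up for illustration; they are not the lecture's dataset:

```python
import matplotlib.pyplot as plt

# Hypothetical training set: sizes (1000s of sqft) and prices (1000s of dollars).
x_train = [1.0, 1.5, 2.0, 2.5]
y_train = [300, 380, 480, 540]

# Illustrative parameter values for the line f(x) = wx + b.
w, b = 200, 100
y_line = [w * x + b for x in x_train]

plt.scatter(x_train, y_train, marker='x', label='training data')  # the (x, y) targets
plt.plot(x_train, y_line, label='f(x) = wx + b')                  # the model's line
plt.xlabel('size (1000s of sqft)')
plt.ylabel('price (1000s of dollars)')
plt.legend()
plt.show()
```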

**The Choice of Function**

You may ask why we're choosing a linear function when other non-linear functions like curves or parabolas might be more suitable. Sometimes, you want to fit more complex non-linear functions as well. However, for now, let's use a line as a foundation that can eventually lead to more complex models.

**Linear Regression**

The particular model we've chosen is called linear regression. More specifically, this is linear regression with one variable, meaning there's only one input variable or feature x – the size of the house. Another name for a linear model with one input variable is univariate linear regression. The prefix "uni" comes from the Latin "unus," meaning one, and "variate" refers to the variable.

**Future Developments**

When you're done watching this video, there's another optional lab that you can review or attempt to complete. In the lab, you'll learn how to define in Python a straight line function and try out different values of w and b to fit the training data. This hands-on experience will give you a deeper understanding of linear regression and its applications.
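As a preview of that exercise, here is a minimal sketch of trying out different (w, b) pairs against a tiny training set and comparing the predictions to the targets. The data and parameter values are illustrative, not the lab's:

```python
# Hypothetical training set: two house sizes and their prices.
x_train = [1.0, 2.0]
y_train = [300, 500]

def f_wb(x, w, b):
    """Linear model f_{w,b}(x) = w*x + b."""
    return w * x + b

# Try a few candidate (w, b) pairs and see how close the predictions come.
for w, b in [(100, 100), (200, 100), (250, 0)]:
    preds = [f_wb(x, w, b) for x in x_train]
    print(f"w={w}, b={b}: predictions={preds}, targets={y_train}")
```

For this toy data, w = 200 and b = 100 reproduce the targets exactly; in general, no line fits perfectly, which is why we need a principled way to measure how well a given (w, b) fits the data.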

**Constructing a Cost Function**

In order for you to make this work, one of the most important things you need to do is construct a cost function. The idea of a cost function is one of the most universal and essential ideas in machine learning and is used in both linear regression and training many advanced AI models. In the next video, we'll take a closer look at how to construct a cost function.